1
Singh YP, Lobiyal DK. A comparative study of early stage Alzheimer's disease classification using various transfer learning CNN frameworks. Network (Bristol, England) 2024:1-29. [PMID: 39367861] [DOI: 10.1080/0954898x.2024.2406946] [Received: 12/02/2022] [Revised: 08/30/2023] [Accepted: 09/06/2024] [Indexed: 10/07/2024]
Abstract
The current research explores the improvements in predictive performance and computational efficiency that machine learning and deep learning methods have achieved over time. Specifically, the application of transfer learning within Convolutional Neural Networks (CNNs) has proved useful for diagnosing and classifying the stages of Alzheimer's disease. Using base architectures such as Xception, InceptionResNetV2, DenseNet201, InceptionV3, ResNet50, and MobileNetV2, this study extends these models by adding batch normalization (BN), dropout, and dense layers. These enhancements improve the models' effectiveness and precision on this medical problem. The proposed model is rigorously validated and evaluated on the publicly available Kaggle MRI Alzheimer's dataset, consisting of 5120 training images and 1280 testing images. Precision, recall, F1-score, and accuracy metrics are used for comprehensive performance evaluation. The findings indicate that Xception is the most promising of the architectures considered: without five-fold cross-validation, the model attains 99% accuracy with a loss of 0.135, and with five-fold cross-validation, accuracy improves to 99.68% while the loss decreases to 0.120. The research further evaluates the Receiver Operating Characteristic Area Under the Curve (ROC-AUC) for the various classes and models. As a result, the model may detect and diagnose Alzheimer's disease quickly and accurately.
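The five-fold cross-validation protocol referenced in this abstract can be sketched in plain Python (an illustrative sketch of standard k-fold index splitting, not the authors' code; the function name and contiguous fold layout are assumptions):

```python
def k_fold_splits(n_samples, k=5):
    """Partition sample indices into k contiguous folds; each fold serves
    once as the validation set while the remaining folds form the training set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        splits.append((train, val))
        start += size
    return splits

# Five folds over the 5120 training images described in the abstract:
splits = k_fold_splits(5120, k=5)
```

Each of the five folds holds out 1024 images for validation while training on the remaining 4096, and the reported accuracy is typically averaged across folds.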
Affiliation(s)
- Daya Krishan Lobiyal
- School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, India
2
Wang H, Zhu H, Ding L, Yang K. Attention pyramid pooling network for artificial diagnosis on pulmonary nodules. PLoS One 2024; 19:e0302641. [PMID: 38753596] [PMCID: PMC11098435] [DOI: 10.1371/journal.pone.0302641] [Received: 07/07/2023] [Accepted: 04/09/2024] [Indexed: 05/18/2024]
Abstract
The development of automated tools using advanced technologies such as deep learning holds great promise for improving the accuracy of lung nodule classification in computed tomography (CT) imaging, ultimately reducing lung cancer mortality. However, lung nodules can be difficult to detect and classify from CT images, since different imaging modalities may provide varying levels of detail and clarity. Moreover, existing convolutional neural networks may struggle to detect nodules that are small or located in difficult-to-detect regions of the lung. Therefore, the attention pyramid pooling network (APPN) is proposed to identify and classify lung nodules. First, a strong feature extractor, VGG16, is used to obtain features from CT images. Then, the attention primary pyramid module is proposed by combining an attention mechanism with a pyramid pooling module, which allows features at different scales to be fused and focuses on the features most important for nodule classification. Finally, a gated spatial memory technique is used to decode the general features, extracting more accurate features for classifying lung nodules. Experimental results on the LIDC-IDRI dataset show that the APPN classifies lung nodules accurately and effectively, with a sensitivity of 87.59%, a specificity of 90.46%, an accuracy of 88.47%, a positive predictive value of 95.41%, a negative predictive value of 76.29%, and an area under the receiver operating characteristic curve of 0.914.
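The pyramid pooling idea underlying the APPN — pooling the same feature map at several grid scales and concatenating the results — can be sketched in miniature (a toy pure-Python illustration; the function names and the scale set are assumptions, not the paper's implementation):

```python
def avg_pool(feature_map, bins):
    """Average-pool a square 2D feature map into bins x bins regions."""
    n = len(feature_map)
    pooled = []
    for bi in range(bins):
        for bj in range(bins):
            r0, r1 = bi * n // bins, (bi + 1) * n // bins
            c0, c1 = bj * n // bins, (bj + 1) * n // bins
            vals = [feature_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            pooled.append(sum(vals) / len(vals))
    return pooled

def pyramid_pool(feature_map, scales=(1, 2, 4)):
    """Concatenate pooled statistics from several scales, coarse to fine."""
    out = []
    for s in scales:
        out.extend(avg_pool(feature_map, s))
    return out

# A 4x4 toy feature map with values 0..15:
fmap = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
features = pyramid_pool(fmap)  # 1 + 4 + 16 = 21 values
```

The coarse 1x1 scale captures global context while the finer scales preserve local detail, which is the multi-scale fusion the abstract describes.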
Affiliation(s)
- Hongfeng Wang
- School of Network Engineering, Zhoukou Normal University, Zhoukou, China
- Hai Zhu
- School of Network Engineering, Zhoukou Normal University, Zhoukou, China
- Lihua Ding
- College of Public Health, Zhengzhou University, Zhengzhou, China
- Kaili Yang
- Henan Provincial People’s Hospital, People’s Hospital of Zhengzhou University, Henan University People’s Hospital, Zhengzhou, China
3
Munuswamy Selvaraj K, Gnanagurusubbiah S, Roby Roy RR, John Peter JH, Balu S. Enhancing skin lesion classification with advanced deep learning ensemble models: a path towards accurate medical diagnostics. Curr Probl Cancer 2024; 49:101077. [PMID: 38480028] [DOI: 10.1016/j.currproblcancer.2024.101077] [Received: 08/21/2023] [Revised: 01/27/2024] [Accepted: 02/28/2024] [Indexed: 04/29/2024]
Abstract
Skin cancer, including the highly lethal malignant melanoma, poses a significant global health challenge with a rising incidence rate. Early detection plays a pivotal role in improving survival rates. This study aims to develop an advanced deep learning-based approach for accurate skin lesion classification, addressing challenges such as limited data availability, class imbalance, and noise. Modern deep neural network architectures, such as ResNeXt101, SeResNeXt101, ResNet152V2, DenseNet201, GoogLeNet, and Xception, are employed and optimised using stochastic gradient descent (SGD). The dataset comprises diverse skin lesion images from the HAM10000 and ISIC datasets. Noise and artifacts are tackled using image inpainting, and data augmentation techniques enhance training sample diversity. An ensemble technique is applied, creating both average and weighted average ensemble models, with grid search used to optimize the model weight distribution. The individual models exhibit varying performance across metrics including recall, precision, F1 score, and MCC. The average ensemble model achieves a harmonious balance of precision, F1 score, and recall, yielding high performance, while the weighted ensemble model capitalizes on the individual models' strengths, showing heightened precision and MCC and outstanding performance. The ensemble models consistently outperform the individual models, with the average ensemble model attaining a macro-average ROC-AUC score of 96% and the weighted ensemble model a macro-average ROC-AUC score of 97%. This research demonstrates the efficacy of ensemble techniques in significantly improving skin lesion classification accuracy. By harnessing the strengths of individual models and addressing their limitations, the ensemble models exhibit robust and reliable performance across various metrics. The findings underscore the potential of ensemble techniques in enhancing medical diagnostics and contributing to improved patient outcomes in skin lesion diagnosis.
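The weighted-ensemble-with-grid-search strategy described above can be illustrated in miniature (a hedged sketch on toy data; the coarse weight grid and the helper names are assumptions, not the study's code):

```python
from itertools import product

def weighted_ensemble(prob_lists, weights):
    """Combine per-model class-probability vectors with a weighted average."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_lists)) / total
            for c in range(n_classes)]

def grid_search_weights(model_probs, labels, steps=(0, 1, 2)):
    """Pick integer weights from a coarse grid, maximising ensemble accuracy."""
    best_w, best_acc = None, -1.0
    n_models = len(model_probs)
    for w in product(steps, repeat=n_models):
        if sum(w) == 0:
            continue  # all-zero weights are undefined
        correct = 0
        for probs, y in zip(zip(*model_probs), labels):
            fused = weighted_ensemble(list(probs), w)
            if fused.index(max(fused)) == y:
                correct += 1
        acc = correct / len(labels)
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc

# Two toy models, three samples, binary classes; model 0 is more reliable.
m0 = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
m1 = [[0.4, 0.6], [0.3, 0.7], [0.6, 0.4]]
labels = [0, 1, 0]
w, acc = grid_search_weights([m0, m1], labels)
```

In practice the search would run over held-out validation predictions from the six trained networks rather than toy probabilities.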
Affiliation(s)
- Kavitha Munuswamy Selvaraj
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India
- Sumathy Gnanagurusubbiah
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Reena Roy Roby Roy
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Jasmine Hephzipah John Peter
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India
- Sarala Balu
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, RSM Nagar, Chennai, Tamil Nadu, India
4
Zhong R, Gao T, Li J, Li Z, Tian X, Zhang C, Lin X, Wang Y, Gao L, Hu K. The global research of artificial intelligence in lung cancer: a 20-year bibliometric analysis. Front Oncol 2024; 14:1346010. [PMID: 38371616] [PMCID: PMC10869611] [DOI: 10.3389/fonc.2024.1346010] [Received: 11/28/2023] [Accepted: 01/18/2024] [Indexed: 02/20/2024]
Abstract
Background: Lung cancer (LC) has the second-highest incidence and the highest mortality of any cancer worldwide. Early screening and precise treatment of LC have been research hotspots in this field. Artificial intelligence (AI) technology has advantages in many aspects of LC and is widely used in LC early diagnosis, differential classification, treatment, and prognosis prediction. Objective: This study aims to analyze and visualize the research history, current status, hotspots, and development trends of artificial intelligence in the field of lung cancer using bibliometric methods, and to predict future research directions and cutting-edge hotspots. Results: A total of 2931 articles published between 2003 and 2023 were included, contributed by 15,848 authors from 92 countries/regions. Among them, China (40.0%, 1173 papers), the USA (24.8%, 727 papers), and India (10.2%, 299 papers) have made outstanding contributions, together accounting for 75% of the total publications. The primary research institutions were Shanghai Jiaotong University (n=66), the Chinese Academy of Sciences (n=63), and Harvard Medical School (n=52). Professor Qian Wei (n=20) from Northeastern University in China ranked first among the top 10 authors, while Armato SG (n=458 citations) was the most co-cited author. Frontiers in Oncology (121 publications; IF 2022, 4.7; Q2) was the most published journal, while Radiology (3003 citations; IF 2022, 19.7; Q1) was the most co-cited journal. Different countries and institutions should further strengthen cooperation with each other. The most common keywords were lung cancer, classification, cancer, machine learning, and deep learning, and the most cited paper was Nicolas Coudray et al., 2018, Nat Med (1196 total citations). Conclusions: Research related to AI in lung cancer has significant application prospects, and the number of scholars dedicated to AI-related research on lung cancer is continually growing. It is foreseeable that non-invasive diagnosis and precise minimally invasive treatment through deep learning and machine learning will remain a central focus in the future. Simultaneously, there is a need to enhance collaboration not only among various countries and institutions but also between high-quality medical and industrial entities.
Affiliation(s)
- Ruikang Zhong
- Beijing University of Chinese Medicine, Beijing, China
- Tangke Gao
- Beijing University of Chinese Medicine, Beijing, China
- Jinghua Li
- Beijing University of Chinese Medicine, Beijing, China
- Zexing Li
- Beijing University of Chinese Medicine, Beijing, China
- Xue Tian
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, Beijing, China
- Chi Zhang
- Beijing University of Chinese Medicine, Beijing, China
- Ximing Lin
- Beijing University of Chinese Medicine, Beijing, China
- Yuehui Wang
- Beijing University of Chinese Medicine, Beijing, China
- Lei Gao
- Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
- Kaiwen Hu
- Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
5
Ma L, Wan C, Hao K, Cai A, Liu L. A novel fusion algorithm for benign-malignant lung nodule classification on CT images. BMC Pulm Med 2023; 23:474. [PMID: 38012620] [PMCID: PMC10683224] [DOI: 10.1186/s12890-023-02708-w] [Received: 03/07/2023] [Accepted: 10/12/2023] [Indexed: 11/29/2023]
Abstract
The accurate recognition of malignant lung nodules on CT images is critical in lung cancer screening, which can offer patients the best chance of cure and significant reductions in lung cancer mortality. Convolutional Neural Networks (CNNs) have proven to be a powerful method in medical image analysis. Radiomics enables the high-throughput extraction from CT images of quantitative features believed, on expert opinion, to be of clinical interest. Graph Convolutional Networks explore global context and perform inference on both graph node features and relational structures. In this paper, we propose a novel fusion algorithm, RGD, for benign-malignant lung nodule classification that incorporates Radiomics and Graph learning into multiple Deep CNNs to form a more complete and distinctive feature representation, and ensembles the predictions for robust decision-making. The proposed method was evaluated on the publicly available LIDC-IDRI dataset in a 10-fold cross-validation experiment, obtaining an average accuracy of 93.25%, a sensitivity of 89.22%, a specificity of 95.82%, a precision of 92.46%, an F1 score of 0.9114, and an AUC of 0.9629. Experimental results illustrate that the RGD model achieves superior performance compared with state-of-the-art methods. Moreover, the effectiveness of the fusion strategy has been confirmed by extensive ablation studies. In the future, the proposed model, which performs well on pulmonary nodule classification on CT images, will be applied to increase confidence in the clinical diagnosis of lung cancer.
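The classification metrics reported here (sensitivity, specificity, precision, accuracy, F1) all derive from the binary confusion matrix; a minimal sketch with illustrative counts, not the paper's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall on malignant nodules
    specificity = tn / (tn + fp)          # recall on benign nodules
    precision   = tp / (tp + fp)          # positive predictive value
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Toy confusion counts for a benign-malignant classifier:
m = binary_metrics(tp=45, fp=5, tn=40, fn=10)
```

In a 10-fold cross-validation experiment, such metrics are computed per fold and averaged.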
Affiliation(s)
- Ling Ma
- College of Software, Nankai University, Tianjin, 300350, China
- Chuangye Wan
- College of Software, Nankai University, Tianjin, 300350, China
- Kexin Hao
- College of Software, Nankai University, Tianjin, 300350, China
- Annan Cai
- College of Software, Nankai University, Tianjin, 300350, China
- Lizhi Liu
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China
6
Chen S, Duan J, Zhang N, Qi M, Li J, Wang H, Wang R, Ju R, Duan Y, Qi S. MSA-YOLOv5: Multi-scale attention-based YOLOv5 for automatic detection of acute ischemic stroke from multi-modality MRI images. Comput Biol Med 2023; 165:107471. [PMID: 37716245] [DOI: 10.1016/j.compbiomed.2023.107471] [Received: 05/12/2023] [Revised: 09/02/2023] [Accepted: 09/04/2023] [Indexed: 09/18/2023]
Abstract
BACKGROUND AND OBJECTIVE: Acute ischemic stroke (AIS) is a common neurological disorder characterized by the sudden onset of cerebral ischemia, leading to functional impairments. Swift and precise detection of AIS lesions is crucial for stroke diagnosis and treatment but poses a significant challenge. This study aims to leverage multimodal fusion technology to combine complementary information from various modalities, thereby enhancing the detection performance of AIS target detection models. METHODS: In this retrospective study of AIS, we collected data from 316 AIS patients and created a multi-modality magnetic resonance imaging (MRI) dataset. We propose a Multi-Scale Attention-based YOLOv5 (MSA-YOLOv5), targeting challenges such as small lesion size and blurred borders at low resolutions. Specifically, we augment YOLOv5 with a prediction head to detect objects at various scales. Next, we replace the original prediction head with a Multi-Scale Swin Transformer Prediction Head (MS-STPH), which reduces computational complexity to linear levels and enhances the ability to detect small lesions. We incorporate a second-order channel attention (SOCA) module to adaptively rescale channel features, employing second-order feature statistics for more discriminative representations. Finally, we further validate the effectiveness of our method on the ISLES 2022 dataset. RESULTS: On our in-house AIS dataset, MSA-YOLOv5 achieves 79.0% mAP0.5, substantially surpassing other single-stage models. Compared to two-stage models, it maintains a comparable performance level while significantly reducing the number of parameters and the required resolution. On the ISLES 2022 dataset, MSA-YOLOv5 attains 80.0% mAP0.5, outperforming other network models by a considerable margin. The MS-STPH and SOCA modules increase mAP0.5 by 2.7% and 1.9%, respectively. Visualization interpretability results show that the proposed MSA-YOLOv5 concentrates its attention on the small regions of AIS lesions. CONCLUSIONS: The proposed MSA-YOLOv5 can automatically and effectively detect acute ischemic stroke lesions in multimodal images, particularly small lesions and artifacts. Our enhanced model reduces the number of parameters while improving detection accuracy. This model can potentially assist radiologists in providing more accurate diagnoses and enable clinicians to develop better treatment plans.
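Detection metrics such as mAP0.5 rest on the Intersection-over-Union between predicted and ground-truth boxes; a minimal sketch (the box format and 0.5 threshold are standard conventions, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# At mAP0.5, a predicted lesion box counts as a true positive when IoU >= 0.5:
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Average precision is then computed over the precision-recall curve of detections matched at this threshold, and mAP averages it across classes.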
Affiliation(s)
- Shannan Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Jinfeng Duan
- Department of Cardiovascular Surgery, General Hospital of Northern Theater Command, Shenyang, China; Postgraduate College, China Medical University, Shenyang, China
- Nan Zhang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Miao Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Jinze Li
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Hong Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Rongqiang Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Ronghui Ju
- Department of Radiology, The People's Hospital of Liaoning Province, Shenyang, China
- Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
7
Shivwanshi RR, Nirala N. Hyperparameter optimization and development of an advanced CNN-based technique for lung nodule assessment. Phys Med Biol 2023; 68:175038. [PMID: 37567211] [DOI: 10.1088/1361-6560/acef8c] [Received: 03/15/2023] [Accepted: 08/11/2023] [Indexed: 08/13/2023]
Abstract
Objective. This paper aims to propose an advanced methodology for assessing lung nodules using automated techniques with computed tomography (CT) images to detect lung cancer at an early stage. Approach. The proposed methodology utilizes a fixed-size 3 × 3 kernel in a convolutional neural network (CNN) for relevant feature extraction. The network architecture comprises 13 layers, including six convolution layers for deep local and global feature extraction. The nodule detection architecture is enhanced by incorporating a transfer learning-based EfficientNetV2 network (TLEV2N) to improve training performance. Nodule classification is achieved by integrating the EfficientNetV2 architecture for more accurate benign-malignant classification. The network is fine-tuned to extract relevant features using a deep network while maintaining performance through suitable hyperparameters. Main results. The proposed method significantly reduces the false-negative rate, with the network achieving an accuracy of 97.56% and a specificity of 98.4%. Using the 3 × 3 kernel provides valuable insight into minute pixel variation and enables the extraction of information at a broader morphological level. The continuous responsiveness of the network to fine-tuning of initial values allows further optimization, leading to the design of a standardized system capable of assessing diversified thoracic CT datasets. Significance. This paper highlights the potential of non-invasive techniques for the early detection of lung cancer through the analysis of low-dose CT images. The proposed methodology offers improved accuracy in detecting lung nodules and has the potential to enhance the overall performance of early lung cancer detection. By reconfiguring the proposed method, further advancements can be made to optimize outcomes and contribute to developing a standardized system for assessing diverse thoracic CT datasets.
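The fixed 3 × 3 convolution kernel at the heart of the described CNN can be illustrated with a "valid" 2D convolution in plain Python (a toy sketch; the averaging kernel is an illustrative choice, not the paper's trained weights):

```python
def conv2d_3x3(image, kernel):
    """'Valid' 2D convolution (cross-correlation form, as used in CNNs)
    of a 2D image with a fixed 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0.0
            for ki in range(3):
                for kj in range(3):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging kernel responds to minute local pixel variation:
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
avg_kernel = [[1 / 9] * 3 for _ in range(3)]
smoothed = conv2d_3x3(img, avg_kernel)  # 2x2 output
```

In a trained CNN the kernel weights are learned, and stacking six such convolution layers yields progressively more global morphological features.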
8
Baidya Kayal E, Ganguly S, Sasi A, Sharma S, DS D, Saini M, Rangarajan K, Kandasamy D, Bakhshi S, Mehndiratta A. A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models. Front Oncol 2023; 13:1212526. [PMID: 37671060] [PMCID: PMC10476362] [DOI: 10.3389/fonc.2023.1212526] [Received: 04/26/2023] [Accepted: 07/31/2023] [Indexed: 09/07/2023]
Abstract
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method for detecting lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity associated with making this distinction, leading to frequent CT follow-ups and additional radiation exposure, along with financial and emotional burden to patients and families. Even 18F-fluoro-deoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard for demonstrating malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas; hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, provide an ideal platform on which to develop a model that differentiates lung metastases from benign nodules. To overcome the limited specificity of CT in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed that utilizes a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. The protocol includes a retrospective cohort of nearly 2,000-2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house-built segmentation tool. Ground-truth labeling of lung nodules (metastatic/benign) will be based on histopathological results or baseline and/or follow-up radiological findings, along with the clinical outcome of the patient. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for early detection and classification of pulmonary metastasis at baseline and at follow-up, and for identification of associated potential clinical and radiological markers.
Affiliation(s)
- Esha Baidya Kayal
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Shuvadeep Ganguly
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Archana Sasi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Swetambri Sharma
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Dheeksha DS
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Manish Saini
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Krithika Rangarajan
- Radiodiagnosis, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Sameer Bakhshi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Amit Mehndiratta
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, Delhi, India
9
Wang H, Zhu H, Ding L, Yang K. A diagnostic classification of lung nodules using multiple-scale residual network. Sci Rep 2023; 13:11322. [PMID: 37443333] [PMCID: PMC10345110] [DOI: 10.1038/s41598-023-38350-z] [Received: 11/27/2022] [Accepted: 07/06/2023] [Indexed: 07/15/2023]
Abstract
Computed tomography (CT) scans have been shown to be an effective way of improving diagnostic efficacy and reducing lung cancer mortality. However, distinguishing benign from malignant nodules in CT imaging remains challenging. This study aims to develop a multiple-scale residual network (MResNet) to automatically and precisely extract general features of lung nodules and classify them using deep learning. The MResNet aggregates the advantages of residual units and a pyramid pooling module (PPM) to learn key features and extract general features for lung nodule classification. Specifically, the MResNet uses ResNet as a backbone network to learn contextual information and discriminative feature representations, while the PPM fuses features under four different scales, from coarse to fine-grained, to obtain more general lung features from the CT image. MResNet had an accuracy of 99.12%, a sensitivity of 98.64%, a specificity of 97.87%, a positive predictive value (PPV) of 99.92%, and a negative predictive value (NPV) of 97.87% in the training set, and its area under the receiver operating characteristic curve (AUC) was 0.9998 (0.99976-0.99991). MResNet's accuracy, sensitivity, specificity, PPV, NPV, and AUC in the testing set were 85.23%, 92.79%, 72.89%, 84.56%, 86.34%, and 0.9275 (0.91662-0.93833), respectively. The developed MResNet performed exceptionally well in estimating the malignancy risk of pulmonary nodules found on CT. The model has the potential to provide reliable and reproducible malignancy risk scores for clinicians and radiologists, thereby optimizing lung cancer screening management.
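The AUC values reported above can be computed directly from model scores via the Mann-Whitney formulation of ROC-AUC; a minimal sketch (toy scores, not the study's predictions):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive scores higher
    than a randomly chosen negative (Mann-Whitney U formulation); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy malignancy scores: three malignant (label 1), one benign (label 0).
auc = roc_auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1])
```

An AUC near 1.0, as in the training set here, means malignant nodules are almost always scored above benign ones.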
Affiliation(s)
- Hongfeng Wang
- School of Network Engineering, Zhoukou Normal University, Zhoukou, 466001, China
- Hai Zhu
- School of Network Engineering, Zhoukou Normal University, Zhoukou, 466001, China
- Lihua Ding
- College of Public Health, Zhengzhou University, Zhengzhou, 450001, China
- Kaili Yang
- Henan Provincial People's Hospital, Henan Eye Hospital, Henan Eye Institute, People's Hospital of Zhengzhou University, Henan University People's Hospital, Zhengzhou, 450003, China
10
Shao J, Zhou L, Yeung SYF, Lei T, Zhang W, Yuan X. Pulmonary nodule detection and classification using all-optical deep diffractive neural network. Life (Basel) 2023; 13:life13051148. [PMID: 37240793] [DOI: 10.3390/life13051148] [Received: 03/03/2023] [Revised: 04/29/2023] [Accepted: 05/07/2023] [Indexed: 05/28/2023]
Abstract
A deep diffractive neural network (D2NN) is a fast optical computing structure that has been widely used in image classification, logical operations, and other fields. Computed tomography (CT) imaging is a reliable method for detecting and analyzing pulmonary nodules. In this paper, we propose using an all-optical D2NN for pulmonary nodule detection and classification based on CT imaging for lung cancer. The network was trained on the LIDC-IDRI dataset, and its performance was evaluated on a test set. For pulmonary nodule detection, the existence of nodules in scanned CT images was estimated with two-class classification based on the network, achieving a recall rate of 91.08% on the test set. For pulmonary nodule classification, benign and malignant nodules were likewise distinguished with two-class classification, with an accuracy of 76.77% and an area under the curve (AUC) value of 0.8292. Our numerical simulations show the possibility of using optical neural networks for fast medical image processing and aided diagnosis.
Affiliation(s)
- Junjie Shao
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Lingxiao Zhou
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Sze Yan Fion Yeung
- State Key Laboratory on Advanced Displays and Optoelectronics Technologies, Department of Electronic & Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China
- Ting Lei
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Wanlong Zhang
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Xiaocong Yuan
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Research Center for Humanoid Sensing, Research Institute of Intelligent Sensing, Zhejiang Lab, Hangzhou 311100, China
11
Cai J, Guo L, Zhu L, Xia L, Qian L, Lure YMF, Yin X. Impact of localized fine tuning in the performance of segmentation and classification of lung nodules from computed tomography scans using deep learning. Front Oncol 2023; 13:1140635. [PMID: 37056345] [PMCID: PMC10088514] [DOI: 10.3389/fonc.2023.1140635] [Received: 01/09/2023] [Accepted: 03/16/2023] [Indexed: 03/30/2023]
Abstract
Background: Algorithm malfunction may occur when there is a performance mismatch between the dataset with which an algorithm was developed and the dataset on which it is deployed. Methods: A baseline segmentation algorithm and a baseline classification algorithm were developed using the public Lung Image Database Consortium dataset to detect benign and malignant nodules, and two additional external datasets (HB and XZ), including 542 cases and 486 cases respectively, were used for independent validation of the two algorithms. To explore the impact of localized fine-tuning on the individual segmentation and classification processes, the baseline algorithms were fine-tuned with CT scans of the HB and XZ datasets, respectively, and the performance of the fine-tuned algorithms was compared with the baselines. Results: Both baseline algorithms experienced a performance drop when deployed directly on the external HB and XZ datasets. Compared with the baseline validation results in nodule segmentation, the fine-tuned segmentation algorithm obtained better Dice coefficient, Intersection over Union, and Average Surface Distance in the HB dataset (0.593 vs. 0.444; 0.450 vs. 0.348; 0.283 vs. 0.304) and the XZ dataset (0.601 vs. 0.486; 0.482 vs. 0.378; 0.225 vs. 0.358). Similarly, compared with the baseline validation results in benign-malignant nodule classification, the fine-tuned classification algorithm improved the area under the receiver operating characteristic curve, accuracy, and F1 score in the HB dataset (0.851 vs. 0.812; 0.813 vs. 0.769; 0.852 vs. 0.822) and the XZ dataset (0.724 vs. 0.668; 0.696 vs. 0.617; 0.737 vs. 0.668). Conclusions: The external validation performance of the localized fine-tuned algorithms surpassed the baseline algorithms in both the segmentation and classification processes, showing that localized fine-tuning may be an effective way to let a baseline algorithm generalize to site-specific use.
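The Dice coefficient and Intersection over Union used to evaluate the segmentation algorithm can be sketched for flat binary masks (an illustrative sketch, not the study's evaluation code):

```python
def dice_and_iou(mask_a, mask_b):
    """Dice coefficient and Intersection-over-Union for flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    dice = 2 * inter / (size_a + size_b) if size_a + size_b else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy predicted and ground-truth nodule masks (flattened):
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)
```

Dice always dominates IoU for the same overlap (Dice = 2·IoU / (1 + IoU)), which is why the paired scores above differ in magnitude but move together.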
Affiliation(s)
- Jingwei Cai
- Radiology Department, Affiliated Hospital of Hebei University, Baoding, Hebei, China
- Clinical Medical College, Hebei University, Baoding, Hebei, China
- Lin Guo
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Litong Zhu
- Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong SAR, China
- Li Xia
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Lingjun Qian
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Xiaoping Yin
- Radiology Department, Affiliated Hospital of Hebei University, Baoding, Hebei, China
- Correspondence: Xiaoping Yin
12
Qiao J, Fan Y, Zhang M, Fang K, Li D, Wang Z. Ensemble framework based on attributes and deep features for benign-malignant classification of lung nodule. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
13
|
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. Detecting it early improves the chances of recovery. However, because radiologists are limited in number and have been working overtime, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have developed automated ways to predict the growth of cancer cells from medical imaging quickly and accurately. Previously, much work was done on computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goal of effective detection and segmentation of pulmonary nodules and classification of nodules as malignant or benign. Still, no complete, comprehensive review covering all aspects of lung cancer has been done. In this paper, every aspect of lung cancer is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study looks into several lung cancer-related issues with possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
14
Chen Y, Chen X. A brain-like classification method for computed tomography images based on adaptive feature matching dual-source domain heterogeneous transfer learning. Front Hum Neurosci 2022; 16:1019564. [PMID: 36304588 PMCID: PMC9592699 DOI: 10.3389/fnhum.2022.1019564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 09/07/2022] [Indexed: 12/04/2022] Open
Abstract
Transfer learning can improve the robustness of deep learning in the case of small samples. However, when the semantic difference between the source domain data and the target domain data is large, transfer learning easily introduces redundant features and leads to negative transfer. Following the mechanism by which the human brain focuses on effective features while ignoring redundant features in recognition tasks, a brain-like classification method based on adaptive feature matching dual-source-domain heterogeneous transfer learning is proposed for the preoperative aided diagnosis of lung granuloma and lung adenocarcinoma in patients with a solitary pulmonary solid nodule in the case of small samples. The method includes two parts: (1) feature extraction and (2) feature classification. In the feature extraction part, first, by simulating the feature selection mechanism the human brain uses when generalizing from one instance to others, an adaptive selection-based dual-source-domain feature matching network is proposed to determine the matching weight of each pair of feature maps and each pair of convolution layers between the two source networks and the target network, respectively. These two weights adaptively select, respectively, the features in the source networks that benefit learning of the target task and the destination of feature transfer, improving the robustness of the target network. Meanwhile, a target network based on diverse branch blocks is proposed, giving the target network different receptive fields and complex paths to further improve its feature expression ability. Second, the convolution kernels of the target network are used as the feature extractor.
In the feature classification part, an ensemble classifier based on a sparse Bayesian extreme learning machine is proposed that automatically decides how to combine the outputs of the base classifiers to improve classification performance. Finally, the experimental results (AUCs of 0.9542 and 0.9356, respectively) on data from two centers show that this method can provide a better diagnostic reference for doctors.
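The AUC figures reported here have a direct rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that computation (illustrative only, not tied to this paper's implementation):

```python
def auc_score(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ranked correctly -> AUC = 0.75
auc = auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```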
Affiliation(s)
- Yehang Chen
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Correspondence: Xiangmeng Chen
15
Deep Learning Assessment for Mining Important Medical Image Features of Various Modalities. Diagnostics (Basel) 2022; 12:diagnostics12102333. [DOI: 10.3390/diagnostics12102333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 09/13/2022] [Accepted: 09/22/2022] [Indexed: 11/16/2022] Open
Abstract
Deep learning (DL) is a well-established pipeline for feature extraction in medical and nonmedical imaging tasks, such as object detection, segmentation, and classification. However, DL faces the issue of explainability, which prohibits reliable utilisation in everyday clinical practice. This study evaluates DL methods for their efficiency in revealing and suggesting potential image biomarkers. Eleven biomedical image datasets of various modalities are utilised, including SPECT, CT, photographs, microscopy, and X-ray. Seven state-of-the-art CNNs are employed and tuned to perform image classification on these tasks. The main conclusion of the research is that DL reveals potential biomarkers in several cases, especially when the models are trained from scratch in domains where low-level features such as shapes and edges are not enough for decision making. Furthermore, in some cases, device acquisition variations slightly affect the performance of DL models.
16
Lung Cancer Nodules Detection via an Adaptive Boosting Algorithm Based on Self-Normalized Multiview Convolutional Neural Network. JOURNAL OF ONCOLOGY 2022; 2022:5682451. [PMID: 36199795 PMCID: PMC9529389 DOI: 10.1155/2022/5682451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 06/28/2022] [Accepted: 07/19/2022] [Indexed: 11/18/2022]
Abstract
Lung cancer is the deadliest cancer, killing almost 1.8 million people in 2020, and new cases are rising alarmingly. Early lung cancer manifests itself in the form of nodules in the lungs. One of the most widely used techniques for early, noninvasive diagnosis of lung cancer is computed tomography (CT). However, the intensive workload of radiologists reading large numbers of scans for nodule detection gives rise to issues like false detection and missed detection. To overcome these issues, we proposed an innovative strategy titled adaptive boosting self-normalized multiview convolutional neural network (AdaBoost-SNMV-CNN) for lung cancer nodule detection across CT scans. In AdaBoost-SNMV-CNN, an MV-CNN functions as the base learner, while the scaled exponential linear unit (SELU) activation function normalizes the layers by considering their neighbors' information, together with a special dropout technique (α-dropout). The proposed method was trained and tested using the widely used Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and Early Lung Cancer Action Program (ELCAP) datasets. AdaBoost-SNMV-CNN achieved an accuracy of 92%, sensitivity of 93%, and specificity of 92% for lung nodule detection on the LIDC-IDRI dataset. Meanwhile, on the ELCAP dataset, the accuracy for detecting lung nodules was 99%, sensitivity 100%, and specificity 98%. AdaBoost-SNMV-CNN outperformed the majority of models in accuracy, sensitivity, and specificity. The multiple views confer good generalization and learning ability for the diverse features of lung nodules, the model architecture is simple, and the computational time is minimal at around 102 minutes. We believe that AdaBoost-SNMV-CNN has good accuracy for the detection of lung nodules and anticipate its potential application in the noninvasive clinical diagnosis of lung cancer. This model can be of good assistance to radiologists and will interest researchers designing and developing advanced systems for lung nodule detection toward the goal of noninvasive diagnosis of lung cancer.
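The SELU activation named in this abstract has a simple closed form; a minimal sketch using the standard constants from the self-normalizing-networks literature (illustrative, not the authors' implementation):

```python
import math

# Fixed SELU constants from Klambauer et al., "Self-Normalizing Neural Networks"
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """SELU: scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise.

    Paired with alpha-dropout, this keeps activations near zero mean and
    unit variance, which is the self-normalizing property used above.
    """
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)
```

For large negative inputs the function saturates at -SCALE * ALPHA ≈ -1.758, which bounds how far activations can drift.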
17
Huang Z, Zhang G, Liu J, Huang M, Zhong L, Shu J. LRFNet: A deep learning model for the assessment of liver reserve function based on Child-Pugh score and CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 223:106993. [PMID: 35793571 DOI: 10.1016/j.cmpb.2022.106993] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 06/09/2022] [Accepted: 06/29/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Liver reserve function should be accurately evaluated in patients with hepatocellular carcinoma before surgery to assess the degree of liver tolerance to surgical methods. Liver reserve function is also an important indicator for disease analysis and patient prognosis. The Child-Pugh score is the most widely used system for evaluating and scoring liver reserve function, but it has shortcomings such as poor accuracy and subjectivity. To achieve a comprehensive evaluation of liver reserve function, we developed a deep learning model that fuses bimodal features from the Child-Pugh score and computed tomography (CT) images. METHODS 1022 enhanced abdominal CT images of 121 patients with hepatocellular carcinoma and impaired liver reserve function were retrospectively collected. First, CT images were pre-processed by de-noising, data augmentation, and normalization. Then, new branches were added between the dense blocks of the DenseNet structure, and a center cropping operation was introduced, to obtain a lightweight deep learning model, the liver reserve function network (LRFNet), with rich liver-scale features. LRFNet extracts depth features related to liver reserve function from CT images. Finally, the extracted features are input into a deep learning classifier composed of fully connected layers to classify CT images into Child-Pugh A, B, and C. Precision, specificity, sensitivity, and area under the curve (AUC) are used to evaluate the performance of the model. RESULTS The AUCs of our CT-based LRFNet model for Child-Pugh A, B, and C classification of liver reserve function were 0.834, 0.649, and 0.876, respectively, with an average AUC of 0.774, better than the traditional, subjective clinical Child-Pugh classification.
CONCLUSION A deep learning model based on CT images can accurately classify the Child-Pugh grade of liver reserve function in hepatocellular carcinoma patients, providing a comprehensive method for clinicians to assess liver reserve function before surgery.
Affiliation(s)
- Zhiwei Huang
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Guo Zhang
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Jiong Liu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Mengping Huang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Lisha Zhong
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
18
Classification and Reconstruction of Biomedical Signals Based on Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6548811. [PMID: 35909845 PMCID: PMC9334110 DOI: 10.1155/2022/6548811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 05/25/2022] [Accepted: 06/30/2022] [Indexed: 11/18/2022]
Abstract
Efficient biological signal processing methods can improve the efficiency with which researchers explore the workings of life mechanisms, better revealing the relationship between physiological structure and function and thus promoting major biological discoveries. High-precision medical signal analysis can, to a certain extent, share the burden of doctors' clinical diagnoses and help them formulate more favorable plans for disease prevention and treatment, alleviating patients' physical and mental suffering and improving the overall health of society. This article takes two highly representative types of biomedical signals, mammography (molybdenum-target X-ray images of the breast) and EEG signals, as its research objects and, using CNNs, the most representative models in deep learning, conducts a series of studies on classification and reconstruction methods for the two signal types: (1) A new classification method for breast masses based on a multi-layer CNN is proposed. The method includes a CNN feature representation network for breast masses and a feature decision mechanism that simulates the physician's diagnostic process. Compared with the objective classification accuracy of other methods for identifying benign and malignant breast masses, the method achieved the highest classification accuracy of 97.0% under different values of c and gamma, further verifying its effectiveness for identifying breast masses in molybdenum-target X-ray images. (2) An EEG signal classification method based on a spatiotemporal fusion CNN is proposed. This method includes a multi-channel input classification network focusing on the spatial information of EEG signals, a single-channel input classification network focusing on their temporal information, and a spatial-temporal fusion strategy.
Through comparative experiments on EEG signal classification tasks, the effectiveness of the proposed method was verified in terms of objective classification accuracy, number of model parameters, and subjective evaluation of the validity of the CNN feature representations. The methods proposed in this paper are thus not only highly accurate but also well suited to the classification and reconstruction of biomedical signals.
19
Naseer I, Akram S, Masood T, Jaffar A, Khan MA, Mosavi A. Performance Analysis of State-of-the-Art CNN Architectures for LUNA16. SENSORS 2022; 22:s22124426. [PMID: 35746208 PMCID: PMC9227226 DOI: 10.3390/s22124426] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 06/07/2022] [Accepted: 06/08/2022] [Indexed: 02/01/2023]
Abstract
The convolutional neural network (CNN) has become a powerful tool in machine learning (ML), used to solve complex problems such as image recognition, natural language processing, and video analysis. Notably, the idea of exploring CNN architectures has gained substantial attention and popularity. This study scrutinizes and compares various CNN architectures: LeNet, AlexNet, VGG16, ResNet-50, and Inception-V1, for the detection of lung cancer using the publicly available LUNA16 dataset. Furthermore, multiple performance optimizers were applied for this comparative study: root mean square propagation (RMSProp), adaptive moment estimation (Adam), and stochastic gradient descent (SGD). The performance of the CNN architectures was measured in terms of accuracy, specificity, sensitivity, positive predictive value, false omission rate, negative predictive value, and F1 score. The experimental results showed that the AlexNet architecture with the SGD optimizer achieved the highest validation accuracy for CT lung cancer, with an accuracy of 97.42%, misclassification rate of 2.58%, 97.58% sensitivity, 97.25% specificity, 97.58% positive predictive value, 97.25% negative predictive value, false omission rate of 2.75%, and F1 score of 97.58%. AlexNet with the SGD optimizer performed best, outperforming the other state-of-the-art CNN architectures.
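All of the metrics this study reports derive from the four confusion-matrix counts; a minimal illustrative sketch (not the authors' code) with hypothetical counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics named in the study (denominators assumed nonzero)."""
    sens = tp / (tp + fn)        # sensitivity (recall, true positive rate)
    spec = tn / (tn + fp)        # specificity (true negative rate)
    ppv = tp / (tp + fp)         # positive predictive value (precision)
    npv = tn / (tn + fn)         # negative predictive value
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": sens,
        "specificity": spec,
        "ppv": ppv,
        "npv": npv,
        "false_omission_rate": fn / (fn + tn),   # equals 1 - NPV
        "f1": 2 * ppv * sens / (ppv + sens),     # harmonic mean of PPV and recall
    }

# Hypothetical counts, not taken from the paper.
m = binary_metrics(tp=8, fp=2, tn=9, fn=1)
```

Note the complementary pairs visible in the reported numbers: the false omission rate is 1 minus the negative predictive value, and the misclassification rate is 1 minus the accuracy.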
Affiliation(s)
- Iftikhar Naseer
- Faculty of Computer Science & Information Technology, The Superior University, Lahore 54600, Pakistan
- Sheeraz Akram
- Faculty of Computer Science & Information Technology, The Superior University, Lahore 54600, Pakistan
- Tehreem Masood
- Faculty of Computer Science & Information Technology, The Superior University, Lahore 54600, Pakistan
- Arfan Jaffar
- Faculty of Computer Science & Information Technology, The Superior University, Lahore 54600, Pakistan
- Muhammad Adnan Khan
- Department of Software, Gachon University, Seongnam 13120, Korea
- Amir Mosavi
- John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- Institute of Information Engineering, Automation and Mathematics, Slovak University of Technology in Bratislava, 81107 Bratislava, Slovakia
- Faculty of Civil Engineering, Technical University of Dresden, 01062 Dresden, Germany
20
2dCNN-BiCuDNNLSTM: Hybrid Deep-Learning-Based Approach for Classification of COVID-19 X-ray Images. SUSTAINABILITY 2022. [DOI: 10.3390/su14116785] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
The coronavirus disease (COVID-19) is a major global disaster of humankind in the 21st century. COVID-19 causes respiratory infection, with symptoms including pneumonia, common cold, sneezing, and coughing. Early detection is crucial to classify the virus and limit its spread. COVID-19 infection resembles other types of pneumonia and may progress to severe pneumonia with a range of illness onsets. This research focuses on identifying people affected by COVID-19 at a very early stage through chest X-ray images. Chest X-ray classification is a beneficial method for the identification, follow-up, and evaluation of treatment efficiency in people with pneumonia; this research also treats it as a basic method for evaluating lung irregularities in symptomatic patients suspected of COVID-19. The aim is to classify COVID-19 samples against normal and pneumonia-affected chest X-ray images for early identification of the disease, helping to diagnose individuals and ensure they receive proper treatment and preventive action to stop the spread of the virus. To provide accurate classification of disease from patients' chest X-ray images, this research proposes a novel classification model, named 2dCNN-BiCuDNNLSTM, which combines a two-dimensional Convolutional Neural Network (CNN) and a Bidirectional CUDA Deep Neural Network Long Short-Term Memory (BiCuDNNLSTM). Deep learning is known for identifying patterns in the available data that help in the accurate classification of disease. The proposed model (2dCNN and BiCuDNNLSTM layers, with proper hyperparameters) can differentiate normal chest X-rays from viral pneumonia and COVID-19 ones with high accuracy.
A total of 6863 X-ray images (JPEG; 1000 COVID-19 patients, 3863 normal cases, and 2000 pneumonia patients) were used to examine the performance of the suggested neural network; for every group, 80% of the images were used for model training, 10% for validation, and 10% for testing. The proposed model achieves a high classification accuracy of 93%. The proposed network is used for predictive analysis, alerting people to the risk of COVID-19 through early detection. X-ray images help to classify people with COVID-19 variants and may indicate the severity of disease in the future. This study demonstrates the effectiveness of the proposed CUDA-enabled hybrid deep learning model in classifying X-ray image data and detecting COVID-19 with high accuracy, and suggests the model can be applicable to numerous virus classification tasks. Chest X-ray classification is a commonly available and affordable approach for diagnosing people with lower respiratory signs or suspected COVID-19. The proposed model therefore shows efficient and promising performance in classifying COVID-19 through X-ray images; the hybrid design efficiently preserves the comprehensive characteristics of the image data, yielding better final classification results than an individual neural network.
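The 80/10/10 split applied "for every group" above amounts to shuffling and cutting each class independently; a hedged sketch of that protocol (file names and function name are illustrative, not from the paper):

```python
import random

def split_per_class(paths_by_class, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle each class independently, then cut train/val/test splits
    so the class proportions are preserved in every split."""
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for cls, paths in paths_by_class.items():
        paths = list(paths)
        rng.shuffle(paths)
        n_tr = int(len(paths) * train_frac)
        n_va = int(len(paths) * val_frac)
        splits["train"] += [(cls, p) for p in paths[:n_tr]]
        splits["val"] += [(cls, p) for p in paths[n_tr:n_tr + n_va]]
        splits["test"] += [(cls, p) for p in paths[n_tr + n_va:]]
    return splits

# Toy example with 10 hypothetical images per class -> 16 / 2 / 2 split.
demo = {"covid": [f"c{i}.jpg" for i in range(10)],
        "normal": [f"n{i}.jpg" for i in range(10)]}
parts = split_per_class(demo)
```

Splitting within each class rather than over the pooled dataset keeps the class imbalance (1000 / 3863 / 2000 here) identical across train, validation, and test sets.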
21
Liu D, Liu F, Tie Y, Qi L, Wang F. Res-trans networks for lung nodule classification. Int J Comput Assist Radiol Surg 2022; 17:1059-1068. [PMID: 35290646 DOI: 10.1007/s11548-022-02576-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 02/02/2022] [Indexed: 12/09/2022]
Abstract
PURPOSE Lung cancer usually presents as pulmonary nodules on early diagnostic images, and accurately estimating the malignancy of pulmonary nodules is crucial to the prevention and diagnosis of lung cancer. Recently, deep learning algorithms based on convolutional neural networks have shown potential for pulmonary nodule classification. However, nodule sizes are very diverse, ranging from 3 to 30 mm, which makes classifying them a challenging task. In this study, we propose a novel architecture called Res-trans networks to classify nodules in computed tomography (CT) scans. METHODS We designed local and global blocks to extract features that capture the long-range dependencies between pixels, to adapt to the correct classification of lung nodules of different sizes. Specifically, we designed residual blocks with convolutional operations to extract local features and transformer blocks with self-attention to capture global features. Moreover, the Res-trans network has a sequence fusion block that aggregates and extracts the sequence feature information output by the transformer blocks, improving classification accuracy. RESULTS Our proposed method was extensively evaluated on the public LIDC-IDRI dataset, which contains 1,018 CT scans. Tenfold cross-validation shows that our method obtains better performance, with AUC = 0.9628 and accuracy = 0.9292, than recent leading methods. CONCLUSION In this paper, a network that can capture local and global features is proposed to classify nodules in chest CT. Experimental results show that our proposed method has better classification performance and can help radiologists accurately analyze lung nodules.
Affiliation(s)
- Dongxu Liu
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Fenghui Liu
- Department of Respiratory and Sleep Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yun Tie
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Lin Qi
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Feng Wang
- Department of Oncology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
22
Asif S, Zhao M, Tang F, Zhu Y. A deep learning-based framework for detecting COVID-19 patients using chest X-rays. MULTIMEDIA SYSTEMS 2022; 28:1495-1513. [PMID: 35341212 PMCID: PMC8939400 DOI: 10.1007/s00530-022-00917-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 03/09/2022] [Indexed: 06/02/2023]
Abstract
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has caused outbreaks of the new coronavirus disease (COVID-19) around the world. Rapid and accurate detection of COVID-19 is an important step in limiting the spread of the epidemic. Radiography techniques (such as chest X-rays and computed tomography (CT)) can play an important role in the early prediction of COVID-19, helping to treat patients in a timely manner. We aimed to quickly develop a highly efficient lightweight CNN architecture for detecting COVID-19-infected patients. The purpose of this paper is to propose a robust deep learning-based system for reliably detecting COVID-19 from chest X-ray images. First, we evaluate the performance of various pre-trained deep learning models (InceptionV3, Xception, MobileNetV2, NasNet and DenseNet201) recently proposed for medical image classification. Second, a lightweight shallow convolutional neural network (CNN) architecture is proposed for classifying patients' X-ray images with a low false-negative rate. The dataset used in this work contains 2,541 chest X-rays from two different public databases, covering confirmed COVID-19-positive and healthy cases. The performance of the proposed model is compared with that of the pre-trained deep learning models. The results show that the proposed shallow CNN provides a maximum accuracy of 99.68% and, more importantly, sensitivity, specificity and AUC of 99.66%, 99.70% and 99.98%, respectively. The proposed model has fewer parameters and lower complexity than other deep learning models, and our experimental results show it is superior to existing state-of-the-art methods. We believe that this model can help healthcare professionals treat COVID-19 patients through improved and faster patient screening.
Affiliation(s)
- Sohaib Asif
- School of Computer Science and Engineering, Central South University, Changsha, China
- Ming Zhao
- School of Computer Science and Engineering, Central South University, Changsha, China
- Fengxiao Tang
- School of Computer Science and Engineering, Central South University, Changsha, China
- Yusen Zhu
- School of Mathematics, Hunan University, Changsha, China
23
Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:11717-11736. [PMID: 35221775 PMCID: PMC8856868 DOI: 10.1007/s11042-022-12258-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 06/20/2021] [Accepted: 01/14/2022] [Indexed: 06/14/2023]
Abstract
The identification of diseases is inseparable from artificial intelligence. As an important branch of artificial intelligence, convolutional neural networks play an important role in the identification of gastric cancer. We conducted a systematic review to summarize current applications of convolutional neural networks in gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved: 19 applied to endoscopic images and 8 to pathological images. Sixteen studies explored the performance of gastric cancer detection, 7 gastric cancer classification, 2 gastric cancer segmentation and 2 the delineation of gastric cancer margins. The convolutional neural network structures involved included AlexNet, ResNet, VGG, Inception, DenseNet and Deeplab, among others. Reported accuracies ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer, and artificial intelligence is expected to provide more accurate information and efficient judgments for doctors diagnosing diseases in clinical work.
Collapse
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China

24
Adaptive morphology aided 2-pathway convolutional neural network for lung nodule classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103347] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
25
Lindsay WD, Sachs N, Gee JC, Mortani Barbosa EJ. Transparent Machine Learning Models to Diagnose Suspicious Thoracic Lesions Leveraging CT Guided Biopsy Data. Acad Radiol 2022; 29 Suppl 2:S156-S164. [PMID: 34373194 DOI: 10.1016/j.acra.2021.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 06/28/2021] [Accepted: 07/03/2021] [Indexed: 11/30/2022]
Abstract
RATIONALE AND OBJECTIVES To train and validate machine learning models capable of classifying suspicious thoracic lesions as benign or malignant and of further classifying malignant lesions by pathologic subtype, while quantifying feature importance for each classification. MATERIALS AND METHODS A total of 796 patients who had undergone CT guided thoracic biopsy for a concerning thoracic lesion (79.3% lung, 11.4% mediastinum, 6.5% pleura, 2.7% chest wall) were retrospectively enrolled. Lesions were classified as malignant or benign based on the ground-truth pathology result, and malignant lesions were classified as primary or secondary cancer. Clinical variables were extracted from the EMR and radiology reports. Supervised binary and multiclass classification models were trained to classify lesions based on the input features and evaluated on a held-out test set. Model-specific feature analyses were performed to identify the variables most predictive of each class and to assess the independent importance of clinical and imaging features. RESULTS Binary classification models achieved a top accuracy of 80.6%, with predictive features including smoking history, age, lesion size, and lesion location. Multiclass classification models achieved a top weighted-average F1-score of 0.73. Features predictive of primary cancer included smoking history, race, and age, while features predictive of secondary cancer included lesion location and a history of cancer. CONCLUSION Machine learning models enable classification of suspicious thoracic lesions based on clinical and imaging variables, achieving clinically useful performance on a pathology-proven dataset while identifying the importance of individual input features. We believe models such as these are more likely to be trusted and adopted by clinicians.
Affiliation(s)
- William D Lindsay
- Perelman School of Medicine, University of Pennsylvania Health System, Philadelphia, Pennsylvania; Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
- Nicholas Sachs
- Perelman School of Medicine, University of Pennsylvania Health System, Philadelphia, Pennsylvania
- James C Gee
- Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
- Eduardo J Mortani Barbosa
- Department of Bioengineering, School of Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania

26
Zhang K, Qi S, Cai J, Zhao D, Yu T, Yue Y, Yao Y, Qian W. Content-based image retrieval with a Convolutional Siamese Neural Network: Distinguishing lung cancer and tuberculosis in CT images. Comput Biol Med 2022; 140:105096. [PMID: 34872010 DOI: 10.1016/j.compbiomed.2021.105096] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 11/17/2021] [Accepted: 11/27/2021] [Indexed: 12/21/2022]
Abstract
BACKGROUND CT findings of lung cancer and tuberculosis are sometimes similar, potentially leading to misdiagnosis. This study aims to combine deep learning and content-based image retrieval (CBIR) to distinguish lung cancer (LC) from nodular/mass atypical tuberculosis (NMTB) in CT images. METHODS This study proposes CBIR with a convolutional Siamese neural network (CBIR-CSNN). First, the lesion patches are cropped out to compose LC and NMTB datasets and the pairs of two arbitrary patches form a patch-pair dataset. Second, this patch-pair dataset is utilized to train a CSNN. Third, a test patch is treated as a query. The distance between this query and 20 patches in both datasets is calculated using the trained CSNN. The patches closest to the query are used to give the final prediction by majority voting. One dataset of 719 patients is used to train and test the CBIR-CSNN. Another external dataset with 30 patients is employed to verify CBIR-CSNN. RESULTS The CBIR-CSNN achieves excellent performance at the patch level with an mAP (Mean Average Precision) of 0.953, an accuracy of 0.947, and an area under the curve (AUC) of 0.970. At the patient level, the CBIR-CSNN correctly predicted all labels. In the external dataset, the CBIR-CSNN has an accuracy of 0.802 and AUC of 0.858 at the patch level, and 0.833 and 0.902 at the patient level. CONCLUSIONS This CBIR-CSNN can accurately and automatically distinguish LC from NMTB using CT images. CBIR-CSNN has excellent representation capability, compatibility with few-shot learning, and visual explainability.
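The retrieval step this abstract describes (rank gallery patches by embedding distance to the query, then predict by majority vote over the nearest patches) can be sketched in a few lines. The toy 2-D embeddings and all function names below are illustrative stand-ins for the paper's CSNN features, not the authors' code:

```python
import math
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_and_vote(query, gallery, labels, k=5):
    """Rank gallery patches by distance to the query embedding and
    predict the query's label by majority vote over the k nearest."""
    ranked = sorted(range(len(gallery)), key=lambda i: euclidean(query, gallery[i]))
    nearest = [labels[i] for i in ranked[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Toy 2-D embeddings: two clusters standing in for LC and NMTB patches.
gallery = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]]
labels = ["LC", "LC", "LC", "NMTB", "NMTB", "NMTB"]
print(retrieve_and_vote([0.15, 0.05], gallery, labels, k=3))  # LC
```

In the paper the distance is computed by the trained Siamese network rather than raw Euclidean distance, but the ranking-and-voting logic is the same.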
Affiliation(s)
- Kai Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110169, China.
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110169, China
- Jiumei Cai
- Department of Health Medicine, General Hospital of Northern Theater Command, Shenyang, 110003, China; Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Dan Zhao
- Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Tao Yu
- Department of Medical Imaging, Liaoning Cancer Hospital & Institute, Cancer Hospital of China Medical University, Shenyang, 110042, China
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, 110004, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Wei Qian
- Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, 79968, USA

27
Li D, Yuan S, Yao G. Classification of lung nodules based on the DCA-Xception network. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:993-1008. [PMID: 35912787 DOI: 10.3233/xst-221219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
BACKGROUND Developing deep learning networks to classify benign and malignant lung nodules usually requires many samples, but medical samples are precious and difficult to obtain in quantity. OBJECTIVE To investigate and test a DCA-Xception network combined with a new data augmentation method to improve the performance of lung nodule classification. METHODS First, a conditional Wasserstein Generative Adversarial Network (WGAN) and five augmentation methods, such as flipping, rotating, and adding Gaussian noise, are used to extend the samples, addressing class imbalance and the shortage of samples. Then, a DCA-Xception network is designed to classify lung nodules. In this network, an adaptive dual-channel feature extraction module captures information around the target, and a convolutional attention module helps the network learn features more accurately. The network is trained and validated using 274 lung nodules (154 benign and 120 malignant) and tested using 52 lung nodules (23 benign and 29 malignant). RESULTS The experiments show that the network has an accuracy of 83.46% and an AUC of 0.929. The features extracted by this network achieve an accuracy of 85.24% with K-nearest neighbor and random forest classifiers. CONCLUSION This study demonstrates that the DCA-Xception network classifies lung nodules more accurately than classical classification networks and pre-trained networks.
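The classical augmentation operations named alongside the conditional WGAN (flipping, rotating, adding Gaussian noise) might look like this on a toy 2-D image. This is a minimal sketch, not the paper's pipeline, and all function names are hypothetical:

```python
import random

def hflip(img):
    """Horizontal flip of a 2-D image given as a list of rows."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2-D image 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def add_gaussian_noise(img, sigma=0.05, seed=0):
    """Additive Gaussian noise, a common augmentation for small datasets."""
    rng = random.Random(seed)
    return [[p + rng.gauss(0.0, sigma) for p in row] for row in img]

img = [[0.0, 0.5], [1.0, 0.25]]
print(hflip(img))   # [[0.5, 0.0], [0.25, 1.0]]
print(rot90(img))   # [[1.0, 0.0], [0.25, 0.5]]
```

Each operation yields a new labeled sample from an existing one, which is why such transforms help when labeled nodules are scarce.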
Affiliation(s)
- Dongjie Li
- Heilongjiang Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin, China
- Shanliang Yuan
- Heilongjiang Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin, China
- Gang Yao
- Heilongjiang Atomic Energy Research Institute, Harbin, China

28
Astaraki M, Yang G, Zakko Y, Toma-Dasu I, Smedby Ö, Wang C. A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images. Front Oncol 2021; 11:737368. [PMID: 34976794 PMCID: PMC8718670 DOI: 10.3389/fonc.2021.737368] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 11/29/2021] [Indexed: 01/08/2023] Open
Abstract
OBJECTIVES Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database of 1297 manually delineated lung nodules. METHODS Conventional radiomics analysis was conducted by extracting standard handcrafted features from the target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were also employed to identify lung nodule malignancy. In addition to the baseline implementations, we investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region from those learned in the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction. RESULTS The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean ± standard deviation) of 0.792 ± 0.025, 0.801 ± 0.018, and 0.817 ± 0.032, respectively, in 5-fold cross-validation analyses. After applying several optimization techniques, such as feature selection and data balancing, and adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921 ± 0.010, 0.824 ± 0.021, and 0.936 ± 0.011, respectively. The best prediction accuracy came from the hybrid feature set (AUROC: 0.938 ± 0.010). CONCLUSION The end-to-end deep learning model outperforms conventional radiomics out of the box, without much fine-tuning. Fine-tuning, however, leads to significant improvements in prediction performance, with the conventional and deep-feature based radiomics models achieving comparable results. The hybrid radiomics method appears to be the most promising model for lung nodule malignancy prediction in this comparative study.
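Two small pieces of the pipeline above reduce to very little code: pooling handcrafted radiomic and deep features into one hybrid vector, and summarizing per-fold AUROCs as mean ± standard deviation. The fold values and function names below are illustrative, not the paper's:

```python
import math

def pool_features(radiomic, deep):
    """Concatenate handcrafted radiomic features with learned deep
    features into one hybrid vector (the 'hybrid feature set' idea)."""
    return list(radiomic) + list(deep)

def mean_std(values):
    """Mean and population standard deviation, as used to report
    cross-validated AUROC as mean ± std."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)

# Hypothetical per-fold AUROCs from a 5-fold cross-validation.
fold_aucs = [0.93, 0.94, 0.95, 0.93, 0.94]
m, s = mean_std(fold_aucs)
print(f"AUROC: {m:.3f} ± {s:.3f}")
```

In practice the hybrid vector would then be fed to any standard classifier; the concatenation itself carries no learned parameters.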
Affiliation(s)
- Mehdi Astaraki
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Huddinge, Sweden; Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden. *Correspondence: Mehdi Astaraki
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom; National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Yousuf Zakko
- Imaging and Function, Radiology Department, Karolinska University Hospital, Solna, Stockholm, Sweden
- Iuliana Toma-Dasu
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Department of Physics, Stockholm University, Stockholm, Sweden
- Örjan Smedby
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Huddinge, Sweden
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Huddinge, Sweden

29
Kaur P, Harnal S, Tiwari R, Alharithi FS, Almulihi AH, Noya ID, Goyal N. A Hybrid Convolutional Neural Network Model for Diagnosis of COVID-19 Using Chest X-ray Images. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:12191. [PMID: 34831960 PMCID: PMC8618754 DOI: 10.3390/ijerph182212191] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/15/2021] [Accepted: 11/15/2021] [Indexed: 12/23/2022]
Abstract
COVID-19, declared a pandemic, spreads rapidly and has affected lives and national economies through forced lockdowns. Detection using RT-PCR takes a long time, during which infections can grow exponentially, creating shortages of testing kits in many countries. This work proposes a new image processing-based technique for health care systems, named "C19D-Net", to detect COVID-19 infection from chest X-ray (XR) images and help radiologists improve their detection accuracy. The proposed system extracts deep learning (DL) features with the InceptionV4 architecture and applies a multiclass SVM classifier to classify and detect COVID-19 infection across four classes. A dataset of 1900 chest XR images was collected from two publicly accessible databases. Images are pre-processed with proper scaling before being fed to the model. In extensive tests, the proposed "C19D-Net" achieved the highest COVID-19 detection accuracies: 96.24% for four classes, 95.51% for three classes, and 98.1% for two classes, outperforming most recently published methods in precision, accuracy, F1-score, and recall. In the present COVID-19 situation, "C19D-Net" can therefore be employed where test kits are in short supply to help radiologists improve their accuracy in detecting COVID-19 patients from XR images.
Affiliation(s)
- Prabhjot Kaur
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Shilpi Harnal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Rajeev Tiwari
- Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand, India
- Fahd S. Alharithi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
- Ahmed H. Almulihi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
- Irene Delgado Noya
- Higher Polytechnic School/Industrial Organization Engineering, Universidad Europea del Atlántico, 39011 Santander, Spain
- Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Nitin Goyal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India

30
Naik A, Edla DR. Lung nodule classification using combination of CNN, second and higher order texture features. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Lung cancer is the most common cancer worldwide, and identifying malignant tumors at an early stage is needed for diagnosis and treatment, avoiding progression to a later stage. In recent times, deep learning architectures such as CNNs have shown promising results in identifying malignant tumors in CT scans. In this paper, we combine CNN features with texture features, such as Haralick and gray-level run-length matrix features, to exploit both the high-level and spatial features extracted from lung nodules and improve classification accuracy. These features are classified with an SVM classifier instead of a softmax classifier in order to reduce overfitting. Our model was validated on the LUNA dataset and achieved an accuracy of 93.53%, sensitivity of 86.62%, specificity of 96.55%, and positive predictive value of 94.02%.
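A gray-level co-occurrence matrix and one Haralick-style feature (contrast), of the kind combined with CNN features in this abstract, can be computed directly. This is a minimal single-offset illustration with hypothetical names, not the authors' implementation:

```python
from collections import defaultdict

def glcm(img, dx=1, dy=0):
    """Normalized gray-level co-occurrence probabilities for pixel
    pairs at offset (dx, dy) in a 2-D integer image."""
    counts = defaultdict(int)
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[(img[y][x], img[ny][nx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def haralick_contrast(p):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j),
    one of the classic Haralick texture features."""
    return sum(((i - j) ** 2) * v for (i, j), v in p.items())

patch = [[0, 0, 1], [0, 1, 1], [2, 2, 3]]
# A uniform patch has contrast 0; heterogeneous patches score higher.
print(haralick_contrast(glcm(patch)))  # 0.5
```

Real pipelines compute several offsets and many Haralick features, then concatenate them with the CNN feature vector before the SVM.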
Affiliation(s)
- Amrita Naik
- Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla
- Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India

31
Qi S, Xu C, Li C, Tian B, Xia S, Ren J, Yang L, Wang H, Yu H. DR-MIL: deep represented multiple instance learning distinguishes COVID-19 from community-acquired pneumonia in CT images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106406. [PMID: 34536634 PMCID: PMC8426140 DOI: 10.1016/j.cmpb.2021.106406] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 09/02/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, and subtle differences with respect to CAP, make differential diagnosis non-trivial. METHODS We propose a deep represented multiple instance learning (DR-MIL) method to fulfill this task. A 3D volumetric CT scan of one patient is treated as one bag and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from the pre-trained ResNet-50 with fine-tuning and represented as one deep represented instance score (DRIS). Each bag with a DRIS for each initial instance is then input into a citation k-nearest neighbor search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL is compared with other potential strategies and state-of-the-art models. RESULTS DR-MIL displayed an accuracy of 95% and an area under curve of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p<0.001). As a means of content-based image retrieval, DR-MIL can identify images used as key instances, references, and citers for visual interpretation. CONCLUSIONS DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.
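The bag-of-instances setup above (a CT scan as a bag, selected slices as instances, prediction via nearest-neighbor search over bags) can be illustrated with a simplified minimal-distance kNN standing in for the paper's citation-kNN; the 1-D "DRIS" scores below are toy values and all names are hypothetical:

```python
import math
from collections import Counter

def inst_dist(a, b):
    """Euclidean distance between two instance feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def bag_dist(bag_a, bag_b):
    """Minimal instance-to-instance distance between two bags, a
    standard bag metric in MIL nearest-neighbor methods."""
    return min(inst_dist(a, b) for a in bag_a for b in bag_b)

def knn_bag_label(query, bags, labels, k=3):
    """Predict the query bag's label by majority vote over its
    k nearest bags."""
    ranked = sorted(range(len(bags)), key=lambda i: bag_dist(query, bags[i]))
    return Counter(labels[i] for i in ranked[:k]).most_common(1)[0][0]

# Toy bags of one-dimensional instance scores.
bags = [[[0.9], [0.8]], [[0.85]], [[0.1], [0.2]], [[0.15]]]
labels = ["COVID-19", "COVID-19", "CAP", "CAP"]
print(knn_bag_label([[0.88], [0.7]], bags, labels, k=3))  # COVID-19
```

Citation-kNN additionally considers which training bags would "cite" the query as a neighbor; the minimal-distance vote here is only the simplest variant of the same idea.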
Affiliation(s)
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Caiwen Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Bin Tian
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, China
- Shuyue Xia
- Department of Respiratory Medicine, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China
- Jigang Ren
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Liming Yang
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Hanlin Wang
- Department of Radiology, General Hospital of the Yangtze River Shipping, Wuhan, China
- Hui Yu
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China

32
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501 DOI: 10.1016/j.compbiomed.2021.104806] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/23/2021] [Accepted: 08/23/2021] [Indexed: 12/17/2022]
Abstract
Lung cancer has one of the highest mortalities of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches applying deep learning to CT scan data for pre-processing, lung segmentation, false positive reduction, lung nodule detection, segmentation, classification and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems and assess the performance of the selected studies on various datasets, which include LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques is found to range from 61.61% to 98.10%, and the value of the FPs per scan is between 0.125 and 32. In the selected classification studies, the accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on performance, deep learning based CAD technologies for detection and classification of pulmonary nodules achieve satisfactory results. However, there are still many challenges and limitations remaining including over-fitting, lack of interpretability and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Affiliation(s)
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
- Jingqian Chi
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jiaqi Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China

33
Joshi A, Sivaswamy J, Joshi GD. Lung nodule malignancy classification with weakly supervised explanation generation. J Med Imaging (Bellingham) 2021; 8:044502. [PMID: 34423071 DOI: 10.1117/1.jmi.8.4.044502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Accepted: 08/05/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Explainable AI aims to build systems that not only give high performance but also are able to provide insights that drive the decision making. However, deriving this explanation is often dependent on fully annotated (class label and local annotation) data, which are not readily available in the medical domain. Approach: This paper addresses the above-mentioned aspects and presents an innovative approach to classifying a lung nodule in a CT volume as malignant or benign, and generating a morphologically meaningful explanation for the decision in the form of attributes such as nodule margin, sphericity, and spiculation. A deep learning architecture that is trained using a multi-phase training regime is proposed. The nodule class label (benign/malignant) is learned with full supervision and is guided by semantic attributes that are learned in a weakly supervised manner. Results: Results of an extensive evaluation of the proposed system on the LIDC-IDRI dataset show good performance compared with state-of-the-art, fully supervised methods. The proposed model is able to label nodules (after full supervision) with an accuracy of 89.1% and an area under curve of 0.91 and to provide eight attributes scores as an explanation, which is learned from a much smaller training set. The proposed system's potential to be integrated with a sub-optimal nodule detection system was also tested, and our system handled 95% of false positive or random regions in the input well by labeling them as benign, which underscores its robustness. Conclusions: The proposed approach offers a way to address computer-aided diagnosis system design under the constraint of sparse availability of fully annotated images.
Affiliation(s)
- Aniket Joshi
- International Institute of Information Technology, Hyderabad, India
34
Yağın FH, Güldoğan E, Ucuzal H, Çolak C. A Computer-Assisted Diagnosis Tool for Classifying COVID-19 based on Chest X-Ray Images. KONURALP TIP DERGISI 2021. [DOI: 10.18521/ktd.947192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
35
Chang R, Qi S, Yue Y, Zhang X, Song J, Qian W. Predictive Radiomic Models for the Chemotherapy Response in Non-Small-Cell Lung Cancer based on Computerized-Tomography Images. Front Oncol 2021; 11:646190. [PMID: 34307127 PMCID: PMC8293296 DOI: 10.3389/fonc.2021.646190] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Accepted: 06/16/2021] [Indexed: 01/10/2023] Open
Abstract
The heterogeneity and complexity of non-small cell lung cancer (NSCLC) tumors mean that NSCLC patients at the same stage can have different chemotherapy prognoses. Accurate predictive models could recognize NSCLC patients likely to respond to chemotherapy so that they can be given personalized and effective treatment. We propose to identify predictive imaging biomarkers from pre-treatment CT images and construct a radiomic model that can predict the chemotherapy response in NSCLC. This single-center cohort study included 280 NSCLC patients who received first-line chemotherapy treatment. Non-contrast CT images were taken before and after the chemotherapy, and clinical information was collected. Based on the Response Evaluation Criteria in Solid Tumors and clinical criteria, the responses were classified into two categories: response (n = 145) and progression (n = 135); all data were then divided into a training cohort (224 patients) and an independent test cohort (56 patients). In total, 1629 features characterizing the tumor phenotype were extracted from a cube containing the tumor lesion cropped from the pre-chemotherapy CT images. After dimensionality reduction, predictive models of the chemotherapy response of NSCLC with different feature selection methods and different machine-learning classifiers (support vector machine, random forest, and logistic regression) were constructed. For the independent test cohort, the predictive model based on a random-forest classifier with 20 radiomic features achieved the best performance, with an accuracy of 85.7% and an area under the receiver operating characteristic curve of 0.941 (95% confidence interval, 0.898–0.982). Of the 20 selected features, four were first-order statistics of image intensity and the others were texture features. For nine features, there were significant differences between the response and progression groups (p < 0.001). In the response group, three features indicating heterogeneity were overrepresented and one feature indicating homogeneity was underrepresented. The proposed radiomic model with pre-chemotherapy CT features can predict the chemotherapy response of patients with non-small cell lung cancer and can help to stratify patients with NSCLC, thereby offering the prospect of better treatment.
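The first-order intensity statistics mentioned among the selected features (mean, variance, skewness, and entropy) can be computed directly from the tumor-region voxel intensities. This is an illustrative sketch; the 8-bin histogram used for the entropy is an arbitrary choice, and the function name is hypothetical:

```python
import math

def first_order_stats(voxels):
    """First-order statistics of tumor-region intensities: mean,
    variance, skewness, and Shannon entropy over a coarse histogram."""
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    std = math.sqrt(var)
    skew = (sum((v - mean) ** 3 for v in voxels) / n) / (std ** 3) if std else 0.0
    # Entropy over an 8-bin intensity histogram (bin count chosen arbitrarily).
    lo, hi = min(voxels), max(voxels)
    width = (hi - lo) / 8 or 1.0
    counts = [0] * 8
    for v in voxels:
        counts[min(int((v - lo) / width), 7)] += 1
    probs = [c / n for c in counts if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}

print(first_order_stats([10.0, 12.0, 11.0, 50.0, 13.0, 12.0]))
```

Texture features (the other 16 selected here) additionally capture spatial arrangement, which first-order statistics ignore.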
Affiliation(s)
- Runsheng Chang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Xiaoye Zhang
- Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China
- Jiangdian Song
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian
- Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, United States
36
Arumuga Maria Devi T, Mebin Jose VI. Three Stream Network Model for Lung Cancer Classification in the CT Images. OPEN COMPUTER SCIENCE 2021. [DOI: 10.1515/comp-2020-0145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Lung cancer is considered one of the deadliest diseases threatening human survival. Identifying lung cancer in its early stage from medical images is a challenging task because of the ambiguity in the lung regions. This paper proposes a new architecture to detect lung cancer from CT images. The proposed architecture uses a three-stream network to extract manual and automated features from the images. In two of the three streams, automated feature extraction and classification are performed by a residual deep neural network and a custom deep neural network, respectively. The third stream uses manual, handcrafted features obtained from high- and low-frequency sub-bands in the frequency domain, which are classified using a Support Vector Machine classifier. This makes the architecture robust enough to capture all the important features required to classify lung cancer from the input image, so feature information is unlikely to be missed. Finally, all the obtained prediction scores are combined by weighted fusion. The experimental results show 98.2% classification accuracy, which is higher than that of other existing methods.
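The weighted score fusion that closes this abstract can be sketched in a few lines. This is an illustrative example, not the paper's implementation: the stream names and weights below are hypothetical, and in practice the weights would be tuned on validation data.

```python
# Combine per-stream prediction scores by weighted fusion.
def weighted_fusion(scores, weights):
    """Return the weighted average of per-stream class scores."""
    assert len(scores) == len(weights)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Hypothetical per-stream probabilities that a CT patch is malignant.
streams = {"residual_cnn": 0.91, "custom_cnn": 0.84, "svm_handcrafted": 0.76}
fused = weighted_fusion(list(streams.values()), weights=[0.4, 0.35, 0.25])
```

The fused score is then thresholded (or argmax-ed over classes) to produce the final decision.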
37
Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques. Med Biol Eng Comput 2021; 59:1299-1310. [PMID: 34003394 DOI: 10.1007/s11517-021-02378-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Accepted: 05/06/2021] [Indexed: 12/19/2022]
Abstract
Early and automatic diagnosis of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) chest scans can provide early treatment for patients with lung cancer, as well as free doctors from time-consuming procedures. The purpose of this study is the automatic and reliable characterization of SPNs in CT scans extracted from a combined Positron Emission Tomography and Computed Tomography (PET/CT) system. To achieve this task, deep learning with Convolutional Neural Networks (CNN) is applied. The strategy of training specific CNN architectures from scratch and the strategy of transfer learning, utilizing state-of-the-art pre-trained CNNs, are compared and evaluated. To enhance the training sets, data augmentation is performed. The publicly available database of CT scans named the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) is also used to further expand the training set and is added to the PET/CT dataset. The results highlight the effectiveness of transfer learning and data augmentation for the classification task on small datasets. The best accuracy obtained on the PET/CT dataset reached 94%, using a proposed modification of a state-of-the-art CNN, VGG16, and enhancing the training set with the LIDC-IDRI dataset. Moreover, the proposed modification outperforms, in terms of sensitivity, several similar studies that exploit the benefits of transfer learning. Overview of the experiment setup: the two datasets containing nodule representations are combined to evaluate the effectiveness of transfer learning over the traditional approach of training Convolutional Neural Networks from scratch.
38
Kalane P, Patil S, Patil BP, Sharma DP. Automatic detection of COVID-19 disease using U-Net architecture based fully convolutional network. Biomed Signal Process Control 2021; 67:102518. [PMID: 33643425 PMCID: PMC7896819 DOI: 10.1016/j.bspc.2021.102518] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Revised: 02/03/2021] [Accepted: 02/18/2021] [Indexed: 12/28/2022]
Abstract
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which emerged from China at the end of 2019, causes a disease named COVID-19 that has now evolved into a pandemic. Among the detected COVID-19 cases, several are asymptomatic. The presently available Reverse Transcription - Polymerase Chain Reaction (RT-PCR) system for detecting COVID-19 falls short owing to the limited availability of test kits and the relatively low rate of positive results in the early stages of the disease, urging the need for alternative solutions. Tools based on Artificial Intelligence might help the world develop an additional COVID-19 mitigation policy. In this paper, an automated COVID-19 detection system is proposed, which uses indications from Computed Tomography (CT) images to train a deep learning model based on the U-Net architecture. The performance of the proposed system has been evaluated using 1000 chest CT images. The images were obtained from three different sources: two GitHub repositories and the Italian Society of Medical and Interventional Radiology's collection. Of the 1000 images, 552 were of normal persons and 448 were obtained from people affected by COVID-19. The proposed algorithm achieved a sensitivity and specificity of 94.86% and 93.47%, respectively, with an overall accuracy of 94.10%. The U-Net architecture used for chest CT image analysis has been found effective. The proposed method can be used for primary screening of persons affected by COVID-19 as an additional tool available to clinicians.
Affiliation(s)
- Sarika Patil
- Department of Electronics and Telecommunication Engineering, Sinhgad College of Engineering, Savitribai Phule Pune University, Pune, India
- B P Patil
- Army Institute of Technology, Savitribai Phule Pune University, Pune, India
- Davinder Pal Sharma
- Department of Physics, The University of the West Indies, St. Augustine, Trinidad and Tobago
39
Xie Y, Yu Z. Promotion time cure rate model with a neural network estimated nonparametric component. Stat Med 2021; 40:3516-3532. [PMID: 33928665 DOI: 10.1002/sim.8980] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 03/18/2021] [Accepted: 03/25/2021] [Indexed: 11/07/2022]
Abstract
Promotion time cure rate models (PCM) are often used to model survival data with a cure fraction. Medical images, or biomarkers derived from them, can be key predictors in survival models. However, incorporating images into the PCM is challenging with traditional nonparametric methods such as splines. We propose using a neural network to model the effect of nonparametric or unstructured predictors in the PCM context. An expectation-maximization algorithm, with a neural network for the M-step, is used for parameter estimation. Asymptotic properties of the proposed estimates are derived. Simulation studies show good performance in terms of both prediction and estimation. Finally, we apply our methods to analyze brain images from the Open Access Series of Imaging Studies (OASIS) data.
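For readers unfamiliar with the model named in this abstract, the standard textbook form of the promotion time cure rate model is as follows (a general reminder of the classical formulation, not reproduced from the paper itself):

```latex
% Promotion time cure rate model: N ~ Poisson(\theta(x)) latent competing
% causes, each with promotion-time distribution F(t), gives the population
% survival function
S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\,F(t)\},
\qquad \theta(x) = \exp(x^{\top}\beta),
% so the cure fraction is the limit
\lim_{t \to \infty} S_{\mathrm{pop}}(t \mid x) = \exp\{-\theta(x)\}.
```

The paper's contribution is, in essence, to let a neural network replace the parametric or spline-based component of this specification when the predictor is an image.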
Affiliation(s)
- Yujing Xie
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Zhangsheng Yu
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China; Department of Bioinformatics and Biostatistics, SJTU-Yale Joint Center for Biostatistics, Shanghai Jiao Tong University, Shanghai, China
40
Zhang B, Qi S, Pan X, Li C, Yao Y, Qian W, Guan Y. Deep CNN Model Using CT Radiomics Feature Mapping Recognizes EGFR Gene Mutation Status of Lung Adenocarcinoma. Front Oncol 2021; 10:598721. [PMID: 33643902 PMCID: PMC7907520 DOI: 10.3389/fonc.2020.598721] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Accepted: 12/17/2020] [Indexed: 12/12/2022] Open
Abstract
Recognizing the epidermal growth factor receptor (EGFR) gene mutation status in lung adenocarcinoma (LADC) has become a prerequisite for deciding whether EGFR-tyrosine kinase inhibitor (EGFR-TKI) medicine can be used. Polymerase chain reaction assay or gene sequencing can measure EGFR status; however, tissue samples obtained by surgery or biopsy are required. We propose to develop deep learning models that recognize EGFR status using radiomics features extracted from non-invasive CT images. Preoperative CT images, EGFR mutation status, and clinical data were collected for a cohort of 709 patients (the primary cohort) and an independent cohort of 205 patients. After 1,037 CT-based radiomics features were extracted from each lesion region, 784 discriminative features were selected to construct a feature mapping. A Squeeze-and-Excitation (SE) Convolutional Neural Network (SE-CNN) was designed and trained to recognize EGFR status from the radiomics feature mapping. The SE-CNN model was trained and validated using 638 patients from the primary cohort, tested using the remaining 71 patients (the internal test cohort), and further tested using the independent 205 patients (the external test cohort). Furthermore, the SE-CNN model was compared with machine learning (ML) models using radiomics features, clinical features, and both. Compared with EGFR(+) patients, EGFR(-) patients were younger, more often female, had larger lesion volumes, and had lower odds of the acinar-predominant adenocarcinoma (APA) subtype. Most of the discriminative features describe texture (614, 78.3%), followed by first-order intensity features (158, 20.1%) and shape features (12, 1.5%). The SE-CNN model can recognize EGFR mutation status with an AUC of 0.910 and 0.841 for the internal and external test cohorts, respectively. It outperforms the CNN model without SE, the fine-tuned VGG16 and VGG19, three ML models, and state-of-the-art models.
Utilizing the radiomics feature mapping extracted from non-invasive CT images, SE-CNN can precisely recognize the EGFR mutation status of LADC patients. The proposed method, combining radiomics features and deep learning, is superior to ML methods and can be extended to other medical applications. The proposed SE-CNN model may help guide decisions on the use of EGFR-TKI medicine.
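The Squeeze-and-Excitation block at the core of the SE-CNN above can be sketched in numpy. This is an editorial illustration of the generic SE mechanism (squeeze by global average pooling, excitation by a bottleneck MLP with a sigmoid gate, then channel-wise rescaling); the shapes and reduction ratio below are invented, and the paper's actual network operates on radiomics feature mappings.

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """x: feature map of shape (C, H, W); returns the channel-recalibrated map."""
    # Squeeze: global average pooling, one descriptor per channel.
    z = x.mean(axis=(1, 2))                      # (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid gates.
    h = np.maximum(0.0, w1 @ z + b1)             # (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))     # (C,) gates in (0, 1)
    # Scale: reweight each channel by its gate.
    return x * s[:, None, None]

# Toy shapes: 8 channels, 4x4 spatial map, reduction ratio r = 2.
rng = np.random.default_rng(1)
C, H, W, r = 8, 4, 4, 2
x = rng.normal(size=(C, H, W))
w1, b1 = rng.normal(size=(C // r, C)), np.zeros(C // r)
w2, b2 = rng.normal(size=(C, C // r)), np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, which is what lets the network learn which feature channels to emphasize.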
Affiliation(s)
- Baihua Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Xiaohuan Pan
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, United States
- Wei Qian
- Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, United States
- Yubao Guan
- Department of Radiology, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
41
Lung Nodule Classification Using Biomarkers, Volumetric Radiomics, and 3D CNNs. J Digit Imaging 2021; 34:647-666. [PMID: 33532893 PMCID: PMC8329152 DOI: 10.1007/s10278-020-00417-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 05/08/2020] [Accepted: 12/30/2020] [Indexed: 02/07/2023] Open
Abstract
We present a hybrid algorithm to estimate lung nodule malignancy that combines imaging biomarkers from radiologists' annotations with image classification of CT scans. Our algorithm employs a 3D Convolutional Neural Network (CNN) as well as a Random Forest in order to combine CT imagery with biomarker annotations and volumetric radiomic features. We analyze and compare the performance of the algorithm using imagery only, biomarkers only, imagery + biomarkers, imagery + volumetric radiomic features, and finally imagery + biomarkers + volumetric features in order to classify the suspicion level of nodule malignancy. The National Cancer Institute (NCI) Lung Image Database Consortium (LIDC-IDRI) dataset is used to train and evaluate the classification task. We show that incorporating semi-supervised learning by means of K-Nearest-Neighbors (KNN) can increase the available training sample size of the LIDC-IDRI, thereby further improving the accuracy of malignancy estimation for most of the models tested, although there is no significant improvement from KNN semi-supervised learning when image classification with CNNs and volumetric features is combined with descriptive biomarkers. Unexpectedly, we also show that a model using image biomarkers alone is more accurate than one that combines biomarkers with volumetric radiomics, 3D CNNs, and semi-supervised learning. We discuss the possibility that this result is influenced by cognitive bias in LIDC-IDRI, because malignancy estimates were recorded by the same radiologist panel as the biomarkers, as well as future work to incorporate pathology information over a subset of study participants.
42
Abstract
Lung cancer is one of the most common diseases among humans and a major cause of growing mortality. Medical experts believe that diagnosing lung cancer in its early phase can reduce deaths, with lung nodules revealed through computed tomography (CT) screening. Examining the vast number of CT images can reduce the risk. However, CT scan images incorporate a tremendous amount of information about nodules, and the increasing number of images makes their accurate assessment a very challenging task for radiologists. Recently, various methods based on handcrafted and learned features have evolved to assist radiologists. In this paper, we review the promising approaches developed for computer-aided diagnosis (CAD) systems that detect and classify nodules through the analysis of CT images, providing assistance to radiologists, and present a comprehensive analysis of the different methods.
Affiliation(s)
- Shailesh Kumar Thakur
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Dhirendra Pratap Singh
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Jaytrilok Choudhary
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
43
Cui L, Han S, Qi S, Duan Y, Kang Y, Luo Y. Deep symmetric three-dimensional convolutional neural networks for identifying acute ischemic stroke via diffusion-weighted images. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2021; 29:551-566. [PMID: 33967077 DOI: 10.3233/xst-210861] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
BACKGROUND Acute ischemic stroke (AIS) results in high morbidity, disability, and mortality. Early and automatic diagnosis of AIS can help clinicians administer the appropriate interventions. OBJECTIVE To develop a deep symmetric 3D convolutional neural network (DeepSym-3D-CNN) for automated AIS diagnosis via diffusion-weighted imaging (DWI) images. METHODS This study includes 190 subjects (97 AIS and 93 non-AIS), for whom both DWI and Apparent Diffusion Coefficient (ADC) images were collected. 3D DWI brain images are split into left and right hemispheres and input into two paths. A map with 125×253×14×12 features is extracted by each path of Inception modules. After the features computed by the two paths are subtracted following L2 normalization, four multi-scale convolution layers produce the final prediction. Three comparative models using DWI images are constructed: MedicalNet with transfer learning, Simple DeepSym-3D-CNN (each 3D Inception module replaced by a simple 3D-CNN layer), and L1 DeepSym-3D-CNN (L2 normalization replaced by L1 normalization). Moreover, the performance of DeepSym-3D-CNN is also investigated using ADC images and the combination of DWI and ADC images as inputs. The performance of all models is evaluated by 5-fold cross-validation, and the values of the area under the ROC curve (AUC) are compared by DeLong's test. RESULTS DeepSym-3D-CNN achieves an accuracy of 0.850 and an AUC of 0.864. DeLong's test of AUC values demonstrates that DeepSym-3D-CNN significantly outperforms the comparative models (p < 0.05). The highlighted regions in the feature maps of DeepSym-3D-CNN spatially match the AIS lesions. Meanwhile, DeepSym-3D-CNN using DWI images presents a significantly higher AUC than when using either ADC images or combined DWI-ADC images, based on DeLong's test (p < 0.05). CONCLUSIONS DeepSym-3D-CNN is a potential method for automatically identifying AIS via DWI images and can be extended to other diseases with asymmetric lesions.
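The hemispheric-symmetry idea behind DeepSym-3D-CNN can be illustrated with a small numpy sketch: split a volume into left and right halves, embed each with the same feature extractor, L2-normalize, and take the difference; a large difference suggests an asymmetric lesion. This is not the authors' implementation: the "extractor" below is a trivial mean/std pooling stand-in for the paper's Inception modules, and the volume shapes are toy-sized.

```python
import numpy as np

def hemisphere_features(vol):
    # Trivial stand-in extractor: mean and std of the hemisphere.
    return np.array([vol.mean(), vol.std()])

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def symmetry_difference(volume):
    """volume: (X, Y, Z) array; split along X into left/right hemispheres."""
    mid = volume.shape[0] // 2
    left, right = volume[:mid], volume[mid:][::-1]  # mirror the right half
    f_l = l2_normalize(hemisphere_features(left))
    f_r = l2_normalize(hemisphere_features(right))
    return f_l - f_r

rng = np.random.default_rng(2)
healthy = rng.normal(size=(16, 8, 8))
lesioned = healthy.copy()
lesioned[:8] += 3.0          # bright "lesion" confined to one hemisphere
d_healthy = np.linalg.norm(symmetry_difference(healthy))
d_lesion = np.linalg.norm(symmetry_difference(lesioned))
```

In the paper, this normalized difference map (rather than a scalar) is what the subsequent multi-scale convolution layers classify.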
Affiliation(s)
- Liyuan Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shanhua Han
- Radiology Department, Shanghai Fourth People's Hospital Affiliated to Tongji University School of Medicine, Shanghai, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Yan Kang
- Medical Device Innovation Research Center, Shenzhen Technology University, Shenzhen, China; Engineering Research Center of Medical Imaging and Intelligent Analysis, Ministry of Education, Northeastern University, Shenyang, China
- Yu Luo
- Radiology Department, Shanghai Fourth People's Hospital Affiliated to Tongji University School of Medicine, Shanghai, China
44
Hussain E, Hasan M, Rahman MA, Lee I, Tamanna T, Parvez MZ. CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. CHAOS, SOLITONS, AND FRACTALS 2021; 142:110495. [PMID: 33250589 PMCID: PMC7682527 DOI: 10.1016/j.chaos.2020.110495] [Citation(s) in RCA: 133] [Impact Index Per Article: 44.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 11/18/2020] [Indexed: 05/02/2023]
Abstract
BACKGROUND AND OBJECTIVE Coronavirus disease 2019 (COVID-19) is a viral disease that causes serious pneumonia and affects different parts of the body, with severity ranging from mild to severe depending on the patient's immune system. The infection was first reported in Wuhan, China, in December 2019 and afterward became a global pandemic, spreading rapidly around the world. As the virus spreads through human-to-human contact, it has affected our lives in a devastating way, including vigorous pressure on public health systems, the world economy, the education sector, workplaces, and shopping malls. Preventing viral spread requires the early detection of positive cases and treating infected patients as quickly as possible. The need for COVID-19 testing kits has increased, and many developing countries are facing a shortage of testing kits as new cases increase day by day. In this situation, recent research using radiology imaging techniques (such as X-ray and CT scans) can prove helpful in detecting COVID-19, as X-ray and CT scan images provide important information about the disease caused by the COVID-19 virus. The latest data mining and machine learning techniques, such as Convolutional Neural Networks (CNN), can be applied along with X-ray and CT scan images of the lungs for accurate and rapid detection of the disease, assisting in mitigating the scarcity of testing kits. METHODS Hence, a novel CNN model called CoroDet for the automatic detection of COVID-19 from raw chest X-ray and CT scan images has been proposed in this study. CoroDet is developed to serve as an accurate diagnostic tool for 2-class classification (COVID and Normal), 3-class classification (COVID, Normal, and non-COVID pneumonia), and 4-class classification (COVID, Normal, non-COVID viral pneumonia, and non-COVID bacterial pneumonia). RESULTS The performance of our proposed model was compared with ten existing techniques for COVID detection in terms of accuracy.
Our proposed model produced a classification accuracy of 99.1% for 2-class classification, 94.2% for 3-class classification, and 91.2% for 4-class classification, which, to the best of our knowledge, is better than the state-of-the-art methods used for COVID-19 detection. Moreover, the X-ray dataset that we prepared for the evaluation of our method is, as far as we know, the largest dataset for COVID detection. CONCLUSION The experimental results of our proposed method, CoroDet, indicate its superiority over existing state-of-the-art methods. CoroDet may assist clinicians in making appropriate decisions for COVID-19 detection and may also mitigate the scarcity of testing kits.
Affiliation(s)
- Emtiaz Hussain
- Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh
- Mahmudul Hasan
- Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh
- Md Anisur Rahman
- School of Computing & Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Ickjai Lee
- Discipline of Information Technology, College of Science & Engineering, James Cook University, Cairns, QLD 4870, Australia
- Tasmi Tamanna
- Department of Immunology, Bangladesh University of Health Sciences, Dhaka, Bangladesh
45
Blanc D, Racine V, Khalil A, Deloche M, Broyelle JA, Hammouamri I, Sinitambirivoutin E, Fiammante M, Verdier E, Besson T, Sadate A, Lederlin M, Laurent F, Chassagnon G, Ferretti G, Diascorn Y, Brillet PY, Cassagnes L, Caramella C, Loubet A, Abassebay N, Cuingnet P, Ohana M, Behr J, Ginzac A, Veyssiere H, Durando X, Bousaïd I, Lassau N, Brehant J. Artificial intelligence solution to classify pulmonary nodules on CT. Diagn Interv Imaging 2020; 101:803-810. [PMID: 33168496 DOI: 10.1016/j.diii.2020.10.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 10/12/2020] [Accepted: 10/13/2020] [Indexed: 12/20/2022]
Abstract
PURPOSE The purpose of this study was to create an algorithm to detect pulmonary nodules and classify them into two categories based on whether their volume is greater than 100 mm3, using machine learning and deep learning techniques. MATERIALS AND METHODS The dataset used to train the model was provided by the organization team of the SFR (French Radiological Society) Data Challenge 2019. An asynchronous and parallel three-stage pipeline was developed to process all the data (a data "pre-processing" stage, a "nodule detection" stage, and a "classifier" stage). Lung segmentation was achieved using a 3D U-NET algorithm; nodule detection was done using 3D Retina-UNET; and the classifier stage used a support vector machine algorithm on selected features. Performance was assessed using the area under the receiver operating characteristics curve (AUROC). RESULTS The pipeline showed good performance for pathological nodule detection and patient diagnosis. With the preparation dataset, an AUROC of 0.9058 (95% confidence interval [CI]: 0.8746-0.9362) was obtained, yielding 87% accuracy (95% CI: 84.83%-91.03%) for the "nodule detection" stage, corresponding to 86% specificity (95% CI: 82%-92%) and 89% sensitivity (95% CI: 84.83%-91.03%). CONCLUSION A fully functional pipeline using 3D U-NET, 3D Retina-UNET, and a classifier stage with a support vector machine algorithm was developed, resulting in high capabilities for pulmonary nodule classification.
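The AUROC metric used throughout these studies has a simple rank-based interpretation: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch, with made-up scores (the study's 0.9058 AUROC came from its own pipeline, not from these numbers):

```python
# AUROC as the Mann-Whitney probability that a positive outranks a negative,
# with ties counted as half a win.
def auroc(pos_scores, neg_scores):
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

positives = [0.9, 0.8, 0.75, 0.6]   # hypothetical scores for nodules > 100 mm^3
negatives = [0.7, 0.4, 0.3, 0.2]
a = auroc(positives, negatives)
```

This pairwise form is O(n*m); production code would use a sorted-rank implementation, but the value is identical.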
Affiliation(s)
- D Blanc
- QuantaCell, IRMB, Hôpital Saint-Eloi, 34090 Montpellier, France
- V Racine
- QuantaCell, IRMB, Hôpital Saint-Eloi, 34090 Montpellier, France
- A Khalil
- Department of Radiology, Neuroradiology unit, Assistance Publique-Hôpitaux de Paris, Hôpital Bichat Claude Bernard, 75018 Paris, France; Université de Paris, 75010 Paris, France
- M Deloche
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- J-A Broyelle
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- I Hammouamri
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- M Fiammante
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- E Verdier
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- T Besson
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- A Sadate
- Department of Radiology and Medical Imaging, CHU Nîmes, University Montpellier, EA2415, 30029 Nîmes, France
- M Lederlin
- Department of Radiology, Hôpital Universitaire Pontchaillou, 35000 Rennes, France
- F Laurent
- Department of thoracic and cardiovascular Imaging, Respiratory Diseases Service, Respiratory Functional Exploration Service, Hôpital universitaire de Bordeaux, CIC 1401, 33600 Pessac, France
- G Chassagnon
- Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, 75014 Paris, France; Université de Paris, 75006 Paris, France
- G Ferretti
- Department of Radiology and Medical Imaging, CHU Grenoble Alpes, 38700 Grenoble, France
- Y Diascorn
- Department of Radiology, Hôpital Universitaire Pasteur, Nice, France
- P-Y Brillet
- Inserm UMR 1272, Université Sorbonne Paris Nord, Assistance Publique-Hôpitaux de Paris, Department of Radiology, Hôpital Avicenne, 93430 Bobigny, France
- Lucie Cassagnes
- Department of radiology B, CHU Gabriel Montpied, 63003 Clermont-Ferrand, France
- C Caramella
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France
- A Loubet
- Department of Neuroradiology, Hôpital Gui-de-Chauliac, CHRU de Montpellier, 34000 Montpellier, France
- N Abassebay
- Department of Radiology, CH Douai, 59507 Douai, France
- P Cuingnet
- Department of Radiology, CH Douai, 59507 Douai, France
- M Ohana
- Department of Radiology, Nouvel Hôpital Civil, 67000 Strasbourg, France
- J Behr
- Department of Radiology, CHRU de Jean-Minjoz Besançon, 25030 Besançon, France
- A Ginzac
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France
- H Veyssiere
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France
- X Durando
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France; Department of Medical Oncology, Centre Jean Perrin, 63011 Clermont-Ferrand, France
- I Bousaïd
- Digital Transformation and Information Systems Division, Gustave Roussy, 94800 Villejuif, France
- N Lassau
- Multimodal Biomedical Imaging Laboratory Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France
- J Brehant
- Department of Radiology, Centre Jean Perrin, 63011 Clermont-Ferrand, France
46
Mastouri R, Khlifa N, Neji H, Hantous-Zannad S. A bilinear convolutional neural network for lung nodules classification on CT images. Int J Comput Assist Radiol Surg 2020; 16:91-101. [PMID: 33140257 DOI: 10.1007/s11548-020-02283-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 10/21/2020] [Indexed: 12/19/2022]
Abstract
PURPOSE Lung cancer is the most frequent cancer worldwide and the leading cause of cancer-related deaths. Early detection and treatment at the lung-nodule stage improve the prognosis. In this study, a new classification approach named bilinear convolutional neural network (BCNN) was proposed for the classification of lung nodules on CT images. METHODS The convolutional neural network (CNN) is considered the leading model in deep learning and is highly recommended for the design of computer-aided diagnosis systems thanks to its promising results on medical image analysis. The proposed BCNN scheme consists of two-stream CNNs (VGG16 and VGG19) as feature extractors, followed by a support vector machine (SVM) classifier for false-positive reduction. A series of experiments was performed by introducing the bilinear vector features extracted from three BCNN combinations into various types of SVMs, which we adopted instead of the original softmax, to determine the most suitable classifier for our study. RESULTS The method's performance was evaluated on 3186 images from the public LUNA16 database. We found that the BCNN [VGG16, VGG19] combination, with and without SVM, surpassed the [VGG16]2 and [VGG19]2 architectures, achieving an accuracy rate of 91.99% against 91.84% and 90.58%, respectively, and an area under the curve (AUC) of 95.9% against 94.8% and 94%, respectively. CONCLUSION The proposed method improved the outcomes of conventional CNN-based architectures and showed promising results compared with other works, at an affordable complexity. We believe that the proposed BCNN can be used as an assessment tool helping radiologists make a precise analysis of lung nodules and an early diagnosis of lung cancers.
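The "bilinear vector features" this abstract refers to come from bilinear pooling: at each spatial location the outer product of the two streams' feature vectors is taken, summed over locations, then passed through a signed square root and L2 normalization. A minimal numpy sketch of the generic operation (the feature maps below are random stand-ins, not VGG16/VGG19 activations, and the channel counts are toy-sized):

```python
import numpy as np

def bilinear_pool(fa, fb):
    """fa: (Ca, H, W), fb: (Cb, H, W) -> normalized (Ca*Cb,) bilinear vector."""
    ca, h, w = fa.shape
    cb = fb.shape[0]
    a = fa.reshape(ca, h * w)
    b = fb.reshape(cb, h * w)
    phi = (a @ b.T).ravel()                        # sum of per-location outer products
    phi = np.sign(phi) * np.sqrt(np.abs(phi))      # signed square root
    return phi / (np.linalg.norm(phi) + 1e-12)     # L2 normalization

rng = np.random.default_rng(3)
f_stream_a = rng.normal(size=(4, 7, 7))
f_stream_b = rng.normal(size=(5, 7, 7))
feat = bilinear_pool(f_stream_a, f_stream_b)       # vector fed to the classifier
```

The resulting fixed-length vector is what the paper feeds to an SVM in place of the original softmax head.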
Affiliation(s)
- Rekka Mastouri
- Higher Institute of Medical Technologies of Tunis, Research Laboratory of Biophysics and Medical Technologies, University of Tunis el Manar, 1006, Tunis, Tunisia.
- Nawres Khlifa
- Higher Institute of Medical Technologies of Tunis, Research Laboratory of Biophysics and Medical Technologies, University of Tunis el Manar, 1006, Tunis, Tunisia.
- Henda Neji
- Faculty of Medicine of Tunis, University of Tunis el Manar, 1007, Tunis, Tunisia; Medical Imaging Department, Abderrahmen Mami Hospital, 2035, Ariana, Tunisia.
- Saoussen Hantous-Zannad
- Faculty of Medicine of Tunis, University of Tunis el Manar, 1007, Tunis, Tunisia; Medical Imaging Department, Abderrahmen Mami Hospital, 2035, Ariana, Tunisia.
|
47
|
Liu Z, Li L, Li T, Luo D, Wang X, Luo D. Does a Deep Learning-Based Computer-Assisted Diagnosis System Outperform Conventional Double Reading by Radiologists in Distinguishing Benign and Malignant Lung Nodules? Front Oncol 2020; 10:545862. [PMID: 33163395 PMCID: PMC7581733 DOI: 10.3389/fonc.2020.545862] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 09/14/2020] [Indexed: 01/10/2023] Open
Abstract
Background In differentiating indeterminate pulmonary nodules, multiple studies have indicated the superiority of deep learning–based computer-assisted diagnosis systems (DL-CADx) over conventional double reading by radiologists, a finding that needs external validation. Our aim was therefore to externally validate the performance of a commercial DL-CADx in differentiating benign and malignant lung nodules. Methods In this retrospective study, 233 patients with 261 pathologically confirmed lung nodules were enrolled. Double reading was used to rate each nodule on a four-scale malignancy scoring system: unlikely (0–25%), malignancy cannot be completely excluded (25–50%), highly likely (50–75%), and considered malignant (75–100%), with any disagreement resolved through discussion. DL-CADx automatically rated each nodule with a malignancy likelihood ranging from 0 to 100%, which was then quadrichotomized accordingly. The intraclass correlation coefficient (ICC) was used to evaluate agreement in malignancy risk rating between DL-CADx and double reading, with ICC values of <0.5, 0.5 to 0.75, 0.75 to 0.9, and >0.9 indicating poor, moderate, good, and perfect agreement, respectively. With a malignancy likelihood >50% as the cut-off for malignancy and pathological results as the gold standard, sensitivity, specificity, and accuracy were calculated for double reading and DL-CADx separately. Results Among the 261 nodules, 247 were successfully detected by DL-CADx, a detection rate of 94.7%. Regarding malignancy rating, DL-CADx was in moderate agreement with double reading (ICC = 0.555, 95% CI 0.424 to 0.655). DL-CADx misdiagnosed 40 true malignant nodules as benign and 30 true benign nodules as malignant, with sensitivity, specificity, and accuracy of 79.2%, 45.5%, and 71.7%, respectively. In contrast, double reading achieved better performance, misdiagnosing 16 true malignant nodules as benign and 26 true benign nodules as malignant, with sensitivity, specificity, and accuracy of 91.7%, 52.7%, and 83.0%, respectively. Conclusion Compared with double reading, the DL-CADx we used still shows inferior performance in differentiating malignant and benign nodules.
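The sensitivity, specificity, and accuracy figures above follow directly from confusion-matrix counts. A minimal sketch; the DL-CADx counts used here (TP = 152, FN = 40, TN = 25, FP = 30) are an assumption reconstructed from the reported rates and the 247 detected nodules, not values stated by the study.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the DL-CADx figures reported above
sens, spec, acc = diagnostic_metrics(tp=152, fn=40, tn=25, fp=30)
print(f"{sens:.1%} {spec:.1%} {acc:.1%}")  # 79.2% 45.5% 71.7%
```

The printed values match the abstract's reported rates, which is what makes the reconstructed counts plausible.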
Affiliation(s)
- Zhou Liu
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Li Li
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Tianran Li
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Douqiang Luo
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China; Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Xiaoliang Wang
- Department of Pathology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China
- Dehong Luo
- Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, China; Department of Radiology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
|
48
|
Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2020; 2020:8975078. [PMID: 32318102 PMCID: PMC7149413 DOI: 10.1155/2020/8975078] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 01/30/2020] [Accepted: 02/12/2020] [Indexed: 01/22/2023]
Abstract
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result depends heavily on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. To alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet structure as the initial model, the deep residual network is constructed by combining residual learning and migration (transfer) learning. The proposed approach is verified by experiments on lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained with ten-fold cross-validation. Compared with a conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, the accuracy improved by 1.75% and 2.42% and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of the proposed method in lung nodule classification for CT images.
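The residual learning this method builds on adds an identity shortcut around each learned transformation, so the network learns F(x) and outputs F(x) + x. A minimal NumPy sketch of a single residual unit follows; it is a toy illustration of the idea, not the paper's 50-layer ResNet, and the weight matrices are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """A minimal residual unit: the block computes F(x) = W2·ReLU(W1·x) and
    outputs ReLU(F(x) + x); the identity shortcut lets gradients bypass the
    learned transformation."""
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)   # skip connection

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 8))            # a batch of 2 toy feature vectors
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (2, 8)

# With zero weights the block reduces to ReLU(x): the identity mapping is
# trivially representable, which is what makes very deep stacks trainable.
y0 = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```

In transfer (migration) learning, such pretrained blocks are kept and only the final classification layers are retrained on the target CT data.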
|
49
|
Wang L, Chen A, Zhang Y, Wang X, Zhang Y, Shen Q, Xue Y. AK-DL: A Shallow Neural Network Model for Diagnosing Actinic Keratosis with Better Performance Than Deep Neural Networks. Diagnostics (Basel) 2020; 10:diagnostics10040217. [PMID: 32294962 PMCID: PMC7235884 DOI: 10.3390/diagnostics10040217] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 04/07/2020] [Accepted: 04/09/2020] [Indexed: 11/29/2022] Open
Abstract
Actinic keratosis (AK) is one of the most common precancerous skin lesions and is easily confused with benign keratosis (BK). At present, the diagnosis of AK depends mainly on histopathological examination, and the lesion is easily overlooked in the early stage, missing the opportunity for treatment. In this study, we designed a shallow convolutional neural network (CNN) named actinic keratosis deep learning (AK-DL) and further developed an intelligent diagnostic system for AK based on the iOS platform. After data preprocessing, the AK-DL model was trained and tested with AK and BK images from the HAM10000 dataset. We further compared it with mainstream deep CNN models, such as AlexNet, GoogLeNet, and ResNet, as well as traditional medical image processing algorithms. Our results showed that on the AK dataset the performance of AK-DL was better than that of the mainstream deep CNN models and the traditional medical image processing algorithms. The recognition accuracy of AK-DL was 0.925, the area under the receiver operating characteristic curve (AUC) was 0.887, and the training time was only 123.0 s. An iOS app for the intelligent diagnostic system was developed based on the AK-DL model for accurate and automatic diagnosis of AK. Our results indicate that it is better to employ a shallow CNN for the recognition of AK.
Affiliation(s)
- Liyang Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
- Angxuan Chen
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
- Yan Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
- Xiaoya Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
- Yu Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
- Qun Shen
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
- Yong Xue
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
- Correspondence:
|
50
|
Accurate Identification of Tomograms of Lung Nodules Using CNN: Influence of the Optimizer, Preprocessing and Segmentation. LECTURE NOTES IN COMPUTER SCIENCE 2020. [PMCID: PMC7297567 DOI: 10.1007/978-3-030-49076-8_23] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
The diagnosis of pulmonary nodules plays an important role in the treatment of lung cancer, so improving the diagnosis is a primary concern. This article compares results in identifying computed tomography scans with pulmonary nodules using different optimizers (Adam and Nadam); the effect of preprocessing and segmentation techniques on CNNs is also thoroughly explored. The dataset employed was Lung TIME, which is publicly available. When no preprocessing or segmentation was applied, training accuracy above 90.24% and test accuracy above 86.8% were obtained. In contrast, when segmentation was applied without preprocessing, training accuracy above 97.19% and test accuracy above 95.07% were reached. When both preprocessing and segmentation were applied, training accuracy above 96.41% and test accuracy above 94.71% were achieved. On average, the Adam optimizer scored a training accuracy of 96.17% and a test accuracy of 95.23%; the Nadam optimizer obtained 96.25% and 95.2%, respectively. It is concluded that the CNN performs well even when working with noisy images. The performance of the network was similar with preprocessing and segmentation as with segmentation alone. It can also be inferred that applying preprocessing and segmentation is an excellent option when improved accuracy in CNNs is required.
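The two optimizers compared above differ only in a momentum "look-ahead": Nadam applies Nesterov-style momentum inside Adam's bias-corrected update. A toy NumPy sketch follows, using a simplified form of the Nadam look-ahead rather than the exact Keras implementation; the quadratic objective f(θ) = θ² and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              nesterov=False):
    """One Adam update; with nesterov=True it becomes a Nadam-style variant
    that applies a momentum look-ahead to the bias-corrected first moment."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    if nesterov:                            # simplified Nadam look-ahead
        m_hat = b1 * m_hat + (1 - b1) * grad / (1 - b1 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2*theta) with both variants
for nesterov in (False, True):
    theta, m, v = 5.0, 0.0, 0.0
    for t in range(1, 2001):
        theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05,
                                nesterov=nesterov)
    print(f"{'Nadam' if nesterov else 'Adam'}: theta = {theta:.4f}")
```

Both variants drive θ toward the minimum; on problems like the nodule classification above, the article likewise found the two nearly indistinguishable on average.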
|