1.
Lin CY, Guo SM, Lien JJJ, Tsai TY, Liu YS, Lai CH, Hsu IL, Chang CC, Tseng YL. Development of a modified 3D region proposal network for lung nodule detection in computed tomography scans: a secondary analysis of lung nodule datasets. Cancer Imaging 2024;24:40. PMID: 38509635; PMCID: PMC10953193; DOI: 10.1186/s40644-024-00683-x.
Abstract
BACKGROUND Low-dose computed tomography (LDCT) has been shown to be useful in early lung cancer detection. This study aimed to develop a novel deep learning model for detecting pulmonary nodules on chest LDCT images. METHODS In this secondary analysis, three lung nodule datasets, Lung Nodule Analysis 2016 (LUNA16), Lung Nodule Received Operation (LNOP), and Lung Nodule in Health Examination (LNHE), were used to train and test deep learning models. The 3D region proposal network (RPN) was modified through a series of pruning experiments to improve predictive performance. Each modified deep learning model was evaluated based on sensitivity and the competition performance metric (CPM). The performance of the modified 3D RPN trained on the three datasets was further evaluated by 10-fold cross-validation, and temporal validation was conducted to assess its reliability for detecting lung nodules. RESULTS The pruning experiments indicated that the modified 3D RPN comprising a module applying the Cross Stage Partial Network (CSPNet) approach to ResNeXt (CSP-ResNeXt), a feature pyramid network (FPN), the nearest anchor method, and post-processing masking had the best predictive performance, with a CPM of 92.2%. The modified 3D RPN trained on the LUNA16 dataset had the highest CPM (90.1%), followed by the LNOP (CPM: 74.1%) and LNHE (CPM: 70.2%) datasets. When the modified 3D RPN was trained and tested on the same dataset, the sensitivities were 94.6%, 84.8%, and 79.7% for LUNA16, LNOP, and LNHE, respectively. In the temporal validation analysis, the modified 3D RPN achieved a CPM of 71.6% and a sensitivity of 85.7% on the LNOP test set, and a CPM of 71.7% and a sensitivity of 83.5% on the LNHE test set.
CONCLUSION A modified 3D RPN for detecting lung nodules on LDCT scans was designed and validated, which may serve as a computer-aided diagnosis system to facilitate lung nodule detection and lung cancer diagnosis.
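The competition performance metric (CPM) reported throughout this abstract is the LUNA16 challenge metric: the average sensitivity at seven predefined false-positive rates (1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs per scan). A minimal sketch of how it can be computed from an FROC curve follows; the function name and the use of linear interpolation are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def cpm(fp_per_scan, sensitivity):
    """Competition Performance Metric: mean sensitivity at the seven
    false-positive rates predefined by the LUNA16 challenge.

    fp_per_scan, sensitivity: the FROC curve as two arrays with
    fp_per_scan in increasing order.
    """
    rates = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    # linearly interpolate the FROC curve at each predefined rate
    return float(np.mean(np.interp(rates, fp_per_scan, sensitivity)))
```

For example, a detector whose FROC curve is sampled exactly at the seven rates scores the plain mean of those seven sensitivities.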
Affiliation(s)
- Chia-Ying Lin: Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No. 1, University Road, Tainan City 701, Taiwan
- Shu-Mei Guo: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Jenn-Jier James Lien: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Tzung-Yi Tsai: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Yi-Sheng Liu: Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No. 1, University Road, Tainan City 701, Taiwan
- Chao-Han Lai: Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- I-Lin Hsu: Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Chao-Chun Chang: Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Yau-Lin Tseng: Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
2.
Hendrix W, Hendrix N, Scholten ET, Mourits M, Trap-de Jong J, Schalekamp S, Korst M, van Leuken M, van Ginneken B, Prokop M, Rutten M, Jacobs C. Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans. Commun Med 2023;3:156. PMID: 37891360; PMCID: PMC10611755; DOI: 10.1038/s43856-023-00388-5.
Abstract
BACKGROUND Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems in a non-screening setting is lacking. METHODS We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. Detection performance was evaluated by measuring the sensitivity at predefined false-positive rates on a free-response receiver operating characteristic curve and was compared with that of the radiologist panel. RESULTS On the external test set (100 scans from 100 patients), the sensitivities of the AI system for detecting benign nodules, primary lung cancers, and metastases were 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%), respectively, at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities were comparable to or higher than those of the radiologists, albeit at a slightly higher FP/s (average difference of 0.6). CONCLUSIONS The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting.
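The operating point used in this study (sensitivity at 1 false positive per scan) can be read off a pooled list of scored candidate detections by sweeping the confidence threshold. The sketch below is our own illustration; the function name and input format are assumptions, not the authors' code.

```python
import numpy as np

def sensitivity_at_fp_rate(scores, is_tp, n_scans, n_nodules, target_fp_per_scan=1.0):
    """Sensitivity at the highest confidence threshold whose
    false-positive rate does not exceed target_fp_per_scan.

    scores: confidence of every candidate detection (all scans pooled)
    is_tp:  whether each candidate hits a reference nodule
    """
    order = np.argsort(scores)[::-1]         # candidates by descending confidence
    hits = np.asarray(is_tp, bool)[order]
    tp = np.cumsum(hits)                     # cumulative true positives
    fp = np.cumsum(~hits)                    # cumulative false positives
    ok = fp / n_scans <= target_fp_per_scan  # thresholds within the FP budget
    return float(tp[ok][-1] / n_nodules) if ok.any() else 0.0
```

Tightening the FP budget lowers the achievable sensitivity, which is the trade-off the abstract's "average difference of 0.6 FP/s" remark refers to.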
Affiliation(s)
- Ward Hendrix: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Nils Hendrix: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands; Jheronimus Academy of Data Science, Sint Janssingel 92, 5211 DA, 's-Hertogenbosch, The Netherlands
- Ernst T Scholten: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mariëlle Mourits: Radiology Department, Canisius Wilhelmina Hospital, Weg door Jonkerbos 100, 6532 SZ, Nijmegen, The Netherlands
- Joline Trap-de Jong: Radiology Department, St. Antonius Hospital, Koekoekslaan 1, 3435 CM, Nieuwegein, The Netherlands
- Steven Schalekamp: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mike Korst: Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Maarten van Leuken: Radiology Department, Canisius Wilhelmina Hospital, Weg door Jonkerbos 100, 6532 SZ, Nijmegen, The Netherlands
- Bram van Ginneken: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mathias Prokop: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Radiology Department, University Medical Center Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Matthieu Rutten: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Colin Jacobs: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
3.
Lin YC, Lin G, Pandey S, Yeh CH, Wang JJ, Lin CY, Ho TY, Ko SF, Ng SH. Fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer on MRI using deep learning. Eur Radiol 2023;33:6548-6556. PMID: 37338554; PMCID: PMC10415433; DOI: 10.1007/s00330-023-09827-2.
Abstract
OBJECTIVES To use a convolutional neural network for fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer (HPC) tumors on MRI. METHODS MR images were collected from 222 HPC patients: 178 were used for training and 44 for testing. U-Net and DeepLab V3+ architectures were used to train the models. Model performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index, and average surface distance. The reliability of the radiomics parameters extracted by the models was assessed using the intraclass correlation coefficient (ICC). RESULTS The tumor volumes predicted by the DeepLab V3+ and U-Net models were highly correlated with those delineated manually (p < 0.001). The DSC of the DeepLab V3+ model was significantly higher than that of the U-Net model (0.77 vs. 0.75, p < 0.05), particularly for small tumor volumes of < 10 cm³ (0.74 vs. 0.70, p < 0.001). For extraction of first-order radiomics features, both models showed high agreement (ICC: 0.71-0.91) with manual delineation. The radiomics features extracted by the DeepLab V3+ model had significantly higher ICCs than those extracted by the U-Net model for 7 of 19 first-order features and 8 of 17 shape-based features (p < 0.05). CONCLUSION Both the DeepLab V3+ and U-Net models produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images, with DeepLab V3+ performing better than U-Net. CLINICAL RELEVANCE STATEMENT The DeepLab V3+ model exhibited promising performance in automated tumor segmentation and radiomics extraction for hypopharyngeal cancer on MRI. This approach holds great potential for enhancing the radiotherapy workflow and facilitating prediction of treatment outcomes. KEY POINTS
• DeepLab V3+ and U-Net produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images.
• DeepLab V3+ was more accurate than U-Net in automated segmentation, especially for small tumors.
• DeepLab V3+ showed higher agreement than U-Net for about half of the first-order and shape-based radiomics features.
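The DSC and Jaccard index used to evaluate these segmentations are simple overlap ratios between binary masks. For reference, a minimal NumPy sketch (our own illustration, not the study's code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

The two metrics are monotonically related (DSC = 2J / (1 + J)), so a DSC of 0.77 corresponds to a Jaccard index of roughly 0.63.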
Affiliation(s)
- Yu-Chun Lin: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Gigin Lin: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Sumit Pandey: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
- Chih-Hua Yeh: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
- Jiun-Jie Wang: Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Chien-Yu Lin: Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, Taoyuan, Taiwan
- Tsung-Ying Ho: Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan
- Sheung-Fat Ko: Department of Radiology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan
- Shu-Hang Ng: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
4.
Suganuma N, Yoshida S, Takeuchi Y, Nomura YK, Suzuki K. Artificial Intelligence in Quantitative Chest Imaging Analysis for Occupational Lung Disease. Semin Respir Crit Care Med 2023;44:362-369. PMID: 37072023; DOI: 10.1055/s-0043-1767760.
Abstract
Occupational lung disease manifests as complex radiologic findings that have long been a challenge for computer-assisted diagnosis (CAD). The journey started in the 1970s, when texture analysis was developed and applied to diffuse lung disease. Pneumoconiosis appears on radiography as a combination of small opacities, large opacities, and pleural shadows. The International Labour Organization's International Classification of Radiographs of Pneumoconioses has been the main tool used to describe pneumoconioses and is a system well suited for adaptation to CAD using artificial intelligence (AI). AI includes machine learning, which in turn includes deep learning with artificial neural networks such as convolutional neural networks. The tasks of CAD are systematically described as classification, detection, and segmentation of target lesions. AlexNet, VGG16, and U-Net are among the most common algorithms used in developing systems for the diagnosis of diffuse lung disease, including occupational lung disease. We describe the long journey in the pursuit of CAD of pneumoconioses, including our recent proposal of a new expert system.
Affiliation(s)
- Narufumi Suganuma: Department of Environmental Medicine, Kochi Medical School, Nankoku, Kochi, Japan
- Shinichi Yoshida: School of Information, Kochi University of Technology, Nankoku, Kochi, Japan
- Yuma Takeuchi: Department of Environmental Medicine, Kochi Medical School, Nankoku, Kochi, Japan; Department of Radiology, Kochi Medical School Hospital, Nankoku, Kochi, Japan
- Yoshua K Nomura: Department of Environmental Medicine, Kochi Medical School, Nankoku, Kochi, Japan
- Kazuhiro Suzuki: Department of Radiology, School of Medicine, Juntendo University, Bunkyo City, Tokyo, Japan
5.
Dimitriadis A, Trivizakis E, Papanikolaou N, Tsiknakis M, Marias K. Enhancing cancer differentiation with synthetic MRI examinations via generative models: a systematic review. Insights Imaging 2022;13:188. PMID: 36503979; PMCID: PMC9742072; DOI: 10.1186/s13244-022-01315-3.
Abstract
Contemporary deep learning-based decision systems are well known to require high-volume datasets in order to produce generalized, reliable, and high-performing models. However, collecting such datasets is challenging, requiring time-consuming processes that also involve expert clinicians with limited time. In addition, data collection often raises ethical and legal issues and depends on costly and invasive procedures. Deep generative models such as generative adversarial networks and variational autoencoders can capture the underlying distribution of the examined data, allowing them to create new and unique samples. This study aims to shed light on generative data augmentation techniques and corresponding best practices. Through an in-depth investigation, we underline the limitations and potential methodological pitfalls from a critical standpoint and aim to promote open-science research by identifying publicly available open-source repositories and datasets.
Affiliation(s)
- Avtantil Dimitriadis: Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
- Eleftherios Trivizakis: Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece; Medical School, University of Crete, 71003 Heraklion, Greece
- Nikolaos Papanikolaou: Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece; Computational Clinical Imaging Group, Centre of the Unknown, Champalimaud Foundation, 1400-038 Lisbon, Portugal; The Royal Marsden NHS Foundation Trust, The Institute of Cancer Research, London, UK
- Manolis Tsiknakis: Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
- Kostas Marias: Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71410 Heraklion, Greece
6.
Yu AC, Mohajer B, Eng J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol Artif Intell 2022;4:e210064. PMID: 35652114; DOI: 10.1148/ryai.210064.
Abstract
Purpose To assess the generalizability of published deep learning (DL) algorithms for radiologic diagnosis. Materials and Methods In this systematic review, the PubMed database was searched for peer-reviewed studies of DL algorithms for image-based radiologic diagnosis that included external validation, published from January 1, 2015, through April 1, 2021. Studies using nonimaging features or incorporating non-DL methods for feature extraction or classification were excluded. Two reviewers independently evaluated studies for inclusion, and discrepancies were resolved by consensus. Internal and external performance measures and pertinent study characteristics were extracted, and relationships among these data were examined using nonparametric statistics. Results Eighty-three studies reporting 86 algorithms were included. The vast majority (70 of 86, 81%) reported at least some decrease in external performance compared with internal performance, with nearly half (42 of 86, 49%) reporting at least a modest decrease (≥0.05 on the unit scale) and nearly a quarter (21 of 86, 24%) reporting a substantial decrease (≥0.10 on the unit scale). No study characteristics were found to be associated with the difference between internal and external performance. Conclusion Among published external validation studies of DL algorithms for image-based radiologic diagnosis, the vast majority demonstrated diminished algorithm performance on the external dataset, with some reporting a substantial performance decrease. © RSNA, 2022.
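The review's headline percentages are tallies of internal-minus-external performance differences on the unit scale. A small sketch of that bookkeeping, with the thresholds restated from the abstract (function name and example data are ours, for illustration only):

```python
def drop_summary(internal, external):
    """Tally performance drops (internal minus external, unit scale)
    across a set of algorithms, as fractions of the total."""
    drops = [i - e for i, e in zip(internal, external)]
    n = len(drops)
    return {
        "any_decrease": sum(d > 0 for d in drops) / n,
        "modest_decrease_ge_0.05": sum(d >= 0.05 for d in drops) / n,
        "substantial_decrease_ge_0.10": sum(d >= 0.10 for d in drops) / n,
    }
```

Applied to the review's counts, 70/86, 42/86, and 21/86 give the 81%, 49%, and 24% figures quoted above.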
Affiliation(s)
- Alice C Yu: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- Bahram Mohajer: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- John Eng: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
7.
Ali S, Li J, Pei Y, Khurram R, Rehman KU, Rasool AB. State-of-the-Art Challenges and Perspectives in Multi-Organ Cancer Diagnosis via Deep Learning-Based Methods. Cancers (Basel) 2021;13:5546. PMID: 34771708; PMCID: PMC8583666; DOI: 10.3390/cancers13215546.
Abstract
Cancer remains the most common cause of death in the world: it consists of abnormally expanding cell masses that threaten human survival. Timely detection of cancer is therefore important for improving patient survival rates. In this survey, we analyze state-of-the-art approaches for multi-organ cancer detection, segmentation, and classification, reviewing recent work in the breast, brain, lung, and skin cancer domains. We analytically compare the existing approaches to provide insight into ongoing trends and future challenges, and give an objective description of widely employed imaging techniques and modalities, gold-standard databases, and the related literature on each cancer from 2016 to 2021. The main goal is to systematically examine cancer diagnosis systems for these organs of the human body. Our critical analysis reveals that more than 70% of deep learning researchers attain promising results with CNN-based approaches for the early diagnosis of multi-organ cancer. The survey includes an extensive discussion of current research challenges, possible solutions, and future prospects. This work aims to give novice researchers the information needed to deepen their knowledge and the room to develop new, robust computer-aided diagnosis systems that assist health professionals in bridging the gap between rapid diagnosis and treatment planning for cancer patients.
Affiliation(s)
- Saqib Ali: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yan Pei: Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Rooha Khurram: Beijing Key Laboratory for Green Catalysis and Separation, Department of Chemistry and Chemical Engineering, Beijing University of Technology, Beijing 100124, China
- Khalil ur Rehman: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Abdul Basit Rasool: Research Institute for Microwave and Millimeter-Wave (RIMMS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
8.
Zheng B, Yang D, Zhu Y, Liu Y, Hu J, Bai C. 3D gray density coding feature for benign-malignant pulmonary nodule classification on chest CT. Med Phys 2021;48:7826-7836. PMID: 34655238; DOI: 10.1002/mp.15298.
Abstract
PURPOSE Early detection is key to reducing lung cancer-related deaths, and computer-aided detection systems (CADs) can help radiologists make an early diagnosis. In this paper, we propose a novel 3D gray density coding (3D GDC) feature and fuse it with extracted geometric features; the fused feature and a random forest are used for benign-malignant pulmonary nodule classification on chest CT. METHODS First, a dictionary model is created to acquire a codebook, which is used to obtain feature descriptors and comprises a 3D block database (BD) and the clustering centers of a distance matrix. The 3D BD is built by randomly selecting balanced numbers of blocks from benign and malignant pulmonary nodules in the training data. The clustering centers are obtained by clustering the distance matrix, whose entries are the distances between every pair of blocks in the 3D BD. The feature descriptor is then obtained by coding the pulmonary nodule with the codebook, and the 3D GDC feature is the histogram of this descriptor. Second, geometric features are extracted and fused with the 3D GDC feature. Finally, a random forest performs benign-malignant pulmonary nodule classification on the fused feature. RESULTS We verified the effectiveness of our method on the public LIDC-IDRI dataset and the private ZSHD dataset. On LIDC-IDRI, compared with other state-of-the-art methods, we achieved 93.17 ± 1.94% accuracy and 97.53 ± 1.62% AUC. The private ZSHD dataset contains 238 lung nodules from 203 patients; on it, our method achieved an accuracy of 90.0% and an AUC of 93.15%. CONCLUSIONS The results show that our method can provide doctors with more accurate benign-malignant pulmonary nodule classifications for auxiliary diagnosis, and it is more interpretable than 3D CNN methods, providing doctors with additional auxiliary information.
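The 3D GDC feature described here is essentially a bag-of-visual-words histogram over 3D gray-level blocks: each block is assigned to its nearest codebook center, and the normalized histogram of assignments becomes the feature vector. A hedged sketch of the coding step (our own simplification; the paper builds the codebook by clustering a pairwise-distance matrix, which we take as given here):

```python
import numpy as np

def gdc_feature(blocks, codebook):
    """Code each flattened 3D block by its nearest codebook center and
    return the normalized histogram of assignments."""
    blocks = np.asarray(blocks, float)      # shape (n_blocks, voxels_per_block)
    codebook = np.asarray(codebook, float)  # shape (k, voxels_per_block)
    # squared Euclidean distance from every block to every center
    dist = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dist.argmin(axis=1)
    hist = np.bincount(codes, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting histogram, concatenated with the geometric features, would then feed a random forest classifier (for example scikit-learn's RandomForestClassifier).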
Affiliation(s)
- BingBing Zheng: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Dawei Yang: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, China
- Yu Zhu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Yatong Liu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Jie Hu: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China
- Chunxue Bai: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, China