1
Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2024:10.1007/s00330-024-10907-0. [PMID: 38985185; DOI: 10.1007/s00330-024-10907-0]
Abstract
OBJECTIVES: The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques and to identify methodological gaps and biases in the existing literature.

METHODS: This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging, were used to assess the risk of bias. Model performance, data sources, and task-focus information were extracted and analyzed.

RESULTS: After screening, nine studies met the inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016 being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient.

CONCLUSIONS: This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research.

CLINICAL RELEVANCE STATEMENT: Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility.

KEY POINTS: Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases in the existing literature. Factors such as external validation and transparency affect clinical application.
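The Dice coefficient named among the evaluation metrics above can be sketched as follows. This is a generic NumPy illustration of the metric, not code from any of the reviewed studies:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
target = np.array([[0, 1, 0],
                   [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

Because Dice double-counts the intersection, it is more forgiving of small boundary disagreements than the Jaccard index, which is one reason it dominates segmentation benchmarks.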
Affiliation(s)
- Chuan Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Maosheng Xu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
2
Sweetline BC, Vijayakumaran C, Samydurai A. Overcoming the Challenge of Accurate Segmentation of Lung Nodules: A Multi-crop CNN Approach. J Imaging Inform Med 2024; 37:988-1007. [PMID: 38347393; PMCID: PMC11169448; DOI: 10.1007/s10278-024-01004-1]
Abstract
Lung nodules arise from the growth of small, round- or oval-shaped cell clusters in the lung and may be cancerous or non-cancerous. Accurate segmentation of these nodules is crucial for early detection and diagnosis of lung cancer. However, lung nodules vary widely in shape, size, and density, making accurate segmentation difficult. Moreover, they can easily be confused with other structures in the lung, including blood vessels and airways, further complicating the segmentation process. To address this challenge, this paper proposes a novel multi-crop convolutional neural network (multi-crop CNN) model that utilizes cropped regions of different sizes from CT scan images for accurate segmentation of lung nodules. The model consists of three modules: a feature representation module, a boundary refinement module, and a segmentation module. The feature representation module captures features from the lung CT scan image using cropped regions of different sizes, while the boundary refinement module combines the boundary maps and feature maps to generate a final feature map for the segmentation process. The segmentation module produces a high-resolution segmentation map with improved accuracy in segmenting cancerous lung nodules. The proposed multi-crop CNN model is evaluated on two segmentation datasets, LUNA16 and LIDC-IDRI, achieving accuracies of 98.3% and 98.5%, respectively. Performance is measured in terms of accuracy, recall, precision, Dice coefficient, specificity, AUC/ROC, Hausdorff distance, Jaccard index, and average Hausdorff distance. Overall, the proposed multi-crop CNN model demonstrates the potential to enhance lung nodule segmentation accuracy, which could lead to earlier detection and diagnosis of lung cancer and ultimately reduce mortality rates associated with the disease.
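Among the overlap metrics listed above, the Jaccard index relates directly to the Dice coefficient via D = 2J/(1+J). A minimal NumPy sketch of the metric, purely illustrative and not taken from the paper:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (intersection over union) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: define the overlap as perfect
        return 1.0
    return np.logical_and(pred, target).sum() / union

j = jaccard_index([[0, 1, 1], [0, 1, 0]], [[0, 1, 0], [0, 1, 1]])
print(round(j, 2))                # 0.5
print(round(2 * j / (1 + j), 3))  # corresponding Dice score: 0.667
```

Reporting both, as the paper does, is informative because Jaccard penalizes boundary errors more heavily than Dice.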
Affiliation(s)
- B Christina Sweetline
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, India
- C Vijayakumaran
- Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, India
- A Samydurai
- Department of Computer Science and Engineering, SRM Valliammai Engineering College, Kattankulathur, India
3
Xu X, Du L, Yin D. Dual-branch feature fusion S3D V-Net network for lung nodules segmentation. J Appl Clin Med Phys 2024; 25:e14331. [PMID: 38478388; PMCID: PMC11163502; DOI: 10.1002/acm2.14331]
Abstract
BACKGROUND: Accurate segmentation of lung nodules can help doctors obtain more accurate results and protocols in early lung cancer diagnosis and treatment planning, so that patients can be detected and treated at an early stage and lung cancer mortality can be reduced.

PURPOSE: Improvement of lung nodule segmentation accuracy has been limited by the heterogeneous appearance of nodules in the lungs, the imbalance between segmentation targets and background pixels, and other factors. We propose a new 2.5D lung nodule segmentation network model. This model improves the extraction of edge information of lung nodules and fuses intra-slice and inter-slice features, making good use of the three-dimensional structural information of lung nodules to more effectively improve segmentation accuracy.

METHODS: Our approach builds on a typical encoding-decoding network structure. The improved model captures the features of multiple nodules in both 3D and 2D CT images; complements the segmentation target's features and enhances the texture features at the edges of the pulmonary nodules through a dual-branch feature fusion module (DFFM) and a reverse attention context module (RACM); and employs central pooling instead of maximal pooling to preserve the features around the target and eliminate edge-irrelevant features, further improving segmentation performance.

RESULTS: We evaluated this method on 1186 nodules from the LUNA16 dataset. Averaging the results of tenfold cross-validation, the proposed method achieved a mean Dice similarity coefficient (mDSC) of 84.57%, a mean overlapping error (mOE) of 18.73%, and an average processing time of about 2.07 s per case. Moreover, our results were compared with inter-radiologist agreement on the LUNA16 dataset, and the average difference was 0.74%.

CONCLUSION: The experimental results show that our method improves the accuracy of pulmonary nodule segmentation and also requires less time than most 3D segmentation methods.
Affiliation(s)
- Xiaoru Xu
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Lingyan Du
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Dongsheng Yin
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
4
Byeon H, Al-Kubaisi M, Dutta AK, Alghayadh F, Soni M, Bhende M, Chunduri V, Suresh Babu K, Jeet R. Brain tumor segmentation using neuro-technology enabled intelligence-cascaded U-Net model. Front Comput Neurosci 2024; 18:1391025. [PMID: 38634017; PMCID: PMC11021780; DOI: 10.3389/fncom.2024.1391025]
Abstract
According to experts in neurology, brain tumours pose a serious risk to human health. The clinical identification and treatment of brain tumours rely heavily on accurate segmentation. The varied sizes, shapes, and locations of brain tumours make accurate automated segmentation a formidable obstacle in the field of neuroscience. U-Net, with its computational intelligence and concise design, has lately been the go-to model for medical image segmentation, but problems with restricted local receptive fields, lost spatial information, and inadequate contextual information remain. A novel model for brain tumor segmentation (BTS), Intelligence Cascade U-Net (ICU-Net), is proposed to address these issues. It is built on dynamic convolution and uses a non-local attention mechanism. To reconstruct more detailed spatial information on brain tumours, the principal design is a two-stage cascade of 3D U-Net. The objective is to identify the learnable parameters that maximize the likelihood of the data. To strengthen the network's ability to gather long-distance dependencies, Expectation-Maximization is applied to the cascade network's lateral connections, enabling it to leverage contextual data more effectively. Lastly, to enhance the network's ability to capture local characteristics, dynamic convolutions with local adaptive capabilities are used in place of the cascade network's standard convolutions. We compared our results with those of other typical methods and ran extensive tests on the publicly available BraTS 2019/2020 datasets. The experimental data show that the proposed method performs well on BTS tasks. The Dice scores for tumor core (TC), complete tumor, and enhanced tumor segmentation on the BraTS 2019/2020 validation sets are 0.897/0.903, 0.826/0.828, and 0.781/0.786, respectively, indicating high performance.
Affiliation(s)
- Haewon Byeon
- Department of Digital Anti-Aging Healthcare, Inje University, Gimhae, Republic of Korea
- Mohannad Al-Kubaisi
- Department of Computer Science, Al-Maarif University College, Al-Anbar Governorate, Iraq
- Ashit Kumar Dutta
- Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Riyadh, Saudi Arabia
- Faisal Alghayadh
- Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Riyadh, Saudi Arabia
- Mukesh Soni
- Department of CSE, University Centre for Research and Development, Chandigarh University, Mohali, Punjab, India
- Manisha Bhende
- Dr. D. Y. Patil Vidyapeeth, Pune, Dr. D. Y. Patil School of Science & Technology, Tathawade, Pune, India
- Venkata Chunduri
- Department of Mathematics and Computer Science, Indiana State University, Terre Haute, IN, United States
- K. Suresh Babu
- Department of Biochemistry, Symbiosis Medical College for Women, Symbiosis International (Deemed University), Pune, India
- Rubal Jeet
- Chandigarh Engineering College, Jhanjeri, Mohali, India
5
Ma X, Song H, Jia X, Wang Z. An improved V-Net lung nodule segmentation model based on pixel threshold separation and attention mechanism. Sci Rep 2024; 14:4743. [PMID: 38413699; PMCID: PMC10899216; DOI: 10.1038/s41598-024-55178-3]
Abstract
Accurate labeling of lung nodules in computed tomography (CT) images is crucial in early lung cancer diagnosis and before nodule resection surgery. However, the irregular shape of lung nodules in CT images and the complex lung environment make it much more challenging to segment lung nodules accurately. On this basis, we propose an improved V-Net segmentation method for lung nodules based on pixel threshold separation and attention mechanisms. This method first offers a data augmentation strategy to address the problem of insufficient samples in 3D medical datasets. In addition, we integrate a feature extraction module based on pixel threshold separation to enhance feature extraction under different thresholds. We also introduce channel and spatial attention modules so that the model attends to important semantic information, improving its generalization ability and accuracy. Experiments show that the Dice similarity coefficients of the improved model on the public datasets LUNA16 and LNDb are 94.9% and 81.1%, and the sensitivities reach 92.7% and 76.9%, respectively, which is superior to most existing U-Net architecture models and comparable to manual segmentation by medical technologists.
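The channel and spatial attention modules described above can be sketched generically in the SE/CBAM style. The weights below are random stand-ins, not the paper's trained parameters, so this only illustrates the gating mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w):
    """Re-weight channels by a gate computed from globally pooled statistics."""
    pooled = x.mean(axis=(1, 2))      # (C,) global average pool
    gate = sigmoid(w @ pooled)        # (C,) toy linear layer + sigmoid
    return x * gate[:, None, None]

def spatial_attention(x):
    """Re-weight spatial locations by a gate from channel-wise statistics."""
    avg = x.mean(axis=0)              # (H, W) average over channels
    mx = x.max(axis=0)                # (H, W) max over channels
    gate = sigmoid(avg + mx)          # toy fusion of the two maps
    return x * gate[None, :, :]

x = rng.standard_normal((4, 8, 8))    # (channels, height, width) feature map
w = rng.standard_normal((4, 4))       # stand-in channel-mixing weights
y = spatial_attention(channel_attention(x, w))
print(y.shape)  # (4, 8, 8)
```

Because both gates lie in (0, 1), the modules can only attenuate features, steering the network toward the activations it deems informative.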
Affiliation(s)
- Xiaopu Ma
- School of Computer Science and Technology, Nanyang Normal University, Nanyang, 473061, China
- Handing Song
- School of Life Sciences and Agricultural Engineering, Nanyang Normal University, Nanyang, 473061, China
- Xiao Jia
- School of Computer Science and Technology, Nanyang Normal University, Nanyang, 473061, China
- Zhan Wang
- School of Life Sciences and Agricultural Engineering, Nanyang Normal University, Nanyang, 473061, China
6
Liu Y, Hsu HY, Lin T, Peng B, Saqi A, Salvatore MM, Jambawalikar S. Lung nodule malignancy classification with associated pulmonary fibrosis using 3D attention-gated convolutional network with CT scans. J Transl Med 2024; 22:51. [PMID: 38216992; PMCID: PMC10787502; DOI: 10.1186/s12967-023-04798-w]
Abstract
BACKGROUND: Chest computed tomography (CT) scans detect lung nodules and assess pulmonary fibrosis. While pulmonary fibrosis indicates increased lung cancer risk, current clinical practice characterizes nodule risk of malignancy based on nodule size and smoking history; little consideration is given to the fibrotic microenvironment.

PURPOSE: To evaluate the effect of incorporating the fibrotic microenvironment into classifying malignancy of lung nodules in chest CT images using deep learning techniques.

MATERIALS AND METHODS: We developed a visualizable 3D classification model trained with an in-house CT dataset for the nodule malignancy classification task. Three slightly modified datasets were created: (1) nodule alone (microenvironment removed); (2) nodule with surrounding lung microenvironment; and (3) nodule in microenvironment with semantic fibrosis metadata. For each model, tenfold cross-validation was performed. Results were evaluated using quantitative measures, such as accuracy, sensitivity, specificity, and area under the curve (AUC), as well as qualitative assessments, such as attention maps and class activation maps (CAM).

RESULTS: The classification model trained with the nodule alone achieved 75.61% accuracy, 50.00% sensitivity, 88.46% specificity, and 0.78 AUC; the model trained with nodule and microenvironment achieved 79.03% accuracy, 65.46% sensitivity, 85.86% specificity, and 0.84 AUC. The model trained with additional semantic fibrosis metadata achieved 80.84% accuracy, 74.67% sensitivity, 84.95% specificity, and 0.89 AUC. Our visual evaluation of attention maps and CAM suggested that both the nodules and the microenvironment contributed to the task.

CONCLUSION: Nodule malignancy classification performance improved with microenvironment data and improved further when semantic fibrosis information was incorporated.
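The accuracy, sensitivity, and specificity figures reported above all derive from confusion-matrix counts. A generic sketch of the definitions, with illustrative counts rather than the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=30, fp=10, tn=50, fn=10)
print(acc, sens, round(spec, 3))  # 0.8 0.75 0.833
```

The reported pattern (sensitivity rising sharply while specificity dips slightly) is visible directly in these formulas: adding context converts false negatives into true positives at a small cost in false positives.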
Affiliation(s)
- Yucheng Liu
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
- Hao Yun Hsu
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
- Tiffany Lin
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
- Boyu Peng
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
- Anjali Saqi
- Department of Pathology, Columbia University Irving Medical Center, New York, NY, USA
- Mary M Salvatore
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
- Sachin Jambawalikar
- Department of Radiology, Columbia University Irving Medical Center, 3-124B Milstein Hospital Bldg, 177 Fort Washington Avenue, New York, NY, 10032, USA
7
Mateus P, Volmer L, Wee L, Aerts HJWL, Hoebers F, Dekker A, Bermejo I. Image based prognosis in head and neck cancer using convolutional neural networks: a case study in reproducibility and optimization. Sci Rep 2023; 13:18176. [PMID: 37875663; PMCID: PMC10598263; DOI: 10.1038/s41598-023-45486-5]
Abstract
In the past decade, there has been a sharp increase in publications describing applications of convolutional neural networks (CNNs) in medical image analysis. However, recent reviews have warned of the lack of reproducibility of most such studies, which has impeded closer examination of the models and, in turn, their implementation in healthcare. On the other hand, the performance of these models is highly dependent on decisions on architecture and image pre-processing. In this work, we assess the reproducibility of three studies that use CNNs for head and neck cancer outcome prediction by attempting to reproduce the published results. In addition, we propose a new network structure and assess the impact of image pre-processing and model selection criteria on performance. We used two publicly available datasets: one with 298 patients for training and validation and another with 137 patients from a different institute for testing. All three studies failed to report elements required to reproduce their results thoroughly, mainly the image pre-processing steps and the random seed. Our model either outperforms or achieves similar performance to the existing models with considerably fewer parameters. We also observed that the pre-processing efforts significantly impact the model's performance and that some model selection criteria may lead to suboptimal models. Although there have been improvements in the reproducibility of deep learning models, our work suggests that wider implementation of reporting standards is required to avoid a reproducibility crisis.
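The unreported random seed flagged above is one of the cheapest reproducibility fixes available. A minimal sketch of seed pinning; framework-specific seeds (e.g. for PyTorch or TensorFlow) would be set analogously and are omitted here:

```python
import os
import random

import numpy as np

def set_seed(seed: int) -> None:
    """Pin the common sources of randomness so a run can be repeated exactly."""
    os.environ["PYTHONHASHSEED"] = str(seed)  # stabilizes hash-based ordering
    random.seed(seed)                         # Python's stdlib RNG
    np.random.seed(seed)                      # NumPy's global RNG

set_seed(123)
a = np.random.rand(3)
set_seed(123)
b = np.random.rand(3)
print(np.allclose(a, b))  # True: identical draws after reseeding
```

Seeding alone does not guarantee bitwise-identical training (GPU non-determinism and data-loading order also matter), which is why reporting the full pre-processing pipeline, as the study recommends, remains essential.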
Affiliation(s)
- Pedro Mateus
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Leroy Volmer
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Leonard Wee
- Clinical Data Science, Maastricht University, Maastricht, The Netherlands
- Hugo J W L Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, The Netherlands
- Departments of Radiation Oncology and Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Frank Hoebers
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Inigo Bermejo
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
8
Guedes Pinto E, Penha D, Ravara S, Monaghan C, Hochhegger B, Marchiori E, Taborda-Barata L, Irion K. Factors influencing the outcome of volumetry tools for pulmonary nodule analysis: a systematic review and attempted meta-analysis. Insights Imaging 2023; 14:152. [PMID: 37741928; PMCID: PMC10517915; DOI: 10.1186/s13244-023-01480-z]
Abstract
Health systems worldwide are implementing lung cancer screening programmes to identify early-stage lung cancer and maximise patient survival. Volumetry is recommended for follow-up of pulmonary nodules and outperforms other measurement methods. However, volumetry is known to be influenced by multiple factors. The objectives of this systematic review (PROSPERO CRD42022370233) are to summarise the current knowledge regarding factors that influence volumetry tools used in the analysis of pulmonary nodules, assess for significant clinical impact, identify gaps in current knowledge, and suggest future research. Five databases (Medline, Scopus, Journals@Ovid, Embase and Emcare) were searched on 21 September 2022, and 137 original research studies explicitly testing the potential impact of influencing factors on the outcome of volumetry tools were included. The summary of these studies is tabulated, and a narrative review is provided. A subset of studies (n = 16) reporting clinical significance were selected, and their results were combined, where appropriate, using meta-analysis. Factors with clinical significance include the segmentation algorithm, the quality of the segmentation, slice thickness, the level of inspiration for solid nodules, and the reconstruction algorithm and kernel for subsolid nodules. Although there is a large body of evidence in this field, it is unclear how to apply the results from these studies in clinical practice, as most studies do not test for clinical relevance. The meta-analysis did not improve our understanding due to the small number and heterogeneity of the studies testing for clinical significance.

CRITICAL RELEVANCE STATEMENT: Many studies have investigated the factors influencing pulmonary nodule volumetry, but only 11% questioned the clinical relevance of these factors for nodule management. The heterogeneity among these studies presents a challenge in consolidating results and applying the evidence clinically.

KEY POINTS: • Factors influencing the volumetry of pulmonary nodules have been extensively investigated. • Just 11% of studies test clinical significance (wrongly diagnosing growth). • Nodule size interacts with most other influencing factors (especially for smaller nodules). • Heterogeneity among studies makes comparison and consolidation of results challenging. • Future research should focus on clinical applicability, screening, and updated technology.
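Volumetry as discussed above ultimately reduces to voxel counting scaled by voxel size, which is exactly why segmentation quality and slice thickness are influencing factors. A toy sketch with a hypothetical mask and spacing, not tied to any specific volumetry tool:

```python
import numpy as np

def nodule_volume_mm3(mask, spacing_mm):
    """Volume = (number of segmented voxels) x (volume of one voxel).

    spacing_mm is the (z, y, x) voxel spacing in millimetres; slice
    thickness enters through the z component.
    """
    voxel_volume = float(np.prod(spacing_mm))
    return int(np.asarray(mask, dtype=bool).sum()) * voxel_volume

mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True                       # a 2x2x2 block of 8 voxels
print(nodule_volume_mm3(mask, (1.0, 0.5, 0.5)))  # 8 * 0.25 mm^3 = 2.0
```

Any error in the mask or in the assumed spacing propagates linearly into the volume, and for small nodules a one-voxel-thick boundary error is a large relative change, consistent with the size interaction noted in the key points.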
Affiliation(s)
- Erique Guedes Pinto
- R. Marquês de Ávila E Bolama, Universidade da Beira Interior Faculdade de Ciências da Saúde, 6201-001, Covilhã, Portugal
- Diana Penha
- R. Marquês de Ávila E Bolama, Universidade da Beira Interior Faculdade de Ciências da Saúde, 6201-001, Covilhã, Portugal
- Liverpool Heart and Chest Hospital NHS Foundation Trust, Thomas Dr, Liverpool, L14 3PE, UK
- Sofia Ravara
- R. Marquês de Ávila E Bolama, Universidade da Beira Interior Faculdade de Ciências da Saúde, 6201-001, Covilhã, Portugal
- Colin Monaghan
- Liverpool Heart and Chest Hospital NHS Foundation Trust, Thomas Dr, Liverpool, L14 3PE, UK
- Edson Marchiori
- Faculdade de Medicina, Universidade Federal Do Rio de Janeiro, Bloco K - Av. Carlos Chagas Filho, 373 - 2º Andar, Sala 49 - Cidade Universitária da Universidade Federal Do Rio de Janeiro, Rio de Janeiro - RJ, 21044-020, Brasil
- Faculdade de Medicina, Universidade Federal Fluminense, Av. Marquês Do Paraná, 303 - Centro, Niterói - RJ, 24220-000, Brasil
- Luís Taborda-Barata
- R. Marquês de Ávila E Bolama, Universidade da Beira Interior Faculdade de Ciências da Saúde, 6201-001, Covilhã, Portugal
- Klaus Irion
- Manchester University NHS Foundation Trust, Manchester Royal Infirmary, Oxford Rd, Manchester, M13 9WL, UK
9
Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023; 164:107321. [PMID: 37595518; DOI: 10.1016/j.compbiomed.2023.107321]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed. Then, open-source 2D and 3D medical segmentation models from recent years are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. From the analysis of the experimental data, the following conclusions are drawn: (1) In the pulmonary nodule segmentation task, the DSC of the 2D segmentation models is generally better than that of the 3D segmentation models. (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence. (3) Higher segmentation accuracy can be achieved on better-quality CT images. (4) Good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
| | - Wujun Jiang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.
| | - Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
10
Tyagi S, Kushnure DT, Talbar SN. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation. Comput Med Imaging Graph 2023; 108:102258. [PMID: 37315396 DOI: 10.1016/j.compmedimag.2023.102258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Revised: 05/29/2023] [Accepted: 05/29/2023] [Indexed: 06/16/2023]
Abstract
Lung cancer has the highest mortality rate of all cancers. Its diagnosis and treatment analysis depend upon accurate segmentation of the tumor. Manual segmentation is tedious, as radiologists are overburdened with numerous medical imaging tests due to the increase in cancer patients and the COVID pandemic. Automatic segmentation techniques therefore play an essential role in assisting medical experts. Segmentation approaches based on convolutional neural networks have provided state-of-the-art performance; however, they cannot capture long-range relations due to the region-based convolutional operator. Vision Transformers can resolve this issue by capturing global multi-contextual features. To exploit this advantage, we propose an approach for lung tumor segmentation using an amalgamation of the vision transformer and convolutional neural network. We design the network as an encoder-decoder structure, with convolution blocks deployed in the initial layers of the encoder to capture features carrying essential information, and the corresponding blocks in the final layers of the decoder. The deeper layers utilize transformer blocks with a self-attention mechanism to capture more detailed global feature maps. We use a recently proposed unified loss function that combines cross-entropy and Dice-based losses for network optimization. We trained our network on the publicly available NSCLC-Radiomics dataset and tested its generalizability on a dataset collected from a local hospital. We achieved average Dice coefficients of 0.7468 and 0.6847 and Hausdorff distances of 15.336 and 17.435 on the public and local test data, respectively.
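The unified loss mentioned in this abstract combines a cross-entropy term with a Dice-based term. A minimal NumPy sketch of one common formulation is given below; the weighting `alpha` and the exact soft-Dice form are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def combined_ce_dice_loss(probs, target, alpha=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and soft-Dice losses.

    probs: predicted foreground probabilities in [0, 1]
    target: binary ground-truth labels
    alpha: weight on the cross-entropy term (1 - alpha on Dice)
    """
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    target = np.asarray(target, dtype=float)
    # binary cross-entropy averaged over pixels
    bce = -np.mean(target * np.log(probs) + (1 - target) * np.log(1 - probs))
    # soft Dice on probabilities (no thresholding, so it stays differentiable)
    intersection = np.sum(probs * target)
    dice = (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)
```

The cross-entropy term drives per-pixel calibration while the Dice term counters foreground/background class imbalance, which is why the two are commonly combined for nodule segmentation.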
Affiliation(s)
- Shweta Tyagi
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India.
- Devidas T Kushnure
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Sanjay N Talbar
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
11
M DL, M DP. An Improved Convolution Neural Network and Modified Regularized K-Means-Based Automatic Lung Nodule Detection and Classification. J Digit Imaging 2023; 36:1431-1446. [PMID: 37106212 PMCID: PMC10406790 DOI: 10.1007/s10278-023-00809-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Revised: 03/03/2023] [Accepted: 03/08/2023] [Indexed: 04/29/2023] Open
Abstract
If lung cancer is not detected in its initial phases, it can be fatal. However, because of the quantity and structure of its nodules, lung cancer is difficult to detect early. For accurate detection, radiologists require assistance from automated tools, and numerous expert methods have been developed over time to assist them in the diagnosis of lung cancer. In this article, we propose a framework to precisely detect lung cancer by categorizing nodules as benign or malignant. To achieve this objective, an efficient deep-learning algorithm is presented. The presented technique consists of four stages: pre-processing, segmentation, classification, and severity-stage analysis. Initially, the collected image is passed to the pre-processing stage to eliminate distortion. The noise-free image is then passed to the segmentation stage, for which a modified regularized K-means (MRKM) clustering algorithm is presented. After segmentation, the segmented nodule image is fed to the classification stage to categorize the nodule as benign or malignant (risk nodule). For classification, an improved convolution neural network (ICNN) is presented, designed by modifying a CNN with the integration of the adaptive tree seed optimization (ATSO) algorithm. Finally, stage identification is carried out based on the size of the nodule, and malignant nodules are classified into stages S1-S4. The presented technique attained a maximum accuracy of 96.5%, and its performance was compared with existing state-of-the-art methods.
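As context for the K-means-based segmentation stage, a plain intensity-clustering baseline can be sketched as follows. This is ordinary K-means on pixel intensities, not the paper's modified regularized K-means (MRKM), and the quantile-based initialization is an illustrative choice:

```python
import numpy as np

def kmeans_intensity_segment(image, k=2, iters=20):
    """Segment an image by K-means clustering of pixel intensities.

    Returns a label map (same shape as the image) and the final
    cluster centres. Baseline only -- no spatial regularization.
    """
    pixels = np.asarray(image, dtype=float).ravel()
    # spread initial centres across the intensity range via quantiles
    centers = np.quantile(pixels, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each pixel to its nearest centre
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # move each centre to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(np.asarray(image).shape), centers
```

With k=2 this separates bright nodule candidates from darker lung parenchyma; regularized variants such as MRKM add terms to make the clustering robust to noise.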
Affiliation(s)
- Dhasny Lydia M
- Department of Data Science and Business Systems, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- Dr. Prakash M
- Department of Data Science and Business Systems, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
12
LCSCNet: A multi-level approach for lung cancer stage classification using 3D dense convolutional neural networks with concurrent squeeze-and-excitation module. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
13
Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. BIOMEDICAL ENGINEERING ADVANCES 2023. [DOI: 10.1016/j.bea.2023.100076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
14
Qiao J, Fan Y, Zhang M, Fang K, Li D, Wang Z. Ensemble framework based on attributes and deep features for benign-malignant classification of lung nodule. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
15
Tang T, Li F, Jiang M, Xia X, Zhang R, Lin K. Improved Complementary Pulmonary Nodule Segmentation Model Based on Multi-Feature Fusion. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1755. [PMID: 36554161 PMCID: PMC9778431 DOI: 10.3390/e24121755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/23/2022] [Accepted: 11/28/2022] [Indexed: 06/17/2023]
Abstract
Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in the automatic segmentation of lung nodules. However, they are still challenged by the large diversity of segmentation targets and the small inter-class variances between the nodule and its surrounding tissues. To tackle this issue, we propose a features-complementary network modeled on the process of clinical diagnosis, which makes full use of the complementarity and facilitation among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of global nodule features in segmentation and propose a cross-scale weighted high-level feature decoder module. Then, we develop a low-level feature decoder module for edge feature refinement. Finally, we construct a complementary module so that the two kinds of information complement and promote each other. Furthermore, we weight pixels located at the nodule edge in the loss function and add edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. The experimental results demonstrate that our model achieves robust pulmonary nodule segmentation and more accurate edge segmentation.
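The edge-weighting idea in the loss function can be illustrated with a simple per-pixel weight map that up-weights boundary pixels of a binary mask. The weight value `w_edge` and the 4-neighbour boundary test are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def edge_weight_map(mask, w_edge=3.0):
    """Per-pixel loss weights that up-weight nodule boundary pixels.

    A pixel is treated as a boundary pixel if any of its 4-neighbours
    in the binary mask has a different value; such pixels get weight
    w_edge, all others get weight 1.
    """
    m = np.asarray(mask, dtype=bool)
    edge = np.zeros_like(m)
    # vertical neighbour differences (mark both sides of the boundary)
    edge[:-1, :] |= m[:-1, :] != m[1:, :]
    edge[1:, :] |= m[1:, :] != m[:-1, :]
    # horizontal neighbour differences
    edge[:, :-1] |= m[:, :-1] != m[:, 1:]
    edge[:, 1:] |= m[:, 1:] != m[:, :-1]
    return np.where(edge, w_edge, 1.0)
```

Multiplying a per-pixel loss (e.g. cross-entropy) by such a map concentrates the training signal on the nodule contour, which is the region the abstract identifies as hardest to segment.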
Affiliation(s)
- Tiequn Tang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Xunpeng Xia
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Rongfu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kailin Lin
- Fudan University Shanghai Cancer Center, Shanghai 200032, China
16
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:cancers14225569. [PMID: 36428662 PMCID: PMC9688236 DOI: 10.3390/cancers14225569] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/11/2022] [Accepted: 11/11/2022] [Indexed: 11/15/2022] Open
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in monitoring lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, including the inability to classify cancer images automatically, and may be unsuitable for patients with other pathologies. It is therefore urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textual data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
17
Akila AS, Anitha J, Arun SA. Two-stage lung nodule detection framework using enhanced UNet and convolutional LSTM networks in CT images. Comput Biol Med 2022; 149:106059. [DOI: 10.1016/j.compbiomed.2022.106059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 08/09/2022] [Accepted: 08/27/2022] [Indexed: 11/29/2022]
18
Tyagi S, Talbar SN. CSE-GAN: A 3D conditional generative adversarial network with concurrent squeeze-and-excitation blocks for lung nodule segmentation. Comput Biol Med 2022; 147:105781. [DOI: 10.1016/j.compbiomed.2022.105781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 06/16/2022] [Accepted: 06/19/2022] [Indexed: 11/03/2022]
19
Wan J, Yue S, Ma J, Ma X. A coarse-to-fine full attention guided capsule network for medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103682] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
20
Li J, Chen H, Li Y, Peng Y, Sun J, Pan P. Cross-modality synthesis aiding lung tumor segmentation on multi-modal MRI images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103655] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
21
One-dimensional convolutional neural networks for low/high arousal classification from electrodermal activity. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103203] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]