1
Carles M, Kuhn D, Fechter T, Baltas D, Mix M, Nestle U, Grosu AL, Martí-Bonmatí L, Radicioni G, Gkika E. Development and evaluation of two open-source nnU-Net models for automatic segmentation of lung tumors on PET and CT images with and without respiratory motion compensation. Eur Radiol 2024; 34:6701-6711. [PMID: 38662100; PMCID: PMC11399280; DOI: 10.1007/s00330-024-10751-2]
Abstract
OBJECTIVES In lung cancer, one of the main limitations for the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required to evaluate the different respiratory phases. In this study, we present two open-source models able to automatically segment lung tumors on PET and CT, with and without motion compensation. MATERIALS AND METHODS This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and the nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres. RESULTS In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% relative improvement over the DSC between experts, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, relative improvements of 4% and 15% over the DSC between experts. CONCLUSIONS The performance evaluation demonstrated that the automatic segmentation models have the potential to achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D-PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice. CLINICAL RELEVANCE STATEMENT We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice.
The models outperform the variability observed in manual segmentations by different experts for images with and without motion compensation, allowing clinical practice to take advantage of the more accurate and robust 4D quantification. KEY POINTS Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time-consuming and suffers from inter- and intra-observer variability. Our segmentation models had superior performance compared to the manual segmentations by different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.
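As a point of reference for the DSC values reported above, the Dice Similarity Coefficient between a predicted and a ground-truth binary mask can be computed as follows (a minimal illustrative sketch, not the authors' released code; the toy masks are hypothetical):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define DSC as 1 by convention
    return 2.0 * intersection / denom

# Toy example: two overlapping 2D masks
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 foreground pixels
print(dice_coefficient(a, b))  # 2*4 / (4+6) = 0.8
```

The same formula extends unchanged to 3D and 4D volumes, since NumPy reductions operate over all voxels regardless of dimensionality.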
Affiliation(s)
- Montserrat Carles
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infra-structures (ICTS), Valencia, Spain.
- Dejan Kuhn
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Fechter
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dimos Baltas
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Michael Mix
- Department of Nuclear Medicine, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Ursula Nestle
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Department of Radiation Oncology, Kliniken Maria Hilf GmbH Moenchengladbach, Moenchengladbach, Germany
- Anca L Grosu
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Luis Martí-Bonmatí
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infra-structures (ICTS), Valencia, Spain
- Gianluca Radicioni
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Eleni Gkika
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
2
Wang TW, Hong JS, Huang JW, Liao CY, Lu CF, Wu YT. Systematic review and meta-analysis of deep learning applications in computed tomography lung cancer segmentation. Radiother Oncol 2024; 197:110344. [PMID: 38806113; DOI: 10.1016/j.radonc.2024.110344]
Abstract
BACKGROUND Accurate segmentation of lung tumors on chest computed tomography (CT) scans is crucial for effective diagnosis and treatment planning. Deep Learning (DL) has emerged as a promising tool in medical imaging, particularly for lung cancer segmentation. However, its efficacy across different clinical settings and tumor stages remains variable. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to November 7, 2023. We assessed the quality of the included studies using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tools. The analysis included data from various clinical settings and stages of lung cancer. Key performance metrics, such as the Dice similarity coefficient, were pooled, and factors affecting algorithm performance, such as clinical setting, algorithm type, and image processing techniques, were examined. RESULTS Our analysis of 37 studies revealed a pooled Dice score of 79% (95% CI: 76%-83%), indicating moderate accuracy. Radiotherapy studies had a slightly lower score of 78% (95% CI: 74%-82%). A temporal trend was noted, with recent studies (post-2022) showing an improvement from 75% (95% CI: 70%-81%) to 82% (95% CI: 81%-84%). Key factors affecting performance included algorithm type, resolution adjustment, and image cropping. QUADAS-2 assessments identified an ambiguous risk of bias in 78% of studies due to omitted data intervals, and concerns about generalizability in 8% due to nodule-size exclusions; CLAIM criteria highlighted areas for improvement, with an average score of 27.24 out of 42. CONCLUSION This meta-analysis demonstrates the promising but variable efficacy of DL algorithms in lung cancer segmentation, with higher efficacy noted in early stages.
The results highlight the critical need for continued development of tailored DL models to improve segmentation accuracy across diverse clinical settings, especially in advanced cancer stages that pose greater challenges. As recent studies demonstrate, ongoing advancements in algorithmic approaches are crucial for future applications.
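For readers who want to reproduce a pooled estimate of this kind, a generic fixed-effect inverse-variance pooling of study-level Dice scores can be sketched as below. The study means and standard errors here are hypothetical, and the meta-analysis above may well have used a random-effects model instead; this is only the simplest variant of the idea:

```python
import math

def pool_inverse_variance(means, ses):
    """Fixed-effect inverse-variance pooling of study-level means.
    means: per-study Dice scores; ses: their standard errors."""
    weights = [1.0 / se**2 for se in ses]                       # precision weights
    pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))                   # SE of the pooled mean
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)  # 95% CI
    return pooled, ci

# Three hypothetical studies
means = [0.75, 0.80, 0.82]
ses = [0.02, 0.03, 0.025]
pooled, (lo, hi) = pool_inverse_variance(means, ses)
print(f"pooled Dice = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

More precise studies (smaller standard errors) dominate the pooled value, which is why pooled estimates can shift as newer, larger studies enter the literature.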
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jing-Wen Huang
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung 407, Taiwan
- Chien-Yi Liao
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taiwan
3
Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2024. [PMID: 38985185; DOI: 10.1007/s00330-024-10907-0]
Abstract
OBJECTIVES The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques, to fill methodological gaps and address biases in the existing literature. METHODS This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess the risk of bias, adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information. RESULTS After screening, we included nine studies meeting our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative (LIDC-IDRI) and Lung Nodule Analysis 2016 (LUNA16) being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. CONCLUSIONS This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. CLINICAL RELEVANCE STATEMENT Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules.
Future research should address methodological shortcomings and variability to enhance its clinical utility. KEY POINTS Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases present in the existing literature. Factors such as external validation and transparency affect clinical application.
Affiliation(s)
- Chuan Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Maosheng Xu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
4
Shafi SM, Chinnappan SK. Segmenting and classifying lung diseases with M-Segnet and Hybrid Squeezenet-CNN architecture on CT images. PLoS One 2024; 19:e0302507. [PMID: 38753712; PMCID: PMC11098347; DOI: 10.1371/journal.pone.0302507]
Abstract
Diagnosing lung diseases accurately and promptly is essential for effectively managing this significant public health challenge on a global scale. This paper introduces a new framework called Modified Segnet-based Lung Disease Segmentation and Severity Classification (MSLDSSC). The MSLDSSC model comprises four phases: preprocessing, segmentation, feature extraction, and classification. Initially, the input image undergoes preprocessing using an improved Wiener filter technique. This technique estimates the power spectral density of the noisy and original images and computes the SNR, together with the PSNR, to evaluate image quality. Next, the preprocessed image undergoes segmentation to identify and separate the region of interest (RoI) from the background objects in the lung image. We employ a Modified Segnet mechanism that utilizes a proposed hard tanh-Softplus activation function for effective segmentation. Following segmentation, features such as MLDN, entropy with MRELBP, shape features, and deep features are extracted. The retrieved feature set is then input into a hybrid severity classification model comprising two classifiers, SDPA-Squeezenet and DCNN, which train on the retrieved feature set and effectively classify the severity level of lung diseases.
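The PSNR-based quality check mentioned above can be illustrated with the standard PSNR computation (a generic sketch using synthetic data; the paper's improved Wiener filter itself is not reproduced here):

```python
import numpy as np

def psnr(original: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an original and a processed image."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)

# Synthetic example: an image corrupted by Gaussian noise of sigma = 5
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = img + rng.normal(0, 5, size=img.shape)
print(f"PSNR = {psnr(img, noisy):.1f} dB")
```

With noise of standard deviation 5 the MSE is about 25, giving a PSNR near 10·log10(255²/25) ≈ 34 dB; higher values indicate a cleaner image.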
Affiliation(s)
- Syed Mohammed Shafi
- School of Computer Science and Engineering Vellore Institute of Technology, Vellore, India
5
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649; DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for addressing the current demands for precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides in supporting the qualitative interpretation of medical imaging by clinicians. Here, we review the current state of medical-imaging-based AI methodologies and their applications in the management of liver diseases. We summarize the representative AI methodologies in liver imaging with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, these developments suggest that AI methodologies, combined with the large volume of available medical image data, may shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China
6
Shyamala Bharathi P, Shalini C. Advanced hybrid attention-based deep learning network with heuristic algorithm for adaptive CT and PET image fusion in lung cancer detection. Med Eng Phys 2024; 126:104138. [PMID: 38621836; DOI: 10.1016/j.medengphy.2024.104138]
Abstract
Lung cancer is one of the most deadly diseases in the world, and its early detection can save a patient's life. Although Computed Tomography (CT) is among the best imaging tools in the medical sector, clinicians find it challenging to interpret and detect cancer from CT scan data. Positron Emission Tomography (PET) imaging is one of the most effective ways to diagnose certain malignancies such as lung tumours. Early lung cancer identification is very important for predicting the severity level in cancer patients. To this end, an image fusion-based detection model is proposed for lung cancer detection, using a deep learning model with an improved heuristic algorithm. First, the PET and CT images are collected from online sources. These two images are then fused by the Adaptive Dilated Convolution Neural Network (AD-CNN), whose hyperparameters are tuned by the Modified Initial Velocity-based Capuchin Search Algorithm (MIV-CapSA). Subsequently, the abnormal regions are segmented using TransUnet3+. Finally, the segmented images are fed into the Hybrid Attention-based Deep Networks (HADN) model, which combines Mobilenet and Shufflenet. The effectiveness of the novel detection model is analyzed using various metrics and compared with traditional approaches. The outcomes indicate that the model aids early detection, helping to treat patients effectively.
Affiliation(s)
- P Shyamala Bharathi
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India.
- C Shalini
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
7
Liu C, Liu H, Zhang X, Guo J, Lv P. Multi-scale and multi-view network for lung tumor segmentation. Comput Biol Med 2024; 172:108250. [PMID: 38493603; DOI: 10.1016/j.compbiomed.2024.108250]
Abstract
Lung tumor segmentation in medical imaging is a critical step in the diagnosis and treatment planning for lung cancer. Accurate segmentation, however, is challenging due to the variability in tumor size, shape, and contrast against surrounding tissues. In this work, we present MSMV-Net, a novel deep learning architecture that integrates multi-scale multi-view (MSMV) learning modules and multi-scale uncertainty-based deep supervision (MUDS) for enhanced segmentation of lung tumors in computed tomography images. MSMV-Net capitalizes on the strengths of multi-view analysis and multi-scale feature extraction to address the limitations posed by small 3D lung tumors. The results indicate that MSMV-Net achieves state-of-the-art performance in lung tumor segmentation, recording a global Dice score of 55.60% on the LUNA dataset and 59.94% on the MSD dataset. Ablation studies conducted on the MSD dataset further validate that our method enhances segmentation accuracy.
Affiliation(s)
- Caiqi Liu
- Department of Gastrointestinal Medical Oncology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang, China; Key Laboratory of Molecular Oncology of Heilongjiang Province, Harbin, Heilongjiang, China
- Han Liu
- The Institute for Global Health, University College London, London, United Kingdom
- Xuehui Zhang
- Beidahuang Industry Group General Hospital, Harbin, Heilongjiang, China
- Jierui Guo
- Center for Bioinformatics, Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Pengju Lv
- School of Medical Informatics, Daqing Campus, Harbin Medical University, Daqing, Heilongjiang, China
8
Yang Y, Wang P, Yang Z, Zeng Y, Chen F, Wang Z, Rizzo S. Segmentation method of magnetic resonance imaging brain tumor images based on improved UNet network. Transl Cancer Res 2024; 13:1567-1583. [PMID: 38617525; PMCID: PMC11009801; DOI: 10.21037/tcr-23-1858]
Abstract
Background Glioma is a primary malignant craniocerebral tumor commonly found in the central nervous system. According to research, preoperative diagnosis of glioma and a full understanding of its imaging features are very significant, yet traditional segmentation methods based on image processing and machine learning perform poorly on glioma segmentation. This analysis explores the potential of magnetic resonance imaging (MRI) brain tumor images for effective glioma segmentation. Methods This study used 200 MRI images from the affiliated hospital and applied the 2-dimensional residual block UNet (2DResUNet). Features were extracted from input images using a 2D convolution (Conv) layer with a 2×2 kernel, 64 filters, and stride 1. The 2DDenseUNet model implemented in this study incorporates a ResBlock mechanism within the UNet architecture, a Gaussian noise layer for data augmentation at the input stage, and a pooling layer replacing the conventional 2D convolutional layers. Finally, the performance of the proposed protocol in glioma segmentation was verified. Results The outcomes of the 5-fold cross-validation show that the proposed 2DResUNet and 2DDenseUNet structures have high sensitivity despite a slightly lower Dice score. At the same time, compared with other models used in the experiment, the DM-DA-UNet model proposed in this paper was significantly improved on various indicators, increasing the reliability of the model and providing a reference and basis for the accurate formulation of clinical treatment strategies. The method used in this study showed stronger feature extraction ability than the UNet model. In addition, our findings demonstrate that using generalized Dice loss and weighted cross-entropy as loss functions in the training process effectively alleviated the class imbalance of the glioma data and yielded effective glioma segmentation.
Conclusions The method based on the improved UNet network has obvious advantages in MRI brain tumor image segmentation. We developed a 2D residual block UNet that can improve the incorporation of glioma segmentation into the clinical workflow.
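The residual (skip) connection at the heart of a residual-block UNet can be illustrated with a minimal single-channel forward pass in NumPy (a didactic sketch with hypothetical identity kernels, not the paper's network):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i+kh, j:j+kw] * k)
    return out

def residual_block(x, k1, k2):
    """y = ReLU(x + F(x)): two convolutions with ReLU, plus the identity skip."""
    h = np.maximum(conv2d_same(x, k1), 0.0)  # first conv + ReLU
    h = conv2d_same(h, k2)                   # second conv
    return np.maximum(x + h, 0.0)            # identity skip connection, final ReLU

x = np.ones((8, 8))
k = np.zeros((3, 3)); k[1, 1] = 1.0  # identity kernel for a deterministic check
y = residual_block(x, k, k)
print(y[0, 0])  # with identity kernels, y = x + x = 2*x, so 2.0
```

The skip connection lets the block learn only the residual F(x) on top of the identity, which eases gradient flow in deeper encoder-decoder networks such as the 2DResUNet described above.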
Affiliation(s)
- Yang Yang
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Peng Wang
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Zhenyu Yang
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Yuecheng Zeng
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Feng Chen
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Zhiyong Wang
- Department of Neurosurgery, Xiangyang Central Hospital (Hospital of Hubei University of Arts and Science), Xiangyang, China
- Stefania Rizzo
- Istituto di Imaging della Svizzera Italiana (IIMSI), Ente Ospedaliero Cantonale (EOC), Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera italiana, Lugano, Switzerland
9
Shi J, Wang Z, Ruan S, Zhao M, Zhu Z, Kan H, An H, Xue X, Yan B. Rethinking automatic segmentation of gross target volume from a decoupling perspective. Comput Med Imaging Graph 2024; 112:102323. [PMID: 38171254; DOI: 10.1016/j.compmedimag.2023.102323]
Abstract
Accurate and reliable segmentation of Gross Target Volume (GTV) is critical in cancer Radiation Therapy (RT) planning, but manual delineation is time-consuming and subject to inter-observer variations. Recently, deep learning methods have achieved remarkable success in medical image segmentation. However, due to the low image contrast and extreme pixel imbalance between GTV and adjacent tissues, most existing methods obtain limited performance on automatic GTV segmentation. In this paper, we propose a Heterogeneous Cascade Framework (HCF) from a decoupling perspective, which decomposes GTV segmentation into independent recognition and segmentation subtasks. The former aims to screen out the abnormal slices containing GTV, while the latter performs pixel-wise segmentation of these slices. With the decoupled two-stage framework, we can efficiently filter normal slices to reduce false positives. To further improve the segmentation performance, we design a multi-level Spatial Alignment Network (SANet) based on the feature pyramid structure, which introduces a spatial alignment module into the decoder to compensate for the information loss caused by downsampling. Moreover, we propose a Combined Regularization (CR) loss and Balance-Sampling Strategy (BSS) to alleviate the pixel imbalance problem and improve network convergence. Extensive experiments on two public datasets of the StructSeg2019 challenge demonstrate that our method outperforms state-of-the-art methods, especially with significant advantages in reducing false positives and accurately segmenting small objects. The code is available at https://github.com/shijun18/GTV_AutoSeg.
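The decoupled recognition-then-segmentation idea can be sketched as follows; the screening and segmentation functions here are toy thresholding stand-ins for the paper's networks, and the volume is synthetic:

```python
import numpy as np

def cascade_segment(volume, is_abnormal, segment_slice):
    """Two-stage GTV segmentation: screen each slice first, then run
    pixel-wise segmentation only on slices flagged as abnormal."""
    masks = []
    for s in volume:
        if is_abnormal(s):                               # stage 1: slice-level recognition
            masks.append(segment_slice(s))               # stage 2: pixel-wise segmentation
        else:
            masks.append(np.zeros(s.shape, dtype=bool))  # normal slice -> empty mask
    return np.stack(masks)

# Toy volume: slice 0 is "normal", slice 1 contains a bright 3x3 target region
vol = np.zeros((2, 8, 8))
vol[1, 2:5, 2:5] = 1.0
masks = cascade_segment(vol,
                        is_abnormal=lambda s: s.max() > 0.5,
                        segment_slice=lambda s: s > 0.5)
print(masks.sum(axis=(1, 2)))  # per-slice foreground counts: 0 and 9
```

Because normal slices never reach the segmentation stage, false-positive voxels on tumor-free slices are eliminated by construction, which is the main benefit the decoupling argument above claims.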
Affiliation(s)
- Jun Shi
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China.
- Zhaohui Wang
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Shulan Ruan
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Minfan Zhao
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Ziqi Zhu
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hongyu Kan
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Hong An
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Laoshan Laboratory, Qingdao, 266221, China
- Xudong Xue
- Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430074, China
- Bing Yan
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China
10
Dan Y, Jin W, Yue X, Wang Z. Enhancing medical image segmentation with a multi-transformer U-Net. PeerJ 2024; 12:e17005. [PMID: 38435997] [PMCID: PMC10909362] [DOI: 10.7717/peerj.17005]
Abstract
Various segmentation networks based on the Swin Transformer have shown promise in medical segmentation tasks. Nonetheless, challenges such as lower accuracy and slower training convergence have persisted. To tackle these issues, we introduce a novel approach that combines the Swin Transformer and the Deformable Transformer to enhance overall model performance. We leverage the Swin Transformer's window attention mechanism to capture local feature information and employ the Deformable Transformer to adjust sampling positions dynamically, accelerating model convergence and aligning it more closely with object shapes and sizes. By combining both Transformer modules and incorporating additional skip connections to minimize information loss, our proposed model excels at rapidly and accurately segmenting CT or X-ray lung images. Experimental results demonstrate the model's effectiveness: it surpasses the standalone Swin-Transformer-based Swin-Unet and converges more rapidly under identical conditions, yielding accuracy improvements of 0.7% (to 88.18%) and 2.7% (to 98.01%) on the COVID-19 CT scan lesion segmentation dataset and the Chest X-ray Masks and Labels dataset, respectively. This advancement has the potential to aid medical practitioners in early diagnosis and treatment decision-making.
Affiliation(s)
- Yongping Dan
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Weishou Jin
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
- Xuebin Yue
- Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Japan
- Zhida Wang
- School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China
11
Zhao Z, Du S, Xu Z, Yin Z, Huang X, Huang X, Wong C, Liang Y, Shen J, Wu J, Qu J, Zhang L, Cui Y, Wang Y, Wee L, Dekker A, Han C, Liu Z, Shi Z, Liang C. SwinHR: Hemodynamic-powered hierarchical vision transformer for breast tumor segmentation. Comput Biol Med 2024; 169:107939. [PMID: 38194781] [DOI: 10.1016/j.compbiomed.2024.107939]
Abstract
Accurate and automated segmentation of breast tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a critical role in computer-aided diagnosis and treatment of breast cancer. However, this task is challenging due to random variation in tumor sizes, shapes, and appearances, and the blurred tumor boundaries caused by the inherent heterogeneity of breast cancer. Moreover, the presence of ill-posed artifacts in DCE-MRI further complicates tumor region annotation. To address these challenges, we propose a scheme (named SwinHR) integrating prior DCE-MRI knowledge and temporal-spatial information of breast tumors. The prior DCE-MRI knowledge refers to hemodynamic information extracted from multiple DCE-MRI phases, which provides pharmacokinetic information describing metabolic changes of the tumor cells over the scanning time. The Swin Transformer with hierarchical re-parameterization large kernel architecture (H-RLK) can capture long-range dependencies within DCE-MRI while maintaining computational efficiency through a shifted-window self-attention mechanism. The use of H-RLK extracts high-level features with a wider receptive field, letting the model capture contextual information at different levels of abstraction. Extensive experiments on large-scale datasets validate the effectiveness of our proposed SwinHR scheme, demonstrating its superiority over recent state-of-the-art segmentation methods. A subgroup analysis split by MRI scanner, field strength, and tumor size further verifies its generalization. The source code is released at https://github.com/GDPHMediaLab/SwinHR.
Affiliation(s)
- Zhihe Zhao
- School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Siyao Du
- Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Zeyan Xu
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming, 650118, China
- Zhi Yin
- Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Xiaomei Huang
- Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xin Huang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Shantou University Medical College, Shantou, 515041, China
- Chinting Wong
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Yanting Liang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Jing Shen
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jinrong Qu
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, 450008, China
- Lina Zhang
- Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Yanfen Cui
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Ying Wang
- Department of Medical Ultrasonics, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Leonard Wee
- Clinical Data Science, Faculty of Health Medicine Life Sciences, Maastricht University, Maastricht, 6229 ET, The Netherlands; Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
- Changhong Liang
- School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
12
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells uncontrollably divide and damage body tissues. Hence, detecting cancer at an early stage is essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep-learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.
13
Khorshidi A. Tumor segmentation via enhanced area growth algorithm for lung CT images. BMC Med Imaging 2023; 23:189. [PMID: 37986046] [PMCID: PMC10662793] [DOI: 10.1186/s12880-023-01126-y]
Abstract
BACKGROUND Since lung tumors are in dynamic conditions, the study of tumor growth and its changes is of great importance in primary diagnosis. METHODS An enhanced area growth (EAG) algorithm is introduced to segment lung tumors in 2D and 3D modes on CT images of 60 patients from four different databases, implemented in MATLAB. The early steps of the proposed algorithm are contrast augmentation, determination of color intensity and maximum primary tumor radius, thresholding, designation of start and neighbor points in an array, and averaged modification of the designated points. To determine the new tumor boundaries, the maximum distance from the color-intensity center point of the primary tumor to the modified points is appointed by considering a larger target region and a new threshold. The tumor center is divided into different subsections, and all previous stages are then repeated from newly designated points to define diverse boundaries for the tumor. An interpolation between these boundaries creates a new tumor boundary. After drawing diverse lines from the tumor center at relevant angles, the intersections with the tumor boundaries are fixed for the edge-correction phase. Each of the new regions is annexed to the core region, subject to certain conditions, to achieve a segmented tumor surface. RESULTS Growing the region from multiple grouped starting points produced the desired precision in tumor delineation. The proposed algorithm enhanced tumor identification by more than 16% with a reasonable accuracy acceptance rate, while largely ensuring that the final outcome is independent of the starting point. With a significance difference of p < 0.05, the Dice coefficients were 0.80 ± 0.02 and 0.92 ± 0.03 for the primary and enhanced algorithms, respectively.
Lung area determination, together with automatic thresholding, growth from several starting points, and edge improvement, may reduce human error in radiologists' interpretation of tumor areas and in selection of the algorithm's starting point. CONCLUSIONS The proposed algorithm enhanced tumor detection by more than 18% with a sufficient accuracy acceptance ratio. Since the enhanced algorithm is independent of matrix size and image thickness, it is very likely that it can be easily applied to other contiguous tumor images. TRIAL REGISTRATION PAZHOUHAN, PAZHOUHAN98000032. Registered 4 January 2021, http://pazhouhan.gerums.ac.ir/webreclist/view.action?webreclist_code=19300.
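For context, classical area (region) growing, the baseline the enhanced algorithm builds on, can be sketched in a few lines of Python. This is a generic textbook illustration, not the paper's EAG implementation:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Minimal classical region-growing sketch: starting from `seed`,
    accept 4-neighbours whose intensity is within `tol` of the seed
    intensity, breadth-first, until no more pixels qualify."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= tol:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
    return region
```

The classic weakness visible here, and addressed by the paper, is that the result depends on the single seed; the enhanced algorithm's use of multiple grouped starting points and boundary interpolation is aimed exactly at that dependence.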
Affiliation(s)
- Abdollah Khorshidi
- School of Paramedical, Gerash University of Medical Sciences, P.O. Box: 7441758666, Gerash, Iran.
14
Amorrortu R, Garcia M, Zhao Y, El Naqa I, Balagurunathan Y, Chen DT, Thieu T, Schabath MB, Rollison DE. Overview of approaches to estimate real-world disease progression in lung cancer. JNCI Cancer Spectr 2023; 7:pkad074. [PMID: 37738580] [PMCID: PMC10637832] [DOI: 10.1093/jncics/pkad074]
Abstract
BACKGROUND Randomized clinical trials of novel treatments for solid tumors normally measure disease progression using the Response Evaluation Criteria in Solid Tumors. However, novel, scalable approaches to estimate disease progression using real-world data are needed to advance cancer outcomes research. The purpose of this narrative review is to summarize examples from the existing literature on approaches to estimate real-world disease progression and their relative strengths and limitations, using lung cancer as a case study. METHODS A narrative literature review was conducted in PubMed to identify articles that used approaches to estimate real-world disease progression in lung cancer patients. Data abstracted included data source, approach used to estimate real-world progression, and comparison to a selected gold standard (if applicable). RESULTS A total of 40 articles were identified from 2008 to 2022. Five approaches to estimate real-world disease progression were identified: manual abstraction of medical records, natural language processing of clinical notes and/or radiology reports, treatment-based algorithms, changes in tumor volume, and delta radiomics-based approaches. The accuracy of these progression approaches was assessed using different methods, including correlations between real-world endpoints and overall survival for manual abstraction (Spearman rank ρ = 0.61-0.84) and area under the curve for natural language processing approaches (area under the curve = 0.86-0.96). CONCLUSIONS Real-world disease progression has been measured in several observational studies of lung cancer. However, comparing the accuracy of methods across studies is challenging, in part because of the lack of a gold standard and the different methods used to evaluate accuracy. Concerted efforts are needed to define a gold standard and quality metrics for real-world data.
Affiliation(s)
- Melany Garcia
- Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Yayi Zhao
- Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
- Dung-Tsa Chen
- Department of Biostatistics and Bioinformatics, Moffitt Cancer Center, Tampa, FL, USA
- Thanh Thieu
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, USA
- Matthew B Schabath
- Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
- Dana E Rollison
- Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, USA
15
Wang L, Wang J, Zhu L, Fu H, Li P, Cheng G, Feng Z, Li S, Heng PA. Dual Multiscale Mean Teacher Network for Semi-Supervised Infection Segmentation in Chest CT Volume for COVID-19. IEEE Trans Cybern 2023; 53:6363-6375. [PMID: 37015538] [DOI: 10.1109/tcyb.2022.3223528]
Abstract
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating coronavirus disease 2019 (COVID-19). However, several challenges remain in developing such AI systems: 1) most current COVID-19 infection segmentation methods rely on 2-D CT images, which lack 3-D sequential constraints; 2) existing 3-D CT segmentation methods focus on single-scale representations, which do not achieve multiple receptive field sizes on 3-D volumes; and 3) the emergent outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multiscale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different convolutional neural network (CNN) layers. Second, we use this MDA-CNN as the basic network in a novel dual multiscale mean teacher network (DM²-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multiscale information. Our DM²-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, computing a multiscale consistency loss on unlabeled data that is added to the supervised loss on labeled data from the multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
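The mean-teacher idea underlying this semi-supervised approach, a teacher whose weights are an exponential moving average (EMA) of the student's, plus a consistency loss computable on unlabeled data, can be sketched on plain lists of weights and predictions. This is a generic illustration of the technique, not the paper's network:

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher weight update sketch: the teacher's weights are an
    exponential moving average of the student's weights after each step."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

def consistency_loss(student_pred, teacher_pred):
    """Mean squared difference between student and teacher predictions.
    It needs no labels, so it applies to unlabeled data; multiscale
    variants sum this term over predictions taken at several layers."""
    n = len(student_pred)
    return sum((s - t) ** 2 for s, t in zip(student_pred, teacher_pred)) / n
```

During training, the total loss is the supervised loss on labeled data plus this consistency term on unlabeled data, and `ema_update` is applied after every optimizer step.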
16
M GJ, S B. DeepNet model empowered cuckoo search algorithm for the effective identification of lung cancer nodules. Front Med Technol 2023; 5:1157919. [PMID: 37752910] [PMCID: PMC10518616] [DOI: 10.3389/fmedt.2023.1157919]
Abstract
Introduction Globally, lung cancer is a highly harmful type of cancer. An efficient diagnosis system can enable pathologists to recognize the type and nature of lung nodules and the mode of therapy to increase the patient's chance of survival. Hence, implementing an automatic and reliable system to segment lung nodules from a computed tomography (CT) image is useful in the medical industry. Methods This study develops a novel fully convolutional deep neural network (hereafter called DeepNet) model for segmenting lung nodules from CT scans. This model includes an encoder/decoder network that achieves pixel-wise image segmentation. The encoder network exploits a Visual Geometry Group (VGG-19) model as a base architecture, while the decoder network exploits 16 upsampling and deconvolution modules. The encoder used in this model has a very flexible structural design that can be modified and trained for any resolution based on the size of input scans. The decoder network upsamples and maps the low-resolution attributes of the encoder. Thus, there is a considerable drop in the number of variables used for the learning process, as the network recycles the pooling indices of the encoder for segmentation. The thresholding method and the cuckoo search algorithm determine the most useful features when categorizing cancer nodules. Results and discussion The effectiveness of the intended DeepNet model is carefully assessed on the real-world database known as The Cancer Imaging Archive (TCIA) dataset, and its effectiveness is demonstrated by comparing its performance with that of other modern segmentation models in terms of selected performance measures. The empirical analysis reveals that DeepNet significantly outperforms other prevalent segmentation algorithms, with a volume error of 0.962 ± 0.023%, a dice similarity coefficient of 0.968 ± 0.011, a Jaccard similarity index of 0.856 ± 0.011, and an average processing time of 0.045 ± 0.005 s.
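The decoder's recycling of encoder pooling indices (SegNet-style unpooling), which the abstract credits for the drop in learnable variables, can be illustrated with a 1-D Python sketch. This is a generic illustration of the mechanism, not the DeepNet code:

```python
def max_pool_with_indices(row):
    """1-D max pooling (window 2, stride 2) that also records the argmax
    positions, as index-preserving encoders do."""
    pooled, indices = [], []
    for i in range(0, len(row) - 1, 2):
        if row[i] >= row[i + 1]:
            pooled.append(row[i]); indices.append(i)
        else:
            pooled.append(row[i + 1]); indices.append(i + 1)
    return pooled, indices

def max_unpool(pooled, indices, length):
    """Decoder-side unpooling: place each pooled value back at its
    recorded position and leave the rest zero. No learned upsampling
    weights are needed, which is where the parameter saving comes from."""
    out = [0] * length
    for v, i in zip(pooled, indices):
        out[i] = v
    return out
```

In a real network the same mechanism runs per channel on 2-D feature maps, but the principle, reusing argmax locations instead of learning an upsampling operator, is identical.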
Affiliation(s)
- Grace John M
- Department of Electronics and Communication, Karpagam Academy of Higher Education, Coimbatore, India
17
Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023; 164:107321. [PMID: 37595518] [DOI: 10.1016/j.compbiomed.2023.107321]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed; then, recent open-source 2D and 3D medical segmentation models are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. From the analysis of the experimental data, the following conclusions are drawn: (1) in the pulmonary nodule segmentation task, the DSC performance of 2D segmentation models is generally better than that of 3D segmentation models; (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence; (3) higher segmentation accuracy can be achieved on better-quality CT images; and (4) good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
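The DSC metric used throughout these comparisons is straightforward to compute; a minimal Python sketch over flat binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), between
    two binary masks given as flat 0/1 lists. Two empty masks are
    treated as a perfect match."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

A DSC of 1.0 means exact overlap with the ground-truth mask; 0.0 means none.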
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Wujun Jiang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.
- Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China.
18
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023; 50:4854-4870. [PMID: 36856092] [PMCID: PMC11098147] [DOI: 10.1002/mp.16320]
Abstract
BACKGROUND Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥3+4 from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The networks include: multiple resolution residually connected network (MRRN), MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, fast panoptic segmentation (FPSnet), and fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, the segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04).
FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS generalized to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
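Alongside DSC, the balanced F1 measure of detection accuracy combines lesion-level precision and recall. A minimal sketch of the generic F1 formula (not the paper's evaluation code) over counts of true positives, false positives, and false negatives:

```python
def f1_detection(tp, fp, fn):
    """Detection F1 sketch: harmonic mean of precision and recall over
    lesion-level true positives, false positives, and false negatives.
    Degenerate zero-count cases return 0.0."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Because it is a harmonic mean, F1 is pulled down by whichever of precision or recall is worse, so a detector cannot score well by over- or under-calling lesions alone.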
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
19
Dunn B, Pierobon M, Wei Q. Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis. Bioengineering (Basel) 2023; 10:690. [PMID: 37370621] [DOI: 10.3390/bioengineering10060690]
Abstract
Artificial intelligence and emerging data science techniques are being leveraged to interpret medical image scans. Traditional image analysis relies on visual interpretation by a trained radiologist, which is time-consuming and can, to some degree, be subjective. The development of reliable, automated diagnostic tools is a key goal of radiomics, a fast-growing research field which combines medical imaging with personalized medicine. Radiomic studies have demonstrated potential for accurate lung cancer diagnoses and prognostications. The practice of delineating the tumor region of interest, known as segmentation, is a key bottleneck in the development of generalized classification models. In this study, the incremental multiple resolution residual network (iMRRN), a publicly available and trained deep learning segmentation model, was applied to automatically segment CT images collected from 355 lung cancer patients included in the dataset "Lung-PET-CT-Dx", obtained from The Cancer Imaging Archive (TCIA), an open-access source for radiological images. We report a failure rate of 4.35% when using the iMRRN to segment tumor lesions within plain CT images in the lung cancer CT dataset. Seven classification algorithms were trained on the extracted radiomic features and tested for their ability to classify different lung cancer subtypes. Over-sampling was used to handle unbalanced data. Chi-square tests revealed the higher order texture features to be the most predictive when classifying lung cancers by subtype. The support vector machine showed the highest accuracy, 92.7% (0.97 AUC), when classifying three histological subtypes of lung cancer: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The results demonstrate the potential of AI-based computer-aided diagnostic tools to automatically diagnose subtypes of lung cancer by coupling deep learning image segmentation with supervised classification. 
Our study demonstrated the integrated application of existing AI techniques in the non-invasive and effective diagnosis of lung cancer subtypes, and also shed light on several practical issues concerning the application of AI in biomedicine.
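The abstract above mentions over-sampling to handle unbalanced subtype data before training the classifiers. As a minimal illustration (not the authors' code; the feature table and class counts below are hypothetical), random over-sampling can be sketched in plain numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y):
    """Duplicate minority-class rows at random until every class
    matches the majority-class count (a common remedy for
    unbalanced radiomic datasets)."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        extra = rng.choice(c_idx, size=target - c_idx.size, replace=True)
        idx.extend(c_idx)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]

# Toy radiomic feature table: 3 subtypes with unequal counts.
X = rng.normal(size=(60, 5))
y = np.array([0] * 40 + [1] * 15 + [2] * 5)
X_bal, y_bal = random_oversample(X, y)
```

After balancing, each subtype contributes equally to the classifier's training set, which is the effect the study relies on before fitting its seven classification algorithms.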
Affiliation(s)
- Bryce Dunn: Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Mariaelena Pierobon: School of Systems Biology, Center for Applied Proteomics and Molecular Medicine, George Mason University, Fairfax, VA 22030, USA
- Qi Wei: Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA

20
Mary Jaya VJ, Krishnakumar S. Multi-classification approach for lung nodule detection and classification with proposed texture feature in X-ray images. MULTIMEDIA TOOLS AND APPLICATIONS 2023:1-28. [PMID: 37362672 PMCID: PMC10188326 DOI: 10.1007/s11042-023-15281-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Revised: 10/22/2022] [Accepted: 04/06/2023] [Indexed: 06/28/2023]
Abstract
Lung cancer is one of the most widespread and lethal cancers worldwide. Research nevertheless indicates that earlier detection considerably improves the chances of survival. Using X-rays and Computed Tomography (CT) scans, radiologists can identify hazardous nodules at an early stage; however, as more patients undergo these examinations, the radiologists' workload rises. Computer Assisted Diagnosis (CAD)-based detection systems can identify these nodules automatically and could help radiologists reduce their workload, but they tend to suffer from lower sensitivity and a higher count of false positives. The proposed work introduces a new approach for Lung Nodule (LN) detection. First, Histogram Equalization (HE) is performed during pre-processing. Next, improved Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH)-based segmentation is applied. Then, features including the "Gray Level Run-Length Matrix (GLRM), Gray Level Co-Occurrence Matrix (GLCM), and the proposed Local Vector Pattern (LVP)" are extracted. These features are categorized by an optimized Convolutional Neural Network (CNN), which labels images as nodule or non-nodule. Subsequently, a Long Short-Term Memory (LSTM) network categorizes nodule type (benign, malignant, or normal). The CNN weights are fine-tuned by the Chaotic Population-based Beetle Swarm Algorithm (CP-BSA). Finally, the superiority of the proposed approach is confirmed across various measures: the developed approach exhibited a high precision of 0.9575 in the best-case scenario and a high sensitivity of 0.9646 in the mean-case scenario.
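Among the texture families this abstract lists, the Gray Level Co-Occurrence Matrix is the most self-contained to illustrate. A minimal numpy sketch (not the authors' implementation; the tiny image and offset are hypothetical) that counts horizontal gray-level pairs and derives a contrast feature:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix: counts how often gray level j
    is found at offset (dy, dx) from gray level i."""
    M = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

# 3-level toy image; the horizontal GLCM summarizes its texture.
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
M = glcm(img, levels=3)
# Contrast: large when neighbouring pixels differ strongly.
contrast = sum(M[i, j] * (i - j) ** 2 for i in range(3) for j in range(3))
```

Texture statistics such as this contrast value (and run-length analogues) are the kind of hand-crafted features the optimized CNN then consumes.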
Affiliation(s)
- Mary Jaya VJ: Department of Computer Science, Assumption Autonomous College, Changanassery, Kerala, India
- Krishnakumar S: Department of Electronics, School of Technology and Applied Sciences, Mahatma Gandhi University Research Centre, Kochi, Kerala, India

21
Qiao P, Li H, Song G, Han H, Gao Z, Tian Y, Liang Y, Li X, Zhou SK, Chen J. Semi-Supervised CT Lesion Segmentation Using Uncertainty-Based Data Pairing and SwapMix. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1546-1562. [PMID: 37015649 DOI: 10.1109/tmi.2022.3232572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Semi-supervised learning (SSL) methods have shown strong performance in addressing data shortage in medical image segmentation. However, existing SSL methods still suffer from unreliable predictions on unannotated data because manual annotations are lacking for it. In this paper, we propose an unreliability-diluted consistency training (UDiCT) mechanism that dilutes unreliability in SSL by assembling reliable annotated data into unreliable unannotated data. Specifically, we first propose an uncertainty-based data pairing module that pairs annotated with unannotated data under a complementary uncertainty pairing rule, which prevents two hard samples from being paired together. Secondly, we develop SwapMix, a mixed-sample data augmentation method, to integrate annotated data into unannotated data so that the model is trained in a low-unreliability manner. Finally, UDiCT is trained by minimizing a supervised loss and an unreliability-diluted consistency loss, which makes our model robust to diverse backgrounds. Extensive experiments on three chest CT datasets show the effectiveness of our method for semi-supervised CT lesion segmentation.
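The mixed-sample-plus-consistency idea in this abstract can be sketched generically. The toy below is not the paper's UDiCT/SwapMix code: it pastes a random patch from an "annotated" image into an "unannotated" one and scores a mean-squared consistency loss between two predictions of the mixed image; all shapes and the patch rule are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

def swap_patch(labeled, unlabeled, size=8):
    """Toy mixed-sample augmentation: paste a random patch from an
    annotated image into an unannotated one (a simplified stand-in
    for the SwapMix idea of diluting unreliable data)."""
    out = unlabeled.copy()
    y = rng.integers(0, labeled.shape[0] - size)
    x = rng.integers(0, labeled.shape[1] - size)
    out[y:y + size, x:x + size] = labeled[y:y + size, x:x + size]
    return out

def consistency_loss(p_a, p_b):
    """Mean squared difference between two predictions of the same
    mixed image; training drives this toward zero."""
    return float(np.mean((p_a - p_b) ** 2))

labeled = rng.random((32, 32))
unlabeled = rng.random((32, 32))
mixed = swap_patch(labeled, unlabeled)
loss = consistency_loss(mixed, mixed)   # identical predictions give zero loss
```

In the actual method this consistency term is combined with a supervised loss on the annotated portion, so reliable pixels anchor the training signal.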
22
Ebadi N, Li R, Das A, Roy A, Nikos P, Najafirad P. CBCT-guided adaptive radiotherapy using self-supervised sequential domain adaptation with uncertainty estimation. Med Image Anal 2023; 86:102800. [PMID: 37003101 DOI: 10.1016/j.media.2023.102800] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 01/29/2023] [Accepted: 03/14/2023] [Indexed: 03/17/2023]
Abstract
Adaptive radiotherapy (ART) is an advanced technology in modern cancer treatment that incorporates progressive changes in patient anatomy into active plan/dose adaptation during fractionated treatment. However, its clinical application relies on accurate segmentation of tumors on low-quality on-board images, which poses challenges for both manual delineation and deep learning-based models. In this paper, we propose a novel sequence-transduction deep neural network with an attention mechanism to learn the shrinkage of the tumor from patients' weekly cone-beam computed tomography (CBCT). We design a self-supervised domain adaptation (SDA) method to learn and adapt the rich textural and spatial features from pre-treatment high-quality computed tomography (CT) to the CBCT modality, addressing the poor image quality and lack of labels. We also provide uncertainty estimation for the sequential segmentation, which aids not only risk management in treatment planning but also the calibration and reliability of the model. Experimental results on a clinical non-small cell lung cancer (NSCLC) dataset with sixteen patients and ninety-six longitudinal CBCTs show that our model correctly learns the weekly deformation of the tumor over time with an average Dice score of 0.92 on the immediate next step, and can predict multiple steps (up to 5 weeks) ahead for future patient treatments with an average Dice score reduction of 0.05. By incorporating the tumor shrinkage predictions into a weekly re-planning strategy, the proposed method demonstrates a decrease in the risk of radiation-induced pneumonitis of up to 35% while maintaining a high tumor control probability.
Affiliation(s)
- Nima Ebadi: Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America
- Ruiqi Li: Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America
- Arun Das: Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America; Department of Medicine, The University of Pittsburgh, Pittsburgh, PA 15260, United States of America
- Arkajyoti Roy: Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America
- Papanikolaou Nikos: Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America
- Peyman Najafirad: Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America

23
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:cancers15092573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023] Open
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Here we summarize recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging, addressing the benefits and challenges of the resulting opportunities with examples. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, and the rigorous assessment of the accuracy and reliability of quantitative CT and MR imaging data for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in the radiology and oncology fields. We illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast-modality images, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions, and for understanding the tumor environment, will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow, AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Affiliation(s)
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Akash D Shah: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Oguz Akin: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard K G Do: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amaresha Shridhar Konar: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Vaios Hatzoglou: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Usman Mahmood: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Nancy Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard J Wong: Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amita Shukla-Dave: Department of Medical Physics and Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA

24
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. SENSING AND IMAGING 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Lung cancer is a high-risk disease affecting people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer considerably improves a patient's chances of survival, an accurate and efficient nodule detection system is essential. Automatic lung nodule recognition reduces radiologists' workload as well as the risk of misdiagnosis and missed diagnoses. Hence, this article develops a new lung nodule detection model with four stages: image pre-processing, segmentation, feature extraction, and classification. Pre-processing is the first step, in which the input image is subjected to a series of operations. The "Otsu Thresholding model" is then used to segment the pre-processed images. In the third stage, LBP features are extracted and subsequently classified by an optimized Convolutional Neural Network (CNN), whose activation function and convolutional layer count are tuned by a proposed algorithm known as Improved Moth Flame Optimization (IMFO). Finally, the merit of the scheme is validated through analysis of several measures. In particular, the accuracy of the proposed work is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% superior to the extant SVM, KNN, CNN, MFO, WTEEB, and GWO + FRVM methods, respectively.
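Otsu thresholding, the segmentation step named in this abstract, is a compact, well-defined algorithm: it picks the histogram threshold that maximizes between-class variance. A self-contained numpy sketch (the synthetic bimodal "scan" below is hypothetical, not the paper's data):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the gray level that maximises the
    between-class variance w0*w1*(mu0 - mu1)^2 of the histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic bimodal intensities: dark background near 50, bright nodule near 200.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
t = otsu_threshold(img)
```

For well-separated modes like these, the chosen threshold falls between the two intensity clusters, splitting nodule from background.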
Affiliation(s)
- Disha Dua: Indira Gandhi Delhi Technical University for Women, Delhi, India

25
Thompson HM, Kim JK, Jimenez-Rodriguez RM, Garcia-Aguilar J, Veeraraghavan H. Deep Learning-Based Model for Identifying Tumors in Endoscopic Images From Patients With Locally Advanced Rectal Cancer Treated With Total Neoadjuvant Therapy. Dis Colon Rectum 2023; 66:383-391. [PMID: 35358109 PMCID: PMC10185333 DOI: 10.1097/dcr.0000000000002295] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
BACKGROUND A barrier to the widespread adoption of watch-and-wait management for locally advanced rectal cancer is the inaccuracy and variability of identifying tumor response endoscopically in patients who have completed total neoadjuvant therapy (chemoradiotherapy and systemic chemotherapy). OBJECTIVE This study aimed to develop a novel method of identifying the presence or absence of a tumor in endoscopic images using deep convolutional neural network-based automatic classification and to assess the accuracy of the method. DESIGN In this prospective pilot study, endoscopic images obtained before, during, and after total neoadjuvant therapy were grouped on the basis of tumor presence. A convolutional neural network was modified for probabilistic classification of tumor versus no tumor and trained with an endoscopic image set. After training, a testing endoscopic imaging set was applied to the network. SETTINGS The study was conducted at a comprehensive cancer center. PATIENTS Images were analyzed from 109 patients who were diagnosed with locally advanced rectal cancer between December 2012 and July 2017 and who underwent total neoadjuvant therapy. MAIN OUTCOME MEASURES The main outcomes were accuracy of identifying tumor presence or absence in endoscopic images measured as area under the receiver operating characteristic for the training and testing image sets. RESULTS A total of 1392 images were included; 1099 images (468 of no tumor and 631 of tumor) were for training and 293 images (151 of no tumor and 142 of tumor) for testing. The area under the receiver operating characteristic for training and testing was 0.83. LIMITATIONS The study had a limited number of images in each set and was conducted at a single institution. CONCLUSIONS The convolutional neural network method is moderately accurate in distinguishing tumor from no tumor. Further research should focus on validating the convolutional neural network on a large image set. 
See Video Abstract at http://links.lww.com/DCR/B959 .
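The study's headline number is an area under the ROC curve of 0.83 for tumor-vs-no-tumor classification. AUC can be computed directly from scores via the Mann-Whitney rank formulation; a short numpy sketch (the scores and labels below are hypothetical, not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive scores higher than a
    random negative (ties count one half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical network probabilities for 8 endoscopic images.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 0, 1, 0, 0])
a = auc(scores, labels)   # 13 of 16 positive/negative pairs correctly ordered
```

This rank view makes clear why AUC is insensitive to the choice of a single operating threshold.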
Affiliation(s)
- Hannah M Thompson: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Jin K Kim: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Julio Garcia-Aguilar: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, New York
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York

26
Lee J, Lee MJ, Kim BS, Hong H. Automated lung tumor segmentation robust to various tumor sizes using a consistency learning-based multi-scale dual-attention network. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:879-892. [PMID: 37424487 DOI: 10.3233/xst-230003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/11/2023]
Abstract
BACKGROUND It is often difficult to automatically segment lung tumors due to the large tumor size variation ranging from less than 1 cm to greater than 7 cm depending on the T-stage. OBJECTIVE This study aims to accurately segment lung tumors of various sizes using a consistency learning-based multi-scale dual-attention network (CL-MSDA-Net). METHODS To avoid under- and over-segmentation caused by different ratios of lung tumors and surrounding structures in the input patch according to the size of the lung tumor, a size-invariant patch is generated by normalizing the ratio to the average size of the lung tumors used for the training. Two input patches, a size-invariant patch and size-variant patch are trained on a consistency learning-based network consisting of dual branches that share weights to generate a similar output for each branch with consistency loss. The network of each branch has a multi-scale dual-attention module that learns image features of different scales and uses channel and spatial attention to enhance the scale-attention ability to segment lung tumors of different sizes. RESULTS In experiments with hospital datasets, CL-MSDA-Net showed an F1-score of 80.49%, recall of 79.06%, and precision of 86.78%. This resulted in 3.91%, 3.38%, and 2.95% higher F1-scores than the results of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. In experiments with the NSCLC-Radiomics datasets, CL-MSDA-Net showed an F1-score of 71.7%, recall of 68.24%, and precision of 79.33%. This resulted in 3.66%, 3.38%, and 3.13% higher F1-scores than the results of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. CONCLUSIONS CL-MSDA-Net improves the segmentation performance on average for tumors of all sizes with significant improvements especially for small sized tumors.
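The F1-score, recall, and precision reported in this abstract come from pixel-wise counts on predicted versus ground-truth masks (for binary masks, F1 equals the Dice score). A small numpy sketch with hypothetical 8x8 masks, not the study's data:

```python
import numpy as np

def mask_metrics(pred, gt):
    """Pixel-wise precision, recall, and F1 (identical to Dice for
    binary masks) from boolean arrays."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True       # 16-pixel "tumour"
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True     # prediction shifted down one row
p, r, f1 = mask_metrics(pred, gt)
```

The one-row shift costs four false positives and four false negatives, so precision, recall, and F1 all land at 0.75, which is how small spatial errors translate into the percentage gaps the abstract reports.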
Affiliation(s)
- Jumin Lee: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Min-Jin Lee: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Helen Hong: Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea

27
Zhao G, Liang K, Pan C, Zhang F, Wu X, Hu X, Yu Y. Graph Convolution Based Cross-Network Multiscale Feature Fusion for Deep Vessel Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:183-195. [PMID: 36112564 DOI: 10.1109/tmi.2022.3207093] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Vessel segmentation is widely used to help with vascular disease diagnosis. Vessels reconstructed using existing methods are often not sufficiently accurate to meet clinical use standards. This is because 3D vessel structures are highly complicated and exhibit unique characteristics, including sparsity and anisotropy. In this paper, we propose a novel hybrid deep neural network for vessel segmentation. Our network consists of two cascaded subnetworks performing initial and refined segmentation respectively. The second subnetwork further has two tightly coupled components, a traditional CNN-based U-Net and a graph U-Net. Cross-network multi-scale feature fusion is performed between these two U-shaped networks to effectively support high-quality vessel segmentation. The entire cascaded network can be trained from end to end. The graph in the second subnetwork is constructed according to a vessel probability map as well as appearance and semantic similarities in the original CT volume. To tackle the challenges caused by the sparsity and anisotropy of vessels, a higher percentage of graph nodes are distributed in areas that potentially contain vessels while a higher percentage of edges follow the orientation of potential nearby vessels. Extensive experiments demonstrate our deep network achieves state-of-the-art 3D vessel segmentation performance on multiple public and in-house datasets.
28
Rehman A, Butt MA, Zaman M. Attention Res-UNet. INTERNATIONAL JOURNAL OF DECISION SUPPORT SYSTEM TECHNOLOGY 2023. [DOI: 10.4018/ijdsst.315756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
During a dermoscopy examination, accurate and automatic skin lesion detection and segmentation can assist medical experts in resecting problematic areas and decrease the risk of death from skin cancer. To develop a fully automated deep learning model for skin lesion segmentation, the authors design Attention Res-UNet by incorporating residual connections, squeeze-and-excitation units, atrous spatial pyramid pooling, and attention gates into the basic UNet architecture. The model uses the focal Tversky loss function to achieve a better trade-off between recall and precision when training on smaller lesions while improving the overall outcome of the proposed model. Experiments demonstrate that this design, when evaluated on the publicly available ISIC 2018 skin lesion segmentation dataset, outperforms existing standard methods with a Dice score of 89.14% and an IoU of 81.16%, and achieves a better trade-off between precision and recall. The authors also performed statistical tests against other standard methods and showed that the model's improvements are statistically significant.
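The focal Tversky loss named in this abstract has a compact closed form. In one common formulation (used here as an illustration; the alpha/beta/gamma values and toy masks are hypothetical defaults, not necessarily the paper's settings), the Tversky index weights false negatives against false positives and the focal exponent emphasizes hard, small lesions:

```python
import numpy as np

def focal_tversky_loss(pred, gt, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: TI = TP / (TP + alpha*FN + beta*FP);
    loss = (1 - TI) ** gamma. alpha > beta penalises missed lesion
    pixels more than spurious ones; gamma < 1 focuses on hard cases."""
    tp = np.sum(pred * gt)
    fn = np.sum((1 - pred) * gt)
    fp = np.sum(pred * (1 - gt))
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - ti) ** gamma

gt = np.array([0, 0, 1, 1, 1, 0], dtype=float)
perfect = focal_tversky_loss(gt, gt)          # exact prediction
miss = focal_tversky_loss(np.zeros(6), gt)    # lesion missed entirely
```

A perfect prediction gives zero loss, while missing the lesion entirely drives the loss toward its maximum, which is the recall-favoring behavior the abstract describes.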
29
Zhang F, Wang Q, Fan E, Lu N, Chen D, Jiang H, Wang Y. Automatic segmentation of the tumor in nonsmall-cell lung cancer by combining coarse and fine segmentation. Med Phys 2022. [PMID: 36514264 DOI: 10.1002/mp.16158] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 08/05/2022] [Accepted: 11/26/2022] [Indexed: 12/15/2022] Open
Abstract
OBJECTIVES Radiotherapy plays an important role in the treatment of non-small-cell lung cancer (NSCLC). Accurate delineation of the tumor is the key to successful radiotherapy. Compared with manual delineation, which is time-consuming and laborious, automatic segmentation methods based on deep learning can greatly improve treatment efficiency. METHODS In this paper, we introduce an automatic segmentation method for NSCLC that combines coarse and fine segmentation. The coarse segmentation network is the first level, identifying the rough region of the tumor. In this network, according to the tissue structure distribution of the thoracic cavity where the tumor is located, we designed a competition method between tumors and organs at risk (OARs), which increases the proportion of the identified tumor covering the ground truth and reduces false identification. The fine segmentation network is the second level, performing precise segmentation on the results of the coarse level. The two networks are trained independently. At inference, the coarse segmentation results undergo morphological processing (small-scale erosion followed by large-scale dilation) and are sent to the fine segmentation network as input, so that the two networks complement each other. RESULTS In the experiment, CT images of 200 patients with NSCLC were used to train the network and CT images of 60 patients were used for testing. Our method produced a Dice similarity coefficient of 0.78 ± 0.10. CONCLUSIONS The experimental results show that the proposed method can accurately segment NSCLC tumors and can also support clinical diagnosis and treatment.
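The morphological bridge between the two networks (a small erosion to cut spurs, then a larger dilation so the region handed to the fine network safely covers the tumor) can be sketched with plain numpy shift-based morphology. This is a toy illustration with hypothetical iteration counts and mask sizes, not the paper's pipeline:

```python
import numpy as np

def dilate(mask, it=1):
    """Binary dilation with a 3x3 cross structuring element,
    implemented via array shifts (no wrap-around)."""
    out = mask.copy()
    for _ in range(it):
        m = out.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
    return out

def erode(mask, it=1):
    """Erosion as dilation of the complement."""
    return ~dilate(~mask, it)

# Toy coarse prediction: a 6x6 tumour blob.
coarse = np.zeros((16, 16), dtype=bool)
coarse[5:11, 5:11] = True
# Small-scale erosion, then larger dilation -> ROI for the fine network.
refined_roi = dilate(erode(coarse, 1), 3)
```

Because the dilation radius exceeds the erosion radius, the resulting ROI still contains every pixel of the original coarse mask while adding a safety margin around it.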
Affiliation(s)
- Fuli Zhang: Radiation Oncology Department, The Seventh Medical Center of Chinese PLA General Hospital, Beijing, China
- Qiusheng Wang: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Enyu Fan: School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Na Lu: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Diandian Chen: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Huayong Jiang: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China
- Yadi Wang: Senior Department of Oncology, The Fifth Medical Center of PLA General Hospital, Beijing, China

30
Artificial intelligence for prediction of response to cancer immunotherapy. Semin Cancer Biol 2022; 87:137-147. [PMID: 36372326 DOI: 10.1016/j.semcancer.2022.11.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 11/02/2022] [Accepted: 11/08/2022] [Indexed: 11/13/2022]
Abstract
Artificial intelligence (AI) refers to the application of machines that imitate intelligent behavior to solve complex tasks with minimal human intervention, encompassing machine learning and deep learning. The use of AI in medicine improves health-care systems in multiple areas such as diagnostic confirmation, risk stratification, analysis, prognosis prediction, treatment surveillance, and virtual health support, and has considerable potential to revolutionize and reshape medicine. In immunotherapy, AI has been applied to uncover underlying immune signatures associated with response indirectly, as well as to predict responses to immunotherapy directly. AI-based analysis of high-throughput sequencing and medical images can provide useful information for managing cancer immunotherapy, given its strengths in selecting appropriate subjects, improving therapeutic regimens, and predicting individualized prognosis. In the present review, we evaluate a broad framework of AI-based computational approaches for predicting response to cancer immunotherapy in both indirect and direct manners. Furthermore, we summarize our perspectives on the challenges and opportunities of further AI applications in cancer immunotherapy relating to clinical practicability.
31
Liu S, Tang X, Cai T, Zhang Y, Wang C. COVID-19 CT image segmentation based on improved Res2Net. Med Phys 2022; 49:7583-7595. [PMID: 35916116 PMCID: PMC9538682 DOI: 10.1002/mp.15882] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 06/27/2022] [Accepted: 07/18/2022] [Indexed: 01/08/2023] Open
Abstract
PURPOSE Coronavirus disease 2019 (COVID-19) threatens the health of people worldwide and has brought great losses to our economy and society. Computed tomography (CT) image segmentation can help clinicians quickly identify COVID-19-infected regions, and accurate segmentation of the infected area can contribute to screening confirmed cases. METHODS We designed a segmentation network for COVID-19-infected regions in CT images. To begin with, multilayered features were extracted by the backbone network of Res2Net. Subsequently, edge features of the infected regions in the low-level feature f2 were extracted by the edge attention module. Second, we carefully designed the structure of the attention position module (APM) to extract the high-level feature f5 and detect infected regions. Finally, we proposed a context exploration module consisting of two parallel explore blocks, which can remove some false positives and false negatives to reach more accurate segmentation results. RESULTS Experimental results show that, on the public COVID-19 dataset, the Dice, sensitivity, specificity, $S_\alpha$, $E_\phi^{mean}$, and mean absolute error (MAE) of our method are 0.755, 0.751, 0.959, 0.795, 0.919, and 0.060, respectively. Compared with the latest COVID-19 segmentation model Inf-Net, the Dice similarity coefficient of our model increased by 7.3% and the sensitivity (Sen) by 5.9%, while the MAE dropped by 2.2%. CONCLUSIONS Our method performs well on COVID-19 CT image segmentation and is portable enough to suit various current popular networks. In a word, our method can help screen people infected with COVID-19 effectively and save the labor of clinicians and radiologists.
Affiliation(s)
- Shangwang Liu
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Engineering Lab of Intelligence Business & Internet of Things, Xinxiang, Henan, China
- Xiufang Tang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Tongbo Cai
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Yangyang Zhang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
- Changgeng Wang
- School of Computer and Information Engineering, Henan Normal University, Xinxiang, Henan, China
|
32
|
Tang T, Li F, Jiang M, Xia X, Zhang R, Lin K. Improved Complementary Pulmonary Nodule Segmentation Model Based on Multi-Feature Fusion. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1755. [PMID: 36554161 PMCID: PMC9778431 DOI: 10.3390/e24121755] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/23/2022] [Accepted: 11/28/2022] [Indexed: 06/17/2023]
Abstract
Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in the automatic segmentation of lung nodules. However, they are still challenged by the large diversity of segmentation targets and the small inter-class variances between the nodule and its surrounding tissues. To tackle this issue, we propose a feature-complementary network modeled on the process of clinical diagnosis, which makes full use of the complementarity and mutual facilitation among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of global features of nodules in segmentation and propose a cross-scale weighted high-level feature decoder module. Then, we develop a low-level feature decoder module for edge feature refinement. Finally, we construct a complementary module so that the two kinds of information complement and promote each other. Furthermore, we weight pixels located at the nodule edge in the loss function and add an edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. The experimental results demonstrate that our model achieves robust pulmonary nodule segmentation and more accurate edge segmentation.
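The edge weighting on the loss function described above can be illustrated with a minimal weighted binary cross-entropy in which edge pixels receive a larger weight. The weight values and the `weighted_bce` helper are illustrative assumptions, not the paper's implementation.

```python
import math

def weighted_bce(probs, gt, weights):
    """Pixel-weighted binary cross-entropy over flat lists."""
    eps = 1e-7  # guards against log(0)
    total = sum(w * -(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps))
                for p, g, w in zip(probs, gt, weights))
    return total / sum(weights)

probs   = [0.9, 0.4, 0.6, 0.1]   # predicted foreground probabilities
gt      = [1,   1,   0,   0]     # ground-truth labels
weights = [1,   3,   3,   1]     # pixels 1 and 2 sit on the nodule edge

loss_edge = weighted_bce(probs, gt, weights)        # edge pixels up-weighted
loss_flat = weighted_bce(probs, gt, [1, 1, 1, 1])   # uniform weighting
```

Up-weighting the two edge pixels raises the loss contribution of exactly the pixels the paper argues are hardest to segment, so here `loss_edge` exceeds `loss_flat`.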
Affiliation(s)
- Tiequn Tang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Xunpeng Xia
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Rongfu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kailin Lin
- Fudan University Shanghai Cancer Center, Shanghai 200032, China
|
33
|
Zhang X, Zhang B, Deng S, Meng Q, Chen X, Xiang D. Cross modality fusion for modality-specific lung tumor segmentation in PET-CT images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac994e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 10/11/2022] [Indexed: 11/09/2022]
Abstract
Although positron emission tomography-computed tomography (PET-CT) images have been widely used, accurately segmenting the lung tumor remains challenging. Respiration, movement, and imaging modality lead to large discrepancies in lung tumor appearance between PET images and CT images. To overcome these difficulties, a novel network is designed to simultaneously obtain the corresponding lung tumors from PET images and CT images. The proposed network can fuse the complementary information while preserving the modality-specific features of PET images and CT images. Because PET and CT images are complementary, the two modalities should be fused for automatic lung tumor segmentation. Therefore, cross-modality decoding blocks are designed to extract modality-specific features of PET images and CT images under the constraints of the other modality. An edge consistency loss is also designed to address the blurred boundaries of PET images and CT images. The proposed method is tested on 126 PET-CT images with non-small cell lung cancer, and Dice similarity coefficient scores of lung tumor segmentation reach 75.66 ± 19.42 in CT images and 79.85 ± 16.76 in PET images, respectively. Extensive comparisons with state-of-the-art lung tumor segmentation methods have also been performed to demonstrate the superiority of the proposed network.
|
34
|
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662 PMCID: PMC9688236 DOI: 10.3390/cancers14225569] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/11/2022] [Accepted: 11/11/2022] [Indexed: 11/15/2022] Open
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in monitoring lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, including the inability to classify cancer images automatically, which makes them unsuitable for patients with other pathologies. It is therefore urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textual data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
|
35
|
Xie H, Chen Z, Deng J, Zhang J, Duan H, Li Q. Automatic segmentation of the gross target volume in radiotherapy for lung cancer using transresSEUnet 2.5D Network. J Transl Med 2022; 20:524. [PMID: 36371220 PMCID: PMC9652981 DOI: 10.1186/s12967-022-03732-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/28/2022] [Indexed: 11/15/2022] Open
Abstract
Objective This paper proposes a method using the TransResSEUnet2.5D network for accurate automatic segmentation of the Gross Target Volume (GTV) in radiotherapy for lung cancer. Methods A total of 11,370 computed tomography (CT) images from 137 cases of lung cancer patients under radiotherapy, contoured by radiotherapists, were used as the training set; 1642 CT images from 20 cases were used as the validation set, and 1685 CT images from 20 cases were used as the test set. The proposed network was tuned and trained to obtain the best segmentation model, and its performance was measured by the Dice Similarity Coefficient (DSC) and the 95% Hausdorff distance (HD95). Lastly, to demonstrate the accuracy of the automatic segmentation of the proposed network, all possible mirrors of the input images were put into Unet2D, Unet2.5D, Unet3D, ResSEUnet3D, ResSEUnet2.5D, and TransResUnet2.5D, and their respective segmentation performances were compared and assessed. Results The segmentation results on the test set showed that TransResSEUnet2.5D performed best on the DSC (84.08 ± 0.04)%, HD95 (8.11 ± 3.43) mm and time (6.50 ± 1.31) s metrics compared to the other networks. Conclusions The TransResSEUnet2.5D proposed in this study can automatically segment the GTV in radiotherapy for lung cancer patients with greater accuracy.
|
36
|
Zhang X, Jiang R, Huang P, Wang T, Hu M, Scarsbrook AF, Frangi AF. Dynamic feature learning for COVID-19 segmentation and classification. Comput Biol Med 2022; 150:106136. [PMID: 36240599 PMCID: PMC9523910 DOI: 10.1016/j.compbiomed.2022.106136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/25/2022] [Accepted: 09/18/2022] [Indexed: 11/28/2022]
Abstract
Since December 2019, coronavirus SARS-CoV-2 (COVID-19) has rapidly developed into a global epidemic, with millions of patients affected worldwide. As part of the diagnostic pathway, computed tomography (CT) scans are used to help patient management. However, parenchymal imaging findings in COVID-19 are non-specific and can be seen in other diseases. In this work, we propose to first segment lesions from CT images and then classify COVID-19 patients against healthy persons and common pneumonia patients. In detail, a novel Dynamic Fusion Segmentation Network (DFSN) that automatically segments infection-related pixels is first proposed. Within this network, low-level features are aggregated into high-level ones to effectively capture the context characteristics of infection regions, and high-level features are dynamically fused to model multi-scale semantic information of lesions. Based on DFSN, a Dynamic Transfer-learning Classification Network (DTCN) is proposed to distinguish COVID-19 patients. Within DTCN, a pre-trained DFSN is transferred and used as the backbone to extract pixel-level information, which is then dynamically selected and used to make a diagnosis. In this way, the pre-trained DFSN is utilized through transfer learning, and the clinical significance of the segmentation results is comprehensively considered, making DTCN more sensitive to typical signs of COVID-19. Extensive experiments are conducted to demonstrate the effectiveness of the proposed DFSN and DTCN frameworks. The corresponding results indicate that these two models achieve state-of-the-art performance in terms of segmentation and classification.
Affiliation(s)
- Xiaoqin Zhang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Runhua Jiang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Pengcheng Huang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Tao Wang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Mingjun Hu
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
- Andrew F Scarsbrook
- Radiology Department, Leeds Teaching Hospitals NHS Trust, UK; Leeds Institute of Medical Research, University of Leeds, UK
- Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine, Leeds Institute for Cardiovascular and Metabolic Medicine, University of Leeds, Leeds, UK; Department of Electrical Engineering, Department of Cardiovascular Sciences, KU Leuven, Belgium
|
37
|
Zhu D, Sun D, Wang D. Dual attention mechanism network for lung cancer images super-resolution. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107101. [PMID: 36367483 DOI: 10.1016/j.cmpb.2022.107101] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 08/29/2022] [Accepted: 08/29/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Currently, the morbidity and mortality of lung cancer rank first among malignant tumors worldwide. Improving the resolution of thin-slice lung CT is particularly important for early lung cancer screening. METHODS To address the difficulty of network training and the low utilization of feature information caused by deeper network layers in super-resolution (SR) reconstruction, we propose a dual attention mechanism network (DAMN) for single-image super-resolution (SISR). First, features of the low-resolution image are extracted directly to retain feature information. Second, several independent dual attention mechanism modules are constructed to extract high-frequency details. The introduction of residual connections effectively solves the vanishing gradients caused by network deepening, and long and short skip connections effectively enhance the data features. Furthermore, a hybrid loss function speeds up the network's convergence and improves its SR restoration ability. Finally, the reconstructed high-resolution image is obtained through an upsampling operation. RESULTS The results on the Set5 dataset for 4× enlargement show that, compared with traditional SR methods such as Bicubic, VDSR, and DRRN, the average PSNR/SSIM increased by 3.33 dB/0.079, 0.41 dB/0.007, and 0.22 dB/0.006, respectively. The experimental data show that DAMN better restores image contour features and obtains higher PSNR and SSIM and a better visual effect. CONCLUSION With the DAMN reconstruction method, image quality can be improved without increasing radiation exposure or scanning time. Radiologists can gain confidence in diagnosing early lung cancer, clinical experts have a basis for choosing treatment plans and formulating follow-up strategies, and patients benefit at an early stage.
Affiliation(s)
- Dongmei Zhu
- College of Information Management, Nanjing Agricultural University, Nanjing 210095, China; School of Information Engineering, Shandong Huayu University of Technology, Dezhou 253034, China
- Degang Sun
- School of Information Engineering, Shandong Huayu University of Technology, Dezhou 253034, China
- Dongbo Wang
- College of Information Management, Nanjing Agricultural University, Nanjing 210095, China
|
38
|
Chi J, Zhang S, Han X, Wang H, Wu C, Yu X. MID-UNet: Multi-input directional UNet for COVID-19 lung infection segmentation from CT images. SIGNAL PROCESSING. IMAGE COMMUNICATION 2022; 108:116835. [PMID: 35935468 PMCID: PMC9344813 DOI: 10.1016/j.image.2022.116835] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 05/30/2022] [Accepted: 07/23/2022] [Indexed: 05/05/2023]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a worldwide existential health crisis with over 90 million total confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning methods has great potential to assist the diagnosis and healthcare of COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between the COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. The original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter, and the blurry feature map are then adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on the local curvature histogram and combine it with the binary cross-entropy (BCE) loss and the intersection-over-union (IOU) loss for a better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over state-of-the-art methods on segmenting COVID-19 infections from CT images.
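The combined boundary-aware loss described above (contour loss plus BCE plus IOU) can be sketched in plain Python for flat probability maps; the curvature-histogram contour term is omitted for brevity, and the helper names and weights are my own assumptions, not the paper's code.

```python
import math

def bce_loss(probs, gt):
    """Mean binary cross-entropy over flat lists."""
    eps = 1e-7  # guards against log(0)
    return -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
                for p, g in zip(probs, gt)) / len(gt)

def soft_iou_loss(probs, gt):
    """1 minus the soft intersection-over-union of a probability map."""
    inter = sum(p * g for p, g in zip(probs, gt))
    union = sum(p + g - p * g for p, g in zip(probs, gt))
    return 1.0 - inter / (union + 1e-7)

def combined_loss(probs, gt, w_bce=1.0, w_iou=1.0):
    """BCE + IOU combination analogous to the loss described above."""
    return w_bce * bce_loss(probs, gt) + w_iou * soft_iou_loss(probs, gt)

gt    = [1, 1, 0, 0]
sharp = [0.99, 0.99, 0.01, 0.01]   # confident, correct prediction
fuzzy = [0.60, 0.60, 0.40, 0.40]   # uncertain prediction
```

The IOU term penalizes poor region overlap directly, while BCE penalizes per-pixel miscalibration, so the confident prediction scores a much lower combined loss than the uncertain one.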
Affiliation(s)
- Jianning Chi
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Shuang Zhang
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaoying Han
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Huan Wang
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Chengdong Wu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaosheng Yu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
|
39
|
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
40
|
Li Y, Wu X, Yang P, Jiang G, Luo Y. Machine Learning for Lung Cancer Diagnosis, Treatment, and Prognosis. GENOMICS, PROTEOMICS & BIOINFORMATICS 2022; 20:850-866. [PMID: 36462630 PMCID: PMC10025752 DOI: 10.1016/j.gpb.2022.11.003] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 10/03/2022] [Accepted: 11/17/2022] [Indexed: 12/03/2022]
Abstract
The recent development of imaging and sequencing technologies enables systematic advances in the clinical study of lung cancer. Meanwhile, the human mind is limited in effectively handling and fully utilizing the accumulation of such enormous amounts of data. Machine learning-based approaches play a critical role in integrating and analyzing these large and complex datasets, which have extensively characterized lung cancer from different perspectives using the accrued data. In this review, we provide an overview of machine learning-based approaches that strengthen the various aspects of lung cancer diagnosis and therapy, including early detection, auxiliary diagnosis, prognosis prediction, and immunotherapy practice. Moreover, we highlight the challenges and opportunities for future applications of machine learning in lung cancer.
Affiliation(s)
- Yawei Li
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
- Xin Wu
- Department of Medicine, University of Illinois at Chicago, Chicago, IL 60612, USA
- Ping Yang
- Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN 55905 / Scottsdale, AZ 85259, USA
- Guoqian Jiang
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN 55905, USA
- Yuan Luo
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA
|
41
|
Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet. Med Biol Eng Comput 2022; 60:3311-3323. [DOI: 10.1007/s11517-022-02667-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 09/12/2022] [Indexed: 11/25/2022]
|
42
|
Clustering based lung lobe segmentation and optimization based lung cancer classification using CT images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
43
|
Integrating Digital Twins and Deep Learning for Medical Image Analysis in the era of COVID-19. VIRTUAL REALITY & INTELLIGENT HARDWARE 2022; 4:292-305. [PMCID: PMC9458475 DOI: 10.1016/j.vrih.2022.03.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 03/13/2022] [Accepted: 03/17/2022] [Indexed: 10/18/2023]
Abstract
A digital twin is a virtual representation of a device or process that captures the physical properties of the environment and the operational algorithms/techniques in the context of medical devices and technology. It can help healthcare organizations improve medical processes, enhance the patient experience, lower operating expenses, and extend the value of care. During the COVID-19 pandemic, various medical devices, e.g., X-ray and CT scan machines, and processes are constantly used to collect and analyze medical images. While collecting and processing an extensive volume of image data, machines and processes sometimes suffer from system failures that can create critical issues for hospitals and patients. In this regard, we introduce a digital-twin-based smart healthcare system integrated with medical devices so that it can be utilized to collect information about the current health condition, configuration, and maintenance history of the device/machine/system. Furthermore, the medical images, i.e., X-rays, are analyzed by a deep learning model to detect COVID-19 infection. The designed system is based on the Cascade RCNN architecture, in which the detector stages are deeper and more sequentially selective against close and small false positives. It is a multi-stage extension of the Region-based Convolutional Neural Network (RCNN) model, trained sequentially using the output of one stage for the training of the next. At each stage, the bounding boxes are adjusted to locate a suitable value of nearest false positives during training, so that the arrangement of detectors is adjusted to increase the Intersection over Union (IoU) and overcome the problem of overfitting. We trained the model on X-ray images, as the model was previously trained on another data set. The developed system achieves good accuracy during the COVID-19 detection phase. Experimental outcomes reveal the efficiency of the detection architecture, which gains a mean Average Precision (mAP) rate of 0.94.
|
44
|
Jiang J, Elguindi S, Berry SL, Onochie I, Cervino L, Deasy JO, Veeraraghavan H. Nested block self-attention multiple resolution residual network for multiorgan segmentation from CT. Med Phys 2022; 49:5244-5257. [PMID: 35598077 PMCID: PMC9908007 DOI: 10.1002/mp.15765] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 05/13/2022] [Accepted: 05/13/2022] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Fast and accurate multiorgan segmentation from computed tomography (CT) scans is essential for radiation treatment planning. Self-attention (SA)-based deep learning methodologies provide higher accuracies than standard methods but require memory- and computationally intensive calculations, which restricts their use to relatively shallow networks. PURPOSE Our goal was to develop and test a new computationally fast and memory-efficient bidirectional SA method called nested block self-attention (NBSA), which is applicable to shallow and deep multiorgan segmentation networks. METHODS A new multiorgan segmentation method combining a deep multiple resolution residual network with computationally efficient SA, called MRRN-NBSA, was developed and evaluated to segment 18 different organs from head and neck (HN) and abdominal scans. MRRN-NBSA combines features from multiple image resolutions and feature levels with SA to extract organ-specific contextual features. Computational efficiency is achieved by using memory blocks of fixed spatial extent for SA calculation combined with bidirectional attention flow. Separate models were trained for HN (n = 238) and abdomen (n = 30) and tested on set-aside open-source grand challenge data sets for HN (n = 10), using a public domain database of computational anatomy, with blinded testing on 20 cases from the Beyond the Cranial Vault data set and overall accuracy provided by the grand challenge website for abdominal organs. Robustness to two-rater segmentations was also evaluated for HN cases using the open-source data set. Statistical comparison of MRRN-NBSA against Unet, convolutional network-based SA using criss-cross attention (CCA), dual SA, and transformer-based (UNETR) methods was done by measuring the differences in average Dice similarity coefficient (DSC) accuracy for all HN organs using the Kruskal-Wallis test, followed by individual method comparisons using paired, two-sided Wilcoxon signed-rank tests at the 95% confidence level with Bonferroni correction for multiple comparisons. RESULTS MRRN-NBSA produced a high average DSC of 0.88 for HN and 0.86 for the abdomen, exceeding current methods. MRRN-NBSA was more accurate than the computationally most efficient CCA (average DSC of 0.845 for HN, 0.727 for abdomen). The Kruskal-Wallis test showed a significant difference between the evaluated methods (p = 0.00025). Pairwise comparisons showed significant differences between MRRN-NBSA and Unet (p = 0.0003), CCA (p = 0.030), dual SA (p = 0.038), and UNETR (p = 0.012) after Bonferroni correction. MRRN-NBSA produced less variable segmentations for submandibular glands (0.82 ± 0.06) compared to two raters (0.75 ± 0.31). CONCLUSIONS MRRN-NBSA produced more accurate multiorgan segmentations than current methods on two different public data sets. Testing on larger institutional cohorts is required to establish feasibility for clinical use.
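The paired, two-sided Wilcoxon signed-rank comparison of DSC values used above can be run with SciPy's `scipy.stats.wilcoxon`; the per-case DSC numbers below are invented for illustration and are not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical per-case DSC values for two methods on the same 10 cases.
dsc_method_a = [0.90, 0.88, 0.87, 0.91, 0.86, 0.89, 0.92, 0.85, 0.88, 0.90]
dsc_method_b = [0.84, 0.82, 0.80, 0.86, 0.79, 0.83, 0.85, 0.78, 0.81, 0.84]

# Paired, two-sided Wilcoxon signed-rank test, as in the comparison above.
stat, p = wilcoxon(dsc_method_a, dsc_method_b, alternative="two-sided")
```

Because the test pairs the two methods on the same cases, it controls for per-case difficulty; here every difference favors method A, so the p-value falls well below 0.05.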
Affiliation(s)
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Sharif Elguindi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Sean L. Berry
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Ifeanyirochukwu Onochie
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Laura Cervino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Joseph O. Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065. Corresponding author address: Box 84 - Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065
|
45
|
A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function. ELECTRONICS 2022. [DOI: 10.3390/electronics11152296] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
COVID-19 has been spreading rapidly, affecting billions of people globally, with significant public health impacts. Biomedical imaging, such as computed tomography (CT), has significant potential as a possible substitute for the screening process. Automatic segmentation of images is therefore highly desirable as clinical decision support for an extensive evaluation of disease control and monitoring: it is a dynamic tool that performs a central role in the precise segmentation of infected regions in CT scans, thus helping in screening, diagnosing, and monitoring the disease. For this purpose, we introduce a deep learning framework for automated segmentation of COVID-19-infected lesions/regions in lung CT scan images. Specifically, we adopt a segmentation model, i.e., U-Net, and utilize an attention mechanism to enhance the framework's ability to segment virus-infected regions. Since not all of the features extracted from the encoders are valuable for segmentation, we apply the U-Net architecture with an attention mechanism for a better representation of the features. Moreover, we apply a boundary loss function to deal with small and unbalanced lesion segmentations. Using different public CT scan image data sets, we validated the framework's effectiveness in comparison with other segmentation techniques. The experimental outcomes showed the improved performance of the presented framework for the automated segmentation of lungs and infected areas in CT scan images. Considering both the boundary loss and the weighted binary cross-entropy Dice loss, the overall Dice accuracies of the framework are 0.93 and 0.76 for the lungs and the COVID-19-infected regions, respectively.
|
46
|
Liu Y, Qin C, Yu Z, Yang R, Suqing T, Liu X, Ma X. Double-branch U-Net for multi-scale organ segmentation. Methods 2022; 205:220-225. [PMID: 35809769 DOI: 10.1016/j.ymeth.2022.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 06/03/2022] [Accepted: 07/01/2022] [Indexed: 11/30/2022] Open
Abstract
U-Net has achieved great success in medical image segmentation. It encodes and extracts information through several convolution blocks and then decodes the feature maps to obtain the segmentation result. Our experiments show that in a multi-scale medical segmentation task, excessive downsampling causes the model to ignore small segmentation objects and thus fail to complete the task. In this work, we propose Double-branch U-Net (2BUNet) to address the multi-scale organ segmentation challenge. Our model consists of four parts: a main branch, a tributary branch, an information exchange module, and a classification module. The main advantages of the new model are: (1) a complete encoding structure that extracts information to improve the model's decoding capability; (2) an information exchange module added between the main branch and the tributary to provide regularization and avoid a large gap between the two paths; (3) a main branch structure that extracts the major features of large organs; (4) a tributary structure that enlarges the image to extract the microscopic characteristics of small organs; and (5) a classification assistant module that adds a class constraint on the output tensor. Comparative experiments show that our method achieves state-of-the-art performance in real scenes.
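The abstract's observation that excessive downsampling erases small objects can be illustrated with a toy NumPy example (a generic sketch; the 16×16 mask and the pooling factor are illustrative, not taken from the paper):

```python
import numpy as np

def avg_pool2d(x, k):
    """k×k average pooling with stride k (assumes dims divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Toy mask: a one-pixel "small organ" and an 8×8 "large organ".
mask = np.zeros((16, 16))
mask[3, 3] = 1.0
mask[8:16, 8:16] = 1.0

pooled = avg_pool2d(mask, 4)  # 4× downsampling, as in a deep encoder
# After pooling, the small organ's response is 1/16 and disappears
# under a 0.5 threshold, while the large organ survives intact.
```

This is exactly the failure mode the tributary branch counters: by enlarging the input, the small organ occupies enough pixels to survive the encoder's downsampling.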
Affiliation(s)
- Yuhao Liu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Harbin University of Science and Technology, Harbin 150080, China.
- Caijie Qin
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Institute of Information Engineering, Sanming University, Sanming 365004, China
- Zhiqian Yu
- Information Science and Technology, Northwest University, Xi'an, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing 100190, China
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing 100190, China
- Xia Liu
- Harbin University of Science and Technology, Harbin 150080, China
- Xibo Ma
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
|
47
|
Dong T, Wei L, Ye X, Chen Y, Hou X, Nie S. [Segmentation of ground glass pulmonary nodules using a fully convolutional residual network based on an atrous spatial pyramid pooling structure and an attention mechanism]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi = Journal of Biomedical Engineering = Shengwu Yixue Gongchengxue Zazhi 2022; 39:441-451. [PMID: 35788513 PMCID: PMC10950767 DOI: 10.7507/1001-5515.202010051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 03/31/2022] [Indexed: 06/15/2023]
Abstract
Accurate segmentation of ground glass nodules (GGNs) is clinically important, but it is difficult because GGNs in computed tomography images show blurred boundaries, irregular shapes, and uneven intensity. This paper segments GGNs with a proposed fully convolutional residual network, ResAANet: a residual network based on an atrous spatial pyramid pooling structure and an attention mechanism. The network uses atrous spatial pyramid pooling (ASPP) to expand the receptive field of the feature maps and extract more sufficient features, and it utilizes an attention mechanism, residual connections, and long skip connections to fully retain the sensitive features extracted by the convolutional layers. First, we employed 565 GGNs provided by Shanghai Chest Hospital to train and validate ResAANet and obtain a stable model. Then, two groups of data selected from clinical examinations (84 GGNs) and the Lung Image Database Consortium (LIDC) dataset (145 GGNs) were used to validate and evaluate the performance of the proposed method. Finally, we applied a best-threshold method to remove false-positive regions and obtain optimized results. The average Dice similarity coefficient (DSC) of the proposed algorithm reached 83.46% on the clinical dataset and 83.26% on the LIDC dataset, the average Jaccard index (IoU) reached 72.39% and 71.56%, respectively, and the segmentation speed reached 0.1 seconds per image. Compared with other reported methods, the new method segments GGNs accurately, quickly, and robustly. It can provide doctors with important information such as nodule size or density to assist in subsequent diagnosis and treatment.
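As background for the ASPP structure, a minimal sketch of a dilated (atrous) 1-D convolution, the building block that ASPP applies in parallel at several rates before concatenating the results (a generic illustration, not the ResAANet code; the function name is an assumption):

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """'Same'-padded 1-D convolution with dilation `rate`.

    Taps of the kernel `w` are spaced `rate` samples apart, so a
    kernel of size k covers a receptive field of rate*(k-1)+1
    without adding parameters or downsampling.
    """
    k = len(w)
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    return np.array([sum(w[j] * xp[i + rate * j] for j in range(k))
                     for i in range(len(x))])
```

With k = 3, rates of 1, 6, 12, 18 (typical ASPP choices) give receptive fields of 3, 13, 25, and 37 samples respectively, which is how ASPP captures context at multiple scales simultaneously.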
Affiliation(s)
- Ting Dong
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Long Wei
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Xiaodan Ye
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Yang Chen
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Xuewen Hou
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
- Shengdong Nie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, P. R. China
|
48
|
Primakov SP, Ibrahim A, van Timmeren JE, Wu G, Keek SA, Beuque M, Granzier RWY, Lavrova E, Scrivener M, Sanduleanu S, Kayan E, Halilaj I, Lenaers A, Wu J, Monshouwer R, Geets X, Gietema HA, Hendriks LEL, Morin O, Jochems A, Woodruff HC, Lambin P. Automated detection and segmentation of non-small cell lung cancer computed tomography images. Nat Commun 2022; 13:3423. [PMID: 35701415 PMCID: PMC9198097 DOI: 10.1038/s41467-022-30841-3] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 05/09/2022] [Indexed: 12/25/2022] Open
Abstract
Detection and segmentation of abnormalities on medical images is highly important for patient management, including diagnosis, radiotherapy, and response evaluation, as well as for quantitative image research. We present a fully automated pipeline for the detection and volumetric segmentation of non-small cell lung cancer (NSCLC), developed and validated on 1328 thoracic CT scans from 8 institutions. Along with quantitative performance detailed by image slice thickness, tumor size, image interpretation difficulty, and tumor location, we report an in-silico prospective clinical trial, in which we show that the proposed method is faster and more reproducible than the experts. Moreover, we demonstrate that on average, radiologists and radiation oncologists preferred the automatic segmentations in 56% of the cases. Additionally, we evaluate the prognostic power of the automatic contours by applying RECIST criteria and measuring the tumor volumes. Segmentations by our method stratified patients into low- and high-survival groups with higher significance than methods based on manual contours. Correct interpretation of computed tomography (CT) scans is important for the accurate assessment of a patient's disease but can be subjective and time-consuming. Here, the authors develop a system that automatically segments non-small cell lung cancer on CT images of patients and show in an in-silico trial that the method was faster and more reproducible than clinicians.
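The tumor-volume measurements used for the prognostic stratification above reduce to counting mask voxels and scaling by voxel size (an illustrative helper, not the authors' pipeline; the default spacing is an assumption chosen to mimic a typical thoracic CT):

```python
import numpy as np

def tumor_volume_ml(mask, spacing_mm=(1.0, 1.0, 3.0)):
    """Tumor volume in millilitres from a binary 3-D mask and the
    scan's voxel spacing in mm; 1 ml = 1000 mm^3."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0
```

Thresholding such volumes (e.g. at the cohort median) is one simple way to split patients into the low- and high-survival groups compared in the abstract.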
Affiliation(s)
- Sergey P Primakov
- The D-Lab, Department of Precision Medicine, GROW- School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Abdalla Ibrahim
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands; Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Centre Hospitalier Universitaire de Liège, Liège, Belgium; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen, Aachen, Germany; Department of Radiology, Columbia University Irving Medical Center, New York, USA
- Janita E van Timmeren
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiation Oncology, University Hospital Zürich and University of Zürich, Zürich, Switzerland
- Guangyao Wu
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Simon A Keek
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Manon Beuque
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Renée W Y Granzier
- Department of Surgery, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Elizaveta Lavrova
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; GIGA Cyclotron Research Centre In Vivo Imaging, University of Liège, Liège, Belgium
- Madeleine Scrivener
- Department of Radiation Oncology, Cliniques universitaires St-Luc, Brussels, Belgium
- Sebastian Sanduleanu
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Esma Kayan
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Iva Halilaj
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Anouk Lenaers
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Surgery, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- René Monshouwer
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Xavier Geets
- Department of Radiation Oncology, Cliniques universitaires St-Luc, Brussels, Belgium
- Hester A Gietema
- Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Lizza E L Hendriks
- Department of Pulmonary Diseases, GROW - School for Oncology and Reproduction, Maastricht University Medical Center, Maastricht, The Netherlands
- Olivier Morin
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA, USA
- Arthur Jochems
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
|
49
|
Jiang J, Rimner A, Deasy JO, Veeraraghavan H. Unpaired Cross-Modality Educed Distillation (CMEDL) for Medical Image Segmentation. IEEE Trans Med Imaging 2022; 41:1057-1068. [PMID: 34855590 PMCID: PMC9128665 DOI: 10.1109/tmi.2021.3132291] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Accurate and robust segmentation of lung cancers from CT, even of tumors located close to the mediastinum, is needed to plan and deliver radiotherapy more accurately and to measure treatment response. Therefore, we developed a new cross-modality educed distillation (CMEDL) approach using unpaired CT and MRI scans, whereby an informative teacher MRI network guides a student CT network to extract features that signal the difference between foreground and background. Our contribution eliminates two requirements of distillation methods: (i) paired image sets, by using image-to-image (I2I) translation, and (ii) pre-training of the teacher network with a large training set, by training all networks concurrently. Our framework uses end-to-end trained unpaired I2I translation, teacher, and student segmentation networks. The architectural flexibility of the framework is demonstrated using 3 segmentation and 2 I2I networks. Networks were trained with 377 CT and 82 T2w MRI scans from different sets of patients, with independent validation (N = 209 tumors) and testing (N = 609 tumors) datasets. Network design, methods of combining MRI with CT information, distillation learning under informative (MRI to CT), weak (CT to MRI), and equal (MRI to MRI) teachers, and ablation tests were evaluated. Accuracy was measured using the Dice similarity coefficient (DSC), surface Dice (sDSC), and the Hausdorff distance at the 95th percentile (HD95). The CMEDL approach was significantly (p < 0.001) more accurate than non-CMEDL methods: with an informative teacher for CT lung tumors (DSC of 0.77 vs. 0.73), with a weak teacher for MRI lung tumors (DSC of 0.84 vs. 0.81), and with an equal teacher for MRI multi-organ segmentation (DSC of 0.90 vs. 0.88). CMEDL also reduced inter-rater lung tumor segmentation variability.
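As context for the HD95 metric reported above, a brute-force NumPy sketch of the 95th-percentile symmetric Hausdorff distance between two point sets (a generic illustration, not the paper's implementation; real masks would first be converted to surface/contour points):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between point
    sets a and b, each of shape (N, D).

    Using the 95th percentile instead of the maximum makes the
    metric robust to a few outlier contour points.
    """
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in a to its nearest in b
    d_ba = d.min(axis=0)  # each point in b to its nearest in a
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

The O(N·M) pairwise matrix is fine for contour points; for full volumes, distance-transform-based implementations are used instead.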
|
50
|
Gao W, Li X, Wang Y, Cai Y. Medical Image Segmentation Algorithm for Three-Dimensional Multimodal Using Deep Reinforcement Learning and Big Data Analytics. Front Public Health 2022; 10:879639. [PMID: 35462800 PMCID: PMC9024167 DOI: 10.3389/fpubh.2022.879639] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Accepted: 03/09/2022] [Indexed: 11/13/2022] Open
Abstract
To avoid the problems of relative overlap and low signal-to-noise ratio (SNR) in segmented three-dimensional (3D) multimodal medical images, which limit the effectiveness of medical image diagnosis, a 3D multimodal medical image segmentation algorithm using deep reinforcement learning and big data analytics is proposed. A Bayesian maximum a posteriori estimation method and an improved wavelet threshold function are used to design a wavelet shrinkage algorithm that removes noise from the high-frequency signal components in the wavelet domain. The low-frequency components are processed by bilateral filtering, and the inverse wavelet transform is used to denoise the 3D multimodal medical image. An end-to-end DRD U-Net model based on deep reinforcement learning is constructed. The feature extraction capacity for segmenting the denoised images is increased by replacing the convolution layers of the traditional model with residual modules and by introducing a multiscale context feature extraction module. The 3D multimodal medical images are segmented using the reward and punishment mechanism of the deep reinforcement learning algorithm. To verify the effectiveness of the segmentation algorithm, the LIDC-IDRI, SCR, and DeepLesion datasets are selected as the experimental datasets. The results demonstrate that the segmentation is effective: when the number of iterations is increased to 250, the structural similarity reaches 98%, the SNR remains between 55 and 60 dB, the training loss is modest, relative overlap and accuracy both exceed 95%, and the overall segmentation performance is superior. The results illustrate how deep reinforcement learning and big data analytics can be used to evaluate the effectiveness of a 3D multimodal medical image segmentation algorithm.
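The wavelet shrinkage step can be illustrated with the classic soft-thresholding rule that improved threshold functions (such as the one in this paper) build on (a textbook sketch, not the authors' improved function):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft wavelet shrinkage: coefficients with magnitude below the
    threshold t (assumed noise) are zeroed, and larger coefficients
    are shrunk toward zero by t to reduce residual noise."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Improved threshold functions typically interpolate between this soft rule (continuous, but biased for large coefficients) and hard thresholding (unbiased, but discontinuous at t), and the threshold itself can be chosen by Bayesian MAP estimation as the abstract describes.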
Affiliation(s)
- Weiwei Gao
- College of Information and Technology, Wenzhou Business College, Wenzhou, China
- Xiaofeng Li
- Department of Information Engineering, Heilongjiang International University, Harbin, China
- *Correspondence: Xiaofeng Li
- Yanwei Wang
- School of Mechanical Engineering, Harbin Institute of Petroleum, Harbin, China
- Yingjie Cai
- The First Psychiatric Hospital of Harbin, Harbin, China
|