1
Annavarapu CSR, Parisapogu SAB, Keetha NV, Donta PK, Rajita G. A Bi-FPN-Based Encoder-Decoder Model for Lung Nodule Image Segmentation. Diagnostics (Basel) 2023;13:1406. [PMID: 37189507] [DOI: 10.3390/diagnostics13081406]
Abstract
Early detection and analysis of lung cancer require precise and efficient lung nodule segmentation in computed tomography (CT) images. However, the indistinct shapes, visual features, and surroundings of nodules in CT images make robust segmentation a challenging and critical problem. This article proposes a resource-efficient, end-to-end deep learning architecture for lung nodule segmentation that incorporates a Bi-FPN (bidirectional feature pyramid network) between the encoder and the decoder. It further uses the Mish activation function and class weights on the masks to improve segmentation efficiency. The proposed model was extensively trained and evaluated on the publicly available LUNA-16 dataset of 1186 lung nodules. To increase the probability of assigning the correct class to each voxel in the mask, a weighted binary cross-entropy loss was used for each training sample. To further assess robustness, the model was also evaluated on the QIN Lung CT dataset. The evaluation shows that the proposed architecture outperforms existing deep learning models such as U-Net, with Dice Similarity Coefficients of 82.82% and 81.66% on the two datasets, respectively.
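The per-voxel class weighting described in the abstract can be sketched as follows; the inverse-frequency weights and the toy tensors are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weighted_bce(y_true, y_pred, eps=1e-7):
    """Weighted binary cross-entropy over a mask.

    Weights each voxel's term by the inverse frequency of its class
    in the ground-truth mask, so the sparse nodule foreground is not
    overwhelmed by the abundant background voxels.
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)
    pos_frac = y_true.mean()               # fraction of nodule voxels
    w_pos = 1.0 / max(pos_frac, eps)       # up-weight rare foreground
    w_neg = 1.0 / max(1.0 - pos_frac, eps)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

# toy 4-voxel mask: one nodule voxel among three background voxels
y_true = np.array([1.0, 0.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.2, 0.1])
print(round(weighted_bce(y_true, y_pred), 4))
```

With these toy values the single foreground voxel contributes four times the weight of each background voxel, which is the balancing effect the loss is meant to achieve.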
Affiliation(s)
- Nikhil Varma Keetha
- Indian Institute of Technology (Indian School of Mines), Dhanbad 826004, India
2
Usman M, Shin YG. DEHA-Net: A Dual-Encoder-Based Hard Attention Network with an Adaptive ROI Mechanism for Lung Nodule Segmentation. Sensors (Basel) 2023;23:1989. [PMID: 36850583] [PMCID: PMC9960760] [DOI: 10.3390/s23041989]
Abstract
Measuring pulmonary nodules accurately supports the early diagnosis of lung cancer, which can increase the survival rate among patients. Numerous techniques for lung nodule segmentation have been developed; however, most of them either rely on a 3D volumetric region of interest (VOI) supplied by radiologists or use a fixed 2D region of interest (ROI) for all slices of a computed tomography (CT) scan. These methods only consider nodules within the given VOI, which limits the network's ability to detect nodules outside the VOI and can also include unnecessary structures, leading to potentially inaccurate segmentation. In this work, we propose a novel approach for 3D lung nodule segmentation that uses a 2D ROI provided by a radiologist or a computer-aided detection (CADe) system. Concretely, we developed a two-stage segmentation technique. First, we designed a dual-encoder-based hard attention network (DEHA-Net), in which the full axial slice of the thoracic CT scan, along with an ROI mask, is taken as input to segment the lung nodule in the given slice. The output of DEHA-Net, the segmentation mask of the lung nodule, is passed to an adaptive region of interest (A-ROI) algorithm that automatically generates ROI masks for the surrounding slices, eliminating the need for any further input from radiologists. After extracting the segmentation along the axial axis, in the second stage we further investigate the lung nodule along the sagittal and coronal views using DEHA-Net. All estimated masks are fed into a consensus module to obtain the final volumetric segmentation of the nodule. The proposed scheme was rigorously evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, and an extensive analysis of the results was performed. The quantitative analysis showed that the proposed method not only improves on existing state-of-the-art methods in terms of Dice score but is also robust against different types, shapes, and dimensions of lung nodules, achieving an average Dice score, sensitivity, and positive predictive value of 87.91%, 90.84%, and 89.56%, respectively.
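The three reported figures of merit (Dice score, sensitivity, positive predictive value) are plain overlap statistics between a predicted and a reference mask; a minimal sketch with a toy 3x3 example:

```python
import numpy as np

def overlap_metrics(gt, pred):
    """Dice score, sensitivity, and positive predictive value (PPV)
    between a binary ground-truth mask and a predicted mask."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()   # true positives
    fp = np.logical_and(~gt, pred).sum()  # false positives
    fn = np.logical_and(gt, ~pred).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dice, sensitivity, ppv

gt   = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
pred = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 0]])
print(overlap_metrics(gt, pred))  # tp=2, fp=1, fn=1
```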
3
Tang T, Li F, Jiang M, Xia X, Zhang R, Lin K. Improved Complementary Pulmonary Nodule Segmentation Model Based on Multi-Feature Fusion. Entropy (Basel) 2022;24:1755. [PMID: 36554161] [PMCID: PMC9778431] [DOI: 10.3390/e24121755]
Abstract
Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in automatic lung nodule segmentation. However, they are still challenged by the large diversity of segmentation targets and the small inter-class variance between a nodule and its surrounding tissues. To tackle this issue, we propose a features-complementary network modeled on the process of clinical diagnosis, which makes full use of the complementarity among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of global nodule features in segmentation and propose a cross-scale weighted high-level feature decoder module. Then, we develop a low-level feature decoder module for edge feature refinement. Finally, we construct a complementary module that lets the two sources of information complement and reinforce each other. Furthermore, we weight pixels located at the nodule edge in the loss function and add edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. The experimental results demonstrate that our model achieves robust pulmonary nodule segmentation and more accurate edge segmentation.
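The edge weighting applied to the loss function can be illustrated with a weight map derived from the mask boundary; the 4-neighbourhood erosion and the weight value 5.0 are assumptions made for this sketch, not the authors' exact scheme:

```python
import numpy as np

def edge_weight_map(mask, w_edge=5.0):
    """Per-pixel weight map that up-weights pixels on the boundary of
    the ground-truth mask (the mask minus its 4-neighbourhood erosion),
    as in edge-emphasised segmentation losses."""
    m = mask.astype(bool)
    p = np.pad(m, 1, mode="edge")
    # a pixel survives erosion only if it and its 4 neighbours are set
    eroded = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
              & p[1:-1, :-2] & p[1:-1, 2:])
    edge = m & ~eroded
    return np.where(edge, w_edge, 1.0)

mask = np.zeros((5, 5), int)
mask[1:4, 1:4] = 1          # 3x3 nodule; its 8-pixel ring is the edge
w = edge_weight_map(mask)
print(int((w == 5.0).sum()))
```

Multiplying such a map into a per-pixel loss makes boundary mistakes cost more than interior ones, which is the effect the edge weighting is after.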
Affiliation(s)
- Tiequn Tang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- School of Physics and Electronic Engineering, Fuyang Normal University, Fuyang 236037, China
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Xunpeng Xia
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Rongfu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Optical Technology and Instrument for Medicine, Ministry of Education, University of Shanghai for Science and Technology, Shanghai 200093, China
- Kailin Lin
- Fudan University Shanghai Cancer Center, Shanghai 200032, China
4
Tyagi S, Talbar SN. CSE-GAN: A 3D conditional generative adversarial network with concurrent squeeze-and-excitation blocks for lung nodule segmentation. Comput Biol Med 2022;147:105781. [DOI: 10.1016/j.compbiomed.2022.105781]
5
Prediction of Two-Year Recurrence-Free Survival in Operable NSCLC Patients Using Radiomic Features from Intra- and Size-Variant Peri-Tumoral Regions on Chest CT Images. Diagnostics (Basel) 2022;12:1313. [PMID: 35741123] [PMCID: PMC9221791] [DOI: 10.3390/diagnostics12061313]
Abstract
To predict the two-year recurrence-free survival of patients with non-small cell lung cancer (NSCLC), we propose a prediction model using radiomic features of the inner and outer regions of the tumor. The intratumoral region and the peritumoral regions, from the boundary out to 3 cm, were used to extract radiomic features based on intensity, texture, and shape. Feature selection was performed to identify radiomic features significant for predicting two-year recurrence-free survival, and patients were classified into recurrence and non-recurrence groups using SVM and random forest classifiers. The probability of two-year recurrence-free survival was estimated with the Kaplan-Meier curve. The experiment used CT images of 217 NSCLC patients at stages I-IIIA who underwent surgical resection at the Veterans Health Service Medical Center (VHSMC). Regarding classification performance on whole tumors, the combined radiomic features for the intratumoral region plus peritumoral regions of 6 mm and 9 mm showed improved performance (AUC 0.66, 0.66) compared with T stage and N stage (AUC 0.60), the intratumoral region alone (AUC 0.64), and the peritumoral 6 mm and 9 mm classifiers (AUC 0.59, 0.62). When classification performance was assessed by tumor size, the combined regions of 21 mm and 3 mm were significant for predicting outcomes compared with other regions for tumors under 3 cm (AUC 0.70) and 3-5 cm (AUC 0.75), respectively. For tumors larger than 5 cm, the combined 3 mm region was significant compared with the other features (AUC 0.71). This experiment confirmed that peritumoral and combined regions outperformed the intratumoral region for tumors under 5 cm, while intratumoral and combined regions performed more stably than the peritumoral region for tumors larger than 5 cm.
6
Lancaster HL, Zheng S, Aleshina OO, Yu D, Yu Chernina V, Heuvelmans MA, de Bock GH, Dorrius MD, Gratama JW, Morozov SP, Gombolevskiy VA, Silva M, Yi J, Oudkerk M. Outstanding negative prediction performance of solid pulmonary nodule volume AI for ultra-LDCT baseline lung cancer screening risk stratification. Lung Cancer 2022;165:133-140. [PMID: 35123156] [DOI: 10.1016/j.lungcan.2022.01.002]
Abstract
OBJECTIVE To evaluate the performance of AI as a standalone reader in ultra-low-dose CT lung cancer baseline screening and compare it to that of experienced radiologists. METHODS 283 participants who underwent a baseline ultra-LDCT scan in the Moscow Lung Cancer Screening programme between February 2017 and February 2018 and had at least one solid lung nodule were included. Volumetric nodule measurements were performed by five experienced, blinded radiologists and independently assessed by an AI lung cancer screening prototype (AVIEW LCS, v1.0.34, Coreline Soft Co., Ltd., Seoul, Korea) that automatically detects, measures, and classifies solid nodules. Discrepancies were stratified into two groups: positive misclassification (PM), a nodule classified by the reader as NELSON-plus/EUPS-indeterminate or -positive that was < 100 mm³ at the reference consensus read; and negative misclassification (NM), a nodule classified as NELSON-plus/EUPS-negative that was ≥ 100 mm³ at the consensus read. RESULTS 1149 nodules with a solid component were detected, of which 878 were classified as solid nodules. For the largest solid nodule per participant (n = 283), 61 [21.6%; 53 PM, 8 NM] discrepancies were reported for AI as a standalone reader, compared with 43 [15.1%; 22 PM, 21 NM], 36 [12.7%; 25 PM, 11 NM], 29 [10.2%; 25 PM, 4 NM], 28 [9.9%; 6 PM, 22 NM], and 50 [17.7%; 15 PM, 35 NM] discrepancies for readers 1-5, respectively. CONCLUSION Our results suggest that, used as an impartial reader in baseline lung cancer screening, the AI produced fewer negative misclassifications than four of the five experienced radiologists, and radiologists' workload could be reduced by up to 86.7%.
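The PM/NM discrepancy definitions in the abstract reduce to a volume-threshold check; a sketch using only the 100 mm³ cut-off stated there (the full NELSON-plus/EUPS protocol has finer categories, which are omitted here):

```python
def nelson_plus_category(volume_mm3):
    """Volume-based risk category used in the study's discrepancy
    analysis: nodules under 100 mm^3 are screen-negative, the rest
    indeterminate/positive (only the 100 mm^3 cut-off from the
    abstract is modelled)."""
    return "negative" if volume_mm3 < 100 else "indeterminate/positive"

def discrepancy(reader_volume, consensus_volume):
    """Classify a reader-vs-consensus disagreement as a positive
    misclassification (PM) or a negative misclassification (NM);
    None means the reader agrees with the consensus category."""
    r = nelson_plus_category(reader_volume)
    c = nelson_plus_category(consensus_volume)
    if r == c:
        return None
    return "PM" if r != "negative" else "NM"

print(discrepancy(120, 80))   # reader positive, consensus < 100 mm^3
print(discrepancy(90, 150))   # reader negative, consensus >= 100 mm^3
```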
Affiliation(s)
- Harriet L Lancaster
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, the Netherlands
- Sunyi Zheng
- Department of Radiotherapy, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, the Netherlands
- Olga O Aleshina
- State Budget-Funded Health Care Institution of the City of Moscow «Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department», Moscow, Russian Federation
- Valeria Yu Chernina
- State Budget-Funded Health Care Institution of the City of Moscow «Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department», Moscow, Russian Federation
- Marjolein A Heuvelmans
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Institute for Diagnostic Accuracy, Groningen, the Netherlands
- Geertruida H de Bock
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Monique D Dorrius
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Sergey P Morozov
- State Budget-Funded Health Care Institution of the City of Moscow «Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department», Moscow, Russian Federation
- Victor A Gombolevskiy
- State Budget-Funded Health Care Institution of the City of Moscow «Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department», Moscow, Russian Federation; AIRI, Moscow, Russian Federation
- Mario Silva
- Scienze Radiologiche, Department of Medicine and Surgery (DiMeC), University of Parma, Parma, Italy
- Matthijs Oudkerk
- Institute for Diagnostic Accuracy, Groningen, the Netherlands; Faculty of Medical Sciences, University of Groningen, Groningen, the Netherlands
7
Bartoli A, Fournel J, Maurin A, Marchi B, Habert P, Castelli M, Gaubert JY, Cortaredona S, Lagier JC, Million M, Raoult D, Ghattas B, Jacquier A. Value and prognostic impact of a deep learning segmentation model of COVID-19 lung lesions on low-dose chest CT. Research in Diagnostic and Interventional Imaging 2022;1:100003. [PMID: 37520010] [PMCID: PMC8939894] [DOI: 10.1016/j.redii.2022.100003]
Abstract
Objectives (1) To develop a deep learning (DL) pipeline for quantifying COVID-19 pulmonary lesions on low-dose computed tomography (LDCT). (2) To assess the prognostic value of DL-driven lesion quantification. Methods This monocentric retrospective study included training and test datasets from 144 and 30 patients, respectively. The reference was the manual segmentation of three labels: normal lung, ground-glass opacity (GGO), and consolidation (Cons). Model performance was evaluated with technical metrics, disease volume, and extent. Intra- and interobserver agreement were recorded. The prognostic value of DL-driven disease extent was assessed in 1621 distinct patients using C-statistics. The end point was a combined outcome defined as death, hospitalization > 10 days, intensive care unit admission, or oxygen therapy. Results The Dice coefficient for lesion (GGO + Cons) segmentation was 0.75 ± 0.08, exceeding the interobserver (0.70 ± 0.08; 0.70 ± 0.10) and intraobserver (0.72 ± 0.09) values. DL-driven lesion quantification correlated more strongly with the reference than inter- or intraobserver measures did. After stepwise selection and adjustment for clinical characteristics, quantification significantly increased the prognostic accuracy of the model (0.82 vs. 0.90; p < 0.0001). Conclusions A DL-driven model can provide reproducible and accurate segmentation of COVID-19 lesions on LDCT. Automatic lesion quantification has independent prognostic value for identifying high-risk patients.
Key Words
- ACE, angiotensin-converting enzyme
- Artificial intelligence
- BMI, body mass index
- CNN, convolutional neural network
- COVID-19
- COVID-19, coronavirus disease 2019
- CT-SS, chest tomography severity score
- Cons, consolidation
- DL, deep learning
- DSC, Dice similarity coefficient
- Deep learning
- Diagnostic imaging
- GGO, ground-glass opacity
- ICU, intensive care unit
- LDCT, low-dose computed tomography
- MAE, mean absolute error
- MVSF, mean volume similarity fraction
- Multidetector computed tomography
- ROC, receiver operating characteristic
Affiliation(s)
- Axel Bartoli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Joris Fournel
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Arnaud Maurin
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Baptiste Marchi
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Paul Habert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Maxime Castelli
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- Jean-Yves Gaubert
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- LIEE, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- CERIMED, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
- Sebastien Cortaredona
- Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- IRD, VITROME, Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- Jean-Christophe Lagier
- Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- Matthieu Million
- Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- Didier Raoult
- Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- IRD, MEPHI, Institut Hospitalo-Universitaire Méditerranée Infection, 19-21 Boulevard Jean Moulin, 13005 Marseille, France
- Badih Ghattas
- I2M - UMR CNRS 7373, Aix-Marseille University, CNRS, Centrale Marseille, 13453 Marseille, France
- Alexis Jacquier
- Department of Radiology, Hôpital de la Timone Adultes, AP-HM, 264 rue Saint-Pierre, 13385 Marseille Cedex 05, France
- CRMBM - UMR CNRS 7339, Medical Faculty, Aix-Marseille University, 27 Boulevard Jean Moulin, 13385 Marseille Cedex 05, France
8
Byun S, Jung J, Hong H, Kim BS. Lung tumor segmentation using dual-coupling net with shape prior based on lung and mediastinal window images from chest CT images. J Xray Sci Technol 2022;30:1067-1083. [PMID: 35988260] [DOI: 10.3233/xst-221191]
Abstract
BACKGROUND Volumetric lung tumor segmentation is difficult due to the diversity of sizes, locations, and shapes of lung tumors, as well as their similarity in intensity to surrounding tissue structures. OBJECTIVE We propose a dual-coupling net for accurate lung tumor segmentation in chest CT images regardless of the size, location, and shape of the tumor. METHODS To extract shape information from lung tumors and use it as a shape prior, three-planar images covering the axial, coronal, and sagittal planes are trained on 2D-Nets. Two types of window images, lung-window and mediastinal-window images, are trained on 2D-Nets to distinguish lung tumors from the thoracic region and to better separate tumor boundaries from adjacent tissue structures. To prevent false-positive leakage into adjacent structures and to capture the spatial information of lung tumors, pairs of tumor volumes of interest (VOIs) and tumor shape priors are trained on a 3D-Net. RESULTS In the first experiment, the dual-coupling net had the highest Dice Similarity Coefficient (DSC), 75.7%, using the shape prior as well as mediastinal-window images to prevent leakage into adjacent structures while maintaining the shape of the tumor, with DSCs 18.23, 3.7, 1.1, and 1.77 percentage points higher than those of the 2D-Net, 2.5D-Net, 3D-Net, and single-coupling net, respectively. In the second experiment, with annotations from two clinicians, the dual-coupling net achieved DSCs of 67.73% and 65.07% against the two annotations. In the third experiment, the dual-coupling net achieved a DSC of 70.97%. CONCLUSIONS The dual-coupling net enables accurate segmentation by distinguishing lung tumors from surrounding tissue structures and thus yields the highest DSC.
Affiliation(s)
- Sohyun Byun
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Julip Jung
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
- Helen Hong
- Department of Software Convergence, Seoul Women's University, Seoul, Republic of Korea
9
Chen H, Liu J, Lu L, Wang T, Xu X, Chu A, Peng W, Gong J, Tang W, Gu Y. Volumetric segmentation of ground glass nodule based on 3D attentional cascaded residual U-Net and conditional random field. Med Phys 2021;49:1097-1107. [PMID: 34951492] [DOI: 10.1002/mp.15423]
Abstract
BACKGROUND Ground glass nodule (GGN) segmentation is an important and challenging task in diagnosing early-stage lung adenocarcinoma. Manual delineation of a 3D GGN in computed tomography (CT) images is a subjective, laborious, and tedious task with poor repeatability. PURPOSE To reduce the annotation burden and improve segmentation performance, this study proposes a 3D deep learning-based volumetric segmentation model for GGNs in CT images. METHODS A total of 379 GGNs were retrospectively collected from a public database, Shanghai Pulmonary Hospital (SHPH), and Fudan University Shanghai Cancer Center (FUSCC). First, a series of pre-processing steps, including image resampling, intensity normalization, 3D nodule patch cropping, and data augmentation, was applied to the CT scans to generate the inputs for the deep learning model. Then, a 3D attentional cascaded residual U-Net (ACRU-Net) was built using residual blocks and an atrous spatial pyramid pooling module. To improve performance, a voxel-based conditional random field (CRF) method was used to optimize the segmentation results. Finally, a balanced cross-entropy and Dice combined loss function was applied to train the segmentation model. RESULTS Testing on the SHPH and FUSCC datasets, the proposed method yields Dice coefficients of 0.721 ± 0.167 and 0.733 ± 0.100, respectively, higher than those of a 3D residual U-Net and of ACRU-Net without CRF optimization. CONCLUSIONS Combining the 3D ACRU-Net with a CRF effectively improved GGN segmentation performance. The proposed model may provide a tool to help radiologists segment and diagnose 3D GGNs.
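A balanced cross-entropy plus Dice compound loss of the kind described in the abstract can be sketched as follows; the mixing weight `alpha` and the toy predictions are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def bce_dice_loss(y_true, y_pred, alpha=0.5, eps=1e-7):
    """Compound training objective: a weighted sum of binary
    cross-entropy and soft-Dice loss over a predicted probability
    mask (alpha balances the two terms)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    bce = -(y_true * np.log(y_pred)
            + (1 - y_true) * np.log(1 - y_pred)).mean()
    inter = (y_true * y_pred).sum()
    dice = (2 * inter + eps) / (y_true.sum() + y_pred.sum() + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.8, 0.7, 0.2, 0.1])
print(round(bce_dice_loss(y_true, y_pred), 4))
```

The cross-entropy term drives per-voxel calibration while the Dice term rewards overlap directly, which is why the two are commonly combined for sparse-foreground segmentation.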
Affiliation(s)
- Hui Chen
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Jiyu Liu
- Department of Radiology, Shanghai Pulmonary Hospital, Shanghai, 200433, China
- Liangjian Lu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Ting Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Xiaomin Xu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Aina Chu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
10
Lung Nodule Detection from Feature Engineering to Deep Learning in Thoracic CT Images: a Comprehensive Review. J Digit Imaging 2020;33:655-677. [PMID: 31997045] [DOI: 10.1007/s10278-020-00320-6]
Abstract
This paper presents a systematic review of the literature on lung nodule detection in chest computed tomography (CT) images. Manual detection of lung nodules by a radiologist is a sequential, time-consuming process; the detection is subjective and depends on the radiologist's experience. Owing to the variation in shapes and appearances of lung nodules, it is very difficult to identify the proper location of a nodule among the huge number of slices generated by a CT scanner, and small nodules (< 10 mm in diameter) may be missed. A computer-aided diagnosis (CAD) system therefore acts as a "second opinion" for radiologists, supporting a quick final decision with higher accuracy and greater confidence. The goal of this survey is to present the current state-of-the-art works and their progress toward lung nodule detection to researchers and readers in this domain. The review covers works published from 2009 to April 2018, and the different nodule detection approaches are described in detail. Recently, deep learning (DL)-based approaches have been applied extensively to nodule detection and characterization; emphasis has therefore been given to convolutional neural network (CNN)-based DL approaches, describing the different CNN-based networks.
11
Abstract
An atypical growth of cells inside tissue is known as a nodular entity. Lung nodule segmentation from computed tomography (CT) images is crucial for early lung cancer diagnosis. One issue that complicates the segmentation of lung nodules is their homogeneous variants: the resemblance among nodules, and between nodules and neighboring regions, is very challenging to deal with. Here, we propose an end-to-end U-Net-based segmentation framework named DA-Net for efficient lung nodule segmentation. The method extracts rich features by integrating compactly and densely linked convolutional blocks with atrous convolution blocks, broadening the filters' field of view without losing resolution or coverage. We first extract lung ROI images from the whole CT scan slices using standard image processing operations and k-means clustering; this reduces the model's search space to the lungs, where the nodules are located, instead of the whole CT slice. The suggested model was evaluated on the LIDC-IDRI dataset, where DA-Net performed well, achieving an 81% Dice score and a 71.6% IoU score.
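The k-means-based lung ROI extraction step can be sketched with a 1-D k-means (k = 2) on slice intensities: air-filled lung clusters at low HU, soft tissue at high HU. This is a simplification of the paper's pre-processing, with toy HU values:

```python
import numpy as np

def lung_mask_kmeans(slice_hu, iters=10):
    """Crude lung-field mask via 1-D k-means (k=2) on the HU values
    of a CT slice: the low-intensity cluster is taken as lung/air.
    A sketch of the ROI-extraction idea, not the paper's pipeline."""
    x = slice_hu.ravel().astype(float)
    c = np.array([x.min(), x.max()])       # initial centroids
    for _ in range(iters):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                c[k] = x[labels == k].mean()
    low = c.argmin()                        # lung/air cluster index
    return np.abs(slice_hu[..., None] - c).argmin(axis=-1) == low

# toy slice: body at 40 HU with two "lung" regions near -800 HU
sl = np.full((8, 8), 40.0)
sl[2:6, 1:4] = -800.0
sl[2:6, 5:7] = -780.0
mask = lung_mask_kmeans(sl)
print(int(mask.sum()))
```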
12
Lung Nodule Segmentation with a Region-Based Fast Marching Method. SENSORS 2021; 21:s21051908. [PMID: 33803297 PMCID: PMC7967233 DOI: 10.3390/s21051908] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 02/27/2021] [Accepted: 03/02/2021] [Indexed: 11/16/2022]
Abstract
When dealing with computed tomography volume data, accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, and is a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the visual similarity between nodules and their surroundings, robust segmentation of nodules is a challenging problem. A segmentation algorithm based on the fast marching method is proposed that separates the image into regions with similar features, which are then merged by combining region growing with k-means. An evaluation was performed with two distinct methods (objective and subjective) applied to two different datasets: simulation data generated for this study and real patient data, respectively. The objective experimental results show that the proposed technique can accurately segment nodules, especially solid ones, with mean Dice scores of 0.933 and 0.901 for round and irregular nodules; for non-solid and cavitary nodules, performance dropped to mean Dice scores of 0.799 and 0.614, respectively. The proposed method was compared to active contour models and to two modern deep learning networks. It reached better overall accuracy than the active contour models, with results comparable to DBResNet but lower accuracy than 3D-UNet. The results show promise for the proposed method in computer-aided diagnosis applications.
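The fast marching method underlying this approach propagates a front from seed points at a speed derived from the image, recording each cell's arrival time; regions are then formed from the arrival-time field. A minimal first-order sketch on a 2-D grid (a Dijkstra-style discretisation, not the authors' region-based variant; `fast_marching` is an illustrative name):

```python
import heapq
import numpy as np

def fast_marching(speed, seed):
    """Approximate fast-marching arrival times on a 2-D grid:
    a front starts at `seed` and crosses each cell at rate
    `speed[y, x]` (Dijkstra's algorithm as a first-order scheme)."""
    h, w = speed.shape
    times = np.full((h, w), np.inf)
    times[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > times[y, x]:
            continue  # stale heap entry, already improved
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Crossing one cell takes 1/speed time units.
                nt = t + 1.0 / speed[ny, nx]
                if nt < times[ny, nx]:
                    times[ny, nx] = nt
                    heapq.heappush(heap, (nt, (ny, nx)))
    return times
```

With a uniform unit speed map, the arrival time reduces to the Manhattan distance from the seed; an image-dependent speed map makes the front slow down at edges, which is what yields feature-coherent regions.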
13
Paluru N, Dayal A, Jenssen HB, Sakinis T, Cenkeramaddi LR, Prakash J, Yalavarthy PK. Anam-Net: Anamorphic Depth Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:932-946. [PMID: 33544680 PMCID: PMC8544939 DOI: 10.1109/tnnls.2021.3054746] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 11/14/2020] [Accepted: 01/21/2021] [Indexed: 05/18/2023]
Abstract
Chest computed tomography (CT) imaging has become indispensable for staging and managing coronavirus disease 2019 (COVID-19), and anomalies/abnormalities associated with COVID-19 are currently evaluated mainly by visual scoring. Automated methods for quantifying COVID-19 abnormalities in these CT images would be invaluable to clinicians. The hallmark of COVID-19 in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. We propose an anamorphic depth embedding-based lightweight CNN, called Anam-Net, to segment anomalies in COVID-19 chest CT images. The proposed Anam-Net has 7.8 times fewer parameters than the state-of-the-art UNet (or its variants), making it lightweight and capable of providing inferences on mobile or resource-constrained (point-of-care) platforms. Results on chest CT test cases across different experiments showed that the proposed method provides good Dice similarity scores for both abnormal and normal regions in the lung. We benchmarked Anam-Net against other state-of-the-art architectures, such as ENet, LEDNet, UNet++, SegNet, Attention UNet, and DeepLabV3+. Anam-Net was also deployed on embedded systems, such as the Raspberry Pi 4 and NVIDIA Jetson Xavier, and in a mobile Android application (CovSeg), demonstrating its suitability for point-of-care platforms. The code, models, and mobile application are available at https://github.com/NaveenPaluru/Segmentation-COVID-19.
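The parameter savings behind lightweight encoder-decoder designs can be made concrete with simple arithmetic. The sketch below compares a standard 3 x 3 convolution with a depthwise-separable one; this is a generic illustration of how reductions of this magnitude arise, not Anam-Net's actual anamorphic depth embedding:

```python
def conv2d_params(c_in, c_out, k=3):
    """Weights plus biases of a standard k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def separable_conv2d_params(c_in, c_out, k=3):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise
    convolution, each with its own biases."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

standard = conv2d_params(256, 256)             # 590,080 parameters
separable = separable_conv2d_params(256, 256)  # 68,352 parameters
ratio = standard / separable                   # roughly 8.6x fewer
```

For a 256-channel layer, the separable variant needs roughly an order of magnitude fewer parameters, which is comparable in scale to the 7.8x reduction the abstract reports for the whole network.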
Affiliation(s)
- Naveen Paluru
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru 560 012, India
- Aveen Dayal
- Department of Information and Communication Technology, University of Agder, 4879 Grimstad, Norway
- Håvard Bjørke Jenssen
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, 0372 Oslo, Norway
- Artificial Intelligence AS, 0553 Oslo, Norway
- Tomas Sakinis
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, 0372 Oslo, Norway
- Artificial Intelligence AS, 0553 Oslo, Norway
- Jaya Prakash
- Department of Instrumentation and Applied Physics, Indian Institute of Science, Bengaluru 560 012, India
14
Usman M, Lee BD, Byon SS, Kim SH, Lee BI, Shin YG. Volumetric lung nodule segmentation using adaptive ROI with multi-view residual learning. Sci Rep 2020; 10:12839. [PMID: 32732963 PMCID: PMC7393083 DOI: 10.1038/s41598-020-69817-y] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 07/13/2020] [Indexed: 12/03/2022] Open
Abstract
Accurate quantification of pulmonary nodules can greatly assist the early diagnosis of lung cancer, improving patients' chances of survival. A number of nodule segmentation techniques have been proposed that either rely on a radiologist-provided 3-D volume of interest (VOI) or use a constant region of interest (ROI) for all slices; however, these techniques can only investigate nodule voxels within the given VOI. Such approaches prevent the method from investigating nodule presence outside the given VOI, and they include redundant (non-nodule) structures in the VOI, which limits segmentation accuracy. In this work, a novel semi-automated approach for 3-D segmentation of lung nodules in computed tomography scans is proposed. The technique has two stages. In the first stage, a 2-D ROI containing the nodule is provided as input for patch-wise exploration along the axial axis using a novel adaptive ROI algorithm. This strategy enables dynamic selection of the ROI in the surrounding slices, with a Deep Residual U-Net architecture investigating the presence of the nodule. This stage provides an initial estimate of the nodule, which is used to extract the VOI. In the second stage, the extracted VOI is further explored along the coronal and sagittal axes, in a patch-wise fashion, with Residual U-Nets. All the estimated masks are then fed into a consensus module to produce the final volumetric segmentation of the nodule. The algorithm was rigorously evaluated on the LIDC-IDRI dataset, the largest publicly available dataset. The proposed approach achieved an average Dice score of 87.5%, significantly higher than existing state-of-the-art techniques.
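A consensus module that fuses the masks estimated along the axial, coronal, and sagittal axes can be as simple as a per-voxel majority vote. The abstract does not specify the exact fusion rule, so the `consensus` helper below is a hedged sketch that assumes strict majority voting:

```python
import numpy as np

def consensus(masks, threshold=None):
    """Fuse binary masks predicted from different views by voting:
    a voxel is foreground if at least `threshold` views agree."""
    masks = np.asarray(masks, dtype=int)
    if threshold is None:
        threshold = masks.shape[0] // 2 + 1  # strict majority
    return masks.sum(axis=0) >= threshold

# Toy 1 x 3 masks standing in for the three per-view predictions.
axial    = np.array([[1, 1, 0]])
coronal  = np.array([[1, 0, 0]])
sagittal = np.array([[1, 1, 1]])
fused = consensus([axial, coronal, sagittal])  # [[True, True, False]]
```

Lowering `threshold` to 1 would turn this into a union of the views (higher sensitivity), while setting it to the number of views gives their intersection (higher precision).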
Affiliation(s)
- Muhammad Usman
- Department of Computer Science and Engineering, Seoul National University, 08826, Seoul, South Korea
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Byoung-Dai Lee
- School of Computer Science and Engineering, Kyonggi University, Suwon, 16227, South Korea
- Shi-Sub Byon
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Sung-Hyun Kim
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Byung-Il Lee
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 08826, Seoul, South Korea
15
Wu W, Gao L, Duan H, Huang G, Ye X, Nie S. Segmentation of pulmonary nodules in CT images based on 3D-UNET combined with three-dimensional conditional random field optimization. Med Phys 2020; 47:4054-4063. [PMID: 32428969 DOI: 10.1002/mp.14248] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 05/10/2020] [Accepted: 05/13/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Pulmonary nodules are a potential manifestation of lung cancer. In computer-aided diagnosis (CAD) of lung cancer, accurately extracting the complete boundary of pulmonary nodules in computed tomography (CT) scans is of great significance: it provides doctors with important information, such as tumor size and density, that assists subsequent diagnosis and treatment. Segmentation of lung nodules also plays a pivotal role in molecular subtyping and radiomics of lung cancer. Existing methods struggle to handle the boundaries of multiple types of lung nodules in CT images with a single model. METHOD To address this problem, this paper proposes a three-dimensional (3D)-UNET network model optimized by a 3D conditional random field (3D-CRF) to segment pulmonary nodules. On the basis of 3D-UNET, the 3D-CRF is used to optimize the sample output of the training set so as to update the network weights during training, reduce the model training time, and reduce the model's loss rate. We selected 936 sets of pulmonary nodule data from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database to train and test the model, and used clinical data from partner hospitals for additional validation. RESULTS AND CONCLUSIONS The results show that our method is accurate and effective. It is particularly beneficial for the segmentation of adherent pulmonary nodules (the juxta-pleural and juxta-vascular nodules) and ground-glass pulmonary nodules (GGNs).
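A conditional random field encourages neighbouring voxels to share labels. As a toy stand-in for that pairwise smoothing effect (not the 3D-CRF inference the paper uses), the hypothetical `smooth_labels` helper below relabels each voxel by the majority of its 3 x 3 x 3 neighbourhood, which removes isolated false positives from a binary mask:

```python
import numpy as np

def smooth_labels(mask, votes=14):
    """Toy stand-in for CRF pairwise smoothing: keep a voxel as
    foreground only if at least `votes` of the 27 voxels in its
    3 x 3 x 3 neighbourhood (including itself) are foreground."""
    m = np.asarray(mask, dtype=int)
    padded = np.pad(m, 1, mode="edge")
    counts = np.zeros_like(m)
    # Sum the 27 shifted copies of the padded volume.
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                counts += padded[1 + dz:1 + dz + m.shape[0],
                                 1 + dy:1 + dy + m.shape[1],
                                 1 + dx:1 + dx + m.shape[2]]
    return counts >= votes
```

A real CRF additionally conditions the pairwise term on image intensities, so label smoothing is suppressed across strong edges; this sketch only captures the spatial-coherence half of that idea.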
Affiliation(s)
- Wenhao Wu
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, People's Republic of China
- Lei Gao
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, People's Republic of China
- Huihong Duan
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, People's Republic of China
- Gang Huang
- Shanghai University of Medicine & Health Science, Shanghai, 201318, People's Republic of China
- Xiaodan Ye
- Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, People's Republic of China
- Shengdong Nie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, People's Republic of China
16
Cao H, Liu H, Song E, Hung CC, Ma G, Xu X, Jin R, Lu J. Dual-branch residual network for lung nodule segmentation. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2019.105934] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]