1
Guo Z, Tan Z, Feng J, Zhou J. 3D Vascular Segmentation Supervised by 2D Annotation of Maximum Intensity Projection. IEEE Trans Med Imaging 2024; 43:2241-2253. [PMID: 38319757] [DOI: 10.1109/tmi.2024.3362847]
Abstract
Vascular structure segmentation plays a crucial role in medical analysis and clinical applications. The practical adoption of fully supervised segmentation models is impeded by the intricacy and time-consuming nature of annotating vessels in 3D space, which has spurred the exploration of weakly supervised approaches that reduce reliance on expensive segmentation annotations. However, existing weakly supervised methods used for organ segmentation, which rely on points, bounding boxes, or scribbles, perform poorly on sparse vascular structures. To alleviate this issue, we employ maximum intensity projection (MIP) to reduce the 3D volume to a 2D image for efficient annotation, and the 2D labels are then used to supervise the training of a 3D vessel segmentation model. We first generate pseudo-labels for 3D blood vessels from the annotations of the 2D projections. Then, taking the acquisition method of the 2D labels into account, we introduce a weakly supervised network that fuses 2D and 3D deep features via MIP to further improve segmentation performance. Finally, we integrate confidence learning and uncertainty estimation to refine the generated pseudo-labels and fine-tune the segmentation network. Our method is validated on five datasets (covering cerebral vessels, the aorta, and coronary arteries), demonstrating highly competitive segmentation performance and the potential to significantly reduce the time and effort required for vessel annotation. Our code is available at: https://github.com/gzq17/Weakly-Supervised-by-MIP.
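As context for the core idea (not the authors' released implementation, which is linked above), the MIP reduction and a naive way of lifting a 2D annotation back into a 3D pseudo-label can be sketched in NumPy; the function names and the argmax-based back-projection are illustrative assumptions:

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3D volume to a 2D image by keeping, along each ray,
    only the voxel with the highest intensity."""
    return volume.max(axis=axis)

def backproject_mask(volume, mip_mask, axis=0):
    """Lift a 2D MIP annotation back into 3D by marking, on each annotated
    ray, the voxel that produced the maximum (a naive pseudo-label)."""
    argmax = volume.argmax(axis=axis)          # index of the max voxel per ray
    pseudo = np.zeros(volume.shape, dtype=bool)
    rows, cols = np.indices(mip_mask.shape)
    sel = mip_mask.astype(bool)
    coords = [rows[sel], cols[sel]]
    coords.insert(axis, argmax[sel])           # re-insert the projected axis
    pseudo[tuple(coords)] = True
    return pseudo
```

Real pipelines refine such pseudo-labels (here via confidence learning and uncertainty estimation), since a single argmax voxel per ray badly under-covers a vessel.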
2
Quanyang W, Yao H, Sicong W, Linlin Q, Zewei Z, Donghui H, Hongjia L, Shijun Z. Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis. Cancer Med 2024; 13:e7140. [PMID: 38581113] [PMCID: PMC10997848] [DOI: 10.1002/cam4.7140]
Abstract
BACKGROUND The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis. METHODOLOGY This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening. RESULTS AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing. CONCLUSIONS AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
Affiliation(s)
- Wu Quanyang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huang Yao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Sicong
- Magnetic Resonance Imaging Research, General Electric Healthcare (China), Beijing, China
- Qi Linlin
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Zewei
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hou Donghui
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Hongjia
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhao Shijun
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
3
Kumar V, Prabha C, Sharma P, Mittal N, Askar SS, Abouhawwash M. Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images. BMC Med Imaging 2024; 24:63. [PMID: 38500083] [PMCID: PMC10946139] [DOI: 10.1186/s12880-024-01241-4]
Abstract
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors developed a support system using three distinct deep learning models, ResNet-50, EfficientNet-B3, and ResNet-101, together with transfer learning, to predict lung cancer and thereby help reduce the mortality rate associated with this condition. Using a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each image is classified into one of four categories. Although deep learning is still making progress in analyzing and understanding cancer data, this research marks a significant step forward in the fight against cancer. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. Overall, the Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data coverage, the authors implemented a data augmentation strategy, contributing to better health outcomes and a potential reduction in lung cancer mortality.
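As an aside on how a "Fusion Model" over several CNNs can work (the abstract does not specify the scheme; simple softmax averaging, a common late-fusion choice, is assumed here):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(logits_list):
    """Late fusion: average the softmax probabilities produced by
    several independently trained classifiers."""
    probs = [softmax(l) for l in logits_list]
    return np.mean(probs, axis=0)
```

Averaging probabilities (rather than raw logits) keeps each model's contribution on a comparable scale regardless of its logit magnitudes.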
Affiliation(s)
- Vinod Kumar
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Chander Prabha
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Preeti Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Nitin Mittal
- Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, Haryana, India
- S S Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
4
Jian M, Jin H, Zhang L, Wei B, Yu H. DBPNDNet: dual-branch networks using 3DCNN toward pulmonary nodule detection. Med Biol Eng Comput 2024; 62:563-573. [PMID: 37945795] [DOI: 10.1007/s11517-023-02957-1]
Abstract
With the advancement of artificial intelligence, CNNs have been successfully introduced into medical data analysis. Clinically, automatic pulmonary nodule detection remains an intractable problem, since nodules in the lung parenchyma or on the chest wall are difficult to distinguish visually from shadows, background noise, blood vessels, and bones. When making a diagnosis, clinicians therefore first attend to the intensity cues and contour characteristics of pulmonary nodules in order to locate their spatial positions. To automate this process, we propose an efficient multi-task, dual-branch 3D convolutional neural network architecture, called DBPNDNet, for automatic pulmonary nodule detection and segmentation. Within the dual-branch structure, one branch is designed for candidate region extraction for nodule detection, while the other performs semantic segmentation of the nodule lesion region. In addition, we develop a 3D attention-weighted feature fusion module informed by the clinician's diagnostic perspective, so that the information captured by the segmentation branch further promotes the performance of the detection branch. The framework was evaluated on a commonly used dataset for medical image analysis. On average, it achieved a sensitivity of 91.33% and reached 97.14% sensitivity at 8 false positives per scan. The experimental results indicate that our framework outperforms other mainstream approaches.
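The attention-weighted feature fusion idea, in which segmentation-branch features reweight detection-branch features, can be illustrated with a minimal channel-attention gate; the pooling and gating choices below are generic assumptions, not the DBPNDNet design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weighted_fusion(det_feat, seg_feat, w, b):
    """Reweight detection-branch features per channel using a gate
    computed from globally pooled segmentation-branch features.
    Shapes: det_feat, seg_feat (C, D, H, W); w (C, C); b (C,)."""
    pooled = seg_feat.mean(axis=(1, 2, 3))        # global average pool -> (C,)
    gate = sigmoid(w @ pooled + b)                # channel weights in (0, 1)
    return det_feat * gate[:, None, None, None]   # broadcast over D, H, W
```

With zero-initialized `w` and `b` the gate is uniformly 0.5, so training has to learn which channels the segmentation cues should amplify or suppress.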
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Information Science and Technology, Linyi University, Linyi, China
- Haodong Jin
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linsong Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Benzheng Wei
- Medical Artificial Intelligence Research Center, Shandong University of Traditional Chinese Medicine, Qingdao, China
- Hui Yu
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- School of Creative Technologies, University of Portsmouth, Portsmouth, UK
5
Kang S, Park J, Lee M. Machine learning-enabled autonomous operation for atomic force microscopes. Rev Sci Instrum 2023; 94:123704. [PMID: 38109471] [DOI: 10.1063/5.0172682]
Abstract
The use of scientific instruments generally requires prior knowledge and skill on the part of operators, and thus, the obtained results often vary with different operators. The autonomous operation of instruments producing reproducible and reliable results with little or no operator-to-operator variation could be of considerable benefit. Here, we demonstrate the autonomous operation of an atomic force microscope using a machine learning-based object detection technique. The developed atomic force microscope was able to autonomously perform instrument initialization, surface imaging, and image analysis. Two cameras were employed, and a machine-learning algorithm of region-based convolutional neural networks was implemented, to detect and recognize objects of interest and to perform self-calibration, alignment, and operation of each part of the instrument, as well as the analysis of obtained images. Our machine learning-based approach could be generalized to apply to various types of scanning probe microscopes and other scientific instruments.
Affiliation(s)
- Seongseok Kang
- Department of Physics, Chungbuk National University, Seowon-Gu, Cheongju 28644, South Korea
- Junhong Park
- Department of Physics, Chungbuk National University, Seowon-Gu, Cheongju 28644, South Korea
- Manhee Lee
- Department of Physics, Chungbuk National University, Seowon-Gu, Cheongju 28644, South Korea
6
Tyagi S, Kushnure DT, Talbar SN. An amalgamation of vision transformer with convolutional neural network for automatic lung tumor segmentation. Comput Med Imaging Graph 2023; 108:102258. [PMID: 37315396] [DOI: 10.1016/j.compmedimag.2023.102258]
Abstract
Lung cancer has the highest mortality rate among cancers. Its diagnosis and treatment planning depend on accurate segmentation of the tumor, which is tedious to do manually, as radiologists are overburdened with numerous medical imaging tests owing to the increase in cancer patients and the COVID pandemic. Automatic segmentation techniques therefore play an essential role in assisting medical experts. Segmentation approaches based on convolutional neural networks have provided state-of-the-art performance, but they cannot capture long-range relations because of the local, region-based convolution operator. Vision transformers can resolve this issue by capturing global multi-contextual features. To exploit this advantage, we propose an approach for lung tumor segmentation using an amalgamation of a vision transformer and a convolutional neural network. We design the network as an encoder-decoder structure, with convolution blocks deployed in the initial layers of the encoder to capture features carrying essential information, and corresponding blocks in the final layers of the decoder. The deeper layers use transformer blocks with a self-attention mechanism to capture more detailed global feature maps. We use a recently proposed unified loss function that combines cross-entropy and Dice-based losses for network optimization. We trained the network on the publicly available NSCLC-Radiomics dataset and tested its generalizability on a dataset collected from a local hospital. It achieved average Dice coefficients of 0.7468 and 0.6847 and Hausdorff distances of 15.336 and 17.435 on the public and local test data, respectively.
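The unified cross-entropy-plus-Dice loss mentioned above is a standard construction and can be sketched as follows (the mixing weight `alpha` is an assumed parameter, not taken from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary probability map."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over voxels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def unified_loss(pred, target, alpha=0.5):
    """Weighted combination of cross-entropy and Dice terms."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)
```

The cross-entropy term gives smooth per-voxel gradients, while the Dice term directly targets overlap and so counteracts the foreground/background imbalance typical of tumor masks.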
Affiliation(s)
- Shweta Tyagi
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Devidas T Kushnure
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Sanjay N Talbar
- Centre of Excellence in Signal and Image Processing, Department of Electronics and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
7
Shivwanshi RR, Nirala N. Hyperparameter optimization and development of an advanced CNN-based technique for lung nodule assessment. Phys Med Biol 2023; 68:175038. [PMID: 37567211] [DOI: 10.1088/1361-6560/acef8c]
Abstract
Objective. This paper proposes an advanced methodology for assessing lung nodules in computed tomography (CT) images using automated techniques, in order to detect lung cancer at an early stage. Approach. The proposed methodology utilizes a fixed-size 3 × 3 kernel in a convolutional neural network (CNN) for relevant feature extraction. The network architecture comprises 13 layers, including six convolution layers for deep local and global feature extraction. The nodule detection architecture is enhanced by incorporating a transfer-learning-based EfficientNetV_2 network (TLEV2N) to improve training performance. Nodule classification is achieved by integrating the EfficientNet_V2 CNN architecture for more accurate benign/malignant classification. The network is fine-tuned to extract relevant features using a deep network while maintaining performance through suitable hyperparameters. Main results. The proposed method significantly reduces the false-negative rate, with the network achieving an accuracy of 97.56% and a specificity of 98.4%. Using the 3 × 3 kernel provides valuable insight into minute pixel variations and enables feature extraction at a broader morphological level. The network's continued responsiveness to fine-tuning of initial values allows further optimization, leading to a standardized system capable of assessing diverse thoracic CT datasets. Significance. This paper highlights the potential of non-invasive techniques for the early detection of lung cancer through the analysis of low-dose CT images, offering improved accuracy in detecting lung nodules and the potential to enhance early lung cancer detection overall.
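A fixed 3 × 3 kernel sliding over the image, as described in the Approach, amounts to the following operation (a plain single-channel "valid" convolution in cross-correlation form, written explicitly for clarity; not the paper's code):

```python
import numpy as np

def conv2d_3x3(image, kernel):
    """Valid 2D convolution with a 3x3 kernel (cross-correlation form,
    as CNN frameworks implement it). Output shrinks by 2 per axis."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (image[i:i + 3, j:j + 3] * kernel).sum()
    return out
```

Each output value summarizes a 3 × 3 neighborhood, which is why small kernels are sensitive to the "minute pixel variation" the abstract mentions while deeper stacks of them grow the receptive field for morphological context.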
8
Liu L, Chang J, Liu Z, Zhang P, Xu X, Shang H. Hybrid Contextual Semantic Network for Accurate Segmentation and Detection of Small-Size Stroke Lesions From MRI. IEEE J Biomed Health Inform 2023; 27:4062-4073. [PMID: 37155390] [DOI: 10.1109/jbhi.2023.3273771]
Abstract
Stroke is a cerebrovascular disease with high mortality and disability rates. A stroke typically produces lesions of different sizes, and the accurate segmentation and detection of small-size lesions is closely related to patient prognosis. However, while large lesions are usually identified correctly, small-size lesions are often missed. This article presents a hybrid contextual semantic network (HCSNet) that can accurately and simultaneously segment and detect small-size stroke lesions in magnetic resonance images. HCSNet inherits the advantages of the encoder-decoder architecture and applies a novel hybrid contextual semantic module that generates high-quality contextual semantic features from spatial and channel contextual semantic features through the skip-connection layer. Moreover, a mixing-loss function is proposed to optimize HCSNet for unbalanced small-size lesions. HCSNet is trained and evaluated on 2D magnetic resonance images from the Anatomical Tracings of Lesions After Stroke challenge (ATLAS R2.0). Extensive experiments demonstrate that HCSNet outperforms several other state-of-the-art methods in segmenting and detecting small-size stroke lesions, and visualization and ablation experiments reveal that the hybrid semantic module improves both segmentation and detection performance.
9
Sakshiwala, Singh MP. An ensemble of three-dimensional deep neural network models for multi-attribute scoring and classification of pulmonary nodules. Proc Inst Mech Eng H 2023; 237:946-957. [PMID: 37366554] [DOI: 10.1177/09544119231182037]
Abstract
Lung cancer is the uncontrolled growth of cells that originate in the lung parenchyma or in the cells lining the air passages; these cells divide rapidly to form malignant tumors. This paper proposes a multi-task ensemble of three-dimensional (3D) deep neural network (DNN) models, namely a pre-trained EfficientNetB0, a BiGRU-based SEResNext101, and the proposed LungNet. The ensemble performs binary classification and regression tasks to accurately classify benign and malignant pulmonary nodules. The study also explores attribute importance and proposes a domain-knowledge-based regularization technique. The proposed model is evaluated on the public benchmark LIDC-IDRI dataset. A comparative study shows that when coefficients generated by a random forest (RF) are used in the loss function, the proposed ensemble offers better prediction capability, with an accuracy of 96.4%, than state-of-the-art methods. In addition, receiver operating characteristic curves show that the ensemble outperforms its base learners. The proposed CAD model can thus efficiently detect malignant pulmonary nodules.
Affiliation(s)
- Sakshiwala
- Department of Computer Science and Engineering, NIT Patna, Patna, Bihar, India
10
Bhattacharjee A, Rabea S, Bhattacharjee A, Elkaeed EB, Murugan R, Selim HMRM, Sahu RK, Shazly GA, Salem Bekhit MM. A multi-class deep learning model for early lung cancer and chronic kidney disease detection using computed tomography images. Front Oncol 2023; 13:1193746. [PMID: 37333825] [PMCID: PMC10272771] [DOI: 10.3389/fonc.2023.1193746]
Abstract
Lung cancer is a fatal disease caused by abnormal proliferation of cells in the lungs. Similarly, chronic kidney disorders affect people worldwide and can lead to renal failure and impaired kidney function; cyst development, kidney stones, and tumors are frequent conditions that impair kidney function. Since these conditions are generally asymptomatic, early and accurate identification of lung cancer and renal conditions is necessary to prevent serious complications. Artificial intelligence plays a vital role in the early detection of lethal diseases. In this paper, we propose a modified Xception deep-neural-network-based computer-aided diagnosis model, consisting of an Xception network with transfer-learned ImageNet weights and a fine-tuned head, for automatic multi-class classification of lung and kidney computed tomography images. The proposed model obtained 99.39% accuracy, 99.33% precision, 98% recall, and a 98.67% F1-score for lung cancer multi-class classification, and 100% accuracy, F1-score, recall, and precision for kidney disease multi-class classification. The modified Xception model also outperformed the original Xception model and existing methods. It can therefore serve as a support tool for radiologists and nephrologists in the early detection of lung cancer and chronic kidney disease, respectively.
Affiliation(s)
- Ananya Bhattacharjee
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, India
- Sameh Rabea
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- Abhishek Bhattacharjee
- Department of Pharmaceutical Sciences, Assam University (A Central University), Silchar, India
- Eslam B. Elkaeed
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- R. Murugan
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, India
- Heba Mohammed Refat M. Selim
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- Microbiology and Immunology Department, Faculty of Pharmacy (Girls), Al-Azhar University, Cairo, Egypt
- Ram Kumar Sahu
- Department of Pharmaceutical Sciences, Hemvati Nandan Bahuguna Garhwal University (A Central University), Tehri Garhwal, India
- Gamal A. Shazly
- Kayyali Chair for Pharmaceutical Industry, Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
- Mounir M. Salem Bekhit
- Kayyali Chair for Pharmaceutical Industry, Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
11
Chen Y, Hou X, Yang Y, Ge Q, Zhou Y, Nie S. A Novel Deep Learning Model Based on Multi-Scale and Multi-View for Detection of Pulmonary Nodules. J Digit Imaging 2023; 36:688-699. [PMID: 36544067] [PMCID: PMC10039158] [DOI: 10.1007/s10278-022-00749-x]
Abstract
Lung cancer manifests as pulmonary nodules in its early stage, so the early and accurate detection of these nodules is crucial for improving patient survival. We propose a novel two-stage model for lung nodule detection. In the candidate detection stage, a deep learning model based on 3D context information roughly segments the preprocessed image to obtain candidate nodules. In this model, 3D image blocks are fed into the network, which learns the contextual information between the slices of each 3D block; its parameter count is equivalent to that of a 2D convolutional neural network (CNN), yet it can effectively learn the 3D context of the nodules. In the false-positive reduction stage, we propose a multi-scale shared convolutional structure. Our detection model shows no significant increase in parameters or computation in either the multi-scale or the multi-view stage. The proposed model was evaluated on 888 computed tomography (CT) scans from the LIDC-IDRI dataset and achieved a competition performance metric (CPM) score of 0.957, with an average detection sensitivity of 0.971 at 1.0 false positive per scan. Furthermore, an average detection sensitivity of 0.933 at 1.0 false positive per scan was achieved on data from Shanghai Pulmonary Hospital. Our model exhibits higher detection sensitivity, a lower false-positive rate, and better generalization than current lung nodule detection methods, with fewer parameters and less computational complexity, which broadens the possibilities for clinical application.
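For context, the competition performance metric (CPM) reported above is conventionally the sensitivity averaged at seven false-positive rates per scan, as in LUNA16-style evaluations; a sketch, with the interpolation choice as our assumption:

```python
import numpy as np

# Standard operating points (false positives per scan) used by the
# competition performance metric (CPM) in LUNA16-style evaluations.
FP_RATES = (0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0)

def cpm_from_froc(fp_per_scan, sensitivity):
    """Interpolate a FROC curve at the seven standard FP rates and
    average the sensitivities. Inputs are assumed sorted by FP rate."""
    sens = np.interp(FP_RATES, fp_per_scan, sensitivity)
    return float(sens.mean())
```

A single number like 0.957 therefore summarizes the whole low-FP portion of the FROC curve rather than one operating point.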
Affiliation(s)
- Yang Chen
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yifeng Yang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qianqian Ge
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yan Zhou
- Department of Radiology, School of Medicine, Renji Hospital, Shanghai Jiao Tong University, Shanghai, 200127, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
12
Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput Biol Med 2023; 155:106657. [PMID: 36791551] [DOI: 10.1016/j.compbiomed.2023.106657]
Abstract
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused, and tumor segmentation based on multi-modal PET-CT images is an important part of diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens modality specificity. In addition, the information interaction between different modal images is usually performed by simple addition or concatenation, which can introduce irrelevant information during multi-modal semantic feature fusion, so effective features are not highlighted. To overcome this problem, this paper proposes a novel Multi-modal Fusion and Calibration Network (MFCNet) for tumor segmentation in three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed; the MFDB fuses complementary features of multi-modal images while retaining the unique features of each modality. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed; the MMCB guides the network to focus on the tumor region by combining decoding features from different branches via an attention mechanism and extracting multi-scale pathological features with convolution kernels of different sizes. MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreatic cancer). The average Dice values of the proposed network are 74.14% and 76.20%, and the average Hausdorff distances are 6.41 and 6.84, on the public and in-house datasets, respectively; MFCNet outperforms state-of-the-art methods on both.
Affiliation(s)
- Fei Wang
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Chao Cheng
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), Shanghai, 200433, China
- Weiwei Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Heng Wang
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Wenting Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
- Zhaobang Liu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
13
Wu Y, Qi Q, Qi S, Yang L, Wang H, Yu H, Li J, Wang G, Zhang P, Liang Z, Chen R. Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans. Comput Biol Med 2023; 154:106567. [PMID: 36738705 PMCID: PMC9869624 DOI: 10.1016/j.compbiomed.2023.106567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2022] [Revised: 12/30/2022] [Accepted: 01/22/2023] [Indexed: 01/24/2023]
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) and community-acquired pneumonia (CAP) present a high degree of similarity in chest computed tomography (CT) images. Therefore, a procedure for accurately and automatically distinguishing between them is crucial. METHODS A deep learning method for distinguishing COVID-19 from CAP is developed using maximum intensity projection (MIP) images from CT scans. LinkNet is employed for lung segmentation of chest CT images. MIP images are produced by superposing the maximum gray values of intrapulmonary CT voxels. The MIP images are input into a capsule network for patient-level prediction and diagnosis of COVID-19. The network is trained using 333 CT scans (168 COVID-19/165 CAP) and validated on three external datasets containing 3581 CT scans (2110 COVID-19/1471 CAP). RESULTS LinkNet achieves the highest Dice coefficient of 0.983 for lung segmentation. For the classification of COVID-19 and CAP, the capsule network with the DenseNet-121 feature extractor outperforms ResNet-50 and Inception-V3, achieving an accuracy of 0.970 on the training dataset. Without MIP or the capsule network, the accuracy decreases to 0.857 and 0.818, respectively. Accuracy scores of 0.961, 0.997, and 0.949 are achieved on the external validation datasets. The proposed method has higher or comparable sensitivity compared with ten state-of-the-art methods. CONCLUSIONS The proposed method illustrates the feasibility of applying MIP images from CT scans to distinguish COVID-19 from CAP using capsule networks. MIP images provide conspicuous benefits when exploiting deep learning to detect COVID-19 lesions from CT scans and the capsule network improves COVID-19 diagnosis.
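The intrapulmonary MIP construction described above (superposing the maximum gray value along one axis, restricted to the segmented lung) can be sketched as follows; the `lung_mask` argument, array layout, and fill value for empty columns are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mip_projection(volume, lung_mask=None, axis=0):
    """Maximum intensity projection (MIP) of a 3-D CT volume.

    volume:    3-D array of CT intensities.
    lung_mask: optional boolean array of the same shape; voxels outside
               the mask are ignored, mirroring an intrapulmonary MIP.
    axis:      projection axis (0 = slice-stacking direction)."""
    vol = volume.astype(float)
    if lung_mask is not None:
        # Push non-lung voxels to -inf so they never win the maximum.
        vol = np.where(lung_mask, vol, -np.inf)
    mip = np.max(vol, axis=axis)           # superpose maximum gray values
    # Columns containing no lung voxels at all are filled with 0.
    return np.where(np.isfinite(mip), mip, 0.0)
```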
Affiliation(s)
- Yanan Wu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Qianqian Qi
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Liming Yang
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Hanlin Wang
- Department of Radiology, General Hospital of the Yangtze River Shipping, Wuhan, China
- Hui Yu
- General Practice Center, The Seventh Affiliated Hospital, Southern Medical University, Guangzhou, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Gang Wang
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Ping Zhang
- Department of Pulmonary and Critical Care Medicine, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China
- Zhenyu Liang
- State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Rongchang Chen
- Key Laboratory of Respiratory Disease of Shenzhen, Shenzhen Institute of Respiratory Disease, Shenzhen People's Hospital (Second Affiliated Hospital of Jinan University, First Affiliated Hospital of South University of Science and Technology of China), Shenzhen, China
14
Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. BIOMEDICAL ENGINEERING ADVANCES 2023. [DOI: 10.1016/j.bea.2023.100076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
15
Maynord M, Farhangi MM, Fermüller C, Aloimonos Y, Levine G, Petrick N, Sahiner B, Pezeshk A. Semi-supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT. Med Phys 2023. [PMID: 36630691 DOI: 10.1002/mp.16219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 12/14/2022] [Accepted: 12/23/2022] [Indexed: 01/13/2023] Open
Abstract
PURPOSE Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time-consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly annotated (produced for use by humans rather than machines, and lacking the information machine learning depends upon), this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe using an independent expert-annotated dataset. We demonstrate that when availability of expert annotations is severely limited, the inclusion of weakly labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates.
CONCLUSIONS Our proposed approach can effectively merge a weakly-annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
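The CPM reported above is defined as an average of sensitivities at fixed false-positive rates. A minimal sketch, assuming the FROC curve has already been evaluated at the seven operating points conventionally used in the LUNA16 challenge:

```python
def competitive_performance_metric(froc_points,
                                   fp_rates=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """CPM: average sensitivity at predefined false-positives-per-scan
    operating points (the seven LUNA16 points by default).

    froc_points: dict mapping false-positives-per-scan -> sensitivity,
    assumed to already contain every requested operating point."""
    return sum(froc_points[r] for r in fp_rates) / len(fp_rates)
```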
Affiliation(s)
- Michael Maynord
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA; Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Cornelia Fermüller
- University of Maryland, Institute for Advanced Computer Studies, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Yiannis Aloimonos
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Gary Levine
- Division of Radiological Imaging Devices and Electronic Products, CDRH, FDA, Silver Spring, Maryland, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
16
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
17
Wang H, Tang N, Zhang C, Hao Y, Meng X, Li J. Practice toward standardized performance testing of computer-aided detection algorithms for pulmonary nodule. Front Public Health 2022; 10:1071673. [PMID: 36568775 PMCID: PMC9768365 DOI: 10.3389/fpubh.2022.1071673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022] Open
Abstract
This study aimed to put into practice a standardized protocol for testing the performance of computer-aided detection (CAD) algorithms for pulmonary nodules. A test dataset was established according to a standardized procedure, including data collection, curation, and annotation. Six types of pulmonary nodules were manually annotated as the reference standard. Three specific rules for matching algorithm output with the reference standard were applied and compared: (1) "center hit" (whether the center of the algorithm-highlighted region of interest (ROI) hit the ROI of the reference standard); (2) "center distance" (whether the distance between the algorithm-highlighted ROI center and the reference standard center was below a certain threshold); (3) "area overlap" (whether the overlap between the algorithm-highlighted ROI and the reference standard was above a certain threshold). Performance metrics were calculated and the results were compared among ten algorithms under test (AUTs). The test set consisted of CT sequences from 593 patients. Under the "center hit" rule, the average recall rate, average precision, and average F1 score of the ten AUTs were 54.68%, 38.19%, and 42.39%, respectively. Correspondingly, the results under the "center distance" rule were 55.43%, 38.69%, and 42.96%, and the results under the "area overlap" rule were 40.35%, 27.75%, and 31.13%. Among the six types of pulmonary nodules, the AUTs showed the highest miss rate for pure ground-glass nodules, with an average of 59.32%, followed by pleural nodules and solid nodules, with averages of 49.80% and 42.21%, respectively. The algorithm testing results changed with the specific matching method adopted in the testing process. The AUTs showed uneven performance on different types of pulmonary nodules. This centralized testing protocol supports comparison between algorithms with similar intended use and helps evaluate algorithm performance.
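The three matching rules can be made concrete for 2-D axis-aligned ROIs. The box encoding `(x0, y0, x1, y1)` is an illustrative assumption, and treating "area overlap" as intersection-over-union is also an assumption, since the abstract does not specify the overlap measure.

```python
def center(box):
    # box = (x0, y0, x1, y1); center of an axis-aligned ROI
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def center_hit(pred, ref):
    """Rule 1: does the center of the predicted ROI fall inside the reference ROI?"""
    cx, cy = center(pred)
    return ref[0] <= cx <= ref[2] and ref[1] <= cy <= ref[3]

def center_distance(pred, ref, threshold):
    """Rule 2: is the distance between the two ROI centers below a threshold?"""
    (px, py), (rx, ry) = center(pred), center(ref)
    return ((px - rx) ** 2 + (py - ry) ** 2) ** 0.5 <= threshold

def area_overlap(pred, ref, threshold):
    """Rule 3: is the overlap (here, IoU) of the two ROIs above a threshold?"""
    ix0, iy0 = max(pred[0], ref[0]), max(pred[1], ref[1])
    ix1, iy1 = min(pred[2], ref[2]), min(pred[3], ref[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    pa = (pred[2] - pred[0]) * (pred[3] - pred[1])
    ra = (ref[2] - ref[0]) * (ref[3] - ref[1])
    union = pa + ra - inter
    return inter / union >= threshold if union > 0 else False
```

As the reported results suggest, the same set of detections can pass or fail depending on which rule (and threshold) the test protocol adopts.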
Affiliation(s)
- Hao Wang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Na Tang
- School of Bioengineering, Chongqing University, Chongqing, China
- Chao Zhang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Ye Hao
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiangfeng Meng (corresponding author)
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Jiage Li
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
18
Gu Z, Li Y, Luo H, Zhang C, Du H. Cross attention guided multi-scale feature fusion for false-positive reduction in pulmonary nodule detection. Comput Biol Med 2022; 151:106302. [PMID: 36401972 DOI: 10.1016/j.compbiomed.2022.106302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 10/24/2022] [Accepted: 11/06/2022] [Indexed: 11/10/2022]
Abstract
False-positive reduction is a crucial step in computer-aided diagnosis (CAD) systems for pulmonary nodule detection and plays an important role in lung cancer diagnosis. In this paper, we propose a novel cross-attention-guided multi-scale feature fusion method for false-positive reduction in pulmonary nodule detection. Specifically, a 3D SENet50 fed with a candidate nodule cube is applied as the backbone to acquire multi-scale coarse features. Then, the coarse features are refined and fused by the multi-scale fusion part to achieve a better feature extraction result. Finally, a 3D spatial pyramid pooling module is used to enlarge the receptive field and a distributed aligned linear classifier is applied to obtain the confidence score. In addition, each of five nodule cubes of different sizes centered on every candidate nodule position is fed into the proposed framework to obtain a confidence score separately, and a weighted fusion method is used to improve the generalization performance of the model. Extensive experiments are conducted to demonstrate the classification performance of the proposed model. The data used in our work are from the LUNA16 pulmonary nodule detection challenge, in which the number of true-positive pulmonary nodules is 1,557 and the number of false-positive ones is 753,418. The new method is evaluated on the LUNA16 dataset and achieves a competitive performance metric (CPM) score of 84.8%.
Affiliation(s)
- Zhongxuan Gu
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Yueyang Li
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Haichi Luo
- College of Internet of Things Engineering, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Caidi Zhang
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
- Hongqun Du
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
19
Shafi I, Din S, Khan A, Díez IDLT, Casanova RDJP, Pifarre KT, Ashraf I. An Effective Method for Lung Cancer Diagnosis from CT Scan Using Deep Learning-Based Support Vector Network. Cancers (Basel) 2022; 14:5457. [PMID: 36358875 PMCID: PMC9657078 DOI: 10.3390/cancers14215457] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 10/29/2022] [Accepted: 11/02/2022] [Indexed: 09/29/2023] Open
Abstract
The diagnosis of early-stage lung cancer is challenging due to its asymptomatic nature, especially given the repeated radiation exposure and high cost of computed tomography (CT). Examining lung CT images to detect pulmonary nodules, especially lung cancer lesions, is also tedious and prone to errors even for a specialist. This study proposes a cancer diagnostic model based on a deep learning-enabled support vector machine (SVM). The proposed computer-aided diagnosis (CAD) model identifies the physiological and pathological changes in the soft tissues of the cross-section in lung cancer lesions. The model is first trained to recognize lung cancer by measuring and comparing selected profile values in CT images obtained from patients and controls at diagnosis. Then, the model is tested and validated using the CT scans of both patients and controls that are not shown in the training phase. The study investigates 888 annotated CT scans from the publicly available LIDC-IDRI database. The proposed deep learning-assisted SVM-based model yields 94% accuracy for pulmonary nodule detection representing early-stage lung cancer. It is found superior to other existing methods, including complex deep learning, simple machine learning, and hybrid techniques used on lung CT images for nodule detection. Experimental results demonstrate that the proposed approach can greatly assist radiologists in detecting early lung cancer and facilitate the timely management of patients.
Affiliation(s)
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Sadia Din
- Texas A&M University at Qatar, Education City, Al Rayyan 23874, Qatar
- Asim Khan
- Department of Computing, Abasyn University Islamabad Campus, Islamabad 44000, Pakistan
- Isabel De La Torre Díez
- Department of Signal Theory and Communications and Telematic Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Ramón del Jesús Palí Casanova
- Research Center for Foods, Nutritional Biochemistry and Health, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Research Center for Foods, Nutritional Biochemistry and Health, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
- Kilian Tutusaus Pifarre
- Innovation Projects Department, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Research Center for Foods, Nutritional Biochemistry and Health, Universidade Internacional do Cuanza, Cuito EN 250, Angola
- Fundación Universitaria Internacional de Colombia, Calle 39A #19-18, Bogotá 111311, Colombia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Korea
20
Chi J, Zhang S, Han X, Wang H, Wu C, Yu X. MID-UNet: Multi-input directional UNet for COVID-19 lung infection segmentation from CT images. SIGNAL PROCESSING. IMAGE COMMUNICATION 2022; 108:116835. [PMID: 35935468 PMCID: PMC9344813 DOI: 10.1016/j.image.2022.116835] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 05/30/2022] [Accepted: 07/23/2022] [Indexed: 05/05/2023]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a world-wide existential health crisis with over 90 million total confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning methods has great potential in assisting the diagnosis and healthcare for COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between the COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. Then the original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter, and the blurry feature map are adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on the local curvature histogram and combine it with the binary cross-entropy (BCE) loss and the intersection-over-union (IOU) loss for better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over the state-of-the-art methods in segmenting COVID-19 infections from CT images.
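The loss combination described above can be sketched for its two standard terms, BCE and a soft IoU loss; the curvature-histogram contour loss is specific to the paper and omitted here, and the equal weighting of the two terms is an assumption.

```python
import numpy as np

def bce_iou_loss(pred, target, eps=1e-7):
    """Sum of binary cross-entropy and soft-IoU losses for a binary
    segmentation map (the paper adds a curvature-based contour term,
    omitted in this sketch).

    pred:   predicted foreground probabilities
    target: binary ground-truth mask"""
    pred = np.clip(pred, eps, 1 - eps)          # avoid log(0)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    inter = np.sum(pred * target)               # soft intersection
    union = np.sum(pred) + np.sum(target) - inter
    iou_loss = 1.0 - inter / (union + eps)
    return bce + iou_loss
```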
Affiliation(s)
- Jianning Chi
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Shuang Zhang
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaoying Han
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Huan Wang
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Chengdong Wu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
- Xiaosheng Yu
- Northeastern University, No. 195, Chuangxin Road, Hunnan District, Shenyang, China
21
Deep Learning Algorithms for Diagnosis of Lung Cancer: A Systematic Review and Meta-Analysis. Cancers (Basel) 2022; 14:cancers14163856. [PMID: 36010850 PMCID: PMC9405626 DOI: 10.3390/cancers14163856] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 07/30/2022] [Accepted: 08/04/2022] [Indexed: 12/19/2022] Open
Abstract
We conducted a systematic review and meta-analysis of the diagnostic performance of current deep learning algorithms for the diagnosis of lung cancer. We searched major databases up to June 2022 to include studies that used artificial intelligence to diagnose lung cancer, using the histopathological analysis of true positive cases as a reference. The quality of the included studies was assessed independently by two authors based on the revised Quality Assessment of Diagnostic Accuracy Studies. Six studies were included in the analysis. The pooled sensitivity and specificity were 0.93 (95% CI 0.85−0.98) and 0.68 (95% CI 0.49−0.84), respectively. Despite the significantly high heterogeneity for sensitivity (I2 = 94%, p < 0.01) and specificity (I2 = 99%, p < 0.01), most of it was attributed to the threshold effect. The pooled SROC curve with a bivariate approach yielded an area under the curve (AUC) of 0.90 (95% CI 0.86 to 0.92). The DOR for the studies was 26.7 (95% CI 19.7−36.2) and heterogeneity was 3% (p = 0.40). In this systematic review and meta-analysis, we found that when using the summary point from the SROC, the pooled sensitivity and specificity of DL algorithms for the diagnosis of lung cancer were 93% and 68%, respectively.
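The summary statistics above are linked by a standard identity: the diagnostic odds ratio (DOR) equals the positive likelihood ratio divided by the negative likelihood ratio. Note that a meta-analytic pooled DOR is computed per study and then pooled, so it need not equal the value implied by the pooled summary point; the sketch below only evaluates the identity.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR from a (sensitivity, specificity) operating point: the odds of
    a positive test among the diseased divided by the odds of a positive
    test among the non-diseased."""
    positive_lr = sensitivity / (1.0 - specificity)   # LR+
    negative_lr = (1.0 - sensitivity) / specificity   # LR-
    return positive_lr / negative_lr
```

For instance, evaluating the identity at the review's pooled summary point (sensitivity 0.93, specificity 0.68) gives a value of the same order as the pooled DOR of 26.7 reported above.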
22
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106786. [PMID: 35398579 DOI: 10.1016/j.cmpb.2022.106786] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/28/2022] [Accepted: 03/29/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is an abnormal tissue that may evolve into lung cancer, so it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Recently, computer-aided detection (CADe) systems with convolutional neural network (CNN) architectures have been shown to be helpful for radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, data preprocessing, including background elimination, spacing normalization, and volume of interest (VOI) extraction, is conducted to remove the non-lung region, normalize the image spacing, and divide the LDCT image into numerous VOIs. Then, the VOIs are fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The proposed model is constructed by integrating 3-D YOLOv3 with the one-shot aggregation (OSA) module, the receptive field block (RFB), and the feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicated detections generated by the model. RESULTS In this study, the LUNA-16 dataset, composed of 1186 nodules from 888 LDCT scans, and the competition performance metric (CPM) are used to evaluate our CADe system. In the experiments, the proposed system achieves a sensitivity of 0.962 at a false-positive rate of 8 per scan and a CPM of 0.905. Moreover, the ablation study shows that the OSA module, the RFB, and the FFS each improve detection performance.
Furthermore, compared with other state-of-the-art (SOTA) models, our detection system also achieves higher performance. CONCLUSIONS In this study, a YOLO-based CADe system integrating additional modules and a feature fusion scheme is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications significantly improve detection performance.
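The NMS step mentioned in the methods can be sketched as a greedy 3-D suppression over scored boxes; the `(z0, y0, x0, z1, y1, x1)` box encoding and the IoU threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two 3-D boxes encoded as (z0, y0, x0, z1, y1, x1)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0

def nms_3d(candidates, scores, iou_threshold=0.1):
    """Greedy non-maximum suppression for 3-D detections.

    candidates: (N, 6) array of boxes, scores: (N,) confidences.
    Returns indices of the kept detections, highest score first."""
    order = np.argsort(scores)[::-1]   # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Drop every remaining box that overlaps the kept one too much.
        ious = np.array([iou_3d(candidates[i], candidates[j]) for j in rest])
        order = rest[ious <= iou_threshold]
    return keep
```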
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
- Ping-Ru Chou
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
- Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
23
Liu D, Liu F, Tie Y, Qi L, Wang F. Res-trans networks for lung nodule classification. Int J Comput Assist Radiol Surg 2022; 17:1059-1068. [PMID: 35290646 DOI: 10.1007/s11548-022-02576-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 02/02/2022] [Indexed: 12/09/2022]
Abstract
PURPOSE Lung cancer usually presents as pulmonary nodules on early diagnostic images, and accurately estimating the malignancy of pulmonary nodules is crucial to the prevention and diagnosis of lung cancer. Recently, deep learning algorithms based on convolutional neural networks have shown potential for pulmonary nodule classification. However, the size of the nodules is very diverse, ranging from 3 to 30 mm, which makes classifying them a challenging task. In this study, we propose a novel architecture called Res-trans networks to classify nodules in computed tomography (CT) scans. METHODS We designed local and global blocks to extract features that capture the long-range dependencies between pixels so as to correctly classify lung nodules of different sizes. Specifically, we designed residual blocks with convolutional operations to extract local features and transformer blocks with self-attention to capture global features. Moreover, the Res-trans network has a sequence fusion block that aggregates and extracts the sequence feature information output by the transformer block, improving classification accuracy. RESULTS Our proposed method is extensively evaluated on the public LIDC-IDRI dataset, which contains 1,018 CT scans. A tenfold cross-validation shows that our method obtains better performance, with AUC = 0.9628 and accuracy = 0.9292, compared with recent leading methods. CONCLUSION In this paper, a network that can capture local and global features is proposed to classify nodules in chest CT. Experimental results show that our proposed method has better classification performance and can help radiologists to accurately analyze lung nodules.
Affiliation(s)
- Dongxu Liu
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Fenghui Liu
- Department of Respiratory and Sleep Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yun Tie
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Lin Qi
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Feng Wang
- Department of Oncology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
24
Chen S. Models of Artificial Intelligence-Assisted Diagnosis of Lung Cancer Pathology Based on Deep Learning Algorithms. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:3972298. [PMID: 35378943 PMCID: PMC8976635 DOI: 10.1155/2022/3972298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/19/2022] [Accepted: 02/18/2022] [Indexed: 11/17/2022]
Abstract
In this article, we explore the application of an auxiliary diagnosis system for lung cancer by using it to predict the benign or malignant nature of pulmonary nodules on chest CT. This research improves on diagnosis methods based on the convolutional neural network (CNN) and the recurrent neural network (RNN), combining the effects of the two algorithms to classify benign and malignant nodules. H-E-stained pathological slices of lung lesions from 652 patients were collected from two hospitals between January 2018 and January 2019, and the output of the improved 3D U-net system was compared with the consensus results of two-person reading. This article analyzes the sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of different lung nodule detection methods. In addition, ROC curves are drawn from the judgments of the artificial intelligence system and the radiologists on benign and malignant pulmonary nodules for further analysis. The improved model has an accuracy of 92.3% for predicting malignant lung nodules and 82.8% for benign lung nodules. The new diagnostic method combining the convolutional neural network and the recurrent neural network can effectively improve the accuracy of lung cancer diagnosis and can play a valuable role in disease prediction for lung cancer patients, thereby improving treatment outcomes.
Affiliation(s)
- Su Chen: The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510030, Guangdong, China
25
Diagnostic Value of Artificial Intelligence Based on CT Image in Benign and Malignant Pulmonary Nodules. JOURNAL OF ONCOLOGY 2022; 2022:5818423. [PMID: 35368893 PMCID: PMC8970870 DOI: 10.1155/2022/5818423] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 11/30/2022]
Abstract
Objective To evaluate the diagnostic value of artificial intelligence-assisted CT imaging for benign and malignant pulmonary nodules. Methods CT screening scans of pulmonary nodules from November 2018 to November 2020 were retrospectively collected. The diagnosis of pulmonary nodules and surgical treatment were performed. A total of 194 nodules in 152 patients with definitive pathological results were observed. All patients underwent CT examination to analyze the consistency among artificial intelligence, physician reading based on imaging features, multidisciplinary team (MDT) diagnosis, and postoperative pathological results; the diagnostic efficacy of the different methods for solitary pulmonary nodules and the differences in their ROC curves and AUCs were analyzed. The accuracy, specificity, sensitivity, positive predictive value, negative predictive value, false negative rate, and false positive rate of each diagnostic method were calculated, and the corresponding ROC curves were plotted. Results The accuracy, sensitivity, specificity, and Youden index were 89.69%, 92.98%, 65.22%, and 58.20% for artificial intelligence (AI); 85.57%, 88.30%, 65.22%, and 53.52% for physician reading; and 96.91%, 98.25%, 86.96%, and 85.21% for MDT, respectively. The kappa values of artificial intelligence, physician reading, and MDT were 0.541, 0.437, and 0.852, and the AUCs were 0.768, 0.791, and 0.926, respectively (P < 0.001). The average detection time of pulmonary nodules was (145 ± 97) s in the AI group, (534 ± 297) s in the physician reading group, and (421 ± 128) s in the MDT group (P < 0.001). Conclusion An artificial intelligence pulmonary nodule detection system can improve the coincidence rate and accuracy of early diagnosis of lung cancer, shorten the average detection time, and provide more accurate information for clinical decision-making.
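The Youden indices reported above follow directly from the sensitivity/specificity pairs (Youden's J = sensitivity + specificity − 1); a quick Python check reproduces the abstract's values:

```python
# Sensitivity/specificity pairs reported in the abstract (percentage points)
readers = {
    "AI": (92.98, 65.22),
    "physician reading": (88.30, 65.22),
    "MDT": (98.25, 86.96),
}

def youden_index(sensitivity, specificity):
    # Youden's J = sensitivity + specificity - 1 (everything here is in %)
    return sensitivity + specificity - 100.0

for name, (se, sp) in readers.items():
    print(f"{name}: J = {youden_index(se, sp):.2f}")  # 58.20, 53.52, 85.21
```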
26
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely, data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared; the innovative or valuable contributions of each are expected to guide future research directions. The analysis shows that deep learning-based methods have significantly transformed the detection of pulmonary nodules and that the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage yields improved returns; in particular, optimal results can be achieved by optimizing the steps of candidate nodule generation and false positive reduction. End-to-end methods, with greater operating speed and lower computational consumption, are superior to other methods in CADe of pulmonary nodules.
27
Lin FY, Chang YC, Huang HY, Li CC, Chen YC, Chen CM. A radiomics approach for lung nodule detection in thoracic CT images based on the dynamic patterns of morphological variation. Eur Radiol 2022; 32:3767-3777. [PMID: 35020016 DOI: 10.1007/s00330-021-08456-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Revised: 09/20/2021] [Accepted: 11/02/2021] [Indexed: 11/28/2022]
Abstract
OBJECTIVES To propose and evaluate a set of radiomic features, called morphological dynamics features, for pulmonary nodule detection; these features are rooted in the dynamic patterns of morphological variation and require no precise lesion segmentation. MATERIALS AND METHODS Two datasets were involved, namely, the university hospital (UH) and LIDC datasets, comprising 72 CT scans (360 nodules) and 888 CT scans (2230 nodules), respectively. Each nodule was annotated by multiple radiologists. The category of nodules identified by at least k radiologists is denoted ALk. A nodule detection algorithm, called the CAD-MD algorithm, was proposed based on the morphological dynamics radiomic features, characterizing a lesion by ten sets of the same features with different values extracted from ten different thresholding results. Each nodule candidate was classified by a two-level classifier consisting of ten decision trees and a random forest, respectively. The CAD-MD algorithm was compared with a deep learning approach, the N-Net, using the UH dataset. RESULTS On the AL1 and AL2 of the UH dataset, the AUCs of the AFROC curves were 0.777 and 0.851 for the CAD-MD algorithm and 0.478 and 0.472 for the N-Net, respectively. The CAD-MD algorithm achieved sensitivities of 84.4% and 91.4% with 2.98 and 3.69 FPs/scan, and the N-Net 74.4% and 80.7% with 3.90 and 4.49 FPs/scan, respectively. On the LIDC dataset, the CAD-MD algorithm attained sensitivities of 87.6%, 89.2%, 92.2%, and 95.0% with 4 FPs/scan for AL1-AL4, respectively. CONCLUSION The morphological dynamics radiomic features may serve as an effective set of radiomic features for lung nodule detection. KEY POINTS • Texture features vary with CT system settings such as reconstruction kernels, CT scanner models, and parameter settings. • Shape and first-order statistics have been shown to be the most robust features against variation in CT imaging parameters. • The morphological dynamics radiomic features, which mainly characterize the dynamic patterns of morphological variation, were shown to be effective for lung nodule detection.
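The multi-thresholding idea behind the morphological dynamics features can be sketched in NumPy: binarize the same candidate patch at several intensity levels and track how a simple shape feature changes across levels. This is a hypothetical illustration using foreground area as the feature; the actual CAD-MD feature set is far richer.

```python
import numpy as np

def multithreshold_features(patch, n_levels=10):
    # Binarize the candidate patch at n_levels interior intensity thresholds
    # and record a simple shape feature (foreground area) at each level.
    # How the feature changes across levels encodes the "dynamic pattern
    # of morphological variation" -- no precise segmentation is needed.
    lo, hi = float(patch.min()), float(patch.max())
    thresholds = np.linspace(lo, hi, n_levels + 2)[1:-1]  # interior levels only
    return np.array([(patch >= t).sum() for t in thresholds])

rng = np.random.default_rng(1)
patch = rng.random((32, 32))   # stand-in for one slice of a candidate VOI
feats = multithreshold_features(patch)
print(feats.shape)  # (10,)
```

As the threshold rises the foreground shrinks, so the feature vector is monotonically non-increasing; lesions and vessels tend to shrink in characteristically different ways.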
Affiliation(s)
- Fan-Ya Lin: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- Yeun-Chung Chang: Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chia-Chen Li: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- Yi-Chang Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan; Department of Medical Imaging, Cardinal Tien Hospital, New Taipei City, Taiwan
- Chung-Ming Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
28
Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, Li Y, Xie Y, Zhu Z, Dorrius MD, Zhao Y, Veldhuis RNJ, de Bock GH, Oudkerk M, van Ooijen PMA, Vliegenthart R, Ye Z. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 2021; 146:110068. [PMID: 34871936 DOI: 10.1016/j.ejrad.2021.110068] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 10/03/2021] [Accepted: 11/22/2021] [Indexed: 11/03/2022]
Abstract
OBJECTIVE To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS One hundred and eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. Detection performance was evaluated by the free-response receiver operating characteristic (FROC) curve, sensitivity, and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid), and Lung-RADS. RESULTS The reference standard consisted of 262 nodules ≥4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules ≥4 mm and ≤6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were identified only by the DL-CAD system, and 27 nodules were found only by double reading. The DL-CAD system reached performance similar to double reading for Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed higher sensitivity for Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS The DL-CAD system can accurately detect pulmonary nodules on LDCT, with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
Affiliation(s)
- Xiaonan Cui: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Sunyi Zheng: Westlake University, Artificial Intelligence and Biomedical Image Analysis Lab, School of Engineering, Hangzhou, People's Republic of China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands
- Marjolein A Heuvelmans: University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Yihui Du: University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Grigory Sidorenkov: University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Shuxuan Fan: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yanju Li: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yongsheng Xie: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Zhongyuan Zhu: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Monique D Dorrius: University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Yingru Zhao: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Raymond N J Veldhuis: University of Twente, Faculty of Electrical Engineering Mathematics and Computer Science, the Netherlands
- Geertruida H de Bock: University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Matthijs Oudkerk: University of Groningen, Faculty of Medical Sciences, the Netherlands
- Peter M A van Ooijen: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands; University of Groningen, University Medical Center Groningen, Machine Learning Lab, Data Science Center in Health, Groningen, the Netherlands
- Rozemarijn Vliegenthart: University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Zhaoxiang Ye: Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
29
IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation. Comput Med Imaging Graph 2021; 95:102021. [PMID: 34861622 DOI: 10.1016/j.compmedimag.2021.102021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 11/02/2021] [Accepted: 11/23/2021] [Indexed: 11/22/2022]
Abstract
Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extracted 2D tumor features and used information from one modality to assist another. However, these methods were not conducive to fusing multi-modal information efficiently, and may even have fused interference information, due to the lack of effective information interaction management between the modalities. Moreover, these methods did not consider the effect of small tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network (IMIIN) to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny object segmentation network based on DenseVoxNet to preserve the boundary details of the segmented tumors (especially small tumors). Further, we introduce a bi-directional request-supply information interaction module between different modalities, so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our new 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting that our new method has good prospects for clinical application.
30
Chen X, Duan Q, Wu R, Yang Z. Segmentation of lung computed tomography images based on SegNet in the diagnosis of lung cancer. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2021. [DOI: 10.1080/16878507.2021.1981753] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Affiliation(s)
- Xiaodong Chen: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Qiongyu Duan: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Rong Wu: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Zehui Yang: Department of Pulmonary and Critical Care Medicine, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
31
SCPM-Net: An anchor-free 3D lung nodule detection network using sphere representation and center points matching. Med Image Anal 2021; 75:102287. [PMID: 34731775 DOI: 10.1016/j.media.2021.102287] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 09/16/2021] [Accepted: 10/20/2021] [Indexed: 02/05/2023]
Abstract
Automatic and accurate lung nodule detection from 3D Computed Tomography (CT) scans plays a vital role in efficient lung cancer screening. Despite the state-of-the-art performance obtained by recent anchor-based detectors using Convolutional Neural Networks (CNNs) for this task, they require predetermined anchor parameters such as the size, number, and aspect ratio of anchors, and have limited robustness when dealing with lung nodules of widely varying sizes. To overcome these problems, we propose a 3D sphere representation-based center-points matching detection network (SCPM-Net) that is anchor-free and automatically predicts the position, radius, and offset of nodules without manual design of nodule/anchor parameters. The SCPM-Net consists of two novel components: sphere representation and center-points matching. First, to match the nodule annotation used in clinical practice, we replace the commonly used bounding box with our proposed bounding sphere, representing nodules by a centroid, radius, and local offset in 3D space. A compatible sphere-based intersection-over-union loss function is introduced to train the lung nodule detection network stably and efficiently. Second, we make the network anchor-free by designing a positive center-points selection and matching (CPM) process, which naturally discards pre-determined anchor boxes. Online hard example mining and a re-focal loss subsequently make the CPM process more robust, resulting in more accurate point assignment and mitigation of class imbalance. In addition, to better capture spatial information and 3D context for detection, we propose to fuse multi-level spatial coordinate maps with the feature extractor and combine them with 3D squeeze-and-excitation attention modules. Experimental results on the LUNA16 dataset showed that our proposed SCPM-Net framework achieves superior performance compared with existing anchor-based and anchor-free methods for lung nodule detection, with an average sensitivity of 89.2% over 7 predefined FPs/scan thresholds. Moreover, our sphere representation is verified to achieve higher detection accuracy than the traditional bounding-box representation of lung nodules. Code is available at: https://github.com/HiLab-git/SCPM-Net.
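One reason the bounding-sphere representation is attractive is that two spheres have a closed-form overlap volume, which makes a sphere-based IoU tractable. The sketch below uses the standard two-sphere intersection (lens) formula; it illustrates the geometry only, not the authors' exact loss implementation.

```python
import math

def sphere_volume(r):
    return 4.0 / 3.0 * math.pi * r ** 3

def sphere_iou(c1, r1, c2, r2):
    # Intersection-over-union of two spheres with centers c1, c2 and radii r1, r2.
    d = math.dist(c1, c2)
    if d >= r1 + r2:                       # disjoint spheres
        inter = 0.0
    elif d <= abs(r1 - r2):                # one sphere contains the other
        inter = sphere_volume(min(r1, r2))
    else:                                  # lens-shaped partial overlap
        inter = (math.pi * (r1 + r2 - d) ** 2
                 * (d * d + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)) / (12 * d)
    union = sphere_volume(r1) + sphere_volume(r2) - inter
    return inter / union

print(sphere_iou((0, 0, 0), 2.0, (0, 0, 0), 2.0))  # identical spheres -> 1.0
print(sphere_iou((0, 0, 0), 1.0, (5, 0, 0), 1.0))  # disjoint spheres -> 0.0
```

A loss of the form 1 − IoU built on this quantity is differentiable almost everywhere in the centroid and radius, which is what allows stable end-to-end training.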
32
Chao Z, Xu W. A New General Maximum Intensity Projection Technology via the Hybrid of U-Net and Radial Basis Function Neural Network. J Digit Imaging 2021; 34:1264-1278. [PMID: 34508300 PMCID: PMC8432629 DOI: 10.1007/s10278-021-00504-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 07/16/2021] [Accepted: 08/05/2021] [Indexed: 11/29/2022] Open
Abstract
Maximum intensity projection (MIP) is a computer visualization method that projects three-dimensional spatial data onto a visualization plane. Depending on the specific purpose, a specific slab thickness and projection direction can be selected. This technology can better show structures such as blood vessels, arteries, veins, and bronchi from different directions, bringing more intuitive and comprehensive results for doctors in the diagnosis of related diseases. However, with this traditional projection technology, the details of a small projected target are not clearly visualized when the target differs little from its surroundings, which could lead to missed diagnosis or misdiagnosis. It is therefore urgent to develop a new technology that can display the angiogram more clearly; to the best of our knowledge, research in this area is scarce. To fill this gap in the literature, in the present study, we propose a new method based on a hybrid of a convolutional neural network (CNN) and a radial basis function neural network (RBFNN) to synthesize the projection image. We first adopt the U-net to obtain feature or enhanced images to be projected; subsequently, the RBF neural network performs further synthesis processing on these data; finally, the projection images are obtained. For the experimental data, in order to increase the robustness of the proposed algorithm, three different types of datasets were adopted: vascular projections of the brain, bronchial projections of the lung parenchyma, and vascular projections of the liver. In addition, radiologist evaluation and five classic metrics of image definition were used for effective analysis. Finally, compared with the traditional MIP technology and other structures, the use of a large amount of diverse data and the superior experimental results proved the versatility and robustness of the proposed method.
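The traditional MIP operation that this paper builds on is a one-line reduction along the viewing axis. A minimal NumPy sketch of full-volume and thin-slab MIP (illustrative only, not the paper's CNN-RBFNN pipeline):

```python
import numpy as np

def mip(volume, axis=0):
    # Maximum intensity projection: collapse a 3D volume onto a plane by
    # keeping the brightest voxel along the chosen viewing axis.
    return volume.max(axis=axis)

def slab_mip(volume, start, thickness, axis=0):
    # MIP restricted to a slab of the given thickness, as in thin-slab MIP.
    sl = [slice(None)] * volume.ndim
    sl[axis] = slice(start, start + thickness)
    return volume[tuple(sl)].max(axis=axis)

vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0                   # a single bright "vessel" voxel
proj = mip(vol, axis=0)              # the 7 survives the full projection
thin = slab_mip(vol, 0, 2, axis=0)   # slab [0:2) misses it -> all zeros
print(proj[1, 1], thin.max())
```

The slab variant shows exactly the failure mode the abstract describes: whether a faint or small target is visible depends on the chosen slab and on how strongly it contrasts with everything else along the ray.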
Affiliation(s)
- Zhen Chao: College of Artificial Intelligence and Big Data for Medical Sciences, Shandong First Medical University & Shandong Academy of Medical Sciences, Huaiyin District, 6699 Qingdao Road, Jinan, 250117, Shandong, China; Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 26493, South Korea
- Wenting Xu: Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 26493, South Korea
33
Farhangi MM, Sahiner B, Petrick N, Pezeshk A. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions. Med Phys 2021; 48:3741-3751. [PMID: 33932241 DOI: 10.1002/mp.14915] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 04/08/2021] [Accepted: 04/15/2021] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
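The dilated slice-wise aggregation described above can be illustrated with a toy 1D dilated convolution in NumPy: spacing the kernel taps `dilation` slices apart enlarges the receptive field across slices without adding parameters. A hedged sketch with a fixed kernel; the paper's network learns these weights.

```python
import numpy as np

def dilated_conv1d(seq, kernel, dilation):
    # Valid-mode 1D convolution with dilation: taps are spaced `dilation`
    # steps apart. seq: (n,) per-slice feature; kernel: (k,) weights.
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field in slices
    n_out = len(seq) - span + 1
    return np.array([
        sum(kernel[j] * seq[i + j * dilation] for j in range(k))
        for i in range(n_out)
    ])

seq = np.arange(10, dtype=float)           # one scalar feature per slice
out = dilated_conv1d(seq, np.array([1.0, 1.0, 1.0]), dilation=2)
print(out)  # each output sums slices i, i+2, i+4
```

With k = 3 and dilation = 2 each output sees 5 consecutive slices, so stacking a few such layers lets every output summarize the whole volume of interest.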
Affiliation(s)
- M Mehdi Farhangi: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Berkman Sahiner: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Nicholas Petrick: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk: Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
34
Al-Masni MA, Kim DH. CMM-Net: Contextual multi-scale multi-level network for efficient biomedical image segmentation. Sci Rep 2021; 11:10191. [PMID: 33986375 PMCID: PMC8119726 DOI: 10.1038/s41598-021-89686-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 04/26/2021] [Indexed: 01/20/2023] Open
Abstract
Medical image segmentation of tissue abnormalities, key organs, or blood vascular system is of great significance for any computerized diagnostic system. However, automatic segmentation in medical image analysis is a challenging task since it requires sophisticated knowledge of the target organ anatomy. This paper develops an end-to-end deep learning segmentation method called Contextual Multi-Scale Multi-Level Network (CMM-Net). The main idea is to fuse the global contextual features of multiple spatial scales at every contracting convolutional network level in the U-Net. Also, we re-exploit the dilated convolution module that enables an expansion of the receptive field with different rates depending on the size of feature maps throughout the networks. In addition, an augmented testing scheme referred to as Inversion Recovery (IR) which uses logical "OR" and "AND" operators is developed. The proposed segmentation network is evaluated on three medical imaging datasets, namely ISIC 2017 for skin lesions segmentation from dermoscopy images, DRIVE for retinal blood vessels segmentation from fundus images, and BraTS 2018 for brain gliomas segmentation from MR scans. The experimental results showed superior state-of-the-art performance with overall dice similarity coefficients of 85.78%, 80.27%, and 88.96% on the segmentation of skin lesions, retinal blood vessels, and brain tumors, respectively. The proposed CMM-Net is inherently general and could be efficiently applied as a robust tool for various medical image segmentations.
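The logical OR/AND fusion used in the augmented testing scheme can be sketched as follows. This is a simplified illustration of combining binary predictions from augmented views of the same image; the paper's Inversion Recovery scheme is more elaborate.

```python
import numpy as np

def fuse_augmented_masks(masks, mode="or"):
    # Combine binary segmentation masks predicted from augmented views.
    # "or" keeps a pixel if ANY view marks it foreground (favours recall);
    # "and" keeps it only if ALL views agree (favours precision).
    stack = np.stack([m.astype(bool) for m in masks])
    fused = stack.any(axis=0) if mode == "or" else stack.all(axis=0)
    return fused.astype(np.uint8)

a = np.array([[1, 0], [0, 0]])   # prediction from the original image
b = np.array([[1, 1], [0, 0]])   # prediction from an augmented view
union = fuse_augmented_masks([a, b], "or")
inter = fuse_augmented_masks([a, b], "and")
print(union, inter, sep="\n")
```

In practice one would pick OR or AND per task depending on whether missed lesions or false positives are the costlier error.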
Affiliation(s)
- Mohammed A Al-Masni: Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim: Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
35
Zheng S, Cornelissen LJ, Cui X, Jing X, Veldhuis RNJ, Oudkerk M, van Ooijen PMA. Deep convolutional neural networks for multiplanar lung nodule detection: Improvement in small nodule identification. Med Phys 2021; 48:733-744. [PMID: 33300162 PMCID: PMC7986069 DOI: 10.1002/mp.14648] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 11/23/2020] [Accepted: 11/30/2020] [Indexed: 12/12/2022] Open
Abstract
PURPOSE Early detection of lung cancer is important since it can increase patients' chances of survival. To detect nodules accurately during screening, radiologists commonly take the axial, coronal, and sagittal planes into account, rather than solely the axial plane used in clinical evaluation. Inspired by this clinical practice, the paper aims to develop an accurate deep learning framework for nodule detection that combines multiple planes. METHODS The nodule detection system is designed in two stages: multiplanar nodule candidate detection and multiscale false positive (FP) reduction. In the first stage, a deeply supervised encoder-decoder network is trained on axial, coronal, and sagittal slices for the candidate detection task. All possible nodule candidates from the three planes are merged. To further refine the results, a three-dimensional multiscale dense convolutional neural network that extracts multiscale contextual information is applied to remove non-nodules. From the public LIDC-IDRI dataset, 888 computed tomography scans with 1186 nodules accepted by at least three of four radiologists are selected to train and evaluate the proposed system via a tenfold cross-validation scheme. The free-response receiver operating characteristic (FROC) curve is used for performance assessment. RESULTS The proposed system achieves a sensitivity of 94.2% at 1.0 FP/scan and a sensitivity of 96.0% at 2.0 FPs/scan. Although small nodules (i.e., <6 mm) are difficult to detect, the designed CAD system reaches a sensitivity of 93.4% (95.0%) for these small nodules at an overall FP rate of 1.0 (2.0) FPs/scan. At the nodule candidate detection stage, the results show that the multiplanar system is capable of detecting more nodules than a single-plane approach. CONCLUSION Our approach achieves good performance not only for small nodules but also for large lesions on this dataset, demonstrating the effectiveness of the developed CAD system for lung nodule detection.
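The FROC operating points quoted in the abstract (sensitivity at a fixed FP budget per scan) can be sketched as follows; the function and toy data are illustrative assumptions, not the authors' implementation:

```python
def froc_points(detections, n_nodules, n_scans, fp_rates=(1.0, 2.0)):
    """Sensitivity at fixed false-positive rates per scan (FROC operating points).

    detections: (confidence, is_true_nodule) pairs, one per candidate.
    Returns {fp_rate: sensitivity} for each requested rate.
    """
    out = {r: 0.0 for r in fp_rates}
    tp = fp = 0
    # Sweep the confidence threshold from high to low.
    for _, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        for r in fp_rates:
            if fp <= r * n_scans:  # still within the FP budget for this rate
                out[r] = tp / n_nodules
    return out

# Toy example: 2 scans, 3 true nodules, 6 scored candidates.
dets = [(0.9, True), (0.8, False), (0.7, True),
        (0.6, False), (0.5, False), (0.4, True)]
print(froc_points(dets, n_nodules=3, n_scans=2))
# sensitivity ~0.67 at 1 FP/scan, 1.0 at 2 FPs/scan
```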
Affiliation(s)
- Sunyi Zheng
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Ludo J. Cornelissen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Xiaonan Cui
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, 300060 Tianjin, China
- Xueping Jing
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
- Matthijs Oudkerk
- Faculty of Medical Science, University of Groningen, 9713 AV Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9713 AV Groningen, The Netherlands
36
Du H, Shao K, Bao F, Zhang Y, Gao C, Wu W, Zhang C. Automated coronary artery tree segmentation in coronary CTA using a multiobjective clustering and toroidal model-guided tracking method. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 199:105908. [PMID: 33373814 DOI: 10.1016/j.cmpb.2020.105908] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 12/13/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate coronary artery tree segmentation can assist radiologists in detecting coronary artery disease. In clinical medicine, the noise, low contrast, and uneven intensity of medical images, along with complex shapes and vessel bifurcation structures, make coronary artery segmentation challenging. In this work, we propose a multiobjective clustering and toroidal model-guided tracking method that can accurately extract coronary arteries from computed tomography angiography (CTA) imagery. METHODS Integrating noise reduction, candidate region detection, geometric feature extraction, and coronary artery tracking techniques, a new segmentation framework for 3D coronary artery trees is presented. The candidate regions are extracted using a multiobjective clustering method, and the coronary arteries are tracked by a toroidal model-guided tracking method. RESULTS The qualitative and quantitative results demonstrate the effectiveness of the presented framework, which outperforms the compared segmentation methods on three widely used evaluation indices: the Dice similarity coefficient (DSC), the Jaccard index, and Recall across the CTA data. The proposed method can accurately identify the coronary artery tree with a mean DSC of 84%, a Jaccard index of 74%, and a Recall of 93%. CONCLUSIONS The proposed segmentation framework effectively segments the coronary tree from the CTA volume, improving the accuracy of 3D vascular tree segmentation.
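The three overlap indices reported above can be computed from binary segmentation masks as in this minimal sketch (the toy masks and the helper name `overlap_metrics` are assumptions for illustration, not the authors' code):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard, and Recall for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    recall = inter / gt.sum()  # fraction of ground-truth voxels recovered
    return dice, jaccard, recall

# Toy 3D masks: an 8-voxel prediction overlapping an 8-voxel ground truth in 4 voxels.
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True
pred = np.zeros_like(gt)
pred[1:3, 1:3, 0:2] = True
d, j, r = overlap_metrics(pred, gt)
print(round(float(d), 3), round(float(j), 3), round(float(r), 3))  # 0.5 0.333 0.5
```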
Affiliation(s)
- Hongwei Du
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China; Shandong Provincial Key Laboratory of Digital Media Technology, Jinan, Shandong 250014, China
- Kai Shao
- Shandong Provincial Key Laboratory of Digital Media Technology, Jinan, Shandong 250014, China; School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
- Fangxun Bao
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunfeng Zhang
- Shandong Provincial Key Laboratory of Digital Media Technology, Jinan, Shandong 250014, China; School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
- Chengyong Gao
- School of Physics, Shandong University, Jinan, Shandong 250100, China
- Wei Wu
- Department of Cerebrovascular Diseases, Cheeloo College of Medicine, Shandong University, Jinan, Shandong 250012, China
- Caiming Zhang
- Shandong Provincial Key Laboratory of Digital Media Technology, Jinan, Shandong 250014, China; School of Computer Science and Technology, Shandong University, Jinan, Shandong 250101, China
37
Perl RM, Grimmer R, Hepp T, Horger MS. Can a Novel Deep Neural Network Improve the Computer-Aided Detection of Solid Pulmonary Nodules and the Rate of False-Positive Findings in Comparison to an Established Machine Learning Computer-Aided Detection? Invest Radiol 2021; 56:103-108. [PMID: 32796198 DOI: 10.1097/rli.0000000000000713] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The aim of this study was to compare the performance of 2 approved computer-aided detection (CAD) systems for the detection of pulmonary solid nodules (PSNs) in an oncologic cohort. The first CAD system is based on a conventional machine learning approach (VD10F); the other is based on deep 3D convolutional neural network (CNN) CAD software (VD20A). METHODS AND MATERIALS Nine hundred sixty-seven patients with a total of 2451 PSNs were retrospectively evaluated using the 2 different CAD systems. All patients had thin-slice chest computed tomography (0.6 mm) using 100 kV and 100 mAs and a high-resolution kernel (I50f). The CAD images generated by VD10F were transferred to the PACS for evaluation. The images generated by VD20A were evaluated using a Web browser-based viewer. Finally, a senior radiologist who was blinded to the CAD results examined the thin-slice images of every patient (ground truth). RESULTS A total of 2451 PSNs were detected by the senior radiologist. CAD-VD10F detected 1401 true-positive, 143 false-negative, 565 false-positive (FP), and 342 true-negative PSNs, resulting in a sensitivity of 90.7%, a specificity of 37.7%, a positive predictive value of 0.71, and a negative predictive value of 0.70. CAD-VD20A detected 1381 true-positive, 163 false-negative, 337 FP, and 570 true-negative PSNs, resulting in a sensitivity of 89.4%, a specificity of 62.8%, a positive predictive value of 0.80, and a negative predictive value of 0.77. The FP rate per scan was 0.6 for CAD-VD10F and 0.3 for CAD-VD20A. CONCLUSIONS The new deep learning-based CAD software (VD20A) shows sensitivity similar to that of the conventional CAD software (VD10F) but significantly higher specificity.
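The reported sensitivity, specificity, PPV, and NPV follow directly from the confusion-matrix counts in the abstract; a small sketch reproducing the VD10F figures (the helper function is illustrative, not part of the study):

```python
def cad_metrics(tp, fn, fp, tn):
    """Standard detection metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Counts reported for CAD-VD10F in the abstract.
vd10f = cad_metrics(tp=1401, fn=143, fp=565, tn=342)
print(round(vd10f["sensitivity"] * 100, 1))  # 90.7
print(round(vd10f["specificity"] * 100, 1))  # 37.7
print(round(vd10f["ppv"], 2))                # 0.71
```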
Affiliation(s)
- Regine Mariette Perl
- From the Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen
- Marius Stefan Horger
- From the Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen
38
CARNet: Automatic Cerebral Aneurysm Classification in Time-of-Flight MR Angiography by Leveraging Recurrent Neural Networks. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
39
Zheng S, Cui X, Vonder M, Veldhuis RNJ, Ye Z, Vliegenthart R, Oudkerk M, van Ooijen PMA. Deep learning-based pulmonary nodule detection: Effect of slab thickness in maximum intensity projections at the nodule candidate detection stage. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 196:105620. [PMID: 32615493 DOI: 10.1016/j.cmpb.2020.105620] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 06/14/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE To investigate the effect of slab thickness in maximum intensity projections (MIPs) on the candidate detection performance of a deep learning-based computer-aided detection (DL-CAD) system for pulmonary nodule detection in CT scans. METHODS The public LUNA16 dataset includes 888 CT scans with 1186 nodules annotated by four radiologists. From these scans, MIP images were reconstructed with slab thicknesses of 5 to 50 mm (at 5 mm intervals) and 3 to 13 mm (at 2 mm intervals). The architecture in the nodule candidate detection part of the DL-CAD system was trained separately on MIP images of each slab thickness. Based on ten-fold cross-validation, the sensitivity and the F2 score were determined to evaluate the performance of each slab thickness at the nodule candidate detection stage. The free-response receiver operating characteristic (FROC) curve was used to assess the performance of the whole DL-CAD system, which combined the results from 16 MIP slab-thickness settings. RESULTS At the nodule candidate detection stage, the combination of results from 16 MIP slab-thickness settings showed a high sensitivity of 98.0% with 46 false positives (FPs) per scan. For a single MIP slab thickness, the highest sensitivity before false-positive reduction, 90.0% with 8 FPs/scan, was reached at 10 mm. The sensitivity increased (82.8% to 90.0%) for slab thicknesses of 1 to 10 mm and decreased (88.7% to 76.6%) for slab thicknesses of 15 to 50 mm. The number of FPs decreased with increasing slab thickness, stabilizing at 5 FPs/scan for slab thicknesses of 30 mm or more. After false-positive reduction, the DL-CAD system utilizing 16 MIP slab-thickness settings had a sensitivity of 94.4% with 1 FP/scan. CONCLUSIONS The utilization of multi-MIP images improved performance at the nodule candidate detection stage and for the whole DL-CAD system. For a single slab thickness, the highest sensitivity at the nodule candidate detection stage was reached at 10 mm, similar to the slab thickness usually applied by radiologists.
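Slab MIPs of the kind studied above can be sketched as a sliding maximum along the axial axis. Note the paper specifies slab thickness in millimetres, whereas this illustrative sketch works in slice counts; converting between the two via the slice spacing is assumed:

```python
import numpy as np

def slab_mip(volume, slab_slices, step=1):
    """Sliding-slab maximum intensity projection along the axial (first) axis.

    volume: 3D array (slices, H, W); slab_slices: number of slices per slab.
    Returns a stack of 2D MIP images, one per slab position.
    """
    n = volume.shape[0]
    return np.stack([volume[i:i + slab_slices].max(axis=0)
                     for i in range(0, n - slab_slices + 1, step)])

# Toy volume: 20 slices with a bright "nodule" on slice 7; the nodule
# survives in every slab that covers slice 7.
vol = np.zeros((20, 8, 8))
vol[7, 4, 4] = 100.0
mips = slab_mip(vol, slab_slices=10)  # e.g. a 10-slice slab
print(mips.shape)      # (11, 8, 8)
print(mips[0, 4, 4])   # 100.0 (slab 0 covers slices 0-9, which include 7)
```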
Affiliation(s)
- Sunyi Zheng
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Xiaonan Cui
- Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Tianjin, China
- Marleen Vonder
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Tianjin, China
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Peter M A van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
40
Lu X, Gu Y, Yang L, Zhang B, Zhao Y, Yu D, Zhao J, Gao L, Zhou T, Liu Y, Zhang W. Multi-level 3D Densenets for False-positive Reduction in Lung Nodule Detection Based on Chest Computed Tomography. Curr Med Imaging 2020; 16:1004-1021. [PMID: 33081662 DOI: 10.2174/1573405615666191113122840] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 10/11/2019] [Accepted: 10/19/2019] [Indexed: 12/31/2022]
Abstract
OBJECTIVE False-positive nodule reduction is a crucial part of a computer-aided detection (CADe) system, which assists radiologists in accurate lung nodule detection. In this research, a novel scheme using a multi-level 3D DenseNet framework is proposed to implement the false-positive nodule reduction task. METHODS Multi-level 3D DenseNet models were extended to differentiate lung nodules from false-positive findings. First, different models were fed 3D cubes of different sizes to encode multi-level contextual information and meet the challenge of the large variation among lung nodules. In addition, image rotation and flipping were utilized to upsample the positive samples. Furthermore, the 3D DenseNets were designed to retain low-level nodule information, as the densely connected structure of DenseNet reuses features of lung nodules and thereby boosts feature propagation. Finally, an optimal weighted linear combination of all model scores yielded the best classification result in this research. RESULTS The proposed method was evaluated on the LUNA16 dataset, which contains 888 thin-slice CT scans. Performance was validated via 10-fold cross-validation. Both the Free-response Receiver Operating Characteristic (FROC) curve and the Competition Performance Metric (CPM) score show that the proposed scheme achieves satisfactory detection performance in the false-positive reduction track of the LUNA16 challenge. CONCLUSION The results show that the proposed scheme is effective for the false-positive nodule reduction task.
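The final step described above, a weighted linear combination of per-candidate scores from several models, can be illustrated as follows; the weights, scores, and helper name are hypothetical, not taken from the study:

```python
import numpy as np

def ensemble_scores(model_scores, weights):
    """Weighted linear combination of per-candidate scores from several models."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return np.asarray(model_scores).T @ w

# Three models (one per input cube size) scoring four candidates;
# the weights here arbitrarily favour the large-context model.
scores = [[0.9, 0.2, 0.6, 0.1],    # model fed small cubes
          [0.8, 0.3, 0.7, 0.2],    # medium cubes
          [0.95, 0.1, 0.5, 0.05]]  # large cubes
print(ensemble_scores(scores, weights=[1, 1, 2]))
# combined score per candidate; candidate 0 ranks highest
```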
Affiliation(s)
- Xiaoqi Lu
- College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China; Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, China
- Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Jianfeng Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Lixin Gao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; School of Foreign Languages, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Tao Zhou
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Yang Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
- Wei Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
41
Yu J, Yang B, Wang J, Leader J, Wilson D, Pu J. 2D CNN versus 3D CNN for false-positive reduction in lung cancer screening. J Med Imaging (Bellingham) 2020; 7:051202. [PMID: 33062802 PMCID: PMC7550796 DOI: 10.1117/1.jmi.7.5.051202] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 09/28/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: To clarify whether, and to what extent, a three-dimensional (3D) convolutional neural network (CNN) is superior to a 2D CNN when applied to reduce false-positive nodule detections in low-dose computed tomography (CT) lung cancer screening. Approach: We established a dataset consisting of 1600 chest CT examinations acquired on different subjects from various sources. There were in total 18,280 candidate nodules in these CT examinations, among which 9185 were nodules and 9095 were not. For each candidate nodule, we extracted a number of cubic subvolumes with a dimension of 72 × 72 × 72 mm³ by rotating the CT examinations randomly 25 times prior to the extraction of the axis-aligned subvolumes. These subvolumes were split into three groups in a ratio of 8:1:1 for training, validation, and independent testing. We developed a multiscale CNN architecture and implemented its 2D and 3D versions to classify pulmonary nodules into two categories, true positive and false positive. The performance of the 2D/3D-CNN classification schemes was evaluated using the area under the receiver operating characteristic curve (AUC). The p-values and the 95% confidence intervals (CI) were calculated. Results: The AUC for the optimal 2D-CNN model is 0.9307 (95% CI: 0.9285 to 0.9330) with a sensitivity of 92.70% and a specificity of 76.21%. The best-performing 3D-CNN model had an AUC of 0.9541 (95% CI: 0.9495 to 0.9583) with a sensitivity of 89.98% and a specificity of 87.30%. The developed multiscale CNN architecture performed better than the vanilla architecture. Conclusions: The 3D-CNN model performs better in false-positive reduction than its 2D counterpart; however, the improvement is relatively limited and demands more computational resources for training.
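AUC values like those reported above are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen true nodule scores higher than a randomly chosen non-nodule. A minimal sketch with toy scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC = P(score_pos > score_neg), counting ties as half
    (the Mann-Whitney U statistic divided by n_pos * n_neg)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores: 5 of the 6 positive/negative pairs are ranked correctly.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # about 0.833
```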
Affiliation(s)
- Juezhao Yu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Bohan Yang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Jing Wang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Joseph Leader
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- David Wilson
- University of Pittsburgh, Department of Medicine, Pittsburgh, Pennsylvania, United States
- Jiantao Pu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States