1. Zou Z, Zou B, Kui X, Chen Z, Li Y. DGCBG-Net: A dual-branch network with global cross-modal interaction and boundary guidance for tumor segmentation in PET/CT images. Comput Methods Programs Biomed 2024; 250:108125. [PMID: 38631130] [DOI: 10.1016/j.cmpb.2024.108125]
Abstract
BACKGROUND AND OBJECTIVES Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are extensively employed for their complementary medical information. However, existing methods ignore the bilateral cross-modal interaction of global features during feature extraction, and they underutilize multi-stage tumor boundary features. METHODS To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from the PET and CT modalities to mitigate the impact of misleading features and reduce the loss of discriminative features during downsampling; 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation. RESULTS Extensive experiments were conducted on the STS and Hecktor 2022 datasets to evaluate the proposed method. The average Dice scores of DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IoU scores of 67.64% and 70.18%. DGCBG-Net outperformed the current state-of-the-art methods with a 1.77% higher Dice score and a 2.12% higher IoU score. CONCLUSIONS Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with the state of the art.
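The Dice and IoU metrics reported in this entry have standard definitions for binary segmentation masks; the following minimal sketch (illustrative only, not the authors' code) shows how they are typically computed:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

The small epsilon guards against division by zero when both masks are empty, a common convention in segmentation papers.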
Affiliation(s)
- Ziwei Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Xiaoyan Kui
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Zhi Chen
- School of Computer Science and Engineering, Central South University, No. 932, Lushan South Road, Changsha, 410083, China
- Yang Li
- School of Informatics, Hunan University of Chinese Medicine, No. 300, Xueshi Road, Changsha, 410208, China
2. Song J, Lu X, Gu Y. GMAlignNet: multi-scale lightweight brain tumor image segmentation with enhanced semantic information consistency. Phys Med Biol 2024; 69:115033. [PMID: 38657628] [DOI: 10.1088/1361-6560/ad4301]
Abstract
Although the U-shaped architecture, represented by UNet, has become a major network model for brain tumor segmentation, its repeated convolution and sampling operations can easily lead to the loss of crucial information. Additionally, directly fusing features from different levels without distinction can easily result in feature misalignment, affecting segmentation accuracy. Moreover, the traditional convolutional blocks used for feature extraction cannot capture the abundant multi-scale information present in brain tumor images. This paper proposes a multi-scale feature-aligned segmentation model called GMAlignNet that fully utilizes Ghost convolution to solve these problems. A Ghost hierarchical decoupled fusion unit and a Ghost hierarchical decoupled unit are used instead of standard convolutions in the encoding and decoding paths. This transformation replaces the holistic learning of volume structures by traditional convolutional blocks with multi-level learning on specific views, facilitating the acquisition of abundant multi-scale contextual information through low-cost operations. Furthermore, a feature alignment unit is proposed that utilizes semantic information flow to guide the recovery of upsampled features, performing pixel-level semantic correction on features misaligned by fusion. The proposed method is also employed to optimize three classic networks, namely DMFNet, HDCNet, and 3D UNet, demonstrating its effectiveness in automatic brain tumor segmentation. On the BraTS 2018 dataset, GMAlignNet achieved Dice coefficients of 81.65%, 90.07%, and 85.16% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. Moreover, with only 0.29 M parameters and 26.88 G FLOPs, it offers better computational efficiency and the advantages of a lightweight design. Extensive experiments on the BraTS 2018, BraTS 2019, and BraTS 2020 datasets suggest that the proposed model also exhibits better potential in handling edge details and contour recognition.
Affiliation(s)
- Jianli Song
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
- Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
- School of Information Engineering, Inner Mongolia University of Technology, Hohhot 010051, People's Republic of China
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou 014010, People's Republic of China
3. Xie L, Xu Y, Zheng M, Chen Y, Sun M, Archer MA, Wan Y, Mao W, Tong Y. An Anthropomorphic Diagnosis System of Pulmonary Nodules using Weak Annotation-Based Deep Learning. medRxiv [Preprint] 2024:2024.05.03.24306828. [PMID: 38746400] [PMCID: PMC11092690] [DOI: 10.1101/2024.05.03.24306828]
Abstract
Purpose To develop an anthropomorphic diagnosis system for pulmonary nodules (PNs) based on deep learning (DL) that is trained on weakly annotated data yet has performance comparable to full-annotation-based diagnosis systems. Methods The proposed system uses DL models to classify PNs (benign vs. malignant) from weak annotations, which eliminates the need for time-consuming and labor-intensive manual annotation of PNs. Moreover, the PN classification networks, augmented with handcrafted shape features acquired through the ball-scale transform technique, can differentiate PNs of diverse types, including pure ground-glass opacities, part-solid nodules, and solid nodules. Results The experiments were conducted on two lung CT datasets: (1) the public LIDC-IDRI dataset with 1,018 subjects and (2) an in-house dataset with 2,740 subjects. Through 5-fold cross-validation on the two datasets, the system achieved the following results: (1) an area under the curve (AUC) of 0.938 for PN localization and an AUC of 0.912 for PN differential diagnosis on the LIDC-IDRI dataset of 814 testing cases, and (2) an AUC of 0.943 for PN localization and an AUC of 0.815 for PN differential diagnosis on the in-house dataset of 822 testing cases. These results demonstrate performance comparable to full-annotation-based diagnosis systems. Conclusions Our system can efficiently localize and differentially diagnose PNs even in resource-limited environments, with good robustness across grade and morphology sub-groups in the presence of variations in nodule size, shape, and texture, indicating its potential for future clinical translation. Summary An anthropomorphic PN diagnosis system based on deep learning and weak annotation achieved performance comparable to full-annotation-based diagnosis systems, significantly reducing the time and cost associated with annotation.
Key Points A fully automatic system for the diagnosis of PNs in CT scans, using a suitable deep learning model and weak annotations, was developed and achieved performance comparable to full-annotation-based deep learning models (AUC = 0.938 for PN localization, AUC = 0.912 for PN differential diagnosis), reducing around 30%∼80% of the experts' annotation time. The integration of hand-crafted features acquired from human experts (natural intelligence) into the deep learning networks, and the fusion of the classification results of multi-scale networks, can efficiently improve PN classification performance across different nodule diameters and sub-groups.
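The AUC values quoted in this entry follow the standard Mann-Whitney rank formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal, library-free sketch (illustrative, not the authors' implementation):

```python
def auc_score(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation.

    labels: iterable of 0/1 ground-truth labels.
    scores: iterable of model scores, higher = more positive.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("AUC needs at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The O(P·N) double loop is fine for a sketch; production code would sort once and use ranks.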
4. Ashames MMA, Demir A, Gerek ON, Fidan M, Gulmezoglu MB, Ergin S, Edizkan R, Koc M, Barkana A, Calisir C. Are deep learning classification results obtained on CT scans fair and interpretable? Phys Eng Sci Med 2024. [PMID: 38573489] [DOI: 10.1007/s13246-024-01419-8]
Abstract
Following the great success of various deep learning methods in image and object classification, the biomedical image processing community is also overwhelmed with their applications to various automatic diagnosis tasks. Unfortunately, most deep learning-based classification attempts in the literature focus solely on extreme accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle the data and split it into training, validation, and test sets, so that some images from the computed tomography (CT) scan of a person end up in the training set while other images of the same person end up in the validation or test sets. This can result in misleading reported accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with new patient images, the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when new patient images are tested. Heat map visualizations of the activations of the networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained on images of patients that are strictly isolated from the validation and testing patient sets.
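The strict patient-level separation the authors advocate amounts to splitting on patient identifiers rather than on individual images. A minimal sketch (the `patient_of` mapping from image ID to patient ID is a hypothetical data structure, not from the paper):

```python
import random

def patient_level_split(image_ids, patient_of, test_frac=0.2, seed=0):
    """Split image IDs so that no patient appears in both train and test.

    image_ids: list of image identifiers.
    patient_of: dict mapping image id -> patient id (assumed structure).
    Splitting is done over patients, then images follow their patient.
    """
    patients = sorted(set(patient_of[i] for i in image_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    train = [i for i in image_ids if patient_of[i] not in test_patients]
    test = [i for i in image_ids if patient_of[i] in test_patients]
    return train, test
```

Libraries such as scikit-learn offer the same idea as group-aware splitters (e.g. `GroupKFold`), but the invariant is the one asserted here: the train and test patient sets are disjoint.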
Affiliation(s)
- Mohamad M A Ashames
- Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Demir
- Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Omer N Gerek
- Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Mehmet Fidan
- Vocational School of Transportation, Eskisehir Technical University, Eskisehir, Turkey
- M Bilginer Gulmezoglu
- Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Semih Ergin
- Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Rifat Edizkan
- Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Mehmet Koc
- Department of Computer Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Atalay Barkana
- Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Cuneyt Calisir
- Department of Radiology, Manisa Celal Bayar University, Manisa, Turkey
5. Zahari R, Cox J, Obara B. Uncertainty-aware image classification on 3D CT lung. Comput Biol Med 2024; 172:108324. [PMID: 38508053] [DOI: 10.1016/j.compbiomed.2024.108324]
Abstract
Early detection is crucial for prolonging the survival of lung cancer patients. Existing model architectures used in such systems have shown promising results. However, they lack reliability and robustness in their predictions, and the models are typically evaluated on a single dataset, making them overconfident when a new class is present. When uncertainty is quantified, uncertain images can be referred to medical experts for a second opinion. Thus, we propose an uncertainty-aware framework for the classification of benign and malignant nodules in 3D CT images that comprises three phases: data preprocessing with model selection and evaluation, uncertainty quantification (UQ), and uncertainty measurement with data referral. To quantify the uncertainty, we employed three approaches: Monte Carlo Dropout (MCD), Deep Ensemble (DE), and Ensemble Monte Carlo Dropout (EMCD). We evaluated eight deep learning models from the ResNet, DenseNet, and Inception families, all of which achieved average F1 scores above 0.832; the highest average value of 0.845 was obtained with InceptionResNetV2. Furthermore, incorporating UQ significantly improved overall model performance. Upon evaluation of the uncertainty estimates, MCD outperforms the other UQ approaches on all metrics except URecall, where DE and EMCD excel, implying that the latter are better at flagging incorrect predictions with higher uncertainty levels, which is vital in the medical field. Finally, we show that using an uncertainty threshold for data referral can improve performance considerably further, increasing the accuracy up to 0.959.
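Monte Carlo Dropout estimates uncertainty by keeping dropout active at test time and aggregating several stochastic forward passes; high predictive entropy then flags cases for referral. A minimal sketch of the aggregation step (illustrative only; the paper's specific metrics such as URecall are not reproduced here):

```python
import numpy as np

def predictive_uncertainty(prob_samples):
    """Aggregate T stochastic forward passes into a prediction and
    an uncertainty estimate.

    prob_samples: array-like of shape (T, C) holding softmax outputs
    from T passes with dropout left on (the MCD setting).
    Returns (mean class probabilities, predictive entropy of the mean).
    """
    probs = np.asarray(prob_samples, dtype=float)
    mean = probs.mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, entropy
```

A data-referral rule then simply forwards any case whose entropy exceeds a chosen threshold to a human reader.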
Affiliation(s)
- Rahimi Zahari
- School of Computing, Newcastle University, Newcastle upon Tyne, UK
- Julie Cox
- County Durham and Darlington NHS Foundation Trust, County Durham, UK
- Boguslaw Obara
- School of Computing, Newcastle University, Newcastle upon Tyne, UK; Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
6. Ewals LJS, Heesterbeek LJJ, Yu B, van der Wulp K, Mavroeidis D, Funk M, Snijders CCP, Jacobs I, Nederend J, Pluyter JR. The Impact of Expectation Management and Model Transparency on Radiologists' Trust and Utilization of AI Recommendations for Lung Nodule Assessment on Computed Tomography: Simulated Use Study. JMIR AI 2024; 3:e52211. [PMID: 38875574] [PMCID: PMC11041414] [DOI: 10.2196/52211]
Abstract
BACKGROUND Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. OBJECTIVE We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists' trust in AI and their use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. METHODS In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorial (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations; the AI recommendations were then shown, and radiologists could adjust their initial assessments. Half of the participants received the recommendations via black box AI output and half via explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment of detected nodules, malignancy prediction, and follow-up advice for each CT assessment, and analyzed whether their confidence in their assessments changed based on the AI recommendations. RESULTS Both variations of the onboarding tutorial resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, the malignancy prediction in 32 of 140 assessments, and the follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists' confidence in their detected nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments; these changes were predominantly increases in confidence. The number of changed assessments and the radiologists' confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs. CONCLUSIONS Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists' trust in the AI-CAD system can be impaired. Radiologists' confidence in their assessments was improved by using the AI recommendations.
Affiliation(s)
- Lotte J S Ewals
- Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Bin Yu
- Research Center for Marketing and Supply Chain Management, Nyenrode Business University, Breukelen, Netherlands
- Kasper van der Wulp
- Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Mathias Funk
- Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
- Chris C P Snijders
- Department of Human Technology Interaction, Eindhoven University of Technology, Eindhoven, Netherlands
- Igor Jacobs
- Department of Hospital Services and Informatics, Philips Research, Eindhoven, Netherlands
- Joost Nederend
- Catharina Cancer Institute, Catharina Hospital Eindhoven, Eindhoven, Netherlands
- Jon R Pluyter
- Department of Experience Design, Royal Philips, Eindhoven, Netherlands
7. Gu Y, Yang X, Sun M, Wang C, Yang H, Yang C, Wang J, Kong G, Lv J, Zhang W. Graph-guided deep hashing networks for similar patient retrieval. Comput Biol Med 2024; 169:107865. [PMID: 38157772] [DOI: 10.1016/j.compbiomed.2023.107865]
Abstract
With the rapid growth and widespread application of electronic health records (EHRs), similar patient retrieval has become an important task for downstream clinical decision support, such as diagnostic reference and treatment planning. However, the high dimensionality, large volume, and heterogeneity of EHRs pose challenges to the efficient and accurate retrieval of patients whose medical conditions are similar to the current case. Several previous studies have attempted to alleviate these issues with hash coding techniques, improving retrieval efficiency but only superficially exploring the underlying characteristics among instances that are needed to preserve retrieval accuracy. In this paper, the drug categories of instances recorded in EHRs are regarded as the ground truth for determining pairwise similarity; we exploit the abundant semantic information within such multi-labels and propose a novel framework named Graph-guided Deep Hashing Networks (GDHN). To capture correlation dependencies among the multi-labels, we first construct a label graph in which each node represents a drug category, and then employ a graph convolutional network (GCN) to derive the multi-label embedding of each instance. The learned multi-label embeddings then guide the patient hashing process to obtain more informative and discriminative hash codes. Extensive experiments were conducted on two datasets, a real-world IgA nephropathy dataset from Peking University First Hospital and the publicly available MIMIC-III dataset, against traditional hashing methods and state-of-the-art deep hashing methods using three evaluation metrics. The results demonstrate that GDHN outperforms the competitors at different hash code lengths, validating the superiority of our proposal.
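Once binary hash codes are learned, retrieval reduces to ranking database patients by Hamming distance to the query code, which is what makes hashing-based retrieval fast. A minimal sketch of this lookup step (illustrative only; GDHN's code-learning stage is not shown):

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, k=3):
    """Return the indices and Hamming distances of the k database
    codes closest to a binary query code.

    query_code: 0/1 vector of length L.
    db_codes: (N, L) matrix of 0/1 codes, one row per patient.
    """
    query = np.asarray(query_code)
    db = np.asarray(db_codes)
    dists = (db != query).sum(axis=1)          # Hamming distance per row
    order = np.argsort(dists, kind="stable")   # stable: ties keep db order
    return order[:k], dists[order[:k]]
```

In a real system the codes would be packed into machine words and compared with XOR + popcount, but the ranking semantics are the same.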
Affiliation(s)
- Yifan Gu
- Renal Division, Department of Medicine, Peking University First Hospital, Beijing, China; State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Xuebing Yang
- State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Mengxuan Sun
- State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Chutong Wang
- State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Hongyu Yang
- Renal Division, Department of Medicine, Peking University First Hospital, Beijing, China; Research Units of Diagnosis and Treatment of Immune-mediated Kidney Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Chao Yang
- Renal Division, Department of Medicine, Peking University First Hospital, Beijing, China; Research Units of Diagnosis and Treatment of Immune-mediated Kidney Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Jinwei Wang
- Renal Division, Department of Medicine, Peking University First Hospital, Beijing, China; Research Units of Diagnosis and Treatment of Immune-mediated Kidney Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Guilan Kong
- National Institute of Health Data Science, Peking University, Beijing, China; Advanced Institute of Information Technology, Peking University, Hangzhou, China
- Jicheng Lv
- Renal Division, Department of Medicine, Peking University First Hospital, Beijing, China; Research Units of Diagnosis and Treatment of Immune-mediated Kidney Diseases, Chinese Academy of Medical Sciences, Beijing, China
- Wensheng Zhang
- State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences, Beijing, China; Guangzhou University, Guangzhou, China
8. Quanyang W, Lina Z, Yao H, Jiawei W, Wei T, Linlin Q, Zewei Z, Donghui H, Hongjia L, Shuluan C, Jiaxing Z, Shijun Z. Application of computer-aided detection for NCCN-based follow-up recommendation in subsolid nodules: Effect on inter-observer agreement. Cancer Med 2024; 13:e6967. [PMID: 38348960] [PMCID: PMC10832308] [DOI: 10.1002/cam4.6967]
Abstract
RATIONALE AND OBJECTIVES Computer-aided detection (CAD) of pulmonary nodules reduces the impact of observer variability, improving the reliability and reproducibility of nodule assessments in clinical practice. This study therefore aimed to assess the impact of CAD on inter-observer agreement in the follow-up management of subsolid nodules. MATERIALS AND METHODS A dataset comprising 60 subsolid nodule cases was constructed from the National Cancer Center lung cancer screening data. Five observers independently assessed all low-dose computed tomography scans and assigned follow-up management strategies to each case according to the National Comprehensive Cancer Network (NCCN) guidelines, using both manual measurements and CAD assistance. The linearly weighted Cohen's kappa test was used to measure agreement between paired observers, and agreement among multiple observers was evaluated with the Fleiss kappa statistic. RESULTS With manual measurement, agreement among the five observers on NCCN follow-up management categorization was moderate, with a Fleiss kappa of 0.437. Utilizing CAD led to a notable enhancement in agreement, achieving substantial consensus with a Fleiss kappa of 0.623. With CAD, the proportions of major and substantial management discrepancies decreased from 27.5% to 15.8% and from 4.8% to 1.5%, respectively (p < 0.01). In 23 lung cancer cases presenting as part-solid nodules, CAD significantly raised the average sensitivity for detecting such cases (overall sensitivity, 82.6% vs. 92.2%; p < 0.05). CONCLUSION The application of CAD significantly improves inter-observer agreement on the follow-up management strategy for subsolid nodules. It also shows potential to reduce substantial management discrepancies and to increase detection sensitivity for lung cancers presenting as part-solid nodules.
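The Fleiss kappa statistic used in this study measures chance-corrected agreement among multiple raters from an items-by-categories count matrix (each row holding, per item, how many raters chose each category). A minimal sketch (illustrative, not the study's analysis code):

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa from an (N items, K categories) count matrix.

    Each row must sum to the number of raters n (assumed constant).
    Returns 1 for perfect agreement, ~0 for chance-level agreement.
    """
    r = np.asarray(ratings, dtype=float)
    n = r[0].sum()                         # raters per item
    p_cat = r.sum(axis=0) / r.sum()        # overall category proportions
    # per-item agreement: pairs of raters that agree, out of all pairs
    p_item = np.sum(r * (r - 1), axis=1) / (n * (n - 1))
    p_bar = p_item.mean()                  # observed agreement
    p_e = np.sum(p_cat ** 2)               # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

On this scale, the study's manual value of 0.437 falls in the conventional "moderate" band and 0.623 in the "substantial" band.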
Affiliation(s)
- Wu Quanyang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhou Lina
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huang Yao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Jiawei
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tang Wei
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qi Linlin
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Zewei
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hou Donghui
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Hongjia
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chen Shuluan
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Jiaxing
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhao Shijun
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
9. Zhan X, Long H, Gou F, Wu J. A semantic fidelity interpretable-assisted decision model for lung nodule classification. Int J Comput Assist Radiol Surg 2023. [PMID: 38141069] [DOI: 10.1007/s11548-023-03043-5]
Abstract
PURPOSE Early diagnosis of lung nodules is important for the treatment of lung cancer patients. Existing capsule network-based assisted diagnostic models for lung nodule classification have shown promising prospects in terms of interpretability; however, they lack the ability to extract features robustly in shallow layers, which in turn limits model performance. Therefore, we propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for multi-class lung nodule classification. METHODS First, we propose a multilevel receptive field feature encoding block to capture the multi-scale features of lung nodules of different sizes. Second, we embed these multilevel receptive field feature encoding blocks in the residual code-and-decode attention layer to extract fine-grained context features. Integrating the multi-scale and contextual features yields semantic fidelity lung nodule attribute capsule representations, which consequently enhances the performance of the model. RESULTS We implemented comprehensive experiments on the LIDC-IDRI dataset to validate the superiority of the model. The stratified fivefold cross-validation results show that the accuracy (94.17%) of our method exceeds existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION The experiments confirm that the proposed methodology can effectively capture the multi-scale and contextual features of lung nodules. It enhances the ability of shallow structures in capsule networks to extract features, which in turn improves the classification performance on malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.
Affiliation(s)
- Xiangbing Zhan, Huiyun Long, Fangfang Gou: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Jia Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China; Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC, 3800, Australia
10
Kumar A, Pandey SK, Varshney N, Singh KU, Singh T, Shah MA. Distinctive approach in brain tumor detection and feature extraction using biologically inspired DWT method and SVM. Sci Rep 2023; 13:22735. [PMID: 38123666 PMCID: PMC10733354 DOI: 10.1038/s41598-023-50073-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 12/14/2023] [Indexed: 12/23/2023] Open
Abstract
Brain tumors result from uncontrolled cell growth, potentially leading to fatal consequences if left untreated. While significant efforts have been made with some promising results, the segmentation and classification of brain tumors remain challenging due to their diverse locations, shapes, and sizes. In this study, we employ a combination of Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA) to enhance performance and streamline the medical image segmentation process. The proposed method uses Otsu's segmentation followed by PCA to identify the most informative features. Leveraging the grey-level co-occurrence matrix, we extract numerous valuable texture features. Subsequently, we apply a Support Vector Machine (SVM) with various kernels for classification. We evaluate the proposed method's performance using metrics such as accuracy, sensitivity, specificity, and the Dice Similarity Index coefficient. The experimental results validate the effectiveness of our approach, with a recall of 86.9%, precision of 95.2%, F-measure of 90.9%, and overall accuracy. The results show improvements in both quality and accuracy compared to existing techniques. An experimental Dice Similarity Index coefficient of 0.82 indicates a strong overlap between the machine-extracted tumor region and the manually delineated tumor region.
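The Dice Similarity Index coefficient reported above (0.82) is the standard overlap measure between an automatic and a manual segmentation. A minimal sketch, assuming flat 0/1 masks; the toy voxel counts are illustrative, not the study's data:

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over flat binary (0/1) masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy masks: 8 predicted voxels, 10 ground-truth voxels, 6 overlapping
pred  = [1] * 8 + [0] * 10
truth = [1] * 6 + [0] * 2 + [1] * 4 + [0] * 6
score = dice_coefficient(pred, truth)  # 2*6 / (8+10) = 0.666...
```

A score of 1.0 means perfect overlap; the empty-mask case is defined as 1.0 here by convention.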
Affiliation(s)
- Ankit Kumar: Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, India
- Saroj Kumar Pandey, Neeraj Varshney: Department of Computer Engineering & Applications, GLA University, Mathura, Uttar Pradesh, India
- Kamred Udham Singh: School of Computer Science and Engineering, Graphic Era Hill University, Dehradun, 248002, India
- Teekam Singh: Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, 248002, India
- Mohd Asif Shah: Kebri Dehar University, Kebri Dehar, Somali, 250, Ethiopia; Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, 140401, Punjab, India; Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
11
Ma L, Wan C, Hao K, Cai A, Liu L. A novel fusion algorithm for benign-malignant lung nodule classification on CT images. BMC Pulm Med 2023; 23:474. [PMID: 38012620 PMCID: PMC10683224 DOI: 10.1186/s12890-023-02708-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Accepted: 10/12/2023] [Indexed: 11/29/2023] Open
Abstract
The accurate recognition of malignant lung nodules on CT images is critical in lung cancer screening, which can offer patients the best chance of cure and significant reductions in mortality from lung cancer. Convolutional Neural Networks (CNNs) have been proven a powerful method in medical image analysis. Radiomics, which draws on expert knowledge, enables high-throughput feature extraction from CT images. A Graph Convolutional Network explores the global context and performs inference on both graph node features and relational structures. In this paper, we propose a novel fusion algorithm, RGD, for benign-malignant lung nodule classification that incorporates a radiomics study and graph learning into multiple deep CNNs to form a more complete and distinctive feature representation, and ensembles the predictions for robust decision-making. The proposed method was evaluated on the publicly available LIDC-IDRI dataset in a 10-fold cross-validation experiment and obtained an average accuracy of 93.25%, a sensitivity of 89.22%, a specificity of 95.82%, a precision of 92.46%, an F1 score of 0.9114, and an AUC of 0.9629. Experimental results illustrate that the RGD model achieves superior performance compared with state-of-the-art methods. Moreover, the effectiveness of the fusion strategy has been confirmed by extensive ablation studies. In the future, the proposed model, which performs well on pulmonary nodule classification on CT images, will be applied to increase confidence in the clinical diagnosis of lung cancer.
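The prediction-ensembling step described above is commonly realized as soft voting over per-branch probabilities. A minimal sketch; the branch names (CNN, radiomics, graph) and all scores below are hypothetical, not the paper's outputs:

```python
def soft_vote(prob_lists):
    """Average per-model malignancy probabilities, then threshold at 0.5."""
    n = len(prob_lists)
    fused = [sum(ps) / n for ps in zip(*prob_lists)]
    return fused, [int(p >= 0.5) for p in fused]

# Hypothetical branch outputs for four nodules (illustrative numbers)
cnn_probs   = [0.9, 0.4, 0.2, 0.6]
rad_probs   = [0.8, 0.6, 0.1, 0.4]
graph_probs = [0.7, 0.2, 0.3, 0.7]
fused, labels = soft_vote([cnn_probs, rad_probs, graph_probs])  # labels: [1, 0, 0, 1]
```

Averaging calibrated probabilities before thresholding tends to be more robust than majority voting on hard labels, since it lets a confident branch outweigh two lukewarm ones.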
Affiliation(s)
- Ling Ma, Chuangye Wan, Kexin Hao, Annan Cai: College of Software, Nankai University, Tianjin, 300350, China
- Lizhi Liu: Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, Guangdong, China
12
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha: Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
13
Hendrix W, Hendrix N, Scholten ET, Mourits M, Trap-de Jong J, Schalekamp S, Korst M, van Leuken M, van Ginneken B, Prokop M, Rutten M, Jacobs C. Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans. COMMUNICATIONS MEDICINE 2023; 3:156. [PMID: 37891360 PMCID: PMC10611755 DOI: 10.1038/s43856-023-00388-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023] Open
Abstract
BACKGROUND Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHODS We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases is 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%), respectively, at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than those of the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting.
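Measuring sensitivity at a fixed false-positive budget on a free-response ROC curve, as in this evaluation, amounts to sweeping the score threshold and reading off the last point within budget. A minimal sketch; the detection list, scan counts, and budget below are toy assumptions, not the study's data:

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_scans, max_fp_per_scan=1.0):
    """Sweep thresholds from the highest score down; return the sensitivity at
    the last operating point whose false-positive rate stays within budget.
    detections: (score, is_true_positive) pairs pooled over all scans."""
    tp = fp = 0
    best = 0.0
    for score, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        if fp / n_scans <= max_fp_per_scan:
            best = tp / n_lesions
    return best

# Toy: 2 scans, 4 lesions, 3 false positives mixed into the ranked detections
dets = [(0.95, True), (0.9, False), (0.8, True), (0.7, False),
        (0.6, True), (0.5, False), (0.4, True)]
sens = sensitivity_at_fp_rate(dets, n_lesions=4, n_scans=2)  # 0.75 at 1 FP/scan
```

Loosening the budget to 2 FP/scan admits the lowest-scored true positive and raises the toy sensitivity to 1.0, which is exactly the trade-off an FROC curve visualizes.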
Affiliation(s)
- Ward Hendrix: Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Nils Hendrix: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands; Jheronimus Academy of Data Science, Sint Janssingel 92, 5211 DA, 's-Hertogenbosch, The Netherlands
- Ernst T Scholten: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mariëlle Mourits: Radiology Department, Canisius Wilhelmina Hospital, Weg door Jonkerbos 100, 6532 SZ, Nijmegen, The Netherlands
- Joline Trap-de Jong: Radiology Department, St. Antonius Hospital, Koekoekslaan 1, 3435 CM, Nieuwegein, The Netherlands
- Steven Schalekamp: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mike Korst: Radiology Department, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- Maarten van Leuken: Radiology Department, Canisius Wilhelmina Hospital, Nijmegen, The Netherlands
- Bram van Ginneken: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mathias Prokop: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; Radiology Department, University Medical Center Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Matthieu Rutten: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; Radiology Department, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- Colin Jacobs: Diagnostic Imaging Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
14
Dong Y, Li X, Yang Y, Wang M, Gao B. A Synthesizing Semantic Characteristics Lung Nodules Classification Method Based on 3D Convolutional Neural Network. Bioengineering (Basel) 2023; 10:1245. [PMID: 38002369 PMCID: PMC10669569 DOI: 10.3390/bioengineering10111245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 09/30/2023] [Accepted: 10/11/2023] [Indexed: 11/26/2023] Open
Abstract
Early detection is crucial for the survival and recovery of lung cancer patients. Computer-aided diagnosis (CAD) systems can assist in the early diagnosis of lung cancer by providing decision support. While deep learning methods are increasingly being applied to CAD tasks, these models lack interpretability. In this paper, we propose a convolutional neural network model that combines semantic characteristics (SCCNN) to predict whether a given pulmonary nodule is malignant. The model synthesizes the advantages of multi-view, multi-task, and attention modules in order to fully simulate the actual diagnostic process of radiologists. Three-dimensional (3D) multi-view samples of lung nodules are extracted by a spatial sampling method. Meanwhile, semantic characteristics commonly used in radiology reports serve as an auxiliary task and help explain the model's decisions. The introduction of an attention module in the feature fusion stage improves the classification of lung nodules as benign or malignant. Our experimental results on the LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative) dataset show that this approach achieves 95.45% accuracy and an area under the ROC (Receiver Operating Characteristic) curve of 97.26%. The results show that the proposed method not only improves benign-malignant classification compared to standard 3D CNN approaches but can also intuitively explain how the model makes predictions, which can assist clinical diagnosis.
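One simple reading of attention-based fusion of multi-view features is a softmax-weighted average of per-view vectors. In the sketch below the mean-activation gate is an illustrative stand-in for a learned attention module, not the authors' design, and the view features are toy numbers:

```python
import math

def attention_fuse(view_feats):
    """Fuse per-view feature vectors with softmax attention weights derived
    from each view's mean activation (a stand-in for a learned gate)."""
    scores = [sum(v) / len(v) for v in view_feats]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(view_feats[0])
    fused = [sum(w * v[i] for w, v in zip(weights, view_feats)) for i in range(dim)]
    return weights, fused

# Toy 4-D features for axial, coronal, and sagittal views
views = [[1.0, 0.0, 0.5, 0.5], [0.2, 0.1, 0.1, 0.0], [0.9, 0.8, 0.7, 0.6]]
weights, fused = attention_fuse(views)  # the most activated view gets the top weight
```

The weights sum to one, so the fused vector stays on the same scale as the inputs while emphasizing the most informative view.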
Affiliation(s)
- Xiaoqin Li: Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China; (Y.D.); (Y.Y.); (M.W.); (B.G.)
15
Prosper AE, Kammer MN, Maldonado F, Aberle DR, Hsu W. Expanding Role of Advanced Image Analysis in CT-detected Indeterminate Pulmonary Nodules and Early Lung Cancer Characterization. Radiology 2023; 309:e222904. [PMID: 37815447 PMCID: PMC10623199 DOI: 10.1148/radiol.222904] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 03/23/2023] [Accepted: 03/30/2023] [Indexed: 10/11/2023]
Abstract
The implementation of low-dose chest CT for lung screening presents a crucial opportunity to advance lung cancer care through early detection and interception. In addition, millions of pulmonary nodules are incidentally detected annually in the United States, increasing the opportunity for early lung cancer diagnosis. Yet, realization of the full potential of these opportunities is dependent on the ability to accurately analyze image data for purposes of nodule classification and early lung cancer characterization. This review presents an overview of traditional image analysis approaches in chest CT using semantic characterization as well as more recent advances in the technology and application of machine learning models using CT-derived radiomic features and deep learning architectures to characterize lung nodules and early cancers. Methodological challenges currently faced in translating these decision aids to clinical practice, as well as the technical obstacles of heterogeneous imaging parameters, optimal feature selection, choice of model, and the need for well-annotated image data sets for the purposes of training and validation, will be reviewed, with a view toward the ultimate incorporation of these potentially powerful decision aids into routine clinical practice.
Affiliation(s)
- Ashley Elizabeth Prosper, Michael N. Kammer, Fabien Maldonado, Denise R. Aberle, William Hsu: From the Department of Radiological Sciences, David Geffen School of Medicine at UCLA, 924 Westwood Blvd, Suite 420, Los Angeles, CA 90024 (A.E.P., D.R.A., W.H.); Division of Allergy, Pulmonary and Critical Care Medicine, Department of Medicine, Vanderbilt University Medical Center, Nashville, Tenn (M.N.K., F.M.); and Department of Bioengineering, UCLA Samueli School of Engineering, Los Angeles, Calif (D.R.A., W.H.)
16
Yanagawa M, Ito R, Nozaki T, Fujioka T, Yamada A, Fujita S, Kamagata K, Fushimi Y, Tsuboyama T, Matsui Y, Tatsugami F, Kawamura M, Ueda D, Fujima N, Nakaura T, Hirata K, Naganawa S. New trend in artificial intelligence-based assistive technology for thoracic imaging. LA RADIOLOGIA MEDICA 2023; 128:1236-1249. [PMID: 37639191 PMCID: PMC10547663 DOI: 10.1007/s11547-023-01691-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 07/25/2023] [Indexed: 08/29/2023]
Abstract
Although there is no settled definition of artificial intelligence (AI), the term refers to a computer system with intelligence similar to that of humans. Deep learning appeared in 2006, and more than 10 years have passed since the third AI boom was triggered by improvements in computing power, algorithm development, and the use of big data. In recent years, the application and development of AI technology in the medical field have intensified internationally. There is no doubt that AI will be used in clinical practice to assist diagnostic imaging in the future. In qualitative diagnosis, it is desirable to develop an explainable AI that at least presents the basis of the diagnostic process. However, it must be kept in mind that AI is a physician-assistance system, and the final decision should be made by the physician with an understanding of AI's limitations. The aim of this article is to review the application of AI technology in diagnostic imaging, drawing on the PubMed database and focusing in particular on thoracic diagnostic imaging tasks such as lesion detection and qualitative diagnosis, in order to help radiologists and clinicians become more familiar with AI in the thorax.
Affiliation(s)
- Masahiro Yanagawa: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Rintaro Ito: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Taiki Nozaki: Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
- Akira Yamada: Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-2621, Japan
- Shohei Fujita: Department of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koji Kamagata: Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, 113-8421, Japan
- Yasutaka Fushimi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Takahiro Tsuboyama: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Yusuke Matsui: Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
- Fuminari Tatsugami: Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Mariko Kawamura: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Daiju Ueda: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan
- Noriyuki Fujima: Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N15, W5, Kita-ku, Sapporo, 060-8638, Japan
- Takeshi Nakaura: Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Kenji Hirata: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo, Hokkaido, 060-8648, Japan
- Shinji Naganawa: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
17
Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023; 165:107437. [PMID: 37717526 DOI: 10.1016/j.compbiomed.2023.107437] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 08/20/2023] [Accepted: 08/28/2023] [Indexed: 09/19/2023]
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, indefatigable diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD pipeline. The literature on lung segmentation and lung nodule detection mostly comprises techniques and surveys that process 3-D volumes or 2-D slices. However, surveys that highlight 2.5D techniques for lung segmentation and lung nodule detection are still lacking. This paper presents a background and discussion on 2.5D methods to fill this gap. Further, it gives a taxonomy of 2.5D approaches along with a detailed description of each. Based on the taxonomy, various 2.5D techniques for lung segmentation and lung nodule detection are clustered into these approaches, followed by possible directions for future work.
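The core idea behind 2.5D processing, feeding each slice together with its neighbours as extra channels so a 2-D network sees some through-plane context, can be sketched in plain Python. The clamping policy at volume edges is one common choice, not something the survey prescribes, and the toy volume is illustrative:

```python
def stacks_25d(volume, context=1):
    """Build a 2.5D input per slice: the slice plus `context` neighbours on
    each side, clamping (repeating) slices at the volume boundary."""
    n = len(volume)
    out = []
    for i in range(n):
        idxs = [min(max(i + d, 0), n - 1) for d in range(-context, context + 1)]
        out.append([volume[j] for j in idxs])
    return out

# Toy volume of four 1x1 "slices"
vol = [[[0]], [[1]], [[2]], [[3]]]
stacks = stacks_25d(vol)  # stacks[0] == [[[0]], [[0]], [[1]]]
```

Compared with full 3-D convolution, this keeps the memory footprint of a 2-D model while still exposing inter-slice continuity of nodules.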
18
Cellina M, Cacioppa LM, Cè M, Chiarpenello V, Costa M, Vincenzo Z, Pais D, Bausano MV, Rossini N, Bruno A, Floridi C. Artificial Intelligence in Lung Cancer Screening: The Future Is Now. Cancers (Basel) 2023; 15:4344. [PMID: 37686619 PMCID: PMC10486721 DOI: 10.3390/cancers15174344] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 08/27/2023] [Accepted: 08/28/2023] [Indexed: 09/10/2023] Open
Abstract
Lung cancer has one of the worst morbidity and fatality rates of any malignant tumour. Most lung cancers are discovered in the middle and late stages of the disease, when treatment choices are limited and patients' survival rates are low. The aim of lung cancer screening is the identification of lung malignancies in the early stage of the disease, when more options for effective treatment are available, to improve patients' outcomes. The desire to improve the efficacy and efficiency of clinical care continues to drive multiple innovations into practice for better patient management, and in this context, artificial intelligence (AI) plays a key role. AI may have a role in each step of the lung cancer screening workflow. First, in the acquisition of low-dose computed tomography for screening programs, AI-based reconstruction allows a further dose reduction while still maintaining optimal image quality. Second, AI can help personalize screening programs through risk stratification based on the collection and analysis of a huge amount of imaging and clinical data. Third, a computer-aided detection (CAD) system provides automatic detection of potential lung nodules with high sensitivity, working as a concurrent or second reader and reducing the time needed for image interpretation. Once a nodule has been detected, it should be characterized as benign or malignant. Two AI-based approaches are available for this task: the first is automatic segmentation with a consequent assessment of the lesion size, volume, and densitometric features; the second consists of segmentation first, followed by radiomic feature extraction to characterize the whole abnormality, providing the so-called "virtual biopsy". This narrative review aims to provide an overview of all possible AI applications in lung cancer screening.
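The "lesion size, volume, and densitometric features" assessment mentioned above reduces, in its simplest form, to counting segmented voxels and averaging their Hounsfield-unit values. A toy sketch; the helper name, mask, and HU values are illustrative assumptions, not from the review:

```python
def nodule_volume_and_density(mask, hu, voxel_mm3):
    """Lesion volume (mm^3) and mean density (HU) from a flat 0/1 mask and
    the matching Hounsfield-unit values."""
    vox = [h for m, h in zip(mask, hu) if m]
    volume = len(vox) * voxel_mm3
    mean_hu = sum(vox) / len(vox) if vox else float("nan")
    return volume, mean_hu

# Toy segmentation: 5 voxels of 0.5 mm^3 each
mask = [1, 1, 0, 1, 1, 1, 0]
hu   = [-40, -30, 900, -20, -10, 0, 800]
vol_mm3, mean_hu = nodule_volume_and_density(mask, hu, voxel_mm3=0.5)  # 2.5, -20.0
```

Real pipelines add shape and texture radiomics on top of these first-order statistics, but volume and mean HU are the quantities growth-rate and density follow-up criteria are built on.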
Affiliation(s)
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, 20121 Milano, Italy
- Laura Maria Cacioppa: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy; Division of Interventional Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
- Maurizio Cè, Vittoria Chiarpenello, Marco Costa, Zakaria Vincenzo, Daniele Pais, Maria Vittoria Bausano: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Nicolò Rossini, Alessandra Bruno: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Chiara Floridi: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy; Division of Interventional Radiology and Division of Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
19
Baidya Kayal E, Ganguly S, Sasi A, Sharma S, DS D, Saini M, Rangarajan K, Kandasamy D, Bakhshi S, Mehndiratta A. A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models. Front Oncol 2023; 13:1212526. [PMID: 37671060 PMCID: PMC10476362 DOI: 10.3389/fonc.2023.1212526] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 07/31/2023] [Indexed: 09/07/2023] Open
Abstract
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method to detect lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity associated with making this distinction, leading to frequent CT follow-ups and additional radiation exposure, along with a financial and emotional burden to patients and families. Even 18F-fluoro-deoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard to demonstrate malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas. Hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, can provide an ideal platform to develop a model to differentiate lung metastases from benign nodules. To overcome the suboptimal specificity of CT in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed that utilizes a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. This protocol includes a retrospective cohort of nearly 2,000-2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house-built segmentation tool. Ground-truth labeling of lung nodules (metastatic/benign) will be performed based on histopathological results or baseline and/or follow-up radiological findings, along with the clinical outcome of the patient. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for early detection and classification of pulmonary metastasis at baseline and at follow-up, and for identification of associated potential clinical and radiological markers.
Affiliation(s)
- Esha Baidya Kayal
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Shuvadeep Ganguly
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Archana Sasi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Swetambri Sharma
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Dheeksha DS
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Manish Saini
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Krithika Rangarajan
- Radiodiagnosis, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Sameer Bakhshi
- Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Amit Mehndiratta
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, Delhi, India
20
Huang Y, Jiao J, Yu J, Zheng Y, Wang Y. RsALUNet: A reinforcement supervision U-Net-based framework for multi-ROI segmentation of medical images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2023]
21
Bhattacharjee A, Rabea S, Bhattacharjee A, Elkaeed EB, Murugan R, Selim HMRM, Sahu RK, Shazly GA, Salem Bekhit MM. A multi-class deep learning model for early lung cancer and chronic kidney disease detection using computed tomography images. Front Oncol 2023; 13:1193746. [PMID: 37333825 PMCID: PMC10272771 DOI: 10.3389/fonc.2023.1193746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 05/04/2023] [Indexed: 06/20/2023] Open
Abstract
Lung cancer is a fatal disease caused by abnormal proliferation of cells in the lungs. Similarly, chronic kidney disorders affect people worldwide and can lead to renal failure and impaired kidney function. Cyst development, kidney stones, and tumors are frequent conditions that impair kidney function. Since these conditions are generally asymptomatic, early and accurate identification of lung cancer and renal conditions is necessary to prevent serious complications. Artificial intelligence plays a vital role in the early detection of lethal diseases. In this paper, we proposed a computer-aided diagnosis model based on a modified Xception deep neural network, consisting of an Xception backbone with transfer-learned ImageNet weights and a fine-tuned classification network, for automatic multi-class classification of lung and kidney computed tomography images. The proposed model obtained 99.39% accuracy, 99.33% precision, 98% recall, and a 98.67% F1-score for lung cancer multi-class classification, and 100% accuracy, precision, recall, and F1-score for kidney disease multi-class classification. The modified Xception model also outperformed the original Xception model and existing methods. Hence, it can serve as a support tool for radiologists and nephrologists in the early detection of lung cancer and chronic kidney disease, respectively.
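The reported scores (accuracy, precision, recall, F1) are standard functions of a multi-class confusion matrix; a minimal sketch of how they are computed (the 3-class counts below are illustrative, not the paper's data):

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy plus per-class precision, recall and F1 from a square
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correctly classified counts
    precision = tp / cm.sum(axis=0)      # per predicted-class column
    recall = tp / cm.sum(axis=1)         # per true-class row
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Illustrative 3-class example (counts are made up, not the paper's data)
cm = [[50, 2, 0],
      [3, 45, 2],
      [0, 1, 47]]
acc, p, r, f1 = per_class_metrics(cm)
```

A macro-averaged score is then simply the mean of the per-class vector, e.g. `p.mean()`.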
Affiliation(s)
- Ananya Bhattacharjee
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, India
- Sameh Rabea
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- Abhishek Bhattacharjee
- Department of Pharmaceutical Sciences, Assam University (A Central University), Silchar, India
- Eslam B. Elkaeed
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- R. Murugan
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, India
- Heba Mohammed Refat M. Selim
- Department of Pharmaceutical Sciences, College of Pharmacy, AlMaarefa University, Riyadh, Saudi Arabia
- Microbiology and Immunology Department, Faculty of Pharmacy (Girls), Al-Azhar University, Cairo, Egypt
- Ram Kumar Sahu
- Department of Pharmaceutical Sciences, Hemvati Nandan Bahuguna Garhwal University (A Central University), Tehri Garhwal, India
- Gamal A. Shazly
- Kayyali Chair for Pharmaceutical Industry, Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
- Mounir M. Salem Bekhit
- Kayyali Chair for Pharmaceutical Industry, Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
22
Wang J, Sourlos N, Zheng S, van der Velden N, Pelgrim GJ, Vliegenthart R, van Ooijen P. Preparing CT imaging datasets for deep learning in lung nodule analysis: Insights from four well-known datasets. Heliyon 2023; 9:e17104. [PMID: 37484314 PMCID: PMC10361226 DOI: 10.1016/j.heliyon.2023.e17104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 06/06/2023] [Accepted: 06/07/2023] [Indexed: 07/25/2023] Open
Abstract
BACKGROUND Deep learning is an important means to realize the automatic detection, segmentation, and classification of pulmonary nodules in computed tomography (CT) images. An entire CT scan cannot directly be used by deep learning models due to image size, image format, image dimensionality, and other factors. Between the acquisition of the CT scan and feeding the data into the deep learning model, there are several steps including data use permission, data access and download, data annotation, and data preprocessing. This paper aims to recommend a complete and detailed guide for researchers who want to engage in interdisciplinary lung nodule research of CT images and Artificial Intelligence (AI) engineering. METHODS The data preparation pipeline used the following four popular large-scale datasets: LIDC-IDRI (Lung Image Database Consortium image collection), LUNA16 (Lung Nodule Analysis 2016), NLST (National Lung Screening Trial) and NELSON (The Dutch-Belgian Randomized Lung Cancer Screening Trial). The dataset preparation is presented in chronological order. FINDINGS The different data preparation steps before deep learning were identified. These include both more generic steps and steps dedicated to lung nodule research. For each of these steps, the required process, necessity, and example code or tools for actual implementation are provided. DISCUSSION AND CONCLUSION Depending on the specific research question, researchers should be aware of the various preparation steps required and carefully select datasets, data annotation methods, and image preprocessing methods. Moreover, it is vital to acknowledge that each auxiliary tool or code has its specific scope of use and limitations. This paper proposes a standardized data preparation process while clearly demonstrating the principles and sequence of different steps. A data preparation pipeline can be quickly realized by following these proposed steps and implementing the suggested example codes and tools.
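Several of the generic preparation steps the paper catalogues (resampling to a common voxel spacing, intensity windowing, normalization) can be sketched in a few lines; this is an illustrative numpy-only version (the function name, HU window, and nearest-neighbour resampling are simplifying assumptions, not the paper's exact recipe):

```python
import numpy as np

def prepare_ct_volume(volume_hu, spacing, new_spacing=(1.0, 1.0, 1.0),
                      hu_window=(-1000.0, 400.0)):
    """Resample a CT volume to a target voxel spacing (nearest-neighbour,
    for brevity), clip intensities to a lung HU window, and scale to [0, 1]."""
    vol = np.asarray(volume_hu, dtype=np.float32)
    # Nearest-neighbour resampling to the target voxel spacing.
    factors = np.array(spacing, dtype=float) / np.array(new_spacing, dtype=float)
    new_shape = np.maximum(1, np.round(np.array(vol.shape) * factors)).astype(int)
    idx = [np.minimum((np.arange(n) / f).astype(int), s - 1)
           for n, f, s in zip(new_shape, factors, vol.shape)]
    vol = vol[np.ix_(*idx)]
    # Intensity windowing and min-max normalisation.
    lo, hi = hu_window
    vol = np.clip(vol, lo, hi)
    return (vol - lo) / (hi - lo)

# A 4-slice volume with 2.5 mm slices and 0.7 mm in-plane spacing.
vol = prepare_ct_volume(np.full((4, 10, 10), -1000.0), spacing=(2.5, 0.7, 0.7))
```

In practice a production pipeline would use trilinear interpolation (e.g., `scipy.ndimage.zoom`) rather than nearest-neighbour indexing, but the sequence of steps is the same.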
Affiliation(s)
- Jingxuan Wang
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Nikos Sourlos
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Sunyi Zheng
- School of Engineering, Westlake University, Xihu District, 310030, Hangzhou, China
- Nils van der Velden
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Gert Jan Pelgrim
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Data Science Center in Health (DASH), University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Peter van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
- Data Science Center in Health (DASH), University of Groningen, University Medical Center of Groningen, 9713GZ, Groningen, the Netherlands
23
Ewals LJS, van der Wulp K, van den Borne BEEM, Pluyter JR, Jacobs I, Mavroeidis D, van der Sommen F, Nederend J. The Effects of Artificial Intelligence Assistance on the Radiologists' Assessment of Lung Nodules on CT Scans: A Systematic Review. J Clin Med 2023; 12:jcm12103536. [PMID: 37240643 DOI: 10.3390/jcm12103536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 04/19/2023] [Accepted: 05/16/2023] [Indexed: 05/28/2023] Open
Abstract
To reduce the number of lung nodules missed or misdiagnosed by radiologists on CT scans, many Artificial Intelligence (AI) algorithms have been developed. Some algorithms are currently being implemented in clinical practice, but the question is whether radiologists and patients really benefit from these novel tools. This study aimed to review how AI assistance for lung nodule assessment on CT scans affects the performance of radiologists. We searched for studies that evaluated radiologists' performance in the detection or malignancy prediction of lung nodules with and without AI assistance. For detection, radiologists with AI assistance achieved higher sensitivity and AUC, while specificity was slightly lower. For malignancy prediction, radiologists with AI assistance generally achieved higher sensitivity, specificity, and AUC. The workflows by which radiologists used the AI assistance were often described in only limited detail. As recent studies showed improved radiologist performance with AI assistance, AI assistance for lung nodule assessment holds great promise. To achieve added value of AI tools for lung nodule assessment in clinical practice, more research is required on the clinical validation of AI tools, their impact on follow-up recommendations, and ways of using them.
Affiliation(s)
- Lotte J S Ewals
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Kasper van der Wulp
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Ben E E M van den Borne
- Department of Pulmonology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Jon R Pluyter
- Department of Experience Design, Royal Philips, 5656 AE Eindhoven, The Netherlands
- Igor Jacobs
- Department of Hospital Services and Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
- Dimitrios Mavroeidis
- Department of Data Science, Philips Research, 5656 AE Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Joost Nederend
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
24
Braveen M, Nachiyappan S, Seetha R, Anusha K, Ahilan A, Prasanth A, Jeyam A. ALBAE feature extraction based lung pneumonia and cancer classification. Soft comput 2023:1-14. [PMID: 37362264 PMCID: PMC10187954 DOI: 10.1007/s00500-023-08453-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/06/2023] [Indexed: 06/28/2023]
Abstract
Lung cancer is a deadly disease marked by uncontrolled proliferation of malignant cells in the lungs. If detected at an early stage, it can be cured before reaching a critical stage. In recent years, new technologies have gained much attention in the healthcare industry; however, the unpredictable appearance of tumors, detecting their presence, and determining their shape and size amid the high variability of medical images remain challenging tasks. To overcome these issues, a novel Ant Lion-based Autoencoder (ALbAE) model is proposed for efficient classification of lung cancer and pneumonia. Initially, computed tomography (CT) images are pre-processed using median filters to remove noise artifacts and improve image quality. Next, relevant features such as image edges, pixel intensities, and blood clots are extracted by the ALbAE technique. Finally, in the classification stage, the lung CT images are classified into three categories, normal lung, cancer-affected lung, and pneumonia-affected lung, using a random forest classifier. The effectiveness of the design is estimated using precision, recall, accuracy, and F1-measure. The proposed approach attains 97% accuracy, 98% recall and F1-measure, and 96% precision. Experimental outcomes show that the proposed model outperforms existing SVM, ELM, and MLP models in classifying lung cancer and pneumonia.
Affiliation(s)
- M. Braveen
- Assistant Professor (Senior), School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- S. Nachiyappan
- Associate Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- R. Seetha
- Associate Professor, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- K. Anusha
- Associate Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- A. Ahilan
- Associate Professor, Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, India
- A. Prasanth
- Assistant Professor, Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Sriperumbudur, India
- A. Jeyam
- Assistant Professor, Computer Science and Engineering, Lord Jegannath College of Engineering and Technology, Kanyakumari, Tamil Nadu 629402, India
25
Iqbal S, N. Qureshi A, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023; 30:3173-3233. [PMID: 37260910 PMCID: PMC10071480 DOI: 10.1007/s11831-023-09899-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/19/2023] [Indexed: 06/02/2023]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, including object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multi-lingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages as data passes through the network. Recently, different ideas in Deep Learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance and operation of CNNs. Internal architectural innovations and different representational styles of CNNs have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially model depth and width, as well as CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
26
Computer-Aided Detection of Subsolid Nodules on Chest Computed Tomography: Assessment of Visualization on Vessel-Suppressed Images. J Comput Assist Tomogr 2023; 47:412-417. [PMID: 36877791 DOI: 10.1097/rct.0000000000001444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2023]
Abstract
OBJECTIVES This study aimed to clarify the performance of automatic detection of subsolid nodules by commercially available software on computed tomography (CT) images of various slice thicknesses and compare it with visualization on the accompanying vessel-suppression CT (VS-CT) images. METHODS A total of 95 subsolid nodules from 84 CT examinations of 84 patients were included. The reconstructed CT image series of each case with 3-, 2-, and 1-mm slice thicknesses were loaded into a commercially available software application (ClearRead CT) for automatic detection of subsolid nodules and generation of VS-CT images. Automatic nodule detection sensitivity was assessed for 95 nodules on each series of images acquired at 3 slice thicknesses. Four radiologists subjectively evaluated visual assessment of the nodules on VS-CT. RESULTS ClearRead CT automatically detected 69.5% (66/95 nodules), 68.4% (65/95 nodules), and 70.5% (67/95 nodules) of all subsolid nodules in 3-, 2-, and 1-mm slices, respectively. The detection rate was higher for part-solid nodules than for pure ground-glass nodules at all slice thicknesses. In the visualization assessment on VS-CT, 3 nodules at each slice thickness (3.2%) were judged as invisible, while 26 of 29 (89.7%), 27 of 30 (90.0%), and 25 of 28 (89.3%) nodules, which were missed by computer-aided detection, were judged as visible in 3-, 2-, and 1-mm slices, respectively. CONCLUSIONS The automatic detection rate of subsolid nodules by ClearRead CT was approximately 70% at all slice thicknesses. More than 95% of subsolid nodules were visualized on VS-CT, including nodules undetected by the automated software. Computed tomography acquisition at slices thinner than 3 mm did not confer any benefits.
27
Shen Z, Cao P, Yang J, Zaiane OR. WS-LungNet: A two-stage weakly-supervised lung cancer detection and diagnosis network. Comput Biol Med 2023; 154:106587. [PMID: 36709519 DOI: 10.1016/j.compbiomed.2023.106587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 01/13/2023] [Accepted: 01/22/2023] [Indexed: 01/26/2023]
Abstract
Computer-aided lung cancer diagnosis (CAD) system on computed tomography (CT) helps radiologists guide preoperative planning and prognosis assessment. The flexibility and scalability of deep learning methods are limited in lung CAD. In essence, two significant challenges to be solved are (1) Label scarcity due to cost annotations of CT images by experienced domain experts, and (2) Label inconsistency between the observed nodule malignancy and the patients' pathology evaluation. These two issues can be considered weak label problems. We address these issues in this paper by introducing a weakly-supervised lung cancer detection and diagnosis network (WS-LungNet), consisting of a semi-supervised computer-aided detection (Semi-CADe) that can segment 3D pulmonary nodules based on unlabeled data through adversarial learning to reduce label scarcity, as well as a cross-nodule attention computer-aided diagnosis (CNA-CADx) for evaluating malignancy at the patient level by modeling correlations between nodules via cross-attention mechanisms and thereby eliminating label inconsistency. Through extensive evaluations on the LIDC-IDRI public database, we show that our proposed method achieves 82.99% competition performance metric (CPM) on pulmonary nodule detection and 88.63% area under the curve (AUC) on lung cancer diagnosis. Extensive experiments demonstrate the advantage of WS-LungNet on nodule detection and malignancy evaluation tasks. Our promising results demonstrate the benefits and flexibility of the semi-supervised segmentation with adversarial learning and the nodule instance correlation learning with the attention mechanism. The results also suggest that making use of the unlabeled data and taking the relationship among nodules in a case into account are essential for lung cancer detection and diagnosis.
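The cross-nodule attention in CNA-CADx follows the general scaled dot-product attention pattern, letting each nodule embedding aggregate information from every other nodule of the same patient; a minimal single-head sketch (the paper's learned projection layers and exact architecture are omitted):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query nodule embedding
    attends over all key/value nodule embeddings of the same patient."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)         # (Nq, Nk) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ values                        # (Nq, d) fused features

rng = np.random.default_rng(0)
nodules = rng.normal(size=(5, 16))   # 5 nodules, 16-dim embeddings
fused = cross_attention(nodules, nodules, nodules)
```

A patient-level malignancy head would then pool the fused nodule features (e.g., by mean or max) before classification.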
Affiliation(s)
- Zhiqiang Shen
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Peng Cao
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Jinzhu Yang
- College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Osmar R Zaiane
- Alberta Machine Intelligence Institute, University of Alberta, Canada
28
Hochhegger B, Pasini R, Roncally Carvalho A, Rodrigues R, Altmayer S, Kayat Bittencourt L, Marchiori E, Forghani R. Artificial Intelligence for Cardiothoracic Imaging: Overview of Current and Emerging Applications. Semin Roentgenol 2023; 58:184-195. [PMID: 37087139 DOI: 10.1053/j.ro.2023.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 02/02/2023] [Indexed: 03/07/2023]
Abstract
Artificial intelligence algorithms can learn by assimilating information from large datasets in order to decipher complex associations, identify previously undiscovered pathophysiological states, and construct prediction models. There has been tremendous interest and increased incorporation of artificial intelligence into various industries, including healthcare. As a result, there has been an exponential rise in the number of research articles and industry participants producing models intended for a variety of applications in medical imaging, which can be challenging to navigate for radiologists. In thoracic imaging, multiple applications are being evaluated for chest radiography and computed tomography and include applications for lung nodule evaluation and cancer imaging, quantifying diffuse lung disorders, and cardiac imaging, to name a few. This review aims to provide an overview of current clinical AI models, focusing on the most common clinical applications of AI in cardiothoracic imaging.
29
Thirumagal E, Saruladha K. Lung cancer diagnosis using Hessian adaptive learning optimization in generative adversarial networks. Soft comput 2023. [DOI: 10.1007/s00500-023-07877-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
30
Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. BIOMEDICAL ENGINEERING ADVANCES 2023. [DOI: 10.1016/j.bea.2023.100076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
31
Radiomics and Artificial Intelligence Can Predict Malignancy of Solitary Pulmonary Nodules in the Elderly. Diagnostics (Basel) 2023; 13:diagnostics13030384. [PMID: 36766488 PMCID: PMC9914272 DOI: 10.3390/diagnostics13030384] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 01/16/2023] [Accepted: 01/18/2023] [Indexed: 01/22/2023] Open
Abstract
Solitary pulmonary nodules (SPNs) are a diagnostic and therapeutic challenge for thoracic surgeons. Although such lesions are usually benign, the risk of malignancy remains significant, particularly in elderly patients, who represent a large segment of the affected population. Surgical treatment in this subset, which usually presents several comorbidities, requires careful evaluation, especially when pre-operative biopsy is not feasible and comorbidities may jeopardize the outcome. Radiomics and artificial intelligence (AI) are progressively being applied to predict malignancy in suspicious nodules and to assist the decision-making process. In this study, we analyzed radiomic features from the CT and PET-CT images of 71 patients with SPNs aged over 75 years (median 79, IQR 76-81) who had undergone upfront pulmonary resection based on CT and PET-CT findings. Three machine learning algorithms were applied: Functional Tree, REPTree, and J48. Histology was malignant in 64.8% of nodules, and the best predictive value was achieved by the J48 model (AUC 0.9). AI analysis of radiomic features may be applied to the decision-making process in frail elderly patients with suspicious SPNs to minimize the false-positive rate and reduce the incidence of unnecessary surgery.
32
Qiao J, Fan Y, Zhang M, Fang K, Li D, Wang Z. Ensemble framework based on attributes and deep features for benign-malignant classification of lung nodule. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
33
Philip B, Jain A, Wojtowicz M, Khan I, Voller C, Patel RSK, Elmahdi D, Harky A. Current investigative modalities for detecting and staging lung cancers: a comprehensive summary. Indian J Thorac Cardiovasc Surg 2023; 39:42-52. [PMID: 36590039 PMCID: PMC9794670 DOI: 10.1007/s12055-022-01430-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 10/06/2022] [Accepted: 10/13/2022] [Indexed: 12/05/2022] Open
Abstract
This narrative review compares the advantages and drawbacks of imaging and other investigative modalities that currently assist with lung cancer diagnosis and staging, as well as those not routinely indicated for this purpose. We examine plain-film radiography, computed tomography (CT) (alone and in conjunction with positron emission tomography (PET)), magnetic resonance imaging (MRI), ultrasound, and newer techniques such as image-guided bronchoscopy (IGB) and robotic bronchoscopy (RB). While a chest X-ray is the first-line imaging investigation in patients presenting with symptoms suggestive of lung cancer, malignancy can still be present after negative X-ray findings, which calls into question its value as part of a potential national screening programme. CT lowers mortality for high-risk patients when compared to X-ray, and certain scoring systems, such as the Brock model, can guide the need for further imaging such as PET-CT, which has high sensitivity and specificity for diagnosing solitary pulmonary nodules as malignant, as well as for assessing small cell lung cancer spread. In practice, PET-CT is offered to everyone whose lung cancer is to be treated with curative intent. In contrast, MRI is only recommended for isolated distant metastases. Similarly, ultrasound imaging is not used for the diagnosis of lung cancer but can be useful when there is suspicion of intrathoracic lymph node involvement. Ultrasound imaging in the form of endobronchial ultrasonography (EBUS) is often used to aid tissue sampling, yet the diagnostic value of this technique varies widely between studies. RB is another novel technique that offers an alternative way to biopsy lesions, but further research on it is necessary. Lastly, thoracic surgical biopsies, particularly minimally invasive video-assisted techniques, have been used increasingly to aid diagnosis and staging.
Affiliation(s)
- Bejoy Philip
- Department of Cardiothoracic Surgery, Liverpool Heart and Chest Hospital, Liverpool, L14 3PE UK
- Anchal Jain
- Department of Cardiothoracic Surgery, Royal Stoke University Hospital, Stoke-on-Trent, UK
- Inayat Khan
- Department of Medicine, Royal Sussex County Hospital, Brighton, UK
- Calum Voller
- School of Medicine, University of Liverpool, Liverpool, UK
- Darbi Elmahdi
- School of Medicine, University of Central Lancashire, Preston, UK
- Amer Harky
- Department of Cardiothoracic Surgery, Liverpool Heart and Chest Hospital, Liverpool, L14 3PE UK
34
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
35
A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. The chances of recovery are much higher when the disease is detected early. However, because radiologists are few in number and heavily overworked, the growing volume of image data makes accurate evaluation difficult. As a result, many researchers have developed automated methods that use medical imaging to predict the growth of cancer cells quickly and accurately. Considerable prior work addresses computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goals of effectively detecting and segmenting pulmonary nodules and classifying them as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer has been published. In this paper, every aspect of lung cancer is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several open issues in lung cancer detection together with possible solutions.
36
Gu Z, Li Y, Luo H, Zhang C, Du H. Cross attention guided multi-scale feature fusion for false-positive reduction in pulmonary nodule detection. Comput Biol Med 2022; 151:106302. [PMID: 36401972 DOI: 10.1016/j.compbiomed.2022.106302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 10/24/2022] [Accepted: 11/06/2022] [Indexed: 11/10/2022]
Abstract
False-positive reduction is a crucial step in computer-aided diagnosis (CAD) systems for pulmonary nodule detection and plays an important role in lung cancer diagnosis. In this paper, we propose a novel cross-attention-guided multi-scale feature fusion method for false-positive reduction in pulmonary nodule detection. Specifically, a 3D SENet50 fed with a candidate nodule cube is applied as the backbone to acquire multi-scale coarse features. Then, the coarse features are refined and fused by the multi-scale fusion part to achieve a better feature extraction result. Finally, a 3D spatial pyramid pooling module is used to enlarge the receptive field, and a distributed aligned linear classifier is applied to obtain the confidence score. In addition, five nodule cubes of different sizes, centered on each test nodule position, are each fed into the proposed framework to obtain separate confidence scores, and a weighted fusion method is used to improve the generalization performance of the model. Extensive experiments are conducted to demonstrate the classification performance of the proposed model. The data used in our work are from the LUNA16 pulmonary nodule detection challenge, in which the number of true-positive pulmonary nodules is 1,557 and the number of false-positive ones is 753,418. The new method achieves a competition performance metric (CPM) score of 84.8% on the LUNA16 dataset.
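The multi-crop testing scheme described above (several cubes of different sizes scored around each candidate, then combined by a weighted average) can be sketched roughly as follows. The cube sizes, weights, and the `score_cube` callable are hypothetical stand-ins, not the authors' actual configuration:

```python
import numpy as np

def crop_cube(volume, center, size):
    """Extract a size^3 cube centered on a candidate nodule (clipped at borders)."""
    lo = [max(0, c - size // 2) for c in center]
    hi = [min(d, l + size) for d, l in zip(volume.shape, lo)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def fused_confidence(volume, center, score_cube,
                     sizes=(20, 24, 28, 32, 36),
                     weights=(0.1, 0.2, 0.4, 0.2, 0.1)):
    """Score cubes of several sizes around one candidate position and
    combine the per-scale confidences with a normalized weighted average."""
    scores = [score_cube(crop_cube(volume, center, s)) for s in sizes]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), scores))
```

In practice `score_cube` would be the trained network's forward pass; the weighting lets the dominant scale contribute most while the others regularize the prediction.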
Affiliation(s)
- Zhongxuan Gu
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Yueyang Li
- Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Haichi Luo
- College of Internet of Things Engineering, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Caidi Zhang
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
- Hongqun Du
- Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
37
Artificial Intelligence in Lung Cancer Imaging: Unfolding the Future. Diagnostics (Basel) 2022; 12:diagnostics12112644. [PMID: 36359485 PMCID: PMC9689810 DOI: 10.3390/diagnostics12112644] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 10/26/2022] [Accepted: 10/29/2022] [Indexed: 11/30/2022] Open
Abstract
Lung cancer is one of the malignancies with the highest morbidity and mortality. Imaging plays an essential role in each phase of lung cancer management, from detection to assessment of response to treatment. The development of imaging-based artificial intelligence (AI) models has the potential to play a key role in early detection and customized treatment planning. Computer-aided detection of lung nodules in screening programs has revolutionized early detection of the disease. Moreover, the possibility of using AI approaches to identify patients at risk of developing lung cancer during their lifetime can support a more targeted screening program. The combination of imaging features with clinical and laboratory data through AI models is giving promising results in the prediction of patient outcomes, response to specific therapies, and risk of developing toxic reactions. In this review, we provide an overview of the main AI-based tools in lung cancer imaging, including automated lesion detection, characterization, segmentation, outcome prediction, and treatment response, to provide radiologists and clinicians with a foundation for these applications in a clinical scenario.
38
Research on CT Lung Segmentation Method of Preschool Children based on Traditional Image Processing and ResUnet. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7321330. [PMID: 36262868 PMCID: PMC9576440 DOI: 10.1155/2022/7321330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 09/13/2022] [Accepted: 09/21/2022] [Indexed: 11/22/2022]
Abstract
Lung segmentation using computed tomography (CT) images is important for diagnosing various lung diseases. Currently, no lung segmentation method has been developed for the CT images of preschool children, which differ from those of adults due to (1) artifacts caused by the children shaking, (2) loss of localized lung areas due to a failure to hold their breath, and (3) a smaller chest area on CT compared with adults. To solve these problems, this study developed an automatic lung segmentation method combining traditional image processing with ResUnet, using the CT images of 60 children aged 0-6 years. First, the CT images were cropped and zoomed through morphological operations to concentrate the segmentation task on the chest area. Then, a ResUnet model with an improved loss was used for lung segmentation, and case-based connected-domain operations were performed to filter the segmentation results and improve accuracy. The proposed method demonstrated promising results on a test set of 12 cases, with average accuracy, Dice, precision, and recall of 0.9479, 0.9678, 0.9711, and 0.9715, respectively, outperforming six other models. This study shows that the proposed method can achieve good segmentation results on CT of preschool children, laying a foundation for the diagnosis of children's lung diseases.
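The overlap metrics reported above (accuracy, Dice, precision, recall) are the standard ones for comparing a predicted binary segmentation mask against ground truth; a minimal NumPy reference implementation:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, Dice, precision, and recall for two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)   # true positives
    fp = np.sum(pred & ~truth)  # false positives
    fn = np.sum(~pred & truth)  # false negatives
    tn = np.sum(~pred & ~truth) # true negatives
    return {
        "accuracy": (tp + tn) / pred.size,
        "dice": 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }
```

Dice weights overlap twice as heavily as the disagreement terms, which is why it is the headline metric for segmentation tasks like this one.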
39
Chen LW, Yang SM, Chuang CC, Wang HJ, Chen YC, Lin MW, Hsieh MS, Antonoff MB, Chang YC, Wu CC, Pan T, Chen CM. Solid Attenuation Components Attention Deep Learning Model to Predict Micropapillary and Solid Patterns in Lung Adenocarcinomas on Computed Tomography. Ann Surg Oncol 2022; 29:7473-7482. [PMID: 35789301 DOI: 10.1245/s10434-022-12055-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 06/08/2022] [Indexed: 11/18/2022]
Abstract
BACKGROUND High-grade adenocarcinoma subtypes (micropapillary and solid) treated with sublobar resection have an unfavorable prognosis compared with those treated with lobectomy. We investigated the potential of incorporating solid attenuation component (SAC) masks with deep learning (DL) to predict high-grade components and thereby optimize surgical strategy preoperatively. METHODS A total of 502 patients with pathologically confirmed high-grade adenocarcinomas were retrospectively enrolled between 2016 and 2020. The SACs-attention DL model (SACA-DL) was developed to apply SAC-like subregion masks (tumor area ≥ -190 HU) to guide the DL model in predicting high-grade subtypes. SACA-DL was assessed using 5-fold cross-validation in the training set and external validation in the testing set. Performance, evaluated using the area under the curve (AUC), was compared between SACA-DL and the DL model without SACs attention (DLwoSACs), a prior radiomics model, and a model based on the consolidation/tumor (C/T) diameter ratio. RESULTS We classified 313 and 189 patients into the training and testing cohorts, respectively. SACA-DL achieved an AUC of 0.91 in cross-validation, significantly superior to the DLwoSACs (AUC = 0.88; P = 0.02), the prior radiomics model (AUC = 0.85; P = 0.004), and the C/T ratio (AUC = 0.84; P = 0.002). In external validation, SACA-DL achieved an AUC of 0.93, significantly better than the DLwoSACs (AUC = 0.89; P = 0.04), the prior radiomics model (AUC = 0.85; P < 0.001), and the C/T ratio (AUC = 0.85; P < 0.001). CONCLUSIONS Combining SAC-like subregion masks with a DL model is a promising approach for the preoperative prediction of high-grade adenocarcinoma subtypes.
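The SAC-like subregion mask above is defined by a simple intensity rule: tumor voxels whose attenuation is at or above -190 HU. Taken on its own, that rule reduces to a threshold-and-intersect operation; the array names below are illustrative, not from the paper:

```python
import numpy as np

def sac_mask(hu_volume, tumor_mask, threshold_hu=-190):
    """Solid-attenuation-component-like mask: tumor voxels at or above the HU
    threshold. Such a mask can be supplied as an attention prior to a classifier."""
    return (hu_volume >= threshold_hu) & tumor_mask.astype(bool)
```

Air-dominated ground-glass regions (far below -190 HU) drop out of the mask, leaving only the solid component for the model to attend to.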
Affiliation(s)
- Li-Wei Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Shun-Mao Yang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Surgery, National Taiwan University Hospital Biomedical Park Hospital, Zhubei City, Hsinchu County, Taiwan
- Ching-Chia Chuang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Hao-Jen Wang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
- Yi-Chang Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Mong-Wei Lin
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Min-Shu Hsieh
- Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Mara B Antonoff
- Department of Thoracic and Cardiovascular Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Carol C Wu
- Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Tinsu Pan
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei, Taiwan
40
Jalalifar SA, Sadeghi-Naini A. Data-Efficient Training of Pure Vision Transformers for the Task of Chest X-ray Abnormality Detection Using Knowledge Distillation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1444-1447. [PMID: 36086223 DOI: 10.1109/embc48229.2022.9871372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
It is generally believed that vision transformers (ViTs) require a huge amount of data to generalize well, which limits their adoption. The introduction of data-efficient algorithms such as the data-efficient image transformer (DeiT) provided an opportunity to explore the application of ViTs in medical imaging, where data scarcity is a limiting factor. In this work, we investigated the possibility of using pure transformers for chest X-ray abnormality detection on a small dataset. Our proposed framework is built on a DeiT structure and benefits from a teacher-student training scheme, with a DenseNet with strong classification performance as the teacher and an adapted ViT as the student. The results show that the performance of transformers is on par with that of convolutional neural networks (CNNs). We achieved a test accuracy of 92.2% for classifying chest X-ray images (normal/pneumonia/COVID-19) on a carefully selected dataset using pure transformers. These results demonstrate the capability of transformers to accompany or replace CNNs in achieving state-of-the-art performance in medical imaging applications. The code and models of this work are available at https://github.com/Ouantimb-Lab/DeiTCovid.
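The teacher-student scheme above rests on the standard knowledge-distillation objective: the student matches the ground-truth labels with cross-entropy while also matching the teacher's temperature-softened output distribution via a KL term. A NumPy sketch of that combined loss (the temperature and mixing weight are illustrative defaults, not the paper's settings):

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled, numerically stable softmax over the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, t=3.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and teacher-student KL divergence.
    The t*t factor keeps soft-target gradients on the same scale as hard ones."""
    p_student = softmax(student_logits)
    hard = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    soft = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                          axis=-1)) * t * t
    return alpha * hard + (1 - alpha) * soft
```

When the student's logits equal the teacher's, the KL term vanishes and only the hard-label loss remains, which is the sanity check one would run first.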
41
Tomassini S, Falcionelli N, Sernani P, Burattini L, Dragoni AF. Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey. Comput Biol Med 2022; 146:105691. [PMID: 35691714 DOI: 10.1016/j.compbiomed.2022.105691] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 05/26/2022] [Accepted: 05/31/2022] [Indexed: 11/30/2022]
Abstract
Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make targeted treatment decisions in a timely manner, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven responsible for significant improvements in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully applying convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although applying convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, several challenges arise, chiefly the lack of publicly accessible annotated data, together with limited reproducibility and clinical interpretability. We believe this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
Affiliation(s)
- Selene Tomassini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Nicola Falcionelli
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Paolo Sernani
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Laura Burattini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
- Aldo Franco Dragoni
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy
42
Bhattacharjee A, Murugan R, Soni B, Goel T. Ada-GridRF: A Fast and Automated Adaptive Boost Based Grid Search Optimized Random Forest Ensemble model for Lung Cancer Detection. Phys Eng Sci Med 2022; 45:981-994. [PMID: 35771385 DOI: 10.1007/s13246-022-01150-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 06/02/2022] [Indexed: 12/19/2022]
Abstract
Lung cancer is considered one of the leading causes of death across the world. Computer-aided diagnosis (CAD) systems are increasingly used in various radiology-related fields and have already become part of clinical work for lung cancer detection. In this article, we propose an Adaptive Boost-based Grid Search Optimized Random Forest (Ada-GridRF) classifier that optimizes the hyperparameters of the base random forest model to identify malignant and non-malignant nodules in CT images. Improved speed and reduced computational complexity are the advantages of the proposed method. The proposed methodology was compared with other hyperparameter optimization techniques and with different conventional approaches, and it even outperformed popular state-of-the-art deep learning techniques such as transfer learning and convolutional neural networks. The experimental results show that the proposed method yields the best performance metrics: 97.97% accuracy, 100% sensitivity, 96% specificity, 96.08% precision, 98% F1-score, a 4% false-positive rate, and 99.8% area under the ROC curve (AUC). Training the model took only 8 msec. Thus, the proposed Ada-GridRF model can aid radiologists in fast lung cancer detection.
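Grid-search hyperparameter optimization of the kind used above simply evaluates every combination in a parameter grid and keeps the best-scoring configuration. A generic stdlib-only sketch; the grid values and the scoring callback are placeholders, not the paper's actual search space:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate every parameter combination in param_grid with
    score_fn (e.g. cross-validated accuracy) and return the best one."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

For a random-forest base model, `param_grid` might hold candidate values for `n_estimators` and `max_depth`, with `score_fn` training and cross-validating a forest at each setting; the boosted ensemble is then built on the winning configuration.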
Affiliation(s)
- Ananya Bhattacharjee
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
- R Murugan
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
- Badal Soni
- Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
- Tripti Goel
- Bio-Medical Imaging Laboratory (BIOMIL), Department of Electronics and Communication Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
43
Song Y, Ren S, Lu Y, Fu X, Wong KKL. Deep learning-based automatic segmentation of images in cardiac radiography: A promising challenge. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106821. [PMID: 35487181 DOI: 10.1016/j.cmpb.2022.106821] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Revised: 04/08/2022] [Accepted: 04/17/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Owing to advances in medical imaging and computer technology, machine intelligence for analyzing clinical image data increases the probability of disease prevention and successful treatment. When diagnosing and detecting heart disease, medical imaging can provide high-resolution scans of every organ or tissue in the heart. Diagnostic results obtained with imaging methods are less susceptible to human interference; such systems can process large amounts of patient information, assist doctors in detecting heart disease early and intervening, and improve the understanding of heart disease symptoms, which is of great significance for clinical diagnosis. In a computer-aided diagnosis system, accurate segmentation of cardiac scan images is the basis and premise of subsequent cardiac function analysis and 3D image reconstruction. EXISTING TECHNIQUES This paper systematically reviews automatic methods, and some difficulties, for cardiac segmentation in radiographic images. In light of recent advanced deep learning techniques, the feasibility of using deep learning network models for image segmentation is discussed, and commonly used deep learning frameworks are compared. DEVELOPED INSIGHTS There are many standard methods for medical image segmentation, including traditional region- and edge-based methods and deep learning-based methods. Because medical images exhibit non-uniform grayscale, individual differences, artifacts, and noise, these segmentation methods have certain limitations, and it is difficult to achieve the required sensitivity and accuracy when segmenting the heart. The deep learning models reviewed have achieved good results in image segmentation; accurate segmentation improves the accuracy of disease diagnosis and reduces subsequent irrelevant computation. SUMMARY There are two requirements for accurate segmentation of radiological images. One is to use image segmentation to advance computer-aided diagnosis. The other is to achieve complete segmentation of the heart: when there are lesions or deformities, radiographic images show abnormalities, and the segmentation algorithm must still segment the heart in full. With the advancement of deep learning and improvements in hardware performance, the amount of processing within a certain range will no longer restrict real-time detection.
Affiliation(s)
- Yucheng Song
- School of Computer Science and Engineering, Central South University, Changsha, China
- Shengbing Ren
- School of Computer Science and Engineering, Central South University, Changsha, China
- Yu Lu
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Xianghua Fu
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Kelvin K L Wong
- School of Computer Science and Engineering, Central South University, Changsha, China
44
Two-Stage Deep Learning Method for Breast Cancer Detection Using High-Resolution Mammogram Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094616] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Breast cancer screening and detection using high-resolution mammographic images have always been a difficult task in computer vision due to the presence of very small yet clinically significant abnormal growths in breast masses. The size difference between such masses and the overall mammogram image, as well as the difficulty of distinguishing intra-class features of the Breast Imaging Reporting and Database System (BI-RADS) categories, creates challenges for accurate diagnosis. To obtain near-optimal results, object detection models should be improved by directly focusing on breast cancer detection. In this work, we propose a new two-stage deep learning method. In the first stage, the breast area is extracted from the mammogram and small square patches are generated to narrow down the region of interest (RoI). In the second stage, breast masses are detected and classified into BI-RADS categories. To improve the classification accuracy for intra-classes, we design an effective tumor classification model and combine its results with the detection model's classification scores. Experiments conducted on the newly collected high-resolution mammography dataset demonstrate that our two-stage method outperforms the original Faster R-CNN model, improving mean average precision (mAP) from 0.85 to 0.94. In addition, comparisons with existing works on the popular INbreast dataset validate the performance of our two-stage model.
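The first stage above narrows the RoI by tiling the extracted breast area into small square patches so the detector never sees a tiny mass against a full-resolution mammogram. A minimal sketch of such patch generation with overlap; the patch size and stride are illustrative, not the paper's values:

```python
import numpy as np

def square_patches(image, size=512, stride=256):
    """Slide a size x size window over the image with the given stride,
    yielding (y, x, patch) so detections can be mapped back to full-image
    coordinates afterwards. Overlap (stride < size) avoids splitting a
    mass across patch borders."""
    h, w = image.shape[:2]
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield y, x, image[y:y + size, x:x + size]
```

Per-patch detections would then be offset by `(y, x)` and merged (e.g. with non-maximum suppression) to produce full-mammogram results.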
45
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:cancers14071840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can limit their progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects of the field of AI. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are prone to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide simpler, more readily applicable models. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
46
Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. CLINICAL EHEALTH 2022. [DOI: 10.1016/j.ceh.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
47
Deep Learning Applications in Computed Tomography Images for Pulmonary Nodule Detection and Diagnosis: A Review. Diagnostics (Basel) 2022; 12:diagnostics12020298. [PMID: 35204388 PMCID: PMC8871398 DOI: 10.3390/diagnostics12020298] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 01/21/2022] [Accepted: 01/22/2022] [Indexed: 12/26/2022] Open
Abstract
Lung cancer has one of the highest mortality rates of all cancers and poses a severe threat to people's health. Therefore, diagnosing lung nodules at an early stage is crucial to improving patient survival rates. Numerous computer-aided diagnosis (CAD) systems have been developed to detect and classify such nodules in their early stages. Currently, CAD systems for pulmonary nodules comprise data acquisition, pre-processing, lung segmentation, nodule detection, false-positive reduction, segmentation, and classification. A number of review articles have considered various components of such systems, but this review focuses on the segmentation and classification parts. Specifically, it categorizes segmentation approaches by lung nodule type and by network architecture, i.e., general neural networks and multiview convolutional neural network (CNN) architectures, and it organizes the classification literature by task: nodule versus non-nodule and benign versus malignant. The essential CT lung datasets and the evaluation metrics used in the detection and diagnosis of lung nodules are also systematically summarized. Thus, this review provides a baseline understanding of the topic for interested readers.