1. Alnafisah KH, Ranjan A, Sahu SP, Chen J, Alhejji SM, Noël A, Gartia MR, Mukhopadhyay S. Machine learning for automated classification of lung collagen in a urethane-induced lung injury mouse model. Biomed Opt Express 2024; 15:5980-5998. PMID: 39421774. PMCID: PMC11482176. DOI: 10.1364/boe.527972.
Abstract
Dysregulation of lung tissue collagen plays a vital role in understanding how lung diseases progress. However, traditional scoring methods rely on manual histopathological examination, introducing subjectivity and inconsistency into the assessment process. These methods are further hampered by inter-observer variability, a lack of quantification, and their time-consuming nature. To mitigate these drawbacks, we propose a machine learning-driven framework for automated scoring of lung collagen content. Our study begins with the collection of a lung slide image dataset from adult female mice using second harmonic generation (SHG) microscopy. In our proposed approach, we first manually extracted features based on 46 statistical parameters of fibrillar collagen. Subsequently, we pre-processed the images and utilized a pre-trained VGG16 model to uncover hidden features from the pre-processed images. We then combined the image and statistical features to train various machine learning and deep neural network models for classification tasks. We employed advanced unsupervised techniques, including K-means, principal component analysis (PCA), t-distributed stochastic neighbour embedding (t-SNE), and uniform manifold approximation and projection (UMAP), to conduct a thorough image analysis of lung collagen content. The evaluation of the trained models on the collagen data includes both binary and multi-label classification to predict lung cancer in a urethane-induced mouse model. Experimental validation of our proposed approach demonstrates promising results: we obtained an average accuracy of 83% and an area under the receiver operating characteristic curve (ROC AUC) of 0.96 using a support vector machine (SVM) model for binary categorization tasks.
For multi-label classification tasks quantifying the structural alteration of collagen, we attained an average accuracy of 73% and ROC AUC values of 1.0, 0.38, 0.95, and 0.86 for the control, baseline, treatment_1, and treatment_2 groups, respectively. Our findings hold significant potential for enhancing diagnostic accuracy, understanding disease mechanisms, and improving clinical practice using machine learning and deep learning models.
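The binary-classification result above is reported as ROC AUC. As a reminder of what that 0.96 summarizes, here is a minimal, dependency-free sketch (our own illustration, not the authors' code; `roc_auc` is a hypothetical name) that computes AUC via the Mann-Whitney rank formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counted half.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic.

    scores: model outputs (higher = more likely positive)
    labels: 0/1 ground truth, aligned with scores
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positives over negatives; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating model scores 1.0; chance-level ranking scores 0.5.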
Affiliation(s)
- Amit Ranjan: Center for Computation & Technology and Department of Environmental Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
- Sushant P Sahu: Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA 70803, USA; Amity Institute of Biotechnology and Applied Sciences, Amity University, Mumbai, Maharashtra-410206, India
- Jianhua Chen: Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA
- Alexandra Noël: Department of Comparative Biomedical Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
- Manas Ranjan Gartia: Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
- Supratik Mukhopadhyay: Center for Computation & Technology and Department of Environmental Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
2. Santone A, Mercaldo F, Brunese L. A Method for Real-Time Lung Nodule Instance Segmentation Using Deep Learning. Life (Basel) 2024; 14:1192. PMID: 39337974. PMCID: PMC11433569. DOI: 10.3390/life14091192.
Abstract
Lung screening is crucial for the early detection and management of masses, with particular regard to cancer. Studies have shown that lung cancer screening can reduce lung cancer mortality by 20-30% in high-risk populations. In recent times, the advent of deep learning, particularly in computer vision, has demonstrated the ability to effectively detect and locate objects in video streams and (medical) images. Considering these aspects, in this paper we propose a method for instance segmentation, i.e., providing a mask for each detected lung mass, which allows individual masses to be identified even when they overlap or lie close to each other, while classifying the detected masses as (generic) nodules, cancer, or adenocarcinoma. We adopted the you-only-look-once (YOLO) model for lung nodule segmentation. An experimental analysis performed on a set of real-world lung computed tomography images demonstrated the effectiveness of the proposed method not only in detecting lung masses but also in segmenting them, providing a helpful tool for radiologists to conduct automatic lung screening and to discover very small masses that are not easily recognizable to the naked eye and may deserve attention. In the evaluation on a dataset of 3654 lung scans, the proposed method obtains an average precision of 0.757 and an average recall of 0.738 in the classification task. Additionally, it reaches an average mask precision of 0.75 and an average mask recall of 0.733. These results indicate that the proposed method can not only classify masses as nodules, cancer, or adenocarcinoma, but also effectively segment the corresponding areas, thereby performing instance segmentation.
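The mask precision and recall figures above rest on matching predicted instances to ground-truth instances by mask overlap. The sketch below is illustrative only (`mask_iou` and `match_instances` are our own names; real evaluators such as COCO's are more elaborate): it represents masks as sets of pixel coordinates, computes intersection-over-union, and greedily matches at a 0.5 IoU threshold.

```python
def mask_iou(pred, gt):
    """IoU between two binary masks given as sets of (row, col) pixels."""
    union = len(pred | gt)
    return len(pred & gt) / union if union else 0.0

def match_instances(pred_masks, gt_masks, thr=0.5):
    """Greedy instance matching: each ground-truth mask may be claimed once.

    Returns (true positives, false positives, false negatives), from which
    precision = tp/(tp+fp) and recall = tp/(tp+fn) follow.
    """
    unmatched = list(gt_masks)
    tp = 0
    for p in pred_masks:
        best = max(unmatched, key=lambda g: mask_iou(p, g), default=None)
        if best is not None and mask_iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    fp = len(pred_masks) - tp
    fn = len(unmatched)
    return tp, fp, fn
```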
Affiliation(s)
- Francesco Mercaldo: Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy (A.S.; L.B.)
3. Ashames MMA, Demir A, Gerek ON, Fidan M, Gulmezoglu MB, Ergin S, Edizkan R, Koc M, Barkana A, Calisir C. Are deep learning classification results obtained on CT scans fair and interpretable? Phys Eng Sci Med 2024; 47:967-979. PMID: 38573489. PMCID: PMC11408573. DOI: 10.1007/s13246-024-01419-8.
Abstract
Following the great success of various deep learning methods in image and object classification, the biomedical image processing community has also been inundated with their application to automatic diagnosis. Unfortunately, most deep learning-based classification attempts in the literature focus solely on extreme accuracy scores, without considering interpretability or patient-wise separation of training and test data. For example, most lung nodule classification papers using deep learning randomly shuffle the data before splitting it into training, validation, and test sets, causing some images from a person's Computed Tomography (CT) scan to land in the training set while other images of the same person land in the validation or test sets. This can result in misleading reported accuracy rates and the learning of irrelevant features, ultimately reducing the real-life usability of these models. When deep neural networks trained with this traditional, unfair data shuffling are challenged with new patient images, the trained models perform poorly. In contrast, deep neural networks trained with strict patient-level separation maintain their accuracy rates even when tested on new patient images. Heat map visualizations of the activations of networks trained with strict patient-level separation indicate a higher degree of focus on the relevant nodules. We argue that the research question posed in the title has a positive answer only if the deep neural networks are trained on images of patients strictly isolated from the validation and test patient sets.
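The patient-level separation the authors advocate is easy to implement and worth making concrete. The sketch below is illustrative, not the paper's code (`patient_level_split` is our own name): it splits (patient_id, image) records so that every patient's images land entirely in one partition, which is exactly the leakage random shuffling fails to prevent.

```python
import random

def patient_level_split(slices, test_frac=0.3, seed=0):
    """Split (patient_id, image) records with no patient in both sets.

    Shuffles patient IDs (not slices) and assigns whole patients to the
    test set, so slices from one CT scan can never straddle the split.
    """
    patients = sorted({pid for pid, _ in slices})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_frac))
    test_ids = set(patients[:n_test])
    train = [(p, x) for p, x in slices if p not in test_ids]
    test = [(p, x) for p, x in slices if p in test_ids]
    return train, test
```

Contrast this with shuffling `slices` directly, which would scatter one patient's images across both partitions.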
Affiliation(s)
- Mohamad M A Ashames: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Ahmet Demir: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Omer N Gerek: Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Mehmet Fidan: Vocational School of Transportation, Eskisehir Technical University, Eskisehir, Turkey
- M Bilginer Gulmezoglu: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Semih Ergin: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Rifat Edizkan: Department of Electrical and Electronics Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Mehmet Koc: Department of Computer Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Atalay Barkana: Department of Electrical and Electronics Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Cuneyt Calisir: Department of Radiology, Manisa Celal Bayar University, Manisa, Turkey
4. Wang R, Huang S, Wang P, Shi X, Li S, Ye Y, Zhang W, Shi L, Zhou X, Tang X. Bibliometric analysis of the application of deep learning in cancer from 2015 to 2023. Cancer Imaging 2024; 24:85. PMID: 38965599. PMCID: PMC11223420. DOI: 10.1186/s40644-024-00737-0.
Abstract
BACKGROUND Recently, the application of deep learning (DL) has made great progress in various fields, especially in cancer research. To date, however, bibliometric analyses of the application of DL in cancer are scarce. This study therefore aimed to explore the research status and hotspots of the application of DL in cancer. METHODS We retrieved all articles on the application of DL in cancer from the Web of Science Core Collection database. Biblioshiny, VOSviewer, and CiteSpace were used to perform the bibliometric analysis by examining publication counts, citations, countries, institutions, authors, journals, references, and keywords. RESULTS We found 6,016 original articles on the application of DL in cancer. The number of annual publications and total citations showed a general upward trend. China published the greatest number of articles, the USA had the highest total citations, and Saudi Arabia had the highest centrality. The Chinese Academy of Sciences was the most productive institution. Tian Jie published the greatest number of articles, while He Kaiming was the most co-cited author. IEEE Access was the most popular journal. The analysis of references and keywords showed that DL was mainly used for the prediction, detection, classification, and diagnosis of breast cancer, lung cancer, and skin cancer. CONCLUSIONS Overall, the number of articles on the application of DL in cancer is gradually increasing. Future research trends may include further expanding and improving the scope and accuracy of DL applications and integrating DL with protein prediction, genomics, and cancer research.
Affiliation(s)
- Ruiyu Wang, Ping Wang, Xiaomin Shi, Shiqi Li, Yusong Ye, Wei Zhang, Lei Shi, Xian Zhou, Xiaowei Tang: Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, No. 25 Taiping Street, Jiangyang District, Luzhou, Sichuan Province, 646099, China; Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Shu Huang: Department of Gastroenterology, Lianshui County People's Hospital, Huaian, China; Department of Gastroenterology, Lianshui People's Hospital of Kangda College, Affiliated to Nanjing Medical University, Huaian, China
5. Dhingra S, Goyal S, Thirumal D, Sharma P, Kaur G, Mittal N. Mesoporous silica nanoparticles: a versatile carrier platform in lung cancer management. Nanomedicine (Lond) 2024; 19:1331-1346. PMID: 39105754. PMCID: PMC11318747. DOI: 10.1080/17435889.2024.2348438.
Abstract
Mesoporous silica nanoparticles (MSNPs) are inorganic nanoparticles that have been comprehensively investigated as carriers for therapeutic agents. MSNPs have revolutionized therapy for various conditions, especially cancer and infectious diseases. In this article, the viability of administering MSNPs for lung cancer therapy is reviewed. However, certain challenges, such as toxicology, immunology, large-scale production, and regulatory matters, have made it extremely difficult to translate such discoveries from the bench to the bedside. This review highlights recent developments, characteristics, mechanisms of action, and customization for targeted delivery. It also covers the most recent data shedding light on the extraordinary therapeutic potential of MSNPs in fighting lung cancer, as well as future hurdles.
Affiliation(s)
- Smriti Dhingra: Chitkara College of Pharmacy, Chitkara University, Punjab, 140401, India
- Shuchi Goyal: Chitkara College of Pharmacy, Chitkara University, Punjab, 140401, India
- Divya Thirumal: Manipal College of Pharmaceutical Sciences, Manipal Academy of Higher Education, Manipal, 576104, India
- Preety Sharma: Chitkara College of Pharmacy, Chitkara University, Punjab, 140401, India
- Gurpreet Kaur: Department of Pharmaceutical Sciences & Drug Research, Punjabi University, Patiala, Punjab, 147002, India
- Neeraj Mittal: Chitkara College of Pharmacy, Chitkara University, Punjab, 140401, India
6. Krebs JR, Imran M, Fazzone B, Viscardi C, Berwick B, Stinson G, Heithaus E, Upchurch GR, Shao W, Cooper MA. Volumetric analysis of acute uncomplicated type B aortic dissection using an automated deep learning aortic zone segmentation model. J Vasc Surg 2024:S0741-5214(24)01245-X. PMID: 38851467. DOI: 10.1016/j.jvs.2024.06.001.
Abstract
BACKGROUND Machine learning techniques have shown excellent performance in three-dimensional medical image analysis but have not been applied to acute uncomplicated type B aortic dissection (auTBAD) using Society for Vascular Surgery (SVS) and Society of Thoracic Surgeons (STS)-defined aortic zones. The purpose of this study was to establish a trained, automatic machine learning aortic zone segmentation model to facilitate an aortic zone volumetric comparison between auTBAD patients based on the rate of aortic growth. METHODS Patients with auTBAD and serial imaging were identified. For each patient, imaging characteristics from two computed tomography (CT) scans were analyzed: (1) the baseline CT angiography (CTA) at the index admission and (2) either the most recent surveillance CTA or the most recent CTA before an aortic intervention. Patients were stratified into two comparative groups based on aortic growth: rapid growth (diameter increase of ≥5 mm/year) and no or slow growth (diameter increase of <5 mm/year). Deidentified images were imported into an open source software package for medical image analysis, and images were annotated based on SVS/STS criteria for aortic zones. Our model was trained using four-fold cross-validation. The segmentation output was used to calculate aortic zone volumes from each imaging study. RESULTS Of 59 patients identified for inclusion, rapid growth was observed in 33 patients (56%) and no or slow growth in 26 patients (44%). There were no differences in baseline demographics, comorbidities, admission mean arterial pressure, number of discharge antihypertensives, or high-risk imaging characteristics between groups (P > .05 for all). Median duration between baseline and interval CT was 1.07 years (interquartile range [IQR], 0.38-2.57). Postdischarge aortic intervention was performed in 13 patients (22%) at a mean of 1.5 ± 1.2 years, with no difference between the groups (P > .05).
Among all patients, the largest relative percent increases in zone volumes over time were found in zone 4 (13.9%; IQR, -6.82 to 35.1) and zone 5 (13.4%; IQR, -7.78 to 37.9). There were no differences in baseline zone volumes between groups (P > .05 for all). The average Dice coefficient, a performance measure of the model output, was 0.73. Performance was best in zone 5 (0.84) and zone 9 (0.91). CONCLUSIONS We describe an automatic deep learning segmentation model incorporating SVS-defined aortic zones. The open source, trained model demonstrates concordance with the manually segmented aortas, with the strongest performance in zones 5 and 9, providing a framework for further clinical applications. In our limited sample, there were no differences in baseline aortic zone volumes between patients with rapid growth and patients with no or slow growth.
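The Dice coefficient reported above compares a predicted segmentation with a manual one. As a concrete reference (our own sketch, not the study's pipeline; `dice` is a hypothetical name), for masks represented as sets of voxel coordinates:

```python
def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    pred, gt: binary masks as sets of voxel coordinates.
    1.0 = perfect overlap, 0.0 = no overlap.
    """
    if not pred and not gt:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2 * len(pred & gt) / (len(pred) + len(gt))
```

A zone-level score of 0.73, as reported, means the predicted and manual masks share roughly three quarters of their combined volume.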
Affiliation(s)
- Jonathan R Krebs: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
- Muhammad Imran: Department of Medicine, University of Florida, Gainesville, FL
- Brian Fazzone: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
- Chelsea Viscardi: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
- Griffin Stinson: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
- Evans Heithaus: Department of Radiology, University of Florida, Gainesville, FL
- Gilbert R Upchurch: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
- Wei Shao: Department of Medicine, University of Florida, Gainesville, FL
- Michol A Cooper: Department of Surgery, Division of Vascular Surgery, University of Florida, Gainesville, FL
7. Samarla SK, Maragathavalli P. Ensemble fusion model for improved lung abnormality classification: Leveraging pre-trained models. MethodsX 2024; 12:102640. PMID: 38524306. PMCID: PMC10957444. DOI: 10.1016/j.mex.2024.102640.
Abstract
Lung abnormalities pose significant health concerns, underscoring the need for swift and accurate diagnoses to facilitate timely medical intervention. This study introduces a novel methodology for the sub-classification of lung abnormalities within chest X-rays captured via smartphones. An accurate and timely diagnosis of lung abnormalities is essential for the successful implementation of appropriate therapy. We propose a novel approach using a convolutional neural network (CNN) with three max pooling layers and early fusion for sub-classifying lung abnormalities from chest X-rays. Based on the kind of abnormality, the CheXpert dataset is divided into 13 sub-classes, each of which is trained with a dedicated sub-model. An early fusion procedure is then used to integrate the outputs of the sub-models.
- 3M-CNN (Method 1): We employed a CNN with three max pooling layers and an early fusion strategy to train dedicated sub-models for each of the 13 distinct sub-classes of lung abnormalities in the CheXpert dataset.
- Ensemble model (Method 2): Our ensemble model integrated the outputs of the trained sub-models, providing a powerful approach for the sub-classification of lung abnormalities.
- Exceptional accuracy: Our 3M-CNN and fused model achieved an accuracy of 98.79%, surpassing established methodologies, which is beneficial in resource-constrained environments embracing smartphone-based imaging.
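The fusion step, integrating per-sub-model outputs into one prediction, can be sketched as weighted probability averaging. This is an illustrative reading only; the paper's exact fusion rule may differ, and `fuse` is our own name.

```python
def fuse(per_model_probs, weights=None):
    """Fuse class-probability vectors from several sub-models.

    per_model_probs: one probability vector per sub-model, same length.
    weights: optional per-model weights; defaults to a plain average.
    Returns a renormalized fused probability vector.
    """
    n = len(per_model_probs)
    weights = weights or [1.0 / n] * n
    k = len(per_model_probs[0])
    fused = [sum(w * p[i] for w, p in zip(weights, per_model_probs))
             for i in range(k)]
    total = sum(fused)
    return [f / total for f in fused]
```

With equal weights this reduces to the familiar probability-averaging ensemble; unequal weights let stronger sub-models dominate.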
Affiliation(s)
- Suresh Kumar Samarla: Information Technology, Puducherry Technological University, Puducherry, India; CSE Department, SRKR Engineering College, Andhra Pradesh, India
- Maragathavalli P: Information Technology, Puducherry Technological University, Puducherry, India
8. Zeng M, Wang X, Chen W. Worldwide research landscape of artificial intelligence in lung disease: A scientometric study. Heliyon 2024; 10:e31129. PMID: 38826704. PMCID: PMC11141367. DOI: 10.1016/j.heliyon.2024.e31129.
Abstract
Purpose To perform a comprehensive bibliometric analysis of the application of artificial intelligence (AI) in lung disease and understand the current status and emerging trends of this field. Materials and methods AI-based lung disease research publications were selected from the Web of Science Core Collection. CiteSpace, VOSviewer, and Excel were used to analyze and visualize co-authorship, co-citation, and co-occurrence patterns among authors, keywords, countries/regions, references, and institutions in this field. Results Our study included a total of 5210 papers. The number of publications on AI in lung disease has shown explosive growth since 2017. China and the United States lead in publication numbers. The most productive authors were Li Weimin and Qian Wei, with Shanghai Jiaotong University as the most productive institution. Radiology was the most co-cited journal. Lung cancer and COVID-19 emerged as the most studied diseases. Deep learning, convolutional neural networks, lung cancer, and radiomics will be the focus of future research. Conclusions AI-based diagnosis and treatment of lung disease has become a research hotspot in recent years, yielding significant results. Future work should focus on establishing multimodal AI models that incorporate clinical, imaging, and laboratory information. Enhanced visualization of deep learning, AI-driven differential diagnosis models for lung disease, and the creation of international large-scale lung disease databases should also be considered.
Affiliation(s)
- Wei Chen: Department of Radiology, Southwest Hospital, Third Military Medical University, Chongqing, China
9. Liu J, Qi L, Xu Q, Chen J, Cui S, Li F, Wang Y, Cheng S, Tan W, Zhou Z, Wang J. A Self-supervised Learning-Based Fine-Grained Classification Model for Distinguishing Malignant From Benign Subcentimeter Solid Pulmonary Nodules. Acad Radiol 2024:S1076-6332(24)00287-3. PMID: 38777719. DOI: 10.1016/j.acra.2024.05.002.
Abstract
RATIONALE AND OBJECTIVES Diagnosing subcentimeter solid pulmonary nodules (SSPNs) remains challenging in clinical practice. Deep learning may perform better than conventional methods in differentiating benign and malignant pulmonary nodules. This study aimed to develop and validate a model for differentiating malignant and benign SSPNs using CT images. MATERIALS AND METHODS This retrospective study included consecutive patients with SSPNs detected between January 2015 and October 2021 as an internal dataset. Malignancy was confirmed pathologically; benignity was confirmed pathologically or via follow-up evaluations. The SSPNs were segmented manually. A self-supervision pre-training-based fine-grained network was developed for predicting SSPN malignancy. The pre-trained model was established using data from the National Lung Screening Trial, Lung Nodule Analysis 2016, and a database of 5478 pulmonary nodules from a previous study, with subsequent fine-tuning using the internal dataset. The model's efficacy was investigated using an external cohort from another center, and its accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were determined. RESULTS Overall, 1276 patients (mean age, 56 ± 10 years; 497 males) with 1389 SSPNs (mean diameter, 7.5 ± 2.0 mm; 625 benign) were enrolled. The internal dataset was specifically enriched for malignancy. The model's performance in the internal testing set (316 SSPNs) was: AUC, 0.964 (95% confidence interval [CI]: 0.942-0.986); accuracy, 0.934; sensitivity, 0.965; and specificity, 0.908. The model's performance in the external test set (202 SSPNs) was: AUC, 0.945 (95% CI: 0.910-0.979); accuracy, 0.911; sensitivity, 0.977; and specificity, 0.860. CONCLUSION This deep learning model was robust and exhibited good performance in predicting the malignancy of SSPNs, which could help optimize patient management.
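Accuracy, sensitivity, and specificity as reported above all derive from the same confusion matrix. A dependency-free sketch (our own illustration, not the study's code; malignant coded as 1):

```python
def binary_metrics(pred, truth):
    """Accuracy, sensitivity (recall on class 1), and specificity
    (recall on class 0) from paired 0/1 predictions and labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return {
        "accuracy": (tp + tn) / len(truth),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

The trade-off visible in the external test set (sensitivity 0.977 vs specificity 0.860) is exactly the balance these two recalls capture.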
Affiliation(s)
- Jianing Liu, Linlin Qi, Jiaqi Chen, Shulei Cui, Fenglan Li, Yawen Wang, Sainan Cheng, Jianwei Wang: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Qian Xu: Department of Computed Tomography and Magnetic Resonance, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Weixiong Tan, Zhen Zhou: Beijing Deepwise & League of PhD Technology Co. Ltd, Beijing, China
10. Thakur GK, Thakur A, Kulkarni S, Khan N, Khan S. Deep Learning Approaches for Medical Image Analysis and Diagnosis. Cureus 2024; 16:e59507. PMID: 38826977. PMCID: PMC11144045. DOI: 10.7759/cureus.59507.
Abstract
In addition to enhancing diagnostic accuracy, deep learning techniques offer the potential to streamline workflows, reduce interpretation time, and ultimately improve patient outcomes. The scalability and adaptability of deep learning algorithms enable their deployment across diverse clinical settings, ranging from radiology departments to point-of-care facilities. Furthermore, ongoing research efforts focus on addressing the challenges of data heterogeneity, model interpretability, and regulatory compliance, paving the way for seamless integration of deep learning solutions into routine clinical practice. As the field continues to evolve, collaborations between clinicians, data scientists, and industry stakeholders will be paramount in harnessing the full potential of deep learning for advancing medical image analysis and diagnosis. The integration of deep learning algorithms with other technologies, including natural language processing and computer vision, may also foster multimodal medical data analysis and clinical decision support systems to improve patient care. The future of deep learning in medical image analysis and diagnosis is promising: with each success and advancement, this technology moves closer to routine clinical use. Beyond medical image analysis, patient care pathways such as multimodal imaging, imaging genomics, and intelligent operating rooms or intensive care units can benefit from deep learning models.
Affiliation(s)
- Gopal Kumar Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Abhishek Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shridhar Kulkarni
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Naseebia Khan
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shahnawaz Khan
- Department of Computer Application, Bundelkhand University, Jhansi, IND
11
Zahari R, Cox J, Obara B. Uncertainty-aware image classification on 3D CT lung. Comput Biol Med 2024; 172:108324. [PMID: 38508053 DOI: 10.1016/j.compbiomed.2024.108324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 03/06/2024] [Accepted: 03/14/2024] [Indexed: 03/22/2024]
Abstract
Early detection of lung cancer is crucial for prolonging patient survival. Existing model architectures used in such systems have shown promising results. However, they lack reliability and robustness in their predictions, and the models are typically evaluated on a single dataset, making them overconfident when a new class is present. When uncertainty is quantified, uncertain images can be referred to medical experts for a second opinion. Thus, we propose an uncertainty-aware framework that includes three phases: data preprocessing, model selection and evaluation; uncertainty quantification (UQ); and uncertainty measurement and data referral for the classification of benign and malignant nodules using 3D CT images. To quantify the uncertainty, we employed three approaches: Monte Carlo Dropout (MCD), Deep Ensemble (DE), and Ensemble Monte Carlo Dropout (EMCD). We evaluated eight different deep learning models from the ResNet, DenseNet, and Inception network families, all of which achieved average F1 scores above 0.832; the highest average value, 0.845, was obtained using InceptionResNetV2. Furthermore, incorporating UQ demonstrated significant improvement in overall model performance. Upon evaluation of the uncertainty estimates, MCD outperforms the other UQ models except on the URecall metric, where DE and EMCD excel, implying that they are better at identifying incorrect predictions with higher uncertainty levels, which is vital in the medical field. Finally, we show that using a threshold for data referral can further improve performance, increasing accuracy up to 0.959.
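The Monte Carlo Dropout approach named in this abstract has a simple core: keep dropout active at test time, average many stochastic forward passes, and use the spread of the predictions as an uncertainty signal. A minimal sketch on a toy two-layer network (the network, weight names, and predictive-entropy measure are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mc_dropout_predict(weights, x, n_passes=100, p_drop=0.5, rng=None):
    """Approximate the predictive distribution by keeping dropout active at
    inference time and averaging over stochastic forward passes (MC Dropout)."""
    rng = np.random.default_rng(rng)
    probs = []
    for _ in range(n_passes):
        # Bernoulli mask on the hidden layer, rescaled as in inverted dropout
        mask = rng.binomial(1, 1.0 - p_drop, size=weights["W1"].shape[1]) / (1.0 - p_drop)
        h = np.maximum(weights["W1"].T @ x, 0.0) * mask      # ReLU hidden layer
        logits = weights["W2"].T @ h
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())                            # softmax per pass
    probs = np.array(probs)
    mean = probs.mean(axis=0)                                # predictive mean
    entropy = -np.sum(mean * np.log(mean + 1e-12))           # uncertainty score
    return mean, entropy
```

Images whose predictive entropy exceeds a chosen threshold would then be referred for a second opinion, which is the data-referral step the abstract describes.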
Affiliation(s)
- Rahimi Zahari
- School of Computing, Newcastle University, Newcastle upon Tyne, UK
- Julie Cox
- County Durham and Darlington NHS Foundation Trust, County Durham, UK
- Boguslaw Obara
- School of Computing, Newcastle University, Newcastle upon Tyne, UK; Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK.
12
Sindhu A, Jadhav U, Ghewade B, Bhanushali J, Yadav P. Revolutionizing Pulmonary Diagnostics: A Narrative Review of Artificial Intelligence Applications in Lung Imaging. Cureus 2024; 16:e57657. [PMID: 38707160 PMCID: PMC11070215 DOI: 10.7759/cureus.57657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Accepted: 04/04/2024] [Indexed: 05/07/2024] Open
Abstract
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly in pulmonary diagnostics. This comprehensive review explores the impact of AI on revolutionizing lung imaging, focusing on its applications in detecting abnormalities, diagnosing pulmonary conditions, and predicting disease prognosis. We provide an overview of traditional pulmonary diagnostic methods and highlight the importance of accurate and efficient lung imaging for early intervention and improved patient outcomes. Through the lens of AI, we examine machine learning algorithms, deep learning techniques, and natural language processing for analyzing radiology reports. Case studies and examples showcase the successful implementation of AI in pulmonary diagnostics, alongside challenges faced and lessons learned. Finally, we discuss future directions, including integrating AI into clinical workflows, ethical considerations, and the need for further research and collaboration in this rapidly evolving field. This review underscores the transformative potential of AI in enhancing the accuracy, efficiency, and accessibility of pulmonary healthcare.
Affiliation(s)
- Arman Sindhu
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Ulhas Jadhav
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Babaji Ghewade
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Jay Bhanushali
- Respiratory Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Pallavi Yadav
- Obstetrics and Gynecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
13
Jaksik R, Szumała K, Dinh KN, Śmieja J. Multiomics-Based Feature Extraction and Selection for the Prediction of Lung Cancer Survival. Int J Mol Sci 2024; 25:3661. [PMID: 38612473 PMCID: PMC11011391 DOI: 10.3390/ijms25073661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2024] [Revised: 03/19/2024] [Accepted: 03/20/2024] [Indexed: 04/14/2024] Open
Abstract
Lung cancer is a global health challenge, hindered by delayed diagnosis and the disease's complex molecular landscape. Accurate patient survival prediction is critical, motivating the exploration of various -omics datasets using machine learning methods. Leveraging multi-omics data, this study seeks to enhance the accuracy of survival prediction by proposing new feature extraction techniques combined with unbiased feature selection. Two lung adenocarcinoma multi-omics datasets, originating from the TCGA and CPTAC-3 projects, were employed for this purpose, emphasizing gene expression, methylation, and mutations as the most relevant data sources that provide features for the survival prediction models. Additionally, gene set aggregation was shown to be the most effective feature extraction method for mutation and copy number variation data. Using the TCGA dataset, we identified 32 molecular features that allowed the construction of a 2-year survival prediction model with an AUC of 0.839. The selected features were additionally tested on an independent CPTAC-3 dataset, achieving an AUC of 0.815 in nested cross-validation, which confirmed the robustness of the identified features.
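The nested cross-validation behind the reported 0.815 AUC can be illustrated with a toy stand-in model: an inner loop picks a hyperparameter (here, the number of selected features k), and only the outer held-out folds contribute to the reported AUC. The feature-scoring model below is a hypothetical placeholder, not the paper's survival model:

```python
import numpy as np

def rank_auc(y, s):
    """Mann-Whitney AUC from binary labels y and continuous scores s."""
    order = np.argsort(s)
    ranks = np.empty_like(s)
    ranks[order] = np.arange(1, len(s) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (y == 0).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def score_fn(X, y, fit_idx, eval_idx, k):
    """Toy model: keep the k features most correlated with y on the fit set,
    score evaluation samples by their mean over those features."""
    corr = np.abs([np.corrcoef(X[fit_idx, f], y[fit_idx])[0, 1] for f in range(X.shape[1])])
    top = np.argsort(corr)[-k:]
    return X[eval_idx][:, top].mean(axis=1)

def nested_cv_auc(X, y, candidate_ks, outer=5, inner=3, seed=0):
    """Nested CV: the inner loop selects k, the outer loop scores it on
    folds that never influenced the selection."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    outer_folds = np.array_split(idx, outer)
    aucs = []
    for i in range(outer):
        test = outer_folds[i]
        train = np.concatenate([outer_folds[j] for j in range(outer) if j != i])
        inner_folds = np.array_split(train, inner)
        best_k, best = candidate_ks[0], -1.0
        for k in candidate_ks:                     # inner model selection
            scores = []
            for j in range(inner):
                val = inner_folds[j]
                fit = np.concatenate([inner_folds[m] for m in range(inner) if m != j])
                scores.append(rank_auc(y[val], score_fn(X, y, fit, val, k)))
            if np.mean(scores) > best:
                best_k, best = k, np.mean(scores)
        aucs.append(rank_auc(y[test], score_fn(X, y, train, test, best_k)))
    return float(np.mean(aucs))
```

Because the outer test folds play no role in choosing k, the averaged AUC is an unbiased estimate of generalization, which is why the paper uses it to confirm feature robustness on the independent CPTAC-3 dataset.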
Affiliation(s)
- Roman Jaksik
- Department of Systems Biology and Engineering, Silesian University of Technology, 44-100 Gliwice, Poland;
- Kamila Szumała
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, 44-100 Gliwice, Poland;
- Khanh Ngoc Dinh
- Irving Institute for Cancer Dynamics and Department of Statistics, Columbia University, New York, NY 10027, USA;
- Jarosław Śmieja
- Department of Systems Biology and Engineering, Silesian University of Technology, 44-100 Gliwice, Poland;
14
Benzaquen J, Hofman P, Lopez S, Leroy S, Rouis N, Padovani B, Fontas E, Marquette CH, Boutros J. Integrating artificial intelligence into lung cancer screening: a randomised controlled trial protocol. BMJ Open 2024; 14:e074680. [PMID: 38355174 PMCID: PMC10868245 DOI: 10.1136/bmjopen-2023-074680] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 12/21/2023] [Indexed: 02/16/2024] Open
Abstract
INTRODUCTION Lung cancer (LC) is the most common cause of cancer-related deaths worldwide. Its early detection can be achieved with a CT scan. Two large randomised trials proved the efficacy of low-dose CT (LDCT)-based lung cancer screening (LCS) in high-risk populations, with a decrease in specific mortality of 20%-25%. Nonetheless, implementing LCS on a large scale faces obstacles: the low number of thoracic radiologists and CT scans available for the eligible population, the high frequency of false-positive screening results, and the long period of indeterminacy of nodules, which can reach up to 24 months and is a source of prolonged anxiety and of multiple costly examinations with possible side effects. Deep learning, an artificial intelligence solution, has shown promising results in retrospective trials for detecting and characterising lung nodules. However, until now, no prospective study has demonstrated its importance in a real-life setting. METHODS AND ANALYSIS This open-label randomised controlled study focuses on LCS for patients aged 50-80 years who have smoked more than 20 pack-years, whether active smokers or having quit less than 15 years ago. Its objective is to determine whether assisting a multidisciplinary team (MDT) with a 3D convolutional network-based analysis of screening chest CT scans accelerates the definitive classification of nodules into malignant or benign. 2722 patients will be included, with the aim of demonstrating a 3-month reduction in the delay between lung nodule detection and its definitive classification as benign or malignant. ETHICS AND DISSEMINATION The sponsor of this study is the University Hospital of Nice. The study was approved for France by the ethical committee CPP (Comités de Protection des Personnes) Sud-Ouest et outre-mer III (No. 2022-A01543-40) and the Agence Nationale du Medicament et des produits de Santé (Ministry of Health) in December 2023.
The findings of the trial will be disseminated through peer-reviewed journals and national and international conference presentations. TRIAL REGISTRATION NUMBER NCT05704920.
Affiliation(s)
- Jonathan Benzaquen
- Department of Pulmonary Medicine and Thoracic Oncology, FHU OncoAge, IHU RespirERA, Centre Hospitalier Universitaire de Nice, Nice, France
- Paul Hofman
- Laboratory of Clinical and Experimental Pathology, FHU OncoAge, IHU RespirERA, Universite Cote d'Azur, Centre hospitalier Universitaire de Nice, Nice, France
- Sylvie Leroy
- Department of Pulmonary Medicine and Thoracic Oncology, FHU OncoAge, IHU RespirERA, Centre Hospitalier Universitaire de Nice, Nice, France
- Institut de Pharmacologie Moléculaire et Cellulaire, Nice, France
- Nesrine Rouis
- Department of Pulmonary Medicine and Thoracic Oncology, FHU OncoAge, IHU RespirERA, Centre Hospitalier Universitaire de Nice, Nice, France
- Bernard Padovani
- Department of Radiology, Centre Hospitalier Universitaire de Nice, Nice, France
- Eric Fontas
- Délégation à la Recherche Clinique et à l'Innovation, Centre Hospitalier Universitaire de Nice, Nice, France
- Charles Hugo Marquette
- Department of Pulmonary Medicine and Thoracic Oncology, FHU OncoAge, IHU RespirERA, Centre Hospitalier Universitaire de Nice, Nice, France
- Jacques Boutros
- Department of Pulmonary Medicine and Thoracic Oncology, FHU OncoAge, IHU RespirERA, Centre Hospitalier Universitaire de Nice, Nice, France
15
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114 PMCID: PMC10894909 DOI: 10.3389/fonc.2024.1281922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Accepted: 01/19/2024] [Indexed: 02/28/2024] Open
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancements in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper aims to study the recent achievements of deep learning-based mammography for breast cancer detection and classification. This review highlights the potential of deep learning-assisted X-ray mammography in improving the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that the research findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
16
Mese I, Altintas Taslicay C, Sivrioglu AK. Synergizing photon-counting CT with deep learning: potential enhancements in medical imaging. Acta Radiol 2024; 65:159-166. [PMID: 38146126 DOI: 10.1177/02841851231217995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2023]
Abstract
This review article highlights the potential of integrating photon-counting computed tomography (CT) and deep learning algorithms in medical imaging to enhance diagnostic accuracy, improve image quality, and reduce radiation exposure. The use of photon-counting CT provides superior image quality, reduced radiation dose, and material decomposition capabilities, while deep learning algorithms excel in automating image analysis and improving diagnostic accuracy. The integration of these technologies can lead to enhanced material decomposition and classification, spectral image analysis, predictive modeling for individualized medicine, workflow optimization, and radiation dose management. However, data requirements, computational resources, and regulatory and ethical concerns remain challenges that need to be addressed to fully realize the potential of this technology. The fusion of photon-counting CT and deep learning algorithms is poised to revolutionize medical imaging and transform patient care.
Affiliation(s)
- Ismail Mese
- Department of Radiology, Health Sciences University, Erenkoy Mental Health and Neurology Training and Research Hospital, Istanbul, Turkey
17
Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023; 165:107437. [PMID: 37717526 DOI: 10.1016/j.compbiomed.2023.107437] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 08/20/2023] [Accepted: 08/28/2023] [Indexed: 09/19/2023]
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, indefatigable diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD system pipeline. The literature on lung segmentation and lung nodule detection mostly comprises techniques that process 3-D volumes or 2-D slices, along with surveys of such techniques. However, surveys highlighting 2.5D techniques for lung segmentation and lung nodule detection are still lacking. This paper presents a background and discussion on 2.5D methods to fill this gap. It also gives a taxonomy of 2.5D approaches with a detailed description of each. Based on this taxonomy, various 2.5D techniques for lung segmentation and lung nodule detection are clustered into these 2.5D approaches, followed by possible future work in this direction.
18
Subashchandrabose U, John R, Anbazhagu UV, Venkatesan VK, Thyluru Ramakrishna M. Ensemble Federated Learning Approach for Diagnostics of Multi-Order Lung Cancer. Diagnostics (Basel) 2023; 13:3053. [PMID: 37835796 PMCID: PMC10572651 DOI: 10.3390/diagnostics13193053] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2023] [Revised: 09/20/2023] [Accepted: 09/24/2023] [Indexed: 10/15/2023] Open
Abstract
The early detection and classification of lung cancer are crucial for improving patient outcomes. However, traditional classification methods are based on single machine learning models and are therefore limited by the availability and quality of data at the centralized computing server. In this paper, we propose an ensemble Federated Learning-based approach for multi-order lung cancer classification. This approach combines multiple machine learning models trained on different datasets, allowing for improved accuracy and generalization. Moreover, the Federated Learning approach enables the use of distributed data while ensuring data privacy and security. We evaluate the approach on a Kaggle cancer dataset and compare the results with traditional machine learning models. The results demonstrate an accuracy of 89.63% for lung cancer classification.
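The Federated Learning idea referenced here is commonly realized as FedAvg: clients train locally on their own data, and a server averages the resulting weights weighted by local dataset size, so raw data never leaves a client. A minimal sketch with a toy logistic-regression client (the model and parameters are illustrative assumptions, not the paper's ensemble):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """A few epochs of logistic-regression gradient descent on one client's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def fed_avg(client_data, rounds=10, dim=2):
    """FedAvg: each round, clients train locally and the server averages
    their weights, weighted by local dataset size."""
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    for _ in range(rounds):
        local = [local_update(w.copy(), X, y) for X, y in client_data]
        w = np.average(local, axis=0, weights=sizes)
    return w
```

Only weight vectors travel between clients and server, which is the privacy property the abstract highlights; an ensemble variant would keep several such models and combine their predictions.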
Affiliation(s)
- Rajan John
- Department of Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia;
- Usha Veerasamy Anbazhagu
- Department of Computing Technologies, School of Computing, Faculty of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur, Chennai 603203, India;
- Vinoth Kumar Venkatesan
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore 632014, India
- Mahesh Thyluru Ramakrishna
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-Be University), Bangalore 560066, India
19
Hung SC, Wang YT, Tseng MH. An Interpretable Three-Dimensional Artificial Intelligence Model for Computer-Aided Diagnosis of Lung Nodules in Computed Tomography Images. Cancers (Basel) 2023; 15:4655. [PMID: 37760624 PMCID: PMC10526230 DOI: 10.3390/cancers15184655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2023] [Revised: 09/16/2023] [Accepted: 09/17/2023] [Indexed: 09/29/2023] Open
Abstract
Lung cancer is typically classified into small-cell carcinoma and non-small-cell carcinoma. Non-small-cell carcinoma accounts for approximately 85% of all lung cancers. Low-dose chest computed tomography (CT) can quickly and non-invasively diagnose lung cancer. In the era of deep learning, an artificial intelligence (AI) computer-aided diagnosis system can be developed for the automatic recognition of CT images of patients, creating a new form of intelligent medical service. For many years, lung cancer has been the leading cause of cancer-related deaths in Taiwan, with smoking and air pollution increasing the likelihood of developing the disease. The incidence of lung adenocarcinoma in never-smoking women has also increased significantly in recent years, resulting in an important public health problem. Early detection of lung cancer and prompt treatment can help reduce the mortality rate of patients with lung cancer. In this study, an improved 3D interpretable hierarchical semantic convolutional neural network named HSNet was developed and validated for the automatic diagnosis of lung cancer based on a collection of lung nodule images. The interpretable AI model proposed in this study, with different training strategies and adjustment of model parameters, such as cyclic learning rate and random weight averaging, demonstrated better diagnostic performance than the previous literature, with results of a four-fold cross-validation procedure showing calcification: 0.9873 ± 0.006, margin: 0.9207 ± 0.009, subtlety: 0.9026 ± 0.014, texture: 0.9685 ± 0.006, sphericity: 0.8652 ± 0.021, and malignancy: 0.9685 ± 0.006.
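Two of the training strategies named in this abstract, cyclic learning rates and averaging of weights along the training trajectory, can be sketched compactly (the triangular schedule shape and parameter values are illustrative assumptions, not HSNet's exact configuration):

```python
import numpy as np

def cyclic_lr(step, base_lr=1e-4, max_lr=1e-2, cycle_len=100):
    """Triangular cyclic learning rate: rises linearly from base_lr to max_lr
    over the first half of each cycle, then falls back to base_lr."""
    pos = (step % cycle_len) / cycle_len        # position within the cycle, [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)            # 0 -> 1 -> 0 triangle wave
    return base_lr + (max_lr - base_lr) * tri

def swa_average(checkpoints):
    """Stochastic weight averaging: a uniform average of weight checkpoints
    collected at several points along the training trajectory."""
    return {k: np.mean([c[k] for c in checkpoints], axis=0) for k in checkpoints[0]}
```

The cyclic schedule periodically revisits high learning rates so training explores several minima, and SWA averages the checkpoints collected near those minima into a single, flatter solution.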
Affiliation(s)
- Sheng-Chieh Hung
- Master Program in Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan;
- Yao-Tung Wang
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan;
- Division of Pulmonary Medicine, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- Ming-Hseng Tseng
- Master Program in Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan;
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Information Technology Office, Chung Shan Medical University Hospital, Taichung 402, Taiwan
20
Chowdhury NA, Wang L, Gu L, Kaya M. Exploring the Potential of Sensing for Breast Cancer Detection. APPLIED SCIENCES 2023; 13:9982. [DOI: 10.3390/app13179982] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/11/2024]
Abstract
Breast cancer is a widespread global problem. Biomarkers are active substances that have been considered signatures of the existence and evolution of cancer. Early screening of different biomarkers associated with breast cancer can help doctors design a treatment plan. However, each screening technique for breast cancer has limitations; in most cases, a single technique can detect a single biomarker at a specific time. In this study, we address different types of biomarkers associated with breast cancer. This review article presents a detailed picture of different techniques and each technique's associated mechanism, sensitivity, limit of detection, and linear range for breast cancer detection at early stages. The limitations of existing approaches require researchers to modify and develop new methods to identify cancer biomarkers at early stages.
Affiliation(s)
- Nure Alam Chowdhury
- Department of Biomedical Engineering and Science, Florida Institute of Technology, Melbourne, FL 32901, USA
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
- Linxia Gu
- Department of Biomedical Engineering and Science, Florida Institute of Technology, Melbourne, FL 32901, USA
- Mehmet Kaya
- Department of Biomedical Engineering and Science, Florida Institute of Technology, Melbourne, FL 32901, USA
21
Ruiz-Fresneda MA, Gijón A, Morales-Álvarez P. Bibliometric analysis of the global scientific production on machine learning applied to different cancer types. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2023; 30:96125-96137. [PMID: 37566331 PMCID: PMC10482761 DOI: 10.1007/s11356-023-28576-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 06/29/2023] [Indexed: 08/12/2023]
Abstract
Cancer is one of the main causes of death in the world, with millions of annual cases in recent decades. The need to find a cure has stimulated the search for efficient treatments and diagnostic procedures. One of the most promising tools to emerge against cancer in recent years is machine learning (ML), which has given rise to a huge number of scientific papers in a relatively short period of time. The present study analyzes global scientific production on ML applied to the most relevant cancer types through various bibliometric indicators. We find that over 30,000 studies have been published so far and observe that the cancers with the highest number of published studies using ML (breast, lung, and colon cancer) are those with the highest incidence, with the USA and China being the main scientific producers on the subject. Interestingly, the role of China and Japan in stomach cancer research is correlated with the number of cases of this cancer type in Asia (78% of the worldwide cases). Knowing the countries and institutions that most study each area can be of great help for improving international collaborations between research groups and countries. Our analysis shows that medical and computer science journals lead the number of publications on the subject and could be useful for researchers in the field. Finally, keyword co-occurrence analysis suggests that ML-cancer research trends focus not only on the use of ML as an effective diagnostic method, but also on the improvement of radiotherapy- and chemotherapy-based treatments.
Affiliation(s)
- Alfonso Gijón
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain
- Pablo Morales-Álvarez
- Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain
- Department of Statistics and Operations Research, University of Granada, Granada, Spain
22
Wang L. Microwave Imaging and Sensing Techniques for Breast Cancer Detection. MICROMACHINES 2023; 14:1462. [PMID: 37512773 PMCID: PMC10385169 DOI: 10.3390/mi14071462] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 07/14/2023] [Accepted: 07/17/2023] [Indexed: 07/30/2023]
Abstract
Medical imaging techniques, including X-ray mammography, ultrasound, and magnetic resonance imaging, play a crucial role in the timely identification and monitoring of breast cancer. However, these conventional imaging modalities have their limitations, and there is a need for a more accurate and sensitive alternative. Microwave imaging has emerged as a promising technique for breast cancer detection due to its non-ionizing, non-invasive, and cost-effective nature. Recent advancements in microwave imaging and sensing techniques have opened up new possibilities for the early diagnosis and treatment of breast cancer. By combining microwave sensing with machine learning techniques, microwave imaging approaches can rapidly and affordably identify and classify breast tumors. This manuscript provides a comprehensive overview of the latest developments in microwave imaging and sensing techniques for the early detection of breast cancer. It discusses the principles and applications of microwave imaging and highlights its advantages over conventional imaging modalities. The manuscript also delves into integrating machine learning algorithms to enhance the accuracy and efficiency of microwave imaging in breast cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
23
Sorrentino S, Manetti F, Bresci A, Vernuccio F, Ceconello C, Ghislanzoni S, Bongarzone I, Vanna R, Cerullo G, Polli D. Deep ensemble learning and transfer learning methods for classification of senescent cells from nonlinear optical microscopy images. Front Chem 2023; 11:1213981. [PMID: 37426334 PMCID: PMC10326547 DOI: 10.3389/fchem.2023.1213981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 06/14/2023] [Indexed: 07/11/2023] Open
Abstract
The success of chemotherapy and radiotherapy anti-cancer treatments can result in tumor suppression or senescence induction. Senescence was previously considered a favorable therapeutic outcome, until recent advancements in oncology research evidenced senescence as one of the culprits of cancer recurrence. Its detection requires multiple assays, and nonlinear optical (NLO) microscopy provides a solution for fast, non-invasive, and label-free detection of therapy-induced senescent cells. Here, we develop several deep learning architectures to perform binary classification between senescent and proliferating human cancer cells using NLO microscopy images and we compare their performances. As a result of our work, we demonstrate that the most performing approach is the one based on an ensemble classifier, that uses seven different pre-trained classification networks, taken from literature, with the addition of fully connected layers on top of their architectures. This approach achieves a classification accuracy of over 90%, showing the possibility of building an automatic, unbiased senescent cells image classifier starting from multimodal NLO microscopy data. Our results open the way to a deeper investigation of senescence classification via deep learning techniques with a potential application in clinical diagnosis.
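The ensemble classifier described here combines the outputs of several pre-trained networks. The combination step alone can be sketched as soft voting over member probabilities (the authors instead add trainable fully connected layers on top of each member; the simple averaging below is an illustrative simplification):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(member_logits, weights=None):
    """Soft-voting ensemble: average member class probabilities (optionally
    weighted) and take the argmax as the ensemble decision."""
    probs = np.stack([softmax(l) for l in member_logits])  # (members, n, classes)
    avg = np.average(probs, axis=0, weights=weights)       # (n, classes)
    return avg.argmax(axis=1), avg
```

Averaging probabilities rather than hard labels lets a confident minority member outvote uncertain ones only when its probability mass justifies it, which is part of why such ensembles tend to beat their best single member.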
Affiliation(s)
- Arianna Bresci
- Department of Physics, Politecnico di Milano, Milan, Italy
- Silvia Ghislanzoni
- Department of Advanced Diagnostics, Fondazione IRCCS Istituto Nazionale dei Tumori Milano, Milan, Italy
- Italia Bongarzone
- Department of Advanced Diagnostics, Fondazione IRCCS Istituto Nazionale dei Tumori Milano, Milan, Italy
- Renzo Vanna
- CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
- Giulio Cerullo
- Department of Physics, Politecnico di Milano, Milan, Italy
- CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
- Dario Polli
- Department of Physics, Politecnico di Milano, Milan, Italy
- CNR-Institute for Photonics and Nanotechnologies (CNR-IFN), Milan, Italy
24
Irshad RR, Hussain S, Sohail SS, Zamani AS, Madsen DØ, Alattab AA, Ahmed AAA, Norain KAA, Alsaiari OAS. A Novel IoT-Enabled Healthcare Monitoring Framework and Improved Grey Wolf Optimization Algorithm-Based Deep Convolution Neural Network Model for Early Diagnosis of Lung Cancer. SENSORS (BASEL, SWITZERLAND) 2023; 23:2932. [PMID: 36991642 PMCID: PMC10052730 DOI: 10.3390/s23062932] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/02/2023] [Accepted: 03/03/2023] [Indexed: 06/19/2023]
Abstract
Lung cancer is a high-risk disease that causes mortality worldwide; nevertheless, lung nodules are the main manifestation that can help to diagnose lung cancer at an early stage, lowering the workload of radiologists and boosting the rate of diagnosis. Artificial intelligence-based neural networks are promising technologies for automatically detecting lung nodules using patient monitoring data acquired from sensor technology through an Internet-of-Things (IoT)-based patient monitoring system. However, standard neural networks rely on manually acquired features, which reduces the effectiveness of detection. In this paper, we provide a novel IoT-enabled healthcare monitoring platform and an improved grey-wolf optimization (IGWO)-based deep convolutional neural network (DCNN) model for lung cancer detection. The Tasmanian Devil Optimization (TDO) algorithm is utilized to select the most pertinent features for diagnosing lung nodules, and the convergence rate of the standard grey wolf optimization (GWO) algorithm is modified, resulting in an improved GWO algorithm. Consequently, an IGWO-based DCNN is trained on the optimal features obtained from the IoT platform, and the findings are saved in the cloud for the doctor's judgment. The model is built on an Android platform with DCNN-enabled Python libraries, and the findings are evaluated against cutting-edge lung cancer detection models.
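The standard grey wolf optimization that the authors modify updates each wolf toward the three best solutions (alpha, beta, delta) while an exploration coefficient a decays linearly from 2 to 0; the paper's contribution is to alter this convergence schedule. A sketch of the unmodified algorithm on a toy objective (bounds, population size, and iteration count are illustrative assumptions):

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Standard GWO: wolves move toward alpha, beta, and delta; the coefficient
    a decays linearly from 2 (exploration) to 0 (exploitation)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three best wolves
        a = 2.0 * (1 - t / iters)                         # linear decay, 2 -> 0
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])             # distance to leader
                moves.append(leader - A * D)
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)], float(fitness.min())
```

Changing how a decays over iterations changes the balance between exploration and exploitation, which is the lever the IGWO variant adjusts.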
Affiliation(s)
- Reyazur Rashid Irshad
- Department of Computer Science, College of Science and Arts, Najran University, Sharurah 68341, Saudi Arabia
- Shahid Hussain
- Department of Computer Science and Engineering, Sejong University, Seoul 30019, Republic of Korea
- Shahab Saquib Sohail
- Department of Computer Science and Engineering, School of Engineering Sciences and Technology, Jamia Hamdard, New Delhi 110062, India
- Abu Sarwar Zamani
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Dag Øivind Madsen
- USN School of Business, University of South-Eastern Norway, 3511 Hønefoss, Norway
- Ahmed Abdu Alattab
- Department of Computer Science, College of Science and Arts, Najran University, Sharurah 68341, Saudi Arabia
- Department of Computer Science, Faculty of Computer Science and Information Systems, Thamar University, Thamar 87246, Yemen
- Omar Ali Saleh Alsaiari
- Department of Computer Science, College of Science and Arts, Najran University, Sharurah 68341, Saudi Arabia
|
25
|
Thirumagal E, Saruladha K. Lung cancer diagnosis using Hessian adaptive learning optimization in generative adversarial networks. Soft comput 2023. [DOI: 10.1007/s00500-023-07877-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
26
|
Medical Images Segmentation for Lung Cancer Diagnosis Based on Deep Learning Architectures. Diagnostics (Basel) 2023; 13:diagnostics13030546. [PMID: 36766655 PMCID: PMC9914913 DOI: 10.3390/diagnostics13030546] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 01/28/2023] [Accepted: 01/29/2023] [Indexed: 02/05/2023] Open
Abstract
Lung cancer is one of the leading causes of mortality worldwide. Lung image analysis and segmentation are among the primary steps in early cancer diagnosis, but manual medical image segmentation is a very time-consuming task for radiation oncologists. To address this problem, we develop a complete system for early diagnosis of lung cancer from CT scan imaging. The proposed system is composed of two main parts: a segmentation part developed on top of the UNETR network, and a classification part, developed on top of a self-supervised network, that labels the segmentation output as either benign or malignant. Extensive training and testing experiments were performed using the Decathlon dataset and yielded new state-of-the-art performance: 97.83% segmentation accuracy and 98.77% classification accuracy. The proposed system presents a powerful new tool for early diagnosis and for combatting lung cancer using 3D-input CT scan data.
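The two-stage design described above (segmentation feeding a benign/malignant classifier) can be wired as in the minimal sketch below. A toy intensity threshold and a mean-intensity rule stand in for the trained UNETR and self-supervised networks, so only the pipeline structure is shown, not the paper's models.

```python
import numpy as np

def segment(volume, threshold=0.5):
    """Stand-in for the segmentation stage: a binary mask of candidate
    nodule voxels (a real system would run a trained 3D network here)."""
    return (volume > threshold).astype(np.uint8)

def classify(volume, mask):
    """Stand-in for the classification stage: label the segmented region
    benign (0) or malignant (1). A toy intensity rule replaces the
    paper's self-supervised network."""
    if mask.sum() == 0:
        return 0  # nothing segmented: treat as benign
    return int(volume[mask.astype(bool)].mean() > 0.8)

def diagnose(volume):
    """Two-stage pipeline: the segmentation output feeds the classifier."""
    mask = segment(volume)
    return classify(volume, mask), mask
```

The key design point carried over from the abstract is the interface: the classifier consumes the segmentation output rather than the raw volume, so each stage can be trained and swapped independently.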
|