1. Wang Y, Guo Y, Wang Z, Yu L, Yan Y, Gu Z. Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation. PLoS One 2024; 19:e0299623. [PMID: 38913621; PMCID: PMC11195943; DOI: 10.1371/journal.pone.0299623]
Abstract
BACKGROUND In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-rays and CT. However, there remains a demand for improved contrast enhancement methods given the heterogeneity of datasets and the varying contrast across different anatomic structures. METHOD This study proposes a novel preprocessing technique, ps-KDE, and investigates its impact on deep learning algorithms that segment major organs in posterior-anterior chest X-rays. ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. RESULTS The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained on CLAHE achieved a Dice score of 0.717 (SD = 0.19), p<0.01. ps-KDE also appears to be more robust, as the CLAHE-based left-lung model misclassified right lungs in some test images. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. DISCUSSION Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification.
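The pixel-substitution idea described in this abstract can be sketched roughly as follows. This is a hedged illustration that uses a Gaussian-smoothed histogram as a stand-in for true kernel density estimation; it is not the authors' released code (which is at the GitHub link above), and all function and parameter names are hypothetical.

```python
import numpy as np

def ps_kde_transform(images, bins=256, bandwidth=5.0):
    """Map each pixel intensity to its smoothed, normalized frequency
    estimated across the whole image collection (a histogram-based
    approximation of kernel density estimation)."""
    stack = np.stack(images).astype(np.float64)
    hist, _ = np.histogram(stack, bins=bins, range=(0, 256))
    # Gaussian smoothing of the histogram approximates a KDE over intensities.
    xs = np.arange(bins)
    kernel = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / bandwidth) ** 2)
    density = kernel @ hist
    density /= density.max()                 # normalize to [0, 1]
    lut = (density * 255).astype(np.uint8)   # lookup table: intensity -> new value
    return [lut[img.astype(np.uint8)] for img in images]
```

Frequent intensities are mapped to high output values and rare ones to low values, which is one plausible reading of "substituting pixel values based on their normalized frequency across all images".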
Affiliation(s)
- Yuanchen Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Guo
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Ziqi Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Linzi Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Yan
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Zifan Gu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
2. Alsubai S. Transfer learning based approach for lung and colon cancer detection using local binary pattern features and explainable artificial intelligence (AI) techniques. PeerJ Comput Sci 2024; 10:e1996. [PMID: 38660170; PMCID: PMC11042027; DOI: 10.7717/peerj-cs.1996]
Abstract
Cancer, a life-threatening disorder caused by genetic abnormalities and metabolic irregularities, poses a substantial health threat, with lung and colon cancer being major contributors to death. Histopathological identification is critical in directing effective treatment regimens for these cancers. The earlier these disorders are identified, the lower the risk of death. The use of machine learning and deep learning approaches has the potential to speed up cancer diagnosis by allowing researchers to analyse large patient databases quickly and affordably. This study introduces the Inception-ResNetV2 model with strategically incorporated local binary pattern (LBP) features to improve diagnostic accuracy for lung and colon cancer identification. The model is trained on histopathological images, and the integration of deep learning and texture-based features has demonstrated exceptional performance with 99.98% accuracy. Importantly, the study employs explainable artificial intelligence (AI) through SHapley Additive exPlanations (SHAP) to unravel the complex inner workings of deep learning models, providing transparency in decision-making. This study highlights the potential to revolutionize cancer diagnosis in an era of more accurate and reliable medical assessments.
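As a rough illustration of the local binary pattern texture features this entry incorporates, here is a minimal 8-neighbour LBP sketch; the helper name is hypothetical, and production work would typically use an established implementation such as `skimage.feature.local_binary_pattern`.

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale image:
    each interior pixel gets an 8-bit code, one bit per neighbour that is
    greater than or equal to the centre pixel."""
    img = image.astype(np.int32)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    return codes
```

Histograms of such codes are what get concatenated with CNN features in hybrid texture-plus-deep-learning pipelines of this kind.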
Affiliation(s)
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
3. Nam HK, Lea WWI, Yang Z, Noh E, Rhie YJ, Lee KH, Hong SJ. Clinical validation of a deep-learning-based bone age software in healthy Korean children. Ann Pediatr Endocrinol Metab 2024; 29:102-108. [PMID: 38271993; PMCID: PMC11076234; DOI: 10.6065/apem.2346050.025]
Abstract
PURPOSE Bone age (BA) is needed to assess developmental status and growth disorders. We evaluated the clinical performance of deep-learning-based BA software in estimating the chronological age (CA) of healthy Korean children. METHODS This retrospective study included 371 healthy children (217 boys, 154 girls), aged between 4 and 17 years, who visited the Department of Pediatrics for health check-ups between January 2017 and December 2018. A total of 553 left-hand radiographs from 371 healthy Korean children were evaluated using commercial deep-learning-based BA software (BoneAge, Vuno, Seoul, Korea). The clinical performance of the deep learning (DL) software was determined using the concordance rate and Bland-Altman analysis via comparison with the CA. RESULTS A 2-sample t-test (P<0.001) and Fisher exact test (P=0.011) showed a significant difference between the CA and the BA estimated by the DL software. There was good correlation between the 2 variables (r=0.96, P<0.001); however, the root mean square error was 15.4 months. With a 12-month cutoff, the concordance rate was 58.8%. The Bland-Altman plot showed that the DL software tended to underestimate the BA compared with the CA, especially in children under the age of 8.3 years. CONCLUSION The DL-based BA software showed a low concordance rate and a tendency to underestimate the BA in healthy Korean children.
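The evaluation described above (concordance rate with a 12-month cutoff, plus Bland-Altman bias and limits of agreement) can be sketched as follows; the function name and interface are illustrative, not the study's actual analysis code.

```python
import numpy as np

def concordance_and_bias(bone_age, chron_age, cutoff=12.0):
    """Concordance rate (|BA - CA| <= cutoff, in months) and
    Bland-Altman bias with 95% limits of agreement."""
    ba = np.asarray(bone_age, dtype=float)
    ca = np.asarray(chron_age, dtype=float)
    diff = ba - ca
    concordance = float(np.mean(np.abs(diff) <= cutoff))
    bias = float(diff.mean())                       # mean difference (BA - CA)
    spread = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
    return concordance, bias, (bias - spread, bias + spread)
```

A negative bias, as reported here, corresponds to systematic underestimation of BA relative to CA.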
Affiliation(s)
- Hyo-Kyoung Nam
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Winnah Wu-In Lea
- Department of Radiology, Korea University College of Medicine, Seoul, Korea
- Zepa Yang
- Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea
- Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
- Eunjin Noh
- Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea
- Young-Jun Rhie
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Kee-Hyoung Lee
- Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Suk-Joo Hong
- Department of Radiology, Korea University College of Medicine, Seoul, Korea
- Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
4. Usuzaki T, Takahashi K, Takagi H, Ishikuro M, Obara T, Yamaura T, Kamimoto M, Majima K. Efficacy of exponentiation method with a convolutional neural network for classifying lung nodules on CT images by malignancy level. Eur Radiol 2023; 33:9309-9319. [PMID: 37477673; DOI: 10.1007/s00330-023-09946-w]
Abstract
OBJECTIVES The aim of this study was to examine the performance of a convolutional neural network (CNN) combined with exponentiation of each pixel value in classifying benign and malignant lung nodules on computed tomography (CT) images. MATERIALS AND METHODS Images in the Lung Image Database Consortium-Image Database Resource Initiative (LIDC-IDRI) were analyzed. Four CNN models were then constructed to classify the lung nodules by malignancy level (malignancy level 1 vs. 2, 1 vs. 3, 1 vs. 4, and 1 vs. 5). The exponentiation method was applied for exponent values of 1.0 to 10.0 in increments of 0.5. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC) were calculated. These statistics were compared between an exponent value of 1.0 and all other exponent values in each model by the Mann-Whitney U-test. RESULTS In malignancy 1 vs. 4, maximum test accuracy (MTA; at exponent values 2.0, 3.0, 3.5, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, and 10.0) and specificity (at 6.5, 7.0, and 9.0) were improved by up to 0.012 and 0.037, respectively. In malignancy 1 vs. 5, MTA (at 6.5 and 7.0) and sensitivity (at 1.5) were improved by up to 0.030 and 0.0040, respectively. CONCLUSIONS The exponentiation method improved the performance of the CNN in classifying lung nodules on CT images as benign or malignant. It demonstrated two advantages: improved accuracy, and the ability to adjust sensitivity and specificity by selecting an appropriate exponent value. CLINICAL RELEVANCE STATEMENT Adjusting sensitivity and specificity by selecting an exponent value enables the construction of appropriate CNN models for screening, diagnosis, and treatment processes among patients with lung nodules. KEY POINTS
• The exponentiation method improved the performance of the convolutional neural network.
• Contrast accentuation by the exponentiation method may help derive features of lung nodules.
• Sensitivity and specificity can be adjusted by selecting an exponent value.
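A minimal sketch of the pixel-exponentiation preprocessing described above (normalize, raise to a power, rescale); the function name is illustrative, and the min-max normalization is an assumption since the paper's exact scaling is not given here.

```python
import numpy as np

def exponentiate(image, exponent):
    """Raise normalized pixel values to a power to accentuate contrast,
    then rescale back to the 8-bit range."""
    x = image.astype(np.float64)
    x = (x - x.min()) / max(x.max() - x.min(), 1e-12)  # normalize to [0, 1]
    return (x ** exponent * 255).astype(np.uint8)
```

Exponents greater than 1 darken mid-tones while leaving the extremes fixed, which is how sweeping the exponent from 1.0 to 10.0 shifts the balance between sensitivity and specificity.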
Affiliation(s)
- Takuma Usuzaki
- Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi, 980-8574, Japan.
- Kengo Takahashi
- Tohoku University Graduate School of Medicine, Sendai, Japan
- Hidenobu Takagi
- Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-Machi, Aoba-Ku, Sendai, Miyagi, 980-8574, Japan
- Department of Advanced MRI Collaborative Research, Graduate School of Medicine, Tohoku University, Sendai, Japan
- Mami Ishikuro
- Division of Molecular Epidemiology, Graduate School of Medicine, Tohoku University, Sendai, Miyagi, Japan
- Taku Obara
- Division of Molecular Epidemiology, Graduate School of Medicine, Tohoku University, Sendai, Miyagi, Japan
- Division of Molecular Epidemiology, Department of Preventive Medicine and Epidemiology, Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Department of Pharmaceutical Sciences, Tohoku University Hospital, Sendai, Japan
5. Kang CC, Lee TY, Lim WF, Yeo WWY. Opportunities and challenges of 5G network technology toward precision medicine. Clin Transl Sci 2023; 16:2078-2094. [PMID: 37702288; PMCID: PMC10651640; DOI: 10.1111/cts.13640]
Abstract
Moving away from traditional "one-size-fits-all" treatment toward precision-based medicine has tremendously improved disease prognosis, diagnostic accuracy, disease progression prediction, and targeted treatment. Cutting-edge 5G network technology is enabling a growing trend in precision medicine, extending its utility and value into the smart healthcare system. 5G network technology will bring together big data, artificial intelligence, and machine learning to provide the essential levels of connectivity for a new health ecosystem oriented toward precision medicine. In a 5G-enabled health ecosystem, applications involve predictive and preventative measures that enable advances in patient personalization. This review aims to discuss the opportunities, challenges, and prospects of 5G network technology in delivering personalized treatments and patient-centric care via a precision medicine approach.
Affiliation(s)
- Chia Chao Kang
- School of Electrical Engineering and Artificial Intelligence, Xiamen University Malaysia, Sepang, Selangor, Malaysia
- Tze Yan Lee
- School of Liberal Arts, Science and Technology (PUScLST), Perdana University, Kuala Lumpur, Malaysia
- Wai Feng Lim
- Sunway Medical Centre, Subang Jaya, Selangor Darul Ehsan, Malaysia
- Wendy Wai Yeng Yeo
- School of Pharmacy, Monash University Malaysia, Bandar Sunway, Selangor Darul Ehsan, Malaysia
6. Gandhi Z, Gurram P, Amgai B, Lekkala SP, Lokhandwala A, Manne S, Mohammed A, Koshiya H, Dewaswala N, Desai R, Bhopalwala H, Ganti S, Surani S. Artificial Intelligence and Lung Cancer: Impact on Improving Patient Outcomes. Cancers (Basel) 2023; 15:5236. [PMID: 37958411; PMCID: PMC10650618; DOI: 10.3390/cancers15215236]
Abstract
Lung cancer remains one of the leading causes of cancer-related deaths worldwide, emphasizing the need for improved diagnostic and treatment approaches. In recent years, the emergence of artificial intelligence (AI) has sparked considerable interest in its potential role in lung cancer. This review aims to provide an overview of the current state of AI applications in lung cancer screening, diagnosis, and treatment. AI algorithms like machine learning, deep learning, and radiomics have shown remarkable capabilities in the detection and characterization of lung nodules, thereby aiding in accurate lung cancer screening and diagnosis. These systems can analyze various imaging modalities, such as low-dose CT scans, PET-CT imaging, and even chest radiographs, accurately identifying suspicious nodules and facilitating timely intervention. AI models have exhibited promise in utilizing biomarkers and tumor markers as supplementary screening tools, effectively enhancing the specificity and accuracy of early detection. These models can accurately distinguish between benign and malignant lung nodules, assisting radiologists in making more accurate and informed diagnostic decisions. Additionally, AI algorithms hold the potential to integrate multiple imaging modalities and clinical data, providing a more comprehensive diagnostic assessment. By utilizing high-quality data, including patient demographics, clinical history, and genetic profiles, AI models can predict treatment responses and guide the selection of optimal therapies. Notably, these models have shown considerable success in predicting the likelihood of response and recurrence following targeted therapies and optimizing radiation therapy for lung cancer patients. Implementing these AI tools in clinical practice can aid in the early diagnosis and timely management of lung cancer and potentially improve outcomes, including the mortality and morbidity of the patients.
Affiliation(s)
- Zainab Gandhi
- Department of Internal Medicine, Geisinger Wyoming Valley Medical Center, Wilkes Barre, PA 18711, USA
- Priyatham Gurram
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Birendra Amgai
- Department of Internal Medicine, Geisinger Community Medical Center, Scranton, PA 18510, USA
- Sai Prasanna Lekkala
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Alifya Lokhandwala
- Department of Medicine, Jawaharlal Nehru Medical College, Wardha 442001, India
- Suvidha Manne
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Adil Mohammed
- Department of Internal Medicine, Central Michigan University College of Medicine, Saginaw, MI 48602, USA
- Hiren Koshiya
- Department of Internal Medicine, Prime West Consortium, Inglewood, CA 92395, USA
- Nakeya Dewaswala
- Department of Cardiology, University of Kentucky, Lexington, KY 40536, USA
- Rupak Desai
- Independent Researcher, Atlanta, GA 30079, USA
- Huzaifa Bhopalwala
- Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Shyam Ganti
- Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Salim Surani
- Department of Pulmonary and Critical Care Medicine, Texas A&M University, College Station, TX 77845, USA
7. Shanmugam K, Rajaguru H. Exploration and Enhancement of Classifiers in the Detection of Lung Cancer from Histopathological Images. Diagnostics (Basel) 2023; 13:3289. [PMID: 37892110; PMCID: PMC10606104; DOI: 10.3390/diagnostics13203289]
Abstract
Lung cancer is a prevalent malignancy that affects individuals of all genders and is often diagnosed late due to delayed symptoms. To catch it early, researchers are developing algorithms to analyze lung cancer images. The primary objective of this work is to propose a novel approach for the detection of lung cancer using histopathological images. In this work, the histopathological images underwent preprocessing, followed by segmentation using a modified KFCM-based approach, and the segmented image intensity values were dimensionally reduced using Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO). Algorithms such as KL Divergence and Invasive Weed Optimization (IWO) were used for feature selection. Seven classifiers, SVM, KNN, Random Forest, Decision Tree, Softmax Discriminant, Multilayer Perceptron, and BLDC, were used to analyze and classify the images as benign or malignant. Results were compared using standard metrics, and kappa analysis assessed classifier agreement. The Decision Tree classifier with GWO feature extraction achieved an accuracy of 85.01% without feature selection or hyperparameter tuning. Furthermore, we present a methodology to enhance the accuracy of the classifiers by employing hyperparameter tuning algorithms based on Adam and RAdam. By combining features from GWO and IWO and using the RAdam algorithm, the Decision Tree classifier achieves an accuracy of 91.57%.
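The kappa analysis used above to assess agreement between classifiers can be sketched as a generic Cohen's kappa; this is a standard formula, not the authors' code, and the function name is illustrative.

```python
import numpy as np

def cohens_kappa(y1, y2):
    """Cohen's kappa for agreement between two raters or classifiers:
    observed agreement corrected for agreement expected by chance."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    po = np.mean(y1 == y2)                                  # observed agreement
    labels = np.union1d(y1, y2)
    pe = sum(np.mean(y1 == c) * np.mean(y2 == c) for c in labels)  # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa near 1 indicates agreement well beyond chance; near 0, agreement no better than chance.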
Affiliation(s)
- Harikumar Rajaguru
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam 638401, India
8. Wang H, Xu S, Fang KB, Dai ZS, Wei GZ, Chen LF. Contrast-enhanced magnetic resonance image segmentation based on improved U-Net and Inception-ResNet in the diagnosis of spinal metastases. J Bone Oncol 2023; 42:100498. [PMID: 37670740; PMCID: PMC10475503; DOI: 10.1016/j.jbo.2023.100498]
Abstract
Objective The objective of this study was to investigate the use of contrast-enhanced magnetic resonance imaging (CE-MRI) combined with radiomics and deep learning technology for the identification of spinal metastases and primary malignant spinal bone tumors. Methods The region growing algorithm was utilized to segment the lesions, and two parameters were defined based on the region of interest (ROI). Deep learning algorithms were employed: an improved U-Net, which utilized CE-MRI parameter maps and 10 layers of CE images as input, and an Inception-ResNet model, which was used to extract relevant features for disease identification and to construct a diagnostic classifier. Results The diagnostic accuracy of radiomics was 0.74, while the average diagnostic accuracy of the improved U-Net was 0.98; the pixel accuracy (PA) of our model was as high as 98.001%. The findings indicate that CE-MRI-based radiomics and deep learning have the potential to assist in the differential diagnosis of spinal metastases and primary malignant spinal bone tumors. Conclusion CE-MRI combined with radiomics and deep learning technology can potentially assist in the differential diagnosis of spinal metastases and primary malignant spinal bone tumors, providing a promising approach for clinical diagnosis.
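The region growing segmentation step mentioned above floods outward from a seed pixel while an intensity criterion holds. A minimal 2-D sketch follows; the names and the fixed-tolerance criterion are illustrative assumptions, and the paper's method operates on CE-MRI volumes rather than toy arrays.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Simple region growing: breadth-first flood from `seed`, accepting
    4-neighbours whose intensity is within `tol` of the seed value."""
    h, w = image.shape
    seen = np.zeros((h, w), dtype=bool)
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    seen[seed] = True
    while queue:
        y, x = queue.popleft()
        if abs(float(image[y, x]) - seed_val) <= tol:
            mask[y, x] = True
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```

The resulting boolean mask plays the role of the ROI from which radiomics parameters are then computed.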
Affiliation(s)
- Hai Wang
- Department of Orthopedics, The First Affiliated Hospital, Fujian Medical University, Fuzhou 350005, China
- Department of Orthopedics, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou 350212, China
- Shaohua Xu
- Department of Hepatobiliary and Pancreatic Surgery, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou 350014, China
- Kai-bin Fang
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Zhang-Sheng Dai
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Guo-Zhen Wei
- Department of Orthopedics, The First Affiliated Hospital, Fujian Medical University, Fuzhou 350005, China
- Department of Orthopedics, National Regional Medical Center, Binhai Campus of the First Affiliated Hospital, Fujian Medical University, Fuzhou 350212, China
- Lu-Feng Chen
- Department of Thoracic and Cardiovascular Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
9. Ozcelik N, Ozcelik AE, Guner Zirih NM, Selimoglu I, Gumus A. Deep learning for diagnosis of malign pleural effusion on computed tomography images. Clinics (Sao Paulo) 2023; 78:100210. [PMID: 37149920; DOI: 10.1016/j.clinsp.2023.100210]
Abstract
BACKGROUND The pleura is a serous membrane that surrounds the lungs. The visceral surface secretes fluid into the serous cavity, and the parietal surface ensures regular absorption of this fluid. If this balance is disturbed, fluid accumulates in the pleural space, a condition called "pleural effusion". Today, accurate diagnosis of pleural diseases is becoming more critical, as advances in treatment protocols have contributed positively to prognosis. Our aim was to perform computer-aided numerical analysis of Computed Tomography (CT) images from patients with pleural effusion and to examine the prediction of the malignant/benign distinction using deep learning, comparing against the cytology results. METHODS The authors classified 408 CT images from 64 patients whose etiology of pleural effusion was investigated using the deep learning method. 378 of the images were used for training the system; 15 malignant and 15 benign CT images, which were not included in the training group, were used as the test set. RESULTS Among the 30 test images evaluated by the system, 14 of 15 malignant patients and 13 of 15 benign patients were given the correct diagnosis (PPD: 93.3%, NPD: 86.67%, Sensitivity: 87.5%, Specificity: 92.86%). CONCLUSION Advances in computer-aided diagnostic analysis of CT images and obtaining a pre-diagnosis of pleural fluid may reduce the need for interventional procedures by guiding physicians about which patients may have malignancies. This saves cost and time in patient management, allowing earlier diagnosis and treatment.
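The reported percentages derive from standard confusion-matrix arithmetic; a sketch using the test-set counts above (14 of 15 malignant and 13 of 15 benign images correctly classified). The function name is illustrative, and note that the paper's mapping of these counts onto its four reported labels may differ from the textbook definitions used here.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts
    (tp: true positives, fn: false negatives, tn: true negatives, fp: false positives)."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of malignant cases caught
        "specificity": tn / (tn + fp),  # fraction of benign cases cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

With tp=14, fn=1, tn=13, fp=2 this gives sensitivity 14/15 ≈ 93.3% and specificity 13/15 ≈ 86.7%.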
Affiliation(s)
- Neslihan Ozcelik
- Recep Tayyip Erdogan University, Faculty of Medicine, Training and Research Hospital, Chest Disease, Rize, Turkey.
- Ali Erdem Ozcelik
- Recep Tayyip Erdogan University, Engineering and Architecture Faculty, Department of Landscape Architecture (Geomatics Engineer), Rize, Turkey
- Nese Merve Guner Zirih
- Recep Tayyip Erdogan University, Faculty of Medicine, Training and Research Hospital, Chest Disease, Rize, Turkey
- Inci Selimoglu
- Recep Tayyip Erdogan University, Faculty of Medicine, Training and Research Hospital, Chest Disease, Rize, Turkey
- Aziz Gumus
- Recep Tayyip Erdogan University, Faculty of Medicine, Training and Research Hospital, Chest Disease, Rize, Turkey
10. Lima T, Luz D, Oseas A, Veras R, Araújo F. Automatic classification of pulmonary nodules in computed tomography images using pre-trained networks and bag of features. Multimed Tools Appl 2023:1-17. [PMID: 37362706; PMCID: PMC10116084; DOI: 10.1007/s11042-023-14900-5]
Abstract
Lung cancer has the highest incidence in the world. The standard tests for its diagnosis are medical imaging exams, sputum cytology, and lung biopsy. Computed Tomography (CT) of the chest plays an essential role in the early detection of nodules, since it can allow for more treatment options and increases patient survival. However, the analysis of these exams is a tiring and error-prone process, so computational methods can help the specialist. This work addresses the classification of pulmonary nodules as benign or malignant on CT images. Our approach uses the pre-trained VGG16, VGG19, Inception, ResNet50, and Xception networks to extract features from each 2D slice of the 3D nodules. We then use Principal Component Analysis to reduce the dimensionality of the feature vectors and make them all the same length, and Bag of Features (BoF) to combine the feature vectors of the different 2D slices into a single signature representing the 3D nodule. The classification step uses Random Forest. We evaluated the proposed method with 1,405 segmented nodules from the LIDC-IDRI database and obtained an accuracy of 95.34%, F1-Score of 91.73, kappa of 0.88, sensitivity of 90.53%, specificity of 97.26%, and AUC of 0.99. The main conclusion was that the combination by BoF of features extracted from 2D slices using pre-trained architectures produced better results than training 2D and 3D CNNs on the nodules. In addition, the use of BoF makes the creation of the nodule signature independent of the number of slices.
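The dimensionality-reduction and bag-of-features steps can be sketched as follows. `pca_reduce` and `bof_signature` are illustrative stand-ins, not the authors' code; in practice the codebook would come from k-means clustering of training features rather than being hand-specified.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, computed via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def bof_signature(slice_features, codebook):
    """Bag of features: assign each slice's feature vector to its nearest
    codeword, then return the normalized histogram of assignments. The
    signature length depends only on the codebook, not the slice count."""
    d = ((slice_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = d.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

This is what makes the 3D-nodule signature independent of the number of 2D slices, as the abstract notes.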
Affiliation(s)
- Thiago Lima
- Departamento de Computação, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Engenharia Elétrica, Universidade Federal do Piauí, Teresina, PI Brasil
- Daniel Luz
- Departamento de Computação, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Engenharia Elétrica, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Informática, Instituto Federal de Educação, Ciência e Tecnologia do Piauí, Picos, PI Brasil
- Antonio Oseas
- Departamento de Computação, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Engenharia Elétrica, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Sistemas de Informação, Universidade Federal do Piauí, Picos, PI Brasil
- Rodrigo Veras
- Departamento de Computação, Universidade Federal do Piauí, Teresina, PI Brasil
- Flávio Araújo
- Departamento de Computação, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Engenharia Elétrica, Universidade Federal do Piauí, Teresina, PI Brasil
- Departamento de Sistemas de Informação, Universidade Federal do Piauí, Picos, PI Brasil
11. Les T, Markiewicz T, Dziekiewicz M, Gallego J, Swiderska-Chadaj Z, Lorent M. Localization of spleen and kidney organs from CT scans based on classification of slices in rotational views. Sci Rep 2023; 13:5709. [PMID: 37029169; PMCID: PMC10082200; DOI: 10.1038/s41598-023-32741-y]
Abstract
This article presents a novel multiple-organ localization and tracking technique applied to spleen and kidney regions in computed tomography images. The proposed solution is based on a unique approach that classifies regions in different spatial projections (e.g., the side projection) using convolutional neural networks. Our procedure merges classification results from the different projections, resulting in a 3D segmentation. The proposed system is able to recognize the contour of the organ with an accuracy of 88-89%, depending on the organ. The research has shown that a single method can be useful for the detection of different organs: kidney and spleen. Our solution can compete with U-Net-based solutions in terms of hardware requirements, as it has significantly lower demands, and it gives better results on small data sets. Another advantage of our solution is a significantly lower training time on an equally sized data set and greater capability to parallelize calculations. The proposed system enables visualization, localization, and tracking of organs and is therefore a valuable tool in medical diagnostic problems.
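Merging per-projection classification results into a single 3D segmentation can be sketched as a vote across views. The majority-vote rule and the function name are assumptions for illustration; the paper's exact fusion rule is not detailed in the abstract.

```python
import numpy as np

def merge_projections(mask_a, mask_b, mask_c):
    """Combine three per-projection boolean organ masks (e.g. axial,
    sagittal, coronal classifications back-projected into the volume)
    into one 3-D segmentation by majority vote."""
    votes = mask_a.astype(int) + mask_b.astype(int) + mask_c.astype(int)
    return votes >= 2
```

Voting suppresses voxels that only one projection's classifier flagged, which is one simple way per-view 2D decisions can be reconciled into a 3D result.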
Affiliation(s)
- Tomasz Les
- University of Technology, Plac Politechniki 1, 00-661, Warsaw, Poland.
- Tomasz Markiewicz
- University of Technology, Plac Politechniki 1, 00-661, Warsaw, Poland
- Military Institute of Medicine, Szaserów 128, 04-141, Warsaw, Poland
- Jaime Gallego
- University of Barcelona, Gran Via de les Corts Catalanes, 08007, Barcelona, Spain
- Malgorzata Lorent
- Military Institute of Medicine, Szaserów 128, 04-141, Warsaw, Poland
12. Rehman A, Khan A, Fatima G, Naz S, Razzak I. Review on chest pathologies detection systems using deep learning techniques. Artif Intell Rev 2023; 56:1-47. [PMID: 37362896; PMCID: PMC10027283; DOI: 10.1007/s10462-023-10457-9]
Abstract
Chest radiography is the standard and most affordable way to diagnose, analyze, and examine different thoracic and chest diseases. Typically, the radiograph is examined by an expert radiologist or physician to decide about a particular anomaly, if one exists. Moreover, computer-aided methods are used to assist radiologists and make the analysis process accurate, fast, and more automated. A tremendous improvement in automatic chest pathology detection and analysis can be observed with the emergence of deep learning. This survey aims to review, technically evaluate, and synthesize the different computer-aided chest pathology detection systems. State-of-the-art single- and multi-pathology detection systems published in the last five years are thoroughly discussed. A taxonomy of image acquisition, dataset preprocessing, feature extraction, and deep learning models is presented, and the mathematical concepts related to feature extraction model architectures are discussed. Moreover, the different articles are compared based on their contributions, datasets, methods used, and results achieved. The article ends with the main findings, current trends, challenges, and future recommendations.
Affiliation(s)
- Arshia Rehman, COMSATS University Islamabad, Abbottabad-Campus, Abbottabad, Pakistan
- Ahmad Khan, COMSATS University Islamabad, Abbottabad-Campus, Abbottabad, Pakistan
- Gohar Fatima, The Islamia University of Bahawalpur, Bahawal Nagar Campus, Bahawal Nagar, Pakistan
- Saeeda Naz, Govt Girls Post Graduate College No.1, Abbottabad, Pakistan
- Imran Razzak, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

13
Kayadibi İ, Güraksın GE. An Explainable Fully Dense Fusion Neural Network with Deep Support Vector Machine for Retinal Disease Determination. Int J Comput Intell Syst 2023. DOI: 10.1007/s44196-023-00210-z.
Abstract
Retinal issues are crucial because they result in visual loss. Early diagnosis can aid physicians in initiating treatment and preventing visual loss. Optical coherence tomography (OCT), which portrays retinal morphology cross-sectionally and noninvasively, is used to identify retinal abnormalities. The process of analyzing OCT images, however, takes time. This study has proposed a hybrid approach based on a fully dense fusion neural network (FD-CNN) and dual preprocessing to identify retinal diseases such as choroidal neovascularization, diabetic macular edema, and drusen from OCT images. A dual preprocessing methodology, in other words a hybrid speckle reduction filter, was initially used to diminish speckle noise present in OCT images. Secondly, the FD-CNN architecture was trained, and the features obtained from this architecture were extracted. Then Deep Support Vector Machine (D-SVM) and Deep K-Nearest Neighbor (D-KNN) classifiers were proposed to reclassify those features and tested on the University of California San Diego (UCSD) and Duke OCT datasets. D-SVM demonstrated the best performance in both datasets. D-SVM achieved 99.60% accuracy, 99.60% sensitivity, 99.87% specificity, 99.60% precision and 99.60% F1 score in the UCSD dataset. It achieved 97.50% accuracy, 97.64% sensitivity, 98.91% specificity, 96.61% precision, and 97.03% F1 score in the Duke dataset. Additionally, the results were compared to state-of-the-art works on both datasets. The D-SVM was demonstrated to be an efficient and productive strategy for improving the robustness of automatic retinal disease classification. This study also shows how the black-box choices of AI systems can be unboxed by generating heat maps using the local interpretable model-agnostic explanation method, an explainable artificial intelligence (XAI) technique. Heat maps, in particular, may contribute to the development of more stable deep learning-based systems, as well as enhancing ophthalmologists' confidence in the diagnosis of retinal disease from OCT images.
14
Yang L, Liu H, Han J, Xu S, Zhang G, Wang Q, Du Y, Yang F, Zhao X, Shi G. Ultra-low-dose CT lung screening with artificial intelligence iterative reconstruction: evaluation via automatic nodule-detection software. Clin Radiol 2023; S0009-9260(23)00031-4. PMID: 36948944; DOI: 10.1016/j.crad.2023.01.006.
Abstract
AIM To test the feasibility of ultra-low-dose (ULD) computed tomography (CT) combined with an artificial intelligence iterative reconstruction (AIIR) algorithm for screening pulmonary nodules using computer-assisted diagnosis (CAD). MATERIALS AND METHODS A chest phantom with artificial pulmonary nodules was first scanned using the routine protocol and the ULD protocol (3.28 versus 0.18 mSv) to compare the image quality and to test the acceptability of the ULD CT protocol. Next, 147 lung-screening patients were enrolled prospectively, undergoing an additional ULD CT immediately after their routine CT examination for clinical validation. Images were reconstructed with filtered back-projection (FBP), hybrid iterative reconstruction (HIR), the AIIR, and were imported to the CAD software for preliminary nodule detection. Subjective image quality on the phantom was scored using a five-point scale and compared using the Mann-Whitney U-test. Nodule detection using CAD was evaluated for ULD HIR and AIIR images using the routine dose image as reference. RESULTS Higher image quality was scored for AIIR than for FBP and HIR at ULD (p<0.001). As reported by CAD, 107 patients were presented with more than five nodules on routine dose images and were chosen to represent the challenging cases at an early stage of pulmonary disease. Among such, the performance of nodule detection by CAD on ULD HIR and AIIR images was 75.2% and 92.2% of the routine dose image, respectively. CONCLUSION Combined with AIIR, it was feasible to use an ULD CT protocol with 95% dose reduction for CAD-based screening of pulmonary nodules.
Affiliation(s)
- L Yang, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- H Liu, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- J Han, United Imaging Healthcare, Shanghai, China
- S Xu, United Imaging Healthcare, Shanghai, China
- G Zhang, United Imaging Healthcare, Shanghai, China
- Q Wang, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Y Du, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- F Yang, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- X Zhao, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- G Shi, Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China

15
Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. Biomedical Engineering Advances 2023. DOI: 10.1016/j.bea.2023.100076.
16
Chandrashekar K, Setlur AS, Sabhapathi C A, Raiker SS, Singh S, Niranjan V. Decision Support System and Web-Application Using Supervised Machine Learning Algorithms for Easy Cancer Classifications. Cancer Inform 2023; 22:11769351221147244. PMID: 36714384; PMCID: PMC9880585; DOI: 10.1177/11769351221147244.
Abstract
Using a decision support system (DSS) that classifies various cancers supports clinicians/researchers in making better decisions that can aid early cancer diagnosis, thereby reducing the chance of incorrect disease diagnosis. Thus, this work aimed at designing a classification model that can predict accurately for 5 different cancer types comprising 20 cancer exomes, using the mutations identified from whole exome cancer analysis. Initially, a basic model was designed using supervised machine learning classification algorithms such as K-nearest neighbor (KNN), support vector machine (SVM), decision tree, naïve Bayes and random forest (RF), among which decision tree and random forest performed better in terms of preliminary model accuracy. However, output predictions were incorrect due to low training scores. Thus, 16 essential features were then selected for model improvement using 2 approaches. All imbalanced datasets were balanced using SMOTE. In the first approach, all features from the 20 cancer exome datasets were trained and models were designed using decision tree and random forest. On the balanced datasets, the decision tree model showed an accuracy of 77%, while with the RF model the accuracy improved to 82%, where all 5 cancer types were predicted correctly. The area under the curve for the RF model was closer to 1 than that of the decision tree model. In the second approach, 15 datasets were trained while 5 were tested; however, only 2 cancer types were predicted correctly. To cross-validate the RF model, the Matthews correlation coefficient (MCC) test was performed. For the first approach, the MCC test and MCC cross-validation were found to be 0.7796 and 0.9356 respectively. Likewise, for the second approach, the MCC was observed to be 0.9365, corroborating the accuracy of the designed model. The model was successfully deployed using Streamlit as a web application for easy use. This study presents insights for allowing easy cancer classifications.
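The Matthews correlation coefficient this abstract uses for cross-validation is computed directly from confusion-matrix counts; a minimal pure-Python sketch, with illustrative counts rather than the study's data:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # common convention: 0 when a margin is empty

# A perfect classifier scores 1.0; the second call uses arbitrary example counts.
print(mcc(tp=5, tn=5, fp=0, fn=0))            # 1.0
print(round(mcc(tp=3, tn=4, fp=1, fn=2), 4))
```

Unlike plain accuracy, MCC stays near zero for a classifier that ignores a rare class, which is why it is a reasonable check after SMOTE balancing.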
Affiliation(s)
- Vidya Niranjan, Department of Biotechnology, R V College of Engineering, Bengaluru, Karnataka 560059, India

17
Hussain Ali Y, Chinnaperumal S, Marappan R, Raju SK, Sadiq AT, Farhan AK, Srinivasan P. Multi-Layered Non-Local Bayes Model for Lung Cancer Early Diagnosis Prediction with the Internet of Medical Things. Bioengineering (Basel) 2023; 10:138. PMID: 36829633; PMCID: PMC9952033; DOI: 10.3390/bioengineering10020138.
Abstract
The Internet of Things (IoT) has been influential in predicting major diseases in current practice. The deep learning (DL) technique is vital in monitoring and controlling the functioning of the healthcare system and ensuring an effective decision-making process. In this study, we aimed to develop a framework implementing the IoT and DL to identify lung cancer. The accurate and efficient prediction of disease is a challenging task. The proposed model deploys a DL process with a multi-layered non-local Bayes (NL Bayes) model to manage the process of early diagnosis. The Internet of Medical Things (IoMT) could be useful in determining factors that could enable the effective sorting of quality values through the use of sensors and image processing techniques. We studied the proposed model by analyzing its results with regard to specific attributes such as accuracy, quality, and system process efficiency. In this study, we aimed to overcome problems in the existing process through the practical results of a computational comparison process. The proposed model provided a low error rate (2%, 5%) and an increase in the number of instance values. The experimental results led us to conclude that the proposed model can make predictions based on images with high sensitivity and better precision values compared to other specific results. The proposed model achieved the expected accuracy (81%, 95%), the expected specificity (80%, 98%), and the expected sensitivity (80%, 99%). This model is adequate for real-time health monitoring systems in the prediction of lung cancer and can enable effective decision-making with the use of DL techniques.
Affiliation(s)
- Yossra Hussain Ali, Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq
- Seelammal Chinnaperumal, Department of Computer Science and Engineering, Solamalai College of Engineering, Madurai 625020, India
- Raja Marappan, School of Computing, Sastra Deemed University, Thanjavur 613401, India
- Sekar Kidambi Raju (correspondence), School of Computing, Sastra Deemed University, Thanjavur 613401, India
- Ahmed T. Sadiq, Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq
- Alaa K. Farhan, Department of Computer Sciences, University of Technology, Baghdad 10066, Iraq

18
Muhsen IN, Rasheed OW, Habib EA, Alsaad RK, Maghrabi MK, Rahman MA, Sicker D, Wood WA, Beg MS, Sung AD, Hashmi SK. Current Status and Future Perspectives on the Internet of Things in Oncology. Hematol Oncol Stem Cell Ther 2023; 16:102-109. PMID: 34687614; DOI: 10.1016/j.hemonc.2021.09.003.
Abstract
The Internet of Things (IoT) has penetrated many aspects of everyday human life. The use of IoT in healthcare has been expanding over the past few years. In this review, we highlighted the current applications of IoT in the medical literature, along with the challenges and opportunities. IoT use mainly involves sensors and wearables, with potential applications in improving the quality of life, personal health monitoring, and diagnosis of diseases. Our literature review highlights that the current main application studied in the literature is physical activity tracking. In addition, we discuss the current technologies that would help IoT-enabled devices achieve safe, quick, and meaningful data transfer. These technologies include machine learning/artificial intelligence, 5G, and blockchain. Data on current IoT-enabled devices are still limited, and future research should address these devices' effect on patients' outcomes and the methods by which their integration in healthcare will avoid increasing costs.
Affiliation(s)
- Ibrahim N Muhsen, Department of Medicine, Houston Methodist Hospital, Houston, TX, USA
- Omar W Rasheed, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Eiad A Habib, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Rakan K Alsaad, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Md A Rahman, Department of Cyber Security and Forensic Computing, University of Prince Mugrin, Medina, Saudi Arabia
- Douglas Sicker, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- William A Wood, Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Muhammad S Beg, Division of Hematology/Medical Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Anthony D Sung, Division of Hematologic Malignancies and Cellular Therapy, Department of Medicine, Duke University School of Medicine, NC, USA
- Shahrukh K Hashmi, Division of Hematology, Department of Medicine, Mayo Clinic, Rochester, MN, USA; Department of Medicine, Sheikh Shakbout Medical City, Abu Dhabi, United Arab Emirates

19
Weikert T, Jaeger PF, Yang S, Baumgartner M, Breit HC, Winkel DJ, Sommer G, Stieltjes B, Thaiss W, Bremerich J, Maier-Hein KH, Sauter AW. Automated lung cancer assessment on 18F-PET/CT using Retina U-Net and anatomical region segmentation. Eur Radiol 2023; 33:4270-4279. PMID: 36625882; PMCID: PMC10182147; DOI: 10.1007/s00330-022-09332-y.
Abstract
OBJECTIVES To develop and test a Retina U-Net algorithm for the detection of primary lung tumors and associated metastases of all stages on FDG-PET/CT. METHODS A data set consisting of 364 FDG-PET/CTs of patients with histologically confirmed lung cancer was used for algorithm development and internal testing. The data set comprised tumors of all stages. All lung tumors (T), lymphatic metastases (N), and distant metastases (M) were manually segmented as 3D volumes using whole-body PET/CT series. The data set was split into a training (n = 216), validation (n = 74), and internal test data set (n = 74). Detection performance for all lesion types at multiple classifier thresholds was evaluated and false-positive-findings-per-case (FP/c) calculated. Next, detected lesions were assigned to categories T, N, or M using an automated anatomical region segmentation. Furthermore, reasons for FPs were visually assessed and analyzed. Finally, performance was tested on 20 PET/CTs from another institution. RESULTS Sensitivity for T lesions was 86.2% (95% CI: 77.2-92.7) at a FP/c of 2.0 on the internal test set. The anatomical correlate to most FPs was the physiological activity of bone marrow (16.8%). TNM categorization based on the anatomical region approach was correct in 94.3% of lesions. Performance on the external test set confirmed the good performance of the algorithm (overall detection rate = 88.8% (95% CI: 82.5-93.5%) and FP/c = 2.7). CONCLUSIONS Retina U-Nets are a valuable tool for tumor detection tasks on PET/CT and can form the backbone of reading assistance tools in this field. FPs have anatomical correlates that can lead the way to further algorithm improvements. The code is publicly available. KEY POINTS • Detection of malignant lesions in PET/CT with Retina U-Net is feasible. • All false-positive findings had anatomical correlates, physiological bone marrow activity being the most prevalent. 
• Retina U-Nets can build the backbone for tools assisting imaging professionals in lung tumor staging.
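The 95% confidence intervals quoted for sensitivity above are interval estimates for a binomial proportion. A minimal sketch using the Wilson score interval; the lesion counts here are made up for illustration, and the paper does not state which interval method it used:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# e.g. 50 of 58 lesions detected (illustrative counts only)
lo, hi = wilson_interval(50, 58)
print(f"{50/58:.1%} (95% CI: {lo:.1%}-{hi:.1%})")
```

The Wilson interval is preferred over the naive normal approximation for the small lesion counts typical of test sets like these, since it never leaves the [0, 1] range.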
Affiliation(s)
- T Weikert, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- P F Jaeger, Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- S Yang, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- M Baumgartner, Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- H C Breit, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- D J Winkel, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- G Sommer, Institute of Radiology and Nuclear Medicine, Hirslanden Klinik St. Anna, St. Anna-Strasse 32, 6006, Lucerne, Switzerland
- B Stieltjes, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- W Thaiss, Department of Nuclear Medicine, University Hospital Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Germany
- J Bremerich, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland
- K H Maier-Hein, Division of Medical Image Computing, German Cancer Research Center, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany; Department of Radiation Oncology, Pattern Analysis and Learning Group, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- A W Sauter, Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031, Basel, Switzerland

20
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: a systematic review. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104104.
21
Sethy PK, Geetha Devi A, Padhan B, Behera SK, Sreedhar S, Das K. Lung cancer histopathological image classification using wavelets and AlexNet. J Xray Sci Technol 2023; 31:211-221. PMID: 36463485; DOI: 10.3233/xst-221301.
Abstract
Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists utilize various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is a visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing between them, and from benign tissue, requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelets, and support vector machines. In this study, we feed the integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images in each of three lung categories of benign (normal cells), adenocarcinoma, and squamous cell carcinoma (both cancerous cells), is used in this study to train and test the SVM classifiers. Using a 10-fold cross-validation method, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
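The wavelet side of the fusion described above amounts to using subband coefficients as image features. A minimal sketch of a single-level 2D Haar transform in NumPy, with the CNN-feature fusion step only indicated; the toy patch and the zero "deep features" are illustrative stand-ins, not the paper's pipeline:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level orthonormal 2D Haar transform: returns LL, LH, HL, HH subbands.

    Expects a 2D array with even height and width.
    """
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # approximation
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

# Toy "histopathology patch"; a real pipeline would use AlexNet activations
# in place of the zero vector below, then train a linear SVM on `fused`.
patch = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(patch)
deep_features = np.zeros(8)                           # placeholder CNN features
fused = np.concatenate([ll.ravel(), deep_features])   # joint feature vector
```

Because the transform is orthonormal, the subbands preserve the image's energy, so the LL coefficients act as a compact low-frequency summary that complements the CNN features.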
Affiliation(s)
- A Geetha Devi, Department of Electronics and Communication Engineering, PVP Siddhartha Institute of Technology, Vijayawada, AP, India
- Bikash Padhan, Department of Electronics, Sambalpur University, Jyoti Vihar, Burla, India
- Kalyan Das, Department of Computer Science Engineering and Application, Sambalpur University Institute of Information Technology, Burla, India

22
A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022; 2022:5905230. PMID: 36569180; PMCID: PMC9788902; DOI: 10.1155/2022/5905230.
Abstract
Lung cancer is the primary cause of cancer deaths worldwide, and the death rate is increasing steadily. Chances of recovery improve when lung cancer is detected early. However, because the number of radiologists is limited and they have been working overtime, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have devised automated ways to predict the growth of cancer cells from medical imaging quickly and accurately. Previously, a lot of work was done on computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT) scans, magnetic resonance imaging (MRI), and X-ray, with the goal of effective detection and segmentation of pulmonary nodules, as well as classifying nodules as malignant or benign. Still, no comprehensive review that includes all aspects of lung cancer has been done. In this paper, every aspect of lung cancer is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study looks into several lung cancer-related issues with possible solutions.
23
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. PMID: 36428662; PMCID: PMC9688236; DOI: 10.3390/cancers14225569.
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and the monitoring of lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have some limitations, including not classifying cancer images automatically, which is unsuitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents the recent development of deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang, Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China

24
Li Y, Zheng X, Xie F, Ye L, Bignami E, Tandon YK, Rodríguez M, Gu Y, Sun J. Development and validation of the artificial intelligence (AI)-based diagnostic model for bronchial lumen identification. Transl Lung Cancer Res 2022; 11:2261-2274. PMID: 36519015; PMCID: PMC9742630; DOI: 10.21037/tlcr-22-761.
Abstract
BACKGROUND Bronchoscopy is a key step in the diagnosis and treatment of respiratory diseases. However, the level of expertise varies among different bronchoscopists. Artificial intelligence (AI) may help them identify bronchial lumens. Thus, a bronchoscopy quality-control system based on AI was built to improve the performance of bronchoscopists. METHODS This single-center observational study consecutively collected bronchoscopy videos from Shanghai Chest Hospital and segmented each video into 31 different anatomical locations to develop an AI-assisted system based on a convolutional neural network (CNN) model. We then designed a single-center trial to compare the accuracy of lumen recognition by bronchoscopists with and without the assistance of the AI system. RESULTS A total of 28,441 qualified images of bronchial lumen were used to train the CNNs. In the cross-validation set, the optimal accuracy of the six models was between 91.83% and 96.62%. In the test set, the visual geometry group 16 (VGG-16) achieved optimal performance with an accuracy of 91.88%, and an area under the curve of 0.995. In the clinical evaluation, the accuracy rate of the AI system alone was 54.30% (202/372). For the identification of bronchi except for segmental bronchi, the accuracy was 82.69% (129/156). In group 1, the recognition accuracy rates of doctors A, B, a and b alone were 42.47%, 34.68%, 28.76%, and 29.57%, respectively, but increased to 57.53%, 54.57%, 54.57%, and 46.24% respectively when combined with the AI system. Similarly, in group 2, the recognition accuracy rates of doctors C, D, c, and d were 37.90%, 41.40%, 30.91%, and 33.60% respectively, but increased to 51.61%, 47.85%, 53.49%, and 54.30% respectively, when combined with the AI system. Except for doctor D, the accuracy of doctors in recognizing lumen was significantly higher with AI assistance than without AI assistance, regardless of their experience (P<0.001). 
CONCLUSIONS Our AI system could better recognize bronchial lumen and reduce differences in the operation levels of different bronchoscopists. It could be used to improve the quality of everyday bronchoscopies.
Affiliation(s)
- Ying Li, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Xiaoxuan Zheng, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Fangfang Xie, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Lin Ye, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Elena Bignami, Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- María Rodríguez, Department of Thoracic Surgery, Clínica Universidad de Navarra, Madrid, Spain
- Yun Gu, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
- Jiayuan Sun, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China; Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China

25
Analysis of Smart Lung Tumour Detector and Stage Classifier Using Deep Learning Techniques with Internet of Things. Comput Intell Neurosci 2022; 2022:4608145. PMID: 36148416; PMCID: PMC9489382; DOI: 10.1155/2022/4608145.
Abstract
The use of artificial intelligence (AI) and the Internet of Things (IoT), a developing technology in medical applications that assists physicians in making more informed decisions about patients' courses of treatment, has become increasingly widespread in healthcare in recent years. At the same time, the number of PET scans being performed is rising, and radiologists are becoming significantly overworked as a result. Consequently, computer-aided diagnostics is being investigated as a potential way to reduce these tremendous workloads. A Smart Lung Tumor Detector and Stage Classifier (SLD-SC) is presented in this study as a hybrid technique for PET scans that can identify the stage of a lung tumour. Following the development of a modified LSTM for the detection of lung tumours, the proposed SLD-SC develops a Multilayer Convolutional Neural Network (M-CNN) for the classification of the various stages of lung cancer, which was then modelled and validated using standard benchmark images. The proposed SLD-SC was evaluated on lung cancer images taken from patients with the disease. Our recommended method gave good results compared with other approaches currently in the literature, with outstanding performance in terms of the accuracy, recall, and precision metrics assessed. As shown by the much better outcomes achieved on each of the test images, the proposed method excels its rivals in several respects. In addition, it achieves an average accuracy of 97 percent in the categorization of lung tumours, much higher than that of the other approaches.
26
Zhang L, Zhang MQ, Lv X. HEp-2 image classification using a multi-class and multiple-binary classifier. Med Biol Eng Comput 2022; 60:3113-3124. [DOI: 10.1007/s11517-022-02646-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 06/24/2022] [Indexed: 11/24/2022]
27
Vicini S, Bortolotto C, Rengo M, Ballerini D, Bellini D, Carbone I, Preda L, Laghi A, Coppola F, Faggioni L. A narrative review on current imaging applications of artificial intelligence and radiomics in oncology: focus on the three most common cancers. Radiol Med 2022; 127:819-836. [DOI: 10.1007/s11547-022-01512-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Accepted: 06/01/2022] [Indexed: 12/24/2022]
28
Pei Q, Luo Y, Chen Y, Li J, Xie D, Ye T. Artificial intelligence in clinical applications for lung cancer: diagnosis, treatment and prognosis. Clin Chem Lab Med 2022; 60:1974-1983. [PMID: 35771735 DOI: 10.1515/cclm-2022-0291] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 06/17/2022] [Indexed: 12/12/2022]
Abstract
Artificial Intelligence (AI) is a branch of computer science that includes research in robotics, language recognition, image recognition, natural language processing, and expert systems. AI is poised to change medical practice, and oncology is no exception to this trend. In fact, lung cancer has the highest morbidity and mortality worldwide, driven by the difficulty of associating early pulmonary nodules with neoplastic changes and by numerous factors that complicate treatment choice and worsen prognosis. AI can effectively enhance the diagnostic efficiency of lung cancer while guiding optimal treatment and evaluating prognosis, thereby reducing mortality. This review provides an overview of AI across all fields of lung cancer. We define the core concepts of AI and cover the basics of natural language processing, image recognition, human-computer interaction and machine learning. We also discuss the most recent breakthroughs in AI technologies and their clinical applications in the diagnosis, treatment, and prognosis of lung cancer. Finally, we highlight the future challenges of AI in lung cancer and its impact on medical practice.
Affiliation(s)
- Qin Pei, Yanan Luo, Yiyu Chen, Jingyuan Li, Dan Xie, Ting Ye: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, P.R. China
29
A Secure Framework toward IoMT-Assisted Data Collection, Modeling, and Classification for Intelligent Dermatology Healthcare Services. CONTRAST MEDIA & MOLECULAR IMAGING 2022; 2022:6805460. [PMID: 35845738 PMCID: PMC9259277 DOI: 10.1155/2022/6805460] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 05/01/2022] [Accepted: 06/02/2022] [Indexed: 12/11/2022]
Abstract
The abnormal growth of skin cells is known as skin cancer, one of the main problems in dermatology. Skin lesions and malignancies have been a source of worry for many individuals in recent years. Irrespective of skin tone, there are three major classes of skin lesions: basal cell carcinoma, squamous cell carcinoma, and melanoma, and early diagnosis of these lesions is equally important for human life. The proposed work introduces a secure IoMT-assisted framework that helps patients perform initial screening of skin lesions remotely. The approach uses an IoMT-based data collection device, accessible to patients, to capture skin lesion images. The captured skin sample is then encrypted and sent to cloud storage, where the received sample image is classified into the appropriate class label using an ensemble classifier. In the proposed framework, four CNN models are ensembled: VGG-16, DenseNet-201, Inception-V3, and EfficientNet-B7. The framework was evaluated on the "HAM10000" dataset, which contains seven kinds of skin lesions. Although DenseNet-201 performed well, the ensemble model achieved the highest accuracy (87.22 percent), a lower test loss than the other models (0.4131), and a much higher classification ability (AUC score of 0.9745). Moreover, a recommendation team has been assigned to assess the patient's sample and advise the patient according to the results classified by the CAD.
30
Using machine learning and an electronic tongue for discriminating saliva samples from oral cavity cancer patients and healthy individuals. Talanta 2022; 243:123327. [DOI: 10.1016/j.talanta.2022.123327] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 02/14/2022] [Accepted: 02/16/2022] [Indexed: 11/20/2022]
31
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
32
Shi F, Chen B, Cao Q, Wei Y, Zhou Q, Zhang R, Zhou Y, Yang W, Wang X, Fan R, Yang F, Chen Y, Li W, Gao Y, Shen D. Semi-Supervised Deep Transfer Learning for Benign-Malignant Diagnosis of Pulmonary Nodules in Chest CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:771-781. [PMID: 34705640 DOI: 10.1109/tmi.2021.3123572] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide. Accurately diagnosing the malignancy of suspected lung nodules is of paramount clinical importance. However, to date, pathologically-proven lung nodule datasets are largely limited and highly imbalanced between benign and malignant distributions. In this study, we proposed a Semi-supervised Deep Transfer Learning (SDTL) framework for benign-malignant pulmonary nodule diagnosis. First, we utilize a transfer learning strategy by adopting a pre-trained classification network that is used to differentiate pulmonary nodules from nodule-like tissues. Second, since the number of pathologically-proven samples is small, an iterated feature-matching-based semi-supervised method is proposed to take advantage of a large available dataset with no pathological results. Specifically, a similarity metric function is adopted in the network semantic representation space to gradually include a small subset of samples with no pathological results and iteratively optimize the classification network. In this study, a total of 3,038 pulmonary nodules (from 2,853 subjects) with pathologically-proven benign or malignant labels and 14,735 unlabeled nodules (from 4,391 subjects) were retrospectively collected. Experimental results demonstrate that our proposed SDTL framework achieves superior diagnosis performance, with accuracy = 88.3% and AUC = 91.0% on the main dataset, and accuracy = 74.5% and AUC = 79.5% on the independent testing dataset. Furthermore, an ablation study shows that transfer learning provides a 2% accuracy improvement, and semi-supervised learning contributes a further 2.9% accuracy improvement. These results indicate that the proposed classification network could provide an effective diagnostic tool for suspected lung nodules and may have promising application in clinical practice.
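The iterated feature-matching step described above (gradually promoting unlabeled nodules whose deep features resemble a labeled class, then retraining) can be sketched roughly as follows. This is a simplified reading of the method, not the authors' code: cosine similarity to class centroids stands in for their similarity metric, and all names and toy embeddings are illustrative.

```python
import numpy as np

def pseudo_label_round(feats_l, labels_l, feats_u, top_k=2):
    """One round of feature-matching pseudo-labeling: pick the unlabeled
    samples whose embeddings are closest (cosine similarity) to a labeled
    class centroid, so they can join the training set next iteration.
    Returns indices into feats_u and their pseudo-labels."""
    def unit(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    feats_l = unit(np.asarray(feats_l, float))
    feats_u = unit(np.asarray(feats_u, float))
    classes = np.unique(labels_l)
    centroids = unit(np.stack([feats_l[labels_l == c].mean(axis=0)
                               for c in classes]))
    sims = feats_u @ centroids.T            # unlabeled-vs-centroid cosine similarity
    best_class = classes[sims.argmax(axis=1)]
    confidence = sims.max(axis=1)
    chosen = np.argsort(confidence)[::-1][:top_k]   # most confident first
    return chosen, best_class[chosen]

# Toy benign/malignant embeddings: each unlabeled point sits near one
# centroid and receives the matching pseudo-label.
feats_l = np.array([[1.0, 0.0], [0.0, 1.0]])
labels_l = np.array([0, 1])
feats_u = np.array([[0.9, 0.1], [0.1, 0.9]])
idx, pseudo = pseudo_label_round(feats_l, labels_l, feats_u, top_k=2)
```

In the full scheme, the selected samples would be appended to the labeled pool and the classifier retrained, repeating until no confident candidates remain.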
33
Khanam N, Kumar R. Recent Applications of Artificial Intelligence in Early Cancer Detection. Curr Med Chem 2022; 29:4410-4435. [PMID: 35196970 DOI: 10.2174/0929867329666220222154733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 11/30/2021] [Accepted: 12/08/2021] [Indexed: 11/22/2022]
Abstract
Cancer is a deadly disease often caused by the accumulation of various genetic mutations and pathological alterations. The death rate can only be reduced when it is detected in the early stages because treatment of cancer when the tumor has not metastasized in many regions of the body is more effective. However, early cancer detection is fraught with difficulties. Advances in artificial intelligence (AI) have developed a new scope for efficient and early detection of such a fatal disease. AI algorithms have a remarkable ability to perform well on a variety of tasks that are presented or fed to the system. Numerous studies have produced machine learning and deep learning-assisted cancer prediction models to detect cancer from previously accessible data with better accuracy, sensitivity, and specificity. It has been observed that the accuracy of prediction models in classifying fed data as benign, malignant, or normal is improved by implementing efficient image processing techniques and data segmentation augmentation methodologies, along with advanced algorithms. In this review, recent AI-based models for the diagnosis of the most prevalent cancers in the breast, lung, brain, and skin have been analysed. Available AI techniques, data preparation, modeling processes, and performance assessments have been included in the review.
Affiliation(s)
- Nausheen Khanam, Rajnish Kumar: Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
34
Ouyang Z, Zhang P, Pan W, Li Q. Deep learning-based body part recognition algorithm for three-dimensional medical images. Med Phys 2022; 49:3067-3079. [PMID: 35157332 DOI: 10.1002/mp.15536] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 01/24/2022] [Accepted: 01/25/2022] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND The automatic recognition of human body parts in three-dimensional (3D) medical images is important in many clinical applications. However, methods presented in prior studies have mainly classified each two-dimensional (2D) slice independently rather than recognizing a batch of consecutive slices as a specific body part. PURPOSE In this study, we aim to develop a deep-learning-based method that automatically divides computed tomography (CT) and magnetic resonance imaging (MRI) scans into five consecutive body parts: head, neck, chest, abdomen, and pelvis. METHODS A deep learning framework was developed to recognize body parts in two stages. In the first, pre-classification stage, a convolutional neural network (CNN) using the GoogLeNet Inception v3 architecture and a long short-term memory (LSTM) network were combined to classify each 2D slice; the CNN extracted information from a single slice, whereas the LSTM exploited rich contextual information among consecutive slices. In the second, post-processing stage, the input scan was further partitioned into consecutive body parts by identifying the optimal boundaries between them based on the slice classification results of the first stage. To evaluate the performance of the proposed method, 662 CT and 1434 MRI scans were used. RESULTS Our method achieved very good performance in 2D slice classification compared with state-of-the-art methods, with overall classification accuracies of 97.3% and 98.2% for CT and MRI scans, respectively. Moreover, our method further divided whole scans into consecutive body parts with mean boundary errors of 8.9 mm and 3.5 mm for CT and MRI data, respectively. CONCLUSIONS The proposed method significantly improved slice classification accuracy compared with state-of-the-art methods, and further accurately divided CT and MRI scans into consecutive body parts based on the slice classification results. The developed method can be employed as an important step in various computer-aided diagnosis and medical image analysis schemes.
Affiliation(s)
- Zihui Ouyang, Peng Zhang, Qiang Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Weifan Pan: Zhejiang Taimei Medical Technology Co., Ltd, Jiaxing, Zhejiang, 314001, China
35
Cloud-Based Lung Tumor Detection and Stage Classification Using Deep Learning Techniques. BIOMED RESEARCH INTERNATIONAL 2022; 2022:4185835. [PMID: 35047635 PMCID: PMC8763490 DOI: 10.1155/2022/4185835] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/30/2021] [Accepted: 12/07/2021] [Indexed: 02/01/2023]
Abstract
Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing have recently become widely used in the healthcare sector, aiding radiologists in better decision-making. PET imaging, or positron emission tomography, is one of the most reliable approaches for a radiologist to diagnose many cancers, including lung tumours. In this work, we propose stage classification of lung tumours, a more challenging task in computer-aided diagnosis; a modified computer-aided diagnosis system is considered as a way to reduce heavy workloads and provide a second opinion to radiologists. We present a strategy for classifying and validating different stages of lung tumour progression, together with a deep neural model and cloud-based data collection for categorising phases of pulmonary illness. The proposed system, a Cloud-based Lung Tumor Detector and Stage Classifier (Cloud-LTDSC), is a hybrid technique for PET/CT images. Cloud-LTDSC first develops an active contour model for lung tumour segmentation, then a multilayer convolutional neural network (M-CNN) for classifying different stages of lung cancer is modelled and validated on standard benchmark images. The performance of the presented technique is evaluated using the benchmark LIDC-IDRI dataset of 50 low-dose lung CT DICOM images. Compared with existing techniques in the literature, the proposed method achieved good results on the evaluated performance metrics of accuracy, recall, and precision, producing superior outcomes on all of the applied dataset images. Furthermore, the experimental results achieve a lung tumour stage classification accuracy of 97%-99.1% (average 98.6%), which is significantly higher than the other existing techniques.
36
Efficacy prediction based on attribute and multi-source data collaborative for auxiliary medical system in developing countries. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06713-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
37
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. MULTIMEDIA SYSTEMS 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting this information to the fullest and using it to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements to radiologists and other specialists analysing these images. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical imaging modalities, providing a critical review of recent developments in this direction. The paper is organised to convey the significant traits of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection) commonly used for clinical purposes at different anatomical sites, and we cover the main key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be at the core of medical image analysis. We conclude by addressing research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef, Gaurav Gupta: Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Nabhan Yousef: Electronics and Communication Engineering, Marwadi University, Rajkot, Gujrat, India
- Manju Khari: Jawaharlal Nehru University, New Delhi, India
38
Yin C, Wang S, Pan D. Computed Tomography Image Characteristics before and after Interventional Treatment of Children's Lymphangioma under Artificial Intelligence Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:2673013. [PMID: 34925537 PMCID: PMC8677374 DOI: 10.1155/2021/2673013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 11/09/2021] [Indexed: 11/17/2022]
Abstract
An artificial intelligence algorithm was used to analyze the characteristics of computed tomography (CT) images before and after interventional treatment of children's lymphangioma. A retrospective analysis was performed on 30 children with lymphangioma recruited from the hospital as study subjects. Ultrasound-guided bleomycin interventional therapy was adopted, and the CT scans were analysed with a convolutional neural network (CNN). CT imaging-related indicators before and after interventional therapy were measured and their features analysed. The CNN algorithm segmented the tumour image more clearly and accurately, and its Dice similarity coefficient (DSC) was 0.9, indicating a high degree of agreement. Clinically, the cured children's lesions disappeared, the skin surface returned to normal colour, and treatment proceeded smoothly; in the two cases with effective treatment, the cystic mass at the lesion site was significantly smaller and the nodules disappeared. CT images before interventional therapy showed that lymphangiomas in children were more common in the neck. The cystic masses at the lesion sites varied in diameter and size; most were roughly round or irregular, with uniform density distribution, clear boundaries, and solid cysts, showing differing degrees of compression and spread to the surrounding structures. Most were polycystic, and a few were single cystic. After interventional treatment, CT images showed that the lymphangiomas of the 27 cured children completely disappeared, lymphangiomas were significantly reduced in the two children with effective treatment, and edema around the tumour also decreased significantly. Patients who did not respond to the treatment received interventional treatment again, after which the tumours disappeared completely on CT imaging. No recurrence or new occurrence was found at three-month follow-up. The total effective rate of interventional therapy for lymphangioma in children was 96.67%. The CNN algorithm can effectively compare CT image features before and after interventional treatment for children's lymphangioma, suggesting that artificial-intelligence-aided CT imaging examination can help guide physicians in the accurate treatment of children's lymphangioma.
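The Dice similarity coefficient (DSC = 0.9) cited above measures the overlap between a predicted segmentation mask and a reference mask. A minimal NumPy sketch (the masks here are made-up examples):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|); defined as 1.0 when
    both masks are empty."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((4, 4)); a[1:3, 1:3] = 1                  # 4 foreground pixels
b = np.zeros((4, 4)); b[1:3, 1:3] = 1; b[1, 1] = 0; b[0, 0] = 1
print(dice_coefficient(a, b))                          # 2*3 / (4+4) = 0.75
```

A DSC of 0.9 therefore means the predicted and reference tumour masks share 90% of their combined foreground, which is generally regarded as strong agreement for organ-scale structures.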
Affiliation(s)
- Chuangao Yin, Song Wang, Deng Pan: Department of Image, Anhui Children's Hospital, Hefei, 230051, Anhui, China
39
Faruqui N, Yousuf MA, Whaiduzzaman M, Azad AKM, Barros A, Moni MA. LungNet: A hybrid deep-CNN model for lung cancer diagnosis using CT and wearable sensor-based medical IoT data. Comput Biol Med 2021; 139:104961. [PMID: 34741906 DOI: 10.1016/j.compbiomed.2021.104961] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 10/13/2021] [Accepted: 10/17/2021] [Indexed: 12/25/2022]
Abstract
Lung cancer, also known as pulmonary cancer, is one of the deadliest cancers, yet it is curable if detected at an early stage. At present, the ambiguous features of lung cancer nodules make computer-aided automatic diagnosis a challenging task. To alleviate this, we present LungNet, a novel hybrid deep convolutional neural network-based model trained with CT scans and wearable sensor-based medical IoT (MIoT) data. LungNet consists of a unique 22-layer Convolutional Neural Network (CNN) that combines latent features learned from CT scan images and MIoT data to enhance the diagnostic accuracy of the system. Operated from a centralized server, the network has been trained with a balanced dataset of 525,000 images and can classify lung cancer into five classes with high accuracy (96.81%) and a low false positive rate (3.35%), outperforming similar CNN-based classifiers. Moreover, it classifies stage-1 and stage-2 lung cancers into 1A, 1B, 2A and 2B sub-classes with 91.6% accuracy and a false positive rate of 7.25%. High predictive capability accompanied by sub-stage classification renders LungNet a promising prospect for developing CNN-based automatic lung cancer diagnosis systems.
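At its simplest, the hybrid fusion LungNet describes (combining latent features learned from CT images with wearable-sensor MIoT data before classification) comes down to joining the two feature vectors so a single classifier head sees both modalities. The dimensions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fuse_features(cnn_latent, miot_features):
    """Concatenate image-derived latent vectors with sensor-derived
    features along the last axis; a shared classifier head would then
    be trained on the joint representation."""
    return np.concatenate([cnn_latent, miot_features], axis=-1)

latent = np.random.rand(8, 128)   # batch of CT-scan embeddings (assumed size)
vitals = np.random.rand(8, 16)    # per-patient MIoT sensor summary (assumed)
joint = fuse_features(latent, vitals)
print(joint.shape)                # (8, 144)
```

In the actual model the two branches and the head are trained jointly end-to-end; this sketch only shows the joining step.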
Affiliation(s)
- Nuruzzaman Faruqui, Mohammad Abu Yousuf: Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, 1342, Bangladesh
- Md Whaiduzzaman: Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, 1342, Bangladesh; Queensland University of Technology, 2 George St, Brisbane City, QLD, 4000, Australia
- A K M Azad: Faculty of Science, Engineering & Technology, Swinburne University of Technology Sydney, Australia
- Alistair Barros: Queensland University of Technology, 2 George St, Brisbane City, QLD, 4000, Australia
- Mohammad Ali Moni: School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia
40
Kundu R, Singh PK, Mirjalili S, Sarkar R. COVID-19 detection from lung CT-Scans using a fuzzy integral-based CNN ensemble. Comput Biol Med 2021; 138:104895. [PMID: 34649147 PMCID: PMC8483997 DOI: 10.1016/j.compbiomed.2021.104895] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 09/19/2021] [Accepted: 09/22/2021] [Indexed: 12/16/2022]
Abstract
The COVID-19 pandemic has caused the collapse of public healthcare systems and severely damaged the world economy. The SARS-CoV-2 virus, also known as the coronavirus, led to community spread, causing the death of more than a million people worldwide. The primary reason for the uncontrolled spread of the virus is the lack of provision for population-wide screening. The apparatus for RT-PCR-based COVID-19 detection is scarce, the testing process takes 6-9 hours, and the test is not satisfactorily sensitive (only 71% sensitive). Hence, computer-aided detection techniques based on deep learning can be used in such a scenario with other modalities, such as chest CT-scan images, for more accurate and sensitive screening. In this paper, we propose a method that uses a Sugeno fuzzy integral ensemble of four pre-trained deep learning models, namely VGG-11, GoogLeNet, SqueezeNet v1.1 and Wide ResNet-50-2, to classify chest CT-scan images into COVID and Non-COVID categories. The proposed framework has been tested on a publicly available dataset, achieving 98.93% accuracy and 98.93% sensitivity. The model outperforms state-of-the-art methods on the same dataset and proves to be a reliable COVID-19 detector. The relevant source code for the proposed approach can be found at: https://github.com/Rohit-Kundu/Fuzzy-Integral-Covid-Detection.
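The Sugeno fuzzy integral used for the fusion takes, per class, the maximum over classifiers of min(i-th largest confidence, measure of the i most confident models). A simplified sketch follows: the per-model densities and the normalised additive measure are assumptions for illustration; the paper constructs a proper fuzzy measure.

```python
import numpy as np

def sugeno_integral(scores, densities):
    """Sugeno fuzzy integral of one class's classifier confidences.
    scores: per-classifier confidences in [0, 1].
    densities: per-classifier importance, combined here by a simple
    normalised cumulative sum (a stand-in for a lambda-fuzzy measure)."""
    scores = np.asarray(scores, float)
    densities = np.asarray(densities, float)
    order = np.argsort(scores)[::-1]      # sort confidences descending
    h = scores[order]
    g = np.cumsum(densities[order])       # measure of the top-i coalition
    g = g / g[-1]                         # normalise so g(all models) = 1
    return float(np.max(np.minimum(h, g)))

# Fuse four model confidences for one class (numbers are illustrative):
fused = sugeno_integral([0.9, 0.6, 0.8, 0.4], [0.3, 0.2, 0.3, 0.2])
```

Running this per class and taking the argmax over classes yields the ensemble's prediction; unlike plain averaging, the fuzzy measure lets more reliable models dominate the fusion.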
Affiliation(s)
- Rohit Kundu: Department of Electrical Engineering, Jadavpur University, 188, Raja S. C. Mallick Road, Kolkata-700032, West Bengal, India
- Pawan Kumar Singh: Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata-700106, West Bengal, India
- Seyedali Mirjalili (corresponding author): Centre for Artificial Intelligence Research and Optimization, Torrens University, Australia; Yonsei Frontier Lab, Yonsei University, South Korea
- Ram Sarkar: Department of Computer Science & Engineering, Jadavpur University, 188, Raja S. C. Mallick Road, Kolkata-700032, West Bengal, India
41
Li P, Kong X, Li J, Zhu G, Lu X, Shen P, Shah SAA, Bennamoun M, Hua T. A Dataset of Pulmonary Lesions With Multiple-Level Attributes and Fine Contours. Front Digit Health 2021; 2:609349. [PMID: 34713070 PMCID: PMC8521952 DOI: 10.3389/fdgth.2020.609349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Accepted: 12/09/2020] [Indexed: 11/13/2022] Open
Abstract
Lung cancer is a life-threatening disease and its diagnosis is of great significance. Data scarcity and the unavailability of datasets are major bottlenecks in lung cancer research. In this paper, we introduce a dataset of pulmonary lesions for designing computer-aided diagnosis (CAD) systems. The dataset has fine contour annotations and nine attribute annotations. We define the structure of the dataset in detail, then discuss the relationship between the attributes and pathology and the correlation among the nine attributes using the chi-square test. To demonstrate the contribution of our dataset to computer-aided system design, we define four tasks that can be developed using it. We then use the dataset to model multi-attribute classification tasks and discuss the performance of the classification model in 2D, 2.5D, and 3D input modes. To improve performance, we introduce two attention mechanisms and verify their principles through visualization. Experimental results show the relationship between different models and different levels of attributes.
Affiliation(s)
- Ping Li: Shanghai BNC, Shanghai, China
- Xiangwen Kong, Johann Li, Guangming Zhu: Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Syed Afaq Ali Shah: College of Science, Health, Engineering and Education, Murdoch University, Perth, WA, Australia
- Mohammed Bennamoun: School of Computer Science and Software Engineering, The University of Western Australia, Perth, WA, Australia
- Tao Hua: PET Center, Huashan Hospital, Fudan University, Shanghai, China
42
Senthil K, Vidyaathulasiraman. Ovarian cancer diagnosis using pretrained mask CNN-based segmentation with VGG-19 architecture. BIO-ALGORITHMS AND MED-SYSTEMS 2021. [DOI: 10.1515/bams-2021-0098] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Objectives
This paper proposed a neural network-based segmentation model using a pre-trained Mask Convolutional Neural Network (CNN) with the VGG-19 architecture. Since the ovary is very small tissue, it needs to be segmented with high accuracy from the annotated ovary images collected in the dataset. The model is proposed to predict and suppress the illness early and to diagnose it correctly, helping the doctor save the patient's life.
Methods
The paper uses neural-network-based segmentation with a pre-trained Mask CNN integrated with the VGG-19 architecture to enhance ovarian cancer prediction and diagnosis.
Results
The proposed segmentation using a hybrid CNN provides higher accuracy when compared with logistic regression, Gaussian naïve Bayes, random forest, and Support Vector Machine (SVM) classifiers.
Affiliation(s)
- Kavitha Senthil
- Department of Computer Science, Periyar University, Salem, India
- Vidyaathulasiraman
- Department of Computer Science, Government Arts and Science College for Women, Bargur, India
43
Kundu R, Singh PK, Ferrara M, Ahmadian A, Sarkar R. ET-NET: an ensemble of transfer learning models for prediction of COVID-19 infection through chest CT-scan images. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 81:31-50. [PMID: 34483709 PMCID: PMC8405348 DOI: 10.1007/s11042-021-11319-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Revised: 07/07/2021] [Accepted: 07/21/2021] [Indexed: 05/02/2023]
Abstract
The COVID-19 virus has caused a worldwide pandemic, affecting numerous individuals and accounting for more than a million deaths. Countries around the world had to declare complete lockdowns when the coronavirus led to community spread. Although the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test is the gold-standard test for COVID-19 screening, it is not satisfactorily accurate or sensitive. Computed Tomography (CT) scan images, on the other hand, are much more sensitive and can be suitable for COVID-19 detection. To this end, in this paper, we develop a fully automated method for fast COVID-19 screening from chest CT-scan images employing deep learning techniques. For this supervised image classification problem, a bootstrap aggregating (bagging) ensemble of three transfer learning models, namely Inception v3, ResNet34, and DenseNet201, has been used to boost the performance of the individual models. The proposed framework, called ET-NET, has been evaluated on a publicly available dataset, achieving 97.81 ± 0.53% accuracy, 97.77 ± 0.58% precision, 97.81 ± 0.52% sensitivity, and 97.77 ± 0.57% specificity on 5-fold cross-validation, outperforming the state-of-the-art method on the same dataset by 1.56%. The relevant codes for the proposed approach are accessible at https://github.com/Rohit-Kundu/ET-NET_Covid-Detection.
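At prediction time, an ensemble such as ET-NET combines the base models' class probabilities; a minimal soft-voting sketch, with invented softmax vectors standing in for the Inception v3, ResNet34, and DenseNet201 outputs on one CT image:

```python
# Minimal sketch of probability averaging across ensemble members. The three
# "model outputs" are made-up softmax vectors; a real pipeline would obtain
# them from the trained transfer-learning models.

def ensemble_predict(prob_lists):
    """Average class probabilities across models and return (argmax class, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

preds = [
    [0.30, 0.70],  # model 1: P(non-COVID), P(COVID)
    [0.60, 0.40],  # model 2
    [0.20, 0.80],  # model 3
]
label, avg = ensemble_predict(preds)
print(label)  # 1
```

Even though model 2 alone votes for class 0, the averaged probabilities favor class 1, which is the point of aggregating: individual errors are smoothed out.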
Affiliation(s)
- Rohit Kundu
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032 India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Kolkata, 700106 India
- Massimiliano Ferrara
- Department of Law, Economics and Human Sciences & Decisions Lab, Mediterranea University of Reggio Calabria, Reggio Calabria, 89125 Italy
- ICRIOS - The Invernizzi Centre for Research in Innovation, Organization, Strategy and Entrepreneurship, Bocconi University - Department of Management and Technology, Via Sarfatti 25, Milano, 20136 MI Italy
- Ali Ahmadian
- Institute of IR 4.0, The National University of Malaysia, Bangi, 43600 UKM Selangor Malaysia
- Department of Mathematics, Near East University, Nicosia, TRNC, Mersin 10 Turkey
- Institute for Mathematical Research, Universiti Putra Malaysia, Seri Kembangan, Selangor 43400 UPM Malaysia
- Ram Sarkar
- Department of Computer Science & Engineering, Jadavpur University, Kolkata, 700032 India
44
Menopausal Women's Health Care Method Based on Computer Nursing Diagnosis Intelligent System. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:4963361. [PMID: 34367537 PMCID: PMC8346312 DOI: 10.1155/2021/4963361] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 06/26/2021] [Indexed: 11/17/2022]
Abstract
Considering the current speed of feature extraction and the recognition performance of intelligent diagnosis of menopausal women's health-care behavior, this paper proposes using a cross-layer convolutional neural network to extract behavioral features autonomously and a support vector machine multiclass classifier to classify the behaviors. Compared with feature images extracted by traditional methods, the behavioral features extracted here are specific to individual menopausal women, carry better semantic information, and have stronger descriptive power in both the time and space domains. Using Matlab and the database established in this paper, the feature-extraction time, test classification time, and final recognition accuracy of the cross-layer CNN-SVM model were compared with those of an ordinary convolutional neural network, showing that the cross-layer model preserves feature-extraction speed. This demonstrates that the method can be applied in a behavioral intelligent diagnosis system for nursing menopausal women and has good practical value. The paper also designs an intelligent monitoring system for a home care bed that automatically detects the bed's posture and can change it either under personnel control or automatically according to preset settings. The system additionally monitors the physical condition of the person being cared for, detecting heart rate, blood oxygen, and other physiological indicators of the bedridden person, and provides a remote diagnosis function that lets nursing staff remotely view the current state of the nursing bed and the person's physical condition. In testing, the system works stably, improves the automation and safety of nursing-bed control, and enriches the functions of the nursing bed.
45
Yan P, Tong AN, Nie XL, Ma MG. Assessment of safety margin after microwave ablation of stage I NSCLC with three-dimensional reconstruction technique using CT imaging. BMC Med Imaging 2021; 21:96. [PMID: 34098894 PMCID: PMC8185913 DOI: 10.1186/s12880-021-00626-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/27/2021] [Indexed: 12/24/2022] Open
Abstract
Objective
To assess the ablative margin of microwave ablation (MWA) for stage I non-small cell lung cancer (NSCLC) using a three-dimensional (3D) reconstruction technique.
Materials and methods
We retrospectively analyzed 36 patients with stage I NSCLC lesions undergoing MWA and analyzed the relationship between the minimal ablative margin and the local tumor progression (LTP) interval, the distant metastasis interval, and disease-free survival (DFS). The minimal ablative margin was measured using a fusion 3D computed tomography reconstruction technique.
Results
Univariate and multivariate analyses indicated that tumor size (hazard ratio [HR] = 1.91, P < 0.01; HR = 2.41, P = 0.01) and minimal ablative margin (HR = 0.13, P < 0.01; HR = 0.11, P < 0.01) were independent prognostic factors for the LTP interval. Tumor size (HR = 1.96, P < 0.01; HR = 2.35, P < 0.01) and minimal ablative margin (HR = 0.17, P < 0.01; HR = 0.13, P < 0.01) were independent prognostic factors for DFS by univariate and multivariate analyses. In the group with a minimal ablative margin < 5 mm, the 1-year and 2-year local progression-free rates were 35.7% and 15.9%, respectively; the 1-year and 2-year distant metastasis-free rates were both 75.6%; and the 1-year and 2-year disease-free survival rates were 16.7% and 11.1%, respectively. In the group with a minimal ablative margin ≥ 5 mm, the 1-year and 2-year local progression-free rates were 88.9% and 69.4%, respectively; the 1-year and 2-year distant metastasis-free rates were 94.4% and 86.6%, respectively; and the 1-year and 2-year disease-free survival rates were 88.9% and 63.7%, respectively. The feasibility of 3D quantitative analysis of the ablative margins after MWA for NSCLC was validated.
Conclusions
The minimal ablative margin is an independent factor in NSCLC relapse after MWA, and the fusion 3D reconstruction technique can feasibly assess the minimal ablative margin.
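Survival fractions such as the 1- and 2-year progression-free rates above are conventionally obtained with a Kaplan-Meier estimator; a minimal sketch with invented (time, event) pairs, not the study's data:

```python
# Minimal Kaplan-Meier estimator, illustrating how progression-free rates of
# the kind reported above are typically computed. event = 1 means progression
# observed at that time; event = 0 means the patient was censored.

def kaplan_meier(data):
    """Return [(time, survival probability)] at each distinct event time."""
    data = sorted(data)
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        events = sum(1 for time, ev in data if time == t and ev == 1)
        removed = sum(1 for time, ev in data if time == t)
        if events:
            surv *= 1 - events / n_at_risk  # step down at each event time
            curve.append((t, surv))
        n_at_risk -= removed  # censored patients leave the risk set silently
        i += removed
    return curve

# e.g. months to local progression for five hypothetical patients
km = kaplan_meier([(6, 1), (12, 1), (18, 0), (24, 1), (30, 0)])
print([(t, round(s, 3)) for t, s in km])  # [(6, 0.8), (12, 0.6), (24, 0.3)]
```

Note how the censored patient at 18 months lowers the denominator for later event times without producing a step of its own; that is what distinguishes Kaplan-Meier from a naive fraction.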
Supplementary Information The online version contains supplementary material available at 10.1186/s12880-021-00626-z.
Affiliation(s)
- Peng Yan
- Department of Oncology, Jinan Central Hospital Affiliated to Shandong University, Jinan, People's Republic of China
- An-Na Tong
- Department of Radiation, The 960th Hospital of the PLA Joint Logistics Support Force, Jinan, People's Republic of China
- Xiu-Li Nie
- Department of Radiology, Jinan Central Hospital Affiliated to Shandong University, Jinan, People's Republic of China
- Min-Ge Ma
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
46
Peters AA, Decasper A, Munz J, Klaus J, Loebelenz LI, Hoffner MKM, Hourscht C, Heverhagen JT, Christe A, Ebner L. Performance of an AI based CAD system in solid lung nodule detection on chest phantom radiographs compared to radiology residents and fellow radiologists. J Thorac Dis 2021; 13:2728-2737. [PMID: 34164165 PMCID: PMC8182550 DOI: 10.21037/jtd-20-3522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Background
Despite its decreasing relevance in lung cancer screening, chest radiography is still frequently applied to assess for lung nodules. The aim of the current study was to determine the accuracy of a commercial AI-based CAD system for detecting artificial lung nodules on chest radiograph phantoms and to compare its performance to that of radiologists in training.
Methods
Sixty-one anthropomorphic lung phantoms were equipped with 140 randomly deployed artificial lung nodules (5, 8, 10, and 12 mm). A random generator chose nodule size and distribution before a two-plane chest X-ray (CXR) of each phantom was performed. Seven blinded radiologists in training (2 fellows, 5 residents) with 2 to 5 years of experience in chest imaging read the CXRs independently on a PACS workstation. Results of the software were recorded separately. The McNemar test was used to compare each radiologist's results to the AI computer-aided-diagnostic (CAD) software in per-nodule and per-phantom approaches, and Fleiss' kappa was applied for inter-rater and intra-observer agreement.
Results
Five of seven readers showed significantly higher accuracy than the AI algorithm. The pooled accuracies of the radiologists in the nodule-based and phantom-based approaches were 0.59 and 0.82, respectively, whereas the AI-CAD showed accuracies of 0.47 and 0.67, respectively. The radiologists' average sensitivity for 10 and 12 mm nodules was 0.80, dropping to 0.66 for 8 mm (P=0.04) and 0.14 for 5 mm nodules (P<0.001). Both the radiologists and the algorithm demonstrated significantly higher sensitivity for peripheral than for central nodules (0.66 vs. 0.48; P=0.004 and 0.64 vs. 0.094; P=0.025, respectively). Inter-rater agreement was moderate among the radiologists and between radiologists and the AI-CAD software (K'=0.58±0.13 and 0.51±0.1). Intra-observer agreement was calculated for two readers and was almost perfect for the phantom-based approach (K'=0.85±0.05; K'=0.80±0.02) and substantial to almost perfect for the nodule-based approach (K'=0.83±0.02; K'=0.78±0.02).
Conclusions
As a primary reader, the AI-based CAD system performs inferiorly to radiologists in lung nodule detection on chest phantoms. Chest radiography has reasonable accuracy for lung nodule detection when read by a radiologist alone and may be further optimized by an AI-based CAD system as a second reader.
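The kappa values above are chance-corrected agreement statistics. Cohen's kappa for two readers (Fleiss' kappa generalizes the idea to all seven) can be sketched in a few lines; the nodule verdicts below are invented for illustration:

```python
# Cohen's kappa for two readers, a simplified stand-in for the Fleiss-kappa
# agreement analysis above. Ratings: 1 = nodule detected, 0 = missed.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equally long rating lists."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    labels = set(a) | set(b)
    # expected agreement if both readers rated independently at their own base rates
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

reader_1 = [1, 1, 0, 0]
reader_2 = [1, 0, 0, 0]
print(cohens_kappa(reader_1, reader_2))  # 0.5
```

Raw agreement here is 0.75, but since chance alone would yield 0.5, kappa corrects it down to 0.5, which is why kappa is preferred over plain percent agreement for reader studies.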
Affiliation(s)
- Alan A Peters
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Amanda Decasper
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jaro Munz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jeremias Klaus
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laura I Loebelenz
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maximilian Korbinian Michael Hoffner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cynthia Hourscht
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Johannes T Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department of BioMedical Research, Experimental Radiology, University of Bern, Bern, Switzerland
- Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
47
Rocha GC, Paiva HM, Sanches DG, Fiks D, Castro RM, Silva LFAE. Information system for epidemic control: a computational solution addressing successful experiences and main challenges. LIBRARY HI TECH 2021. [DOI: 10.1108/lht-11-2020-0276] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Purpose
The SARS-CoV-2 pandemic has caused a major impact on worldwide public health and economics. The lessons learned from successful attempts to contain the pandemic's escalation revealed that the wise use of contact tracing and information systems can greatly help the containment of any contagious disease. In this context, this paper investigates other research in this domain as well as the main issues related to the practical implementation of such systems, and specifies a technical solution.
Design/methodology/approach
The proposed solution is based on automatic identification of relevant contacts between infected or suspected people and susceptible people; inference of contamination risk based on symptom history, user navigation records, and contact information; real-time georeferenced information on the population density of infected or suspected people; and automatic individual social-distancing recommendations calculated from the individual contamination risk and the risk of worsening clinical condition.
Findings
The solution was specified, prototyped, and evaluated by potential users and health authorities. It has the potential to become a reference for coordinating the efforts of health authorities and the population in epidemic control.
Originality/value
This paper proposes an original information system for epidemic control, which was applied to the SARS-CoV-2 pandemic and could easily be extended to other epidemics.
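A toy illustration of the risk-inference idea the abstract describes, combining symptom, contact, and density signals into one score that drives a distancing recommendation; the weights, thresholds, and function names are entirely hypothetical and not from the paper:

```python
# Hypothetical sketch of combining symptom history, contact exposure, and
# local density into a single contamination-risk score. All weights and
# thresholds below are invented for illustration only.

def contamination_risk(symptom_score, contact_exposure, density_factor):
    """Weighted risk in [0, 1]; each input is assumed normalized to [0, 1]."""
    w_symptoms, w_contacts, w_density = 0.5, 0.3, 0.2
    risk = (w_symptoms * symptom_score
            + w_contacts * contact_exposure
            + w_density * density_factor)
    return min(1.0, risk)

def distancing_advice(risk):
    """Map the score to a coarse social-distancing recommendation."""
    if risk >= 0.7:
        return "isolate"
    if risk >= 0.4:
        return "reduce contact"
    return "monitor"

r = contamination_risk(0.8, 0.5, 0.5)
print(distancing_advice(r))  # reduce contact
```

A deployed system would of course learn or calibrate such weights from epidemiological data rather than fix them by hand.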
48
3D CNN with Visual Insights for Early Detection of Lung Cancer Using Gradient-Weighted Class Activation. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:6695518. [PMID: 33777347 PMCID: PMC7979307 DOI: 10.1155/2021/6695518] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 02/09/2021] [Accepted: 02/23/2021] [Indexed: 11/17/2022]
Abstract
The 3D convolutional neural network is able to make use of the full nonlinear 3D context of lung nodules in DICOM (Digital Imaging and Communications in Medicine) images, and Gradient Class Activation has proven useful for tailoring classification tasks, localizing fine-grained features, and visually explaining the model's internal workings. Gradient-weighted class activation plays a crucial role in clinicians' and radiologists' trust in and adoption of the model: practitioners rely not only on a model that provides high precision but also on one that earns radiologists' confidence. In this paper, we therefore explore lung nodule classification using an improvised 3D AlexNet with a lightweight architecture. Our network employs the full multiview network strategy. We conducted binary classification (benign vs. malignant) on computed tomography (CT) images from the LUNA16 dataset, which is built on the Lung Image Database Consortium and Image Database Resource Initiative. The results were obtained through 10-fold cross-validation. Experimental results show that the proposed lightweight architecture achieves a superior classification accuracy of 97.17% on the LUNA16 dataset compared with existing classification algorithms, and performs well on low-dose CT scan images as well.
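The Grad-CAM mechanism the paper relies on reduces to a weighted sum of activation channels, with channel weights given by globally averaged gradients followed by a ReLU; a minimal sketch on tiny made-up feature maps (a real implementation would take these from a network's last convolutional layer):

```python
# Core of Gradient-weighted Class Activation Mapping (Grad-CAM):
#   alpha_k = spatial mean of the gradient for channel k
#   CAM     = ReLU( sum_k alpha_k * A_k )
# The 2x2 activation and gradient maps below are invented for illustration.

def grad_cam(activations, gradients):
    """activations, gradients: [channel][row][col] lists of equal shape."""
    n_cells = len(activations[0]) * len(activations[0][0])
    # alpha_k: global-average-pooled gradient per channel
    weights = [sum(sum(row) for row in g) / n_cells for g in gradients]
    h, w = len(activations[0]), len(activations[0][0])
    return [[max(0.0, sum(weights[k] * activations[k][i][j]
                          for k in range(len(activations))))
             for j in range(w)] for i in range(h)]

acts = [[[2.0, 0.0], [0.0, 4.0]],       # channel 0 activations
        [[0.0, 1.0], [2.0, 0.0]]]       # channel 1 activations
grads = [[[0.25, 0.25], [0.25, 0.25]],  # channel 0 -> weight 0.25
         [[-0.5, -0.5], [-0.5, -0.5]]]  # channel 1 -> weight -0.5
print(grad_cam(acts, grads))  # [[0.5, 0.0], [0.0, 1.0]]
```

The ReLU zeroes out locations that argue against the target class, so the resulting heat map highlights only the regions that support the prediction, which is what makes it useful as a visual explanation for radiologists.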
49
Abstract
Lung cancer is one of the most common diseases among humans and a major cause of growing mortality. Medical experts believe that diagnosing lung cancer in its early phase can reduce deaths, with lung nodules revealed through computed tomography (CT) screening. Examining the vast number of CT images can reduce the risk; however, CT scans incorporate a tremendous amount of information about nodules, and the increasing number of images makes their accurate assessment a very challenging task for radiologists. Recently, various methods based on handcrafted and learned features have evolved to assist radiologists. In this paper, we review promising approaches developed in computer-aided diagnosis (CAD) systems for detecting and classifying nodules through the analysis of CT images, providing assistance to radiologists, and present a comprehensive analysis of the different methods.
Affiliation(s)
- Shailesh Kumar Thakur
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Dhirendra Pratap Singh
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Jaytrilok Choudhary
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
50
Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA. A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. SENSORS 2021; 21:s21030748. [PMID: 33499364 PMCID: PMC7865416 DOI: 10.3390/s21030748] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/10/2021] [Accepted: 01/18/2021] [Indexed: 12/19/2022]
Abstract
The field of medicine and healthcare has attained revolutionary advancements in the last forty years. Within this period, the actual reasons behind numerous diseases were unveiled, novel diagnostic methods were designed, and new medicines were developed. Even after all these achievements, diseases like cancer continue to haunt us, since we are still vulnerable to them. Cancer is the second leading cause of death globally; about one in every six people dies of it. Among the many types of cancer, the lung and colon variants are the most common and deadliest. Together, they account for more than 25% of all cancer cases. However, identifying the disease at an early stage significantly improves the chances of survival. Cancer diagnosis can be automated using the potential of Artificial Intelligence (AI), which allows us to assess more cases in less time and at lower cost. With the help of modern Deep Learning (DL) and Digital Image Processing (DIP) techniques, this paper describes a classification framework to differentiate among five types of lung and colon tissues (two benign and three malignant) by analyzing their histopathological images. The acquired results show that the proposed framework can identify cancer tissues with up to 96.33% accuracy. Implementing this model will help medical professionals develop an automatic and reliable system capable of identifying various types of lung and colon cancers.
Affiliation(s)
- Mehedi Masud
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Niloy Sikder
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Abdullah-Al Nahid
- Electronics and Communication Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Anupam Kumar Bairagi
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Mohammed A. AlZain
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia