1
Li L, Yang J, Por LY, Khan MS, Hamdaoui R, Hussain L, Iqbal Z, Rotaru IM, Dobrotă D, Aldrdery M, Omar A. Enhancing lung cancer detection through hybrid features and machine learning hyperparameters optimization techniques. Heliyon 2024; 10:e26192. [PMID: 38404820; PMCID: PMC10884486; DOI: 10.1016/j.heliyon.2024.e26192]
Abstract
Machine learning offers significant potential for lung cancer detection, enabling early diagnosis and potentially improving patient outcomes. Feature extraction remains a crucial challenge in this domain, and combining the most relevant features can further enhance detection accuracy. This study employed a hybrid feature extraction approach that integrates Gray-level co-occurrence matrix (GLCM) and Haralick texture features with features learned by an autoencoder. These features were subsequently fed into supervised machine learning methods. Support Vector Machine (SVM) classifiers with Radial Basis Function (RBF) and Gaussian kernels achieved perfect performance measures, while the polynomial-kernel SVM produced an accuracy of 99.89% when utilizing the combined GLCM, Haralick, and autoencoder features. Using GLCM with Haralick features alone, the SVM Gaussian achieved an accuracy of 99.56% and the SVM RBF achieved 99.35%. These results demonstrate the potential of the proposed approach for developing improved diagnostic and prognostic systems for lung cancer treatment planning and decision-making.
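The texture half of this hybrid pipeline is compact enough to sketch. Below is a minimal NumPy illustration of a symmetric, normalized GLCM and four classic Haralick statistics (contrast, energy, homogeneity, correlation); the 8-level quantization, single pixel offset, and function names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    m += m.T                      # make the matrix symmetric
    return m / m.sum()            # normalize to a joint probability

def haralick_features(p):
    """Four classic Haralick statistics from a normalized GLCM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    return np.array([contrast, energy, homogeneity, correlation])
```

In the paper's pipeline, vectors like these would be concatenated with autoencoder features before the SVM stage.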
Affiliation(s)
- Liangyu Li
- Center for Software Technology and Management, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor, Malaysia
- Health Informatics Laboratory, Cancer Research Institute, Chifeng Cancer Hospital (Second Affiliated Hospital of Chifeng University), Medical Department, Chifeng University, Chifeng City, Inner Mongolia Autonomous Region, 024000, China
- Jing Yang
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Lip Yee Por
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Mohammad Shahbaz Khan
- Children's National Hospital, 111 Michigan Ave NW, Washington, DC, 20010, United States
- Rim Hamdaoui
- Department of Computer Science, College of Science and Human Studies Dawadmi, Shaqra University, Shaqra, Riyadh, Saudi Arabia
- Lal Hussain
- Department of Computer Science and Information Technology, King Abdullah Campus Chatter Kalas, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
- Department of Computer Science and Information Technology, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan
- Zahoor Iqbal
- School of Computer Science and Technology, Zhejiang Normal University, Jinhua, 321004, China
- Ionela Magdalena Rotaru
- Department of Industrial Engineering and Management, Lucian Blaga University of Sibiu, Bulevardul Victoriei 10, Sibiu, 550024, Romania
- Dan Dobrotă
- Faculty of Engineering, Lucian Blaga University of Sibiu, Bulevardul Victoriei 10, Sibiu, 550024, Romania
- Moutaz Aldrdery
- Department of Chemical Engineering, College of Engineering, King Khalid University, Abha, 61411, Saudi Arabia
- Abdulfattah Omar
- Department of English, College of Science & Humanities, Prince Sattam Bin Abdulaziz University, Saudi Arabia
2
Montaha S, Azam S, Bhuiyan MRI, Chowa SS, Mukta MSH, Jonkman M. Malignancy pattern analysis of breast ultrasound images using clinical features and a graph convolutional network. Digit Health 2024; 10:20552076241251660. [PMID: 38817843; PMCID: PMC11138200; DOI: 10.1177/20552076241251660]
Abstract
Objective: Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty and uniqueness of the study lie in the approach of considering medical features based on the diagnosis of radiologists. Methods: Using clinical markers, a graph is generated where each feature is represented by a node, and the connection between them is represented by an edge derived through Pearson's correlation method. A graph convolutional network (GCN) model is proposed to classify breast tumors into benign and malignant using the graph data. Several statistical tests are performed to assess the importance of the proposed features. The performance of the proposed GCN model is improved by experimenting with different layer configurations and hyper-parameter settings. Results: The proposed model achieves a test accuracy of 98.73%. Its performance is compared with a graph attention network, a one-dimensional convolutional neural network, five transfer learning models, ten machine learning models, and three ensemble learning models. The performance of the model was further assessed with three supplementary breast cancer ultrasound image datasets, where the accuracies are 91.03%, 94.37%, and 89.62% for Dataset A, Dataset B, and Dataset C (combining Dataset A and Dataset B), respectively. Overfitting is assessed through k-fold cross-validation. Conclusion: Several variants are utilized to present a more rigorous and fair evaluation of our work, especially the importance of extracting clinically relevant features. Moreover, a GCN model using graph data can be a promising solution for an automated feature-based breast image classification system.
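The graph-construction step described above can be sketched directly: clinical features become nodes, edges come from thresholded Pearson correlations, and a standard GCN propagation rule is then applied. This NumPy sketch is illustrative only; the 0.5 threshold, single layer, and function names are assumptions, not the paper's code.

```python
import numpy as np

def correlation_graph(X, threshold=0.5):
    """Binary feature-feature adjacency: edge where |Pearson r| > threshold.

    X has shape (n_samples, n_features); the result is symmetric with no
    self-loops, matching the usual GCN input convention.
    """
    r = np.corrcoef(X, rowvar=False)           # feature-feature correlations
    A = (np.abs(r) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)
```

A real model would stack such layers (with learned W) and pool node embeddings into a benign/malignant prediction.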
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, Canada
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
3
Kiani Shahvandi M, Souri M, Tavasoli S, Moradi Kashkooli F, Kar S, Soltani M. A comparative study between conventional chemotherapy and photothermal activated nano-sized targeted drug delivery to solid tumor. Comput Biol Med 2023; 166:107574. [PMID: 37839220; DOI: 10.1016/j.compbiomed.2023.107574]
Abstract
Delivery of chemotherapeutic medicines to solid tumors is critical for optimal therapeutic success and minimal adverse effects. We mathematically developed a delivery method using thermosensitive nanocarriers activated by light irradiation. To assess its efficacy and identify critical events and parameters affecting therapeutic response, we compared this method to bolus and continuous infusions of doxorubicin for both single and multiple administrations. A hybrid sprouting angiogenesis approach generates a semi-realistic microvascular network to evaluate therapeutic drug distribution and microvascular heterogeneity, and a pharmacodynamics model evaluates treatment success based on the percentage of surviving tumor cells. The study found that whereas bolus injection boosted extracellular drug concentration levels by 90%, continuous infusion improved therapeutic response due to improved bioavailability. Cancer cell death increases by 6% with multiple injections compared to a single injection, owing to prolonged exposure to the chemotherapeutic agent. Moreover, responsive nanocarriers deliver more than 2.1 times as much drug to the extracellular space as traditional chemotherapy, suppressing tumor development for longer. Controlled drug release also substantially decreases systemic side effects by diminishing the concentration of free drug in the circulation. The primary finding of this work highlights the significance of high bioavailability in treatment response: responsive nanocarriers contribute to increased bioavailability, leading to improved therapeutic benefits. By including drug delivery features in a semi-realistic model, this numerical study sought to improve comprehension of drug-bio interactions. The model provides a good framework for understanding preclinical and clinical targeted oncology study outcomes.
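The bolus-versus-infusion contrast the authors quantify can be illustrated with a toy one-compartment pharmacokinetic model: the same total dose given either instantaneously or at a constant rate, with first-order elimination. All parameter values below are arbitrary stand-ins, and this deliberately ignores the paper's angiogenesis, nanocarrier, and pharmacodynamics components.

```python
import numpy as np

def plasma_concentration(dose, k_el, t_end=24.0, dt=0.01, infusion_time=None):
    """Euler simulation of dC/dt = input(t) - k_el * C.

    infusion_time=None models an instantaneous bolus; otherwise the same
    dose is infused at a constant rate over `infusion_time` hours.
    """
    n = int(t_end / dt)
    C = np.zeros(n)
    C[0] = dose if infusion_time is None else 0.0
    rate = 0.0 if infusion_time is None else dose / infusion_time
    for i in range(1, n):
        t = i * dt
        inflow = rate if (infusion_time is not None and t <= infusion_time) else 0.0
        C[i] = C[i - 1] + dt * (inflow - k_el * C[i - 1])
    return C

bolus = plasma_concentration(dose=100.0, k_el=0.5)
infused = plasma_concentration(dose=100.0, k_el=0.5, infusion_time=8.0)
```

The total exposure (area under the curve) is nearly identical for both schedules, but the infusion holds concentration in an effective range far longer, which is the bioavailability argument made above.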
Affiliation(s)
- Mohammad Souri
- Department of NanoBiotechnology, Pasteur Institute of Iran, Tehran, Iran
- Shaghayegh Tavasoli
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Saptarshi Kar
- College of Engineering and Technology, American University of the Middle East, Kuwait
- M Soltani
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran; Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada; Centre for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Centre for Sustainable Business, International Business University, Toronto, Canada
4
Khalid A, Mehmood A, Alabrah A, Alkhamees BF, Amin F, AlSalman H, Choi GS. Breast Cancer Detection and Prevention Using Machine Learning. Diagnostics (Basel) 2023; 13:3113. [PMID: 37835856; PMCID: PMC10572157; DOI: 10.3390/diagnostics13193113]
Abstract
Breast cancer is a common cause of female mortality in developing countries; early detection and treatment are crucial for successful outcomes. The disease develops from breast cells and is classified into two subtypes: invasive ductal carcinoma (IDC) and ductal carcinoma in situ (DCIS). Advances in artificial intelligence (AI) and machine learning (ML) techniques have made it possible to develop more accurate and reliable models for diagnosing and treating this disease. From the literature, it is evident that the incorporation of MRI and convolutional neural networks (CNNs) is helpful in breast cancer detection and prevention, and such detection strategies have shown promise in identifying cancerous cells. The CNN Improvements for Breast Cancer Classification (CNNI-BCC) model helps doctors spot breast cancer using a trained deep learning neural network to categorize breast cancer subtypes, but such imaging methods and their preprocessing require significant computing power. Therefore, in this research, we propose an efficient deep learning model capable of recognizing breast cancer in computerized mammograms of varying densities. Our approach relies on three distinct modules for feature selection: the removal of low-variance features, univariate feature selection, and recursive feature elimination. Both craniocaudal and mediolateral views of the mammograms are incorporated. We tested the model with a large dataset of 3002 merged images gathered from 1501 individuals who underwent digital mammography between February 2007 and May 2015. We applied six different classification models for the diagnosis of breast cancer: random forest (RF), decision tree (DT), k-nearest neighbors (KNN), logistic regression (LR), support vector classifier (SVC), and linear support vector classifier (linear SVC). The simulation results show that our proposed model is highly efficient, requiring less computational power while remaining highly accurate.
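The three feature-selection modules named in the abstract can be sketched as a single NumPy pipeline: a variance filter, a univariate correlation filter, and recursive elimination driven by least-squares coefficients (a simple stand-in, since the abstract does not specify the estimator used for elimination). All thresholds and counts are illustrative.

```python
import numpy as np

def select_features(X, y, var_min=1e-3, k_univariate=10, k_final=5):
    """Three-stage selection: (1) drop low-variance features, (2) keep the
    k features most correlated with the label, (3) recursively eliminate
    the feature with the weakest least-squares coefficient."""
    idx = np.arange(X.shape[1])

    # 1) variance filter
    keep = X.var(axis=0) > var_min
    X, idx = X[:, keep], idx[keep]

    # 2) univariate filter: absolute Pearson correlation with y
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1][:k_univariate]
    X, idx = X[:, order], idx[order]

    # 3) recursive elimination with a linear least-squares model
    while X.shape[1] > k_final:
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        drop = np.argmin(np.abs(w))
        X = np.delete(X, drop, axis=1)
        idx = np.delete(idx, drop)
    return idx
```

On a toy matrix with one informative feature and one constant feature, the informative one survives all three stages and the constant one is removed at stage 1.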
Affiliation(s)
- Arslan Khalid
- Faculty of Computing, Islamia University of Bahawalpur, Bahawalpur 63100, Punjab, Pakistan
- Arif Mehmood
- Faculty of Computing, Islamia University of Bahawalpur, Bahawalpur 63100, Punjab, Pakistan
- Amerah Alabrah
- Department of Information Systems, College of Computer and Information Science, King Saud University, Riyadh 11543, Saudi Arabia
- Bader Fahad Alkhamees
- Department of Information Systems, College of Computer and Information Science, King Saud University, Riyadh 11543, Saudi Arabia
- Farhan Amin
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Hussain AlSalman
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Gyu Sang Choi
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
5
Mohammadi A, Torres-Cuenca T, Mirza-Aghazadeh-Attari M, Faeghi F, Acharya UR, Abbasian Ardakani A. Deep Radiomics Features of Median Nerves for Automated Diagnosis of Carpal Tunnel Syndrome With Ultrasound Images: A Multi-Center Study. J Ultrasound Med 2023; 42:2257-2268. [PMID: 37159483; DOI: 10.1002/jum.16244]
Abstract
OBJECTIVES: Ultrasound is widely used in diagnosing carpal tunnel syndrome (CTS). However, its limitations in CTS detection are the lack of objective measures of nerve abnormality and the operator-dependent nature of ultrasound imaging. Therefore, in this study, we developed and externally validated artificial intelligence (AI) models based on deep-radiomics features. METHODS: We used 416 median nerves from 2 countries (Iran and Colombia) for the development (112 entrapped and 112 normal nerves from Iran) and validation (26 entrapped and 26 normal nerves from Iran, and 70 entrapped and 70 normal nerves from Colombia) of our models. Ultrasound images were fed to the SqueezeNet architecture to extract deep-radiomics features, and a ReliefF method was used to select the clinically significant ones. The selected deep-radiomics features were fed to 9 common machine-learning algorithms to choose the best-performing classifier, and the 2 best-performing AI models were then externally validated. RESULTS: On the internal validation dataset, our models achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.910 (88.46% sensitivity, 88.46% specificity) with a support vector machine (SVM) and 0.908 (84.62% sensitivity, 88.46% specificity) with stochastic gradient descent (SGD). Both models also performed consistently on the external validation dataset, achieving AUCs of 0.890 (85.71% sensitivity, 82.86% specificity) for the SVM model and 0.890 (84.29% sensitivity, 82.86% specificity) for the SGD model. CONCLUSIONS: Our proposed AI models, fed with deep-radiomics features, performed consistently on internal and external datasets, supporting their employment for clinical use in hospitals and polyclinics.
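The ReliefF selection step can be illustrated with the simpler binary Relief it generalizes: each feature is rewarded when it differs from a sample's nearest other-class neighbour (nearest miss) and penalized when it differs from the nearest same-class neighbour (nearest hit). This is a hypothetical NumPy sketch of the core idea, not the authors' implementation, which uses the fuller ReliefF variant.

```python
import numpy as np

def relief_weights(X, y):
    """Simplified Relief for binary labels.

    Features that separate the classes accumulate positive weight;
    features that vary within a class accumulate negative weight.
    """
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                       # exclude the sample itself
        same, diff = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(diff, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n
```

A discriminative feature (separating the two classes) ends up with a clearly positive weight, while a noisy one does not.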
Affiliation(s)
- Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
- Thomas Torres-Cuenca
- Department of Physical Medicine and Rehabilitation, National University of Colombia, Bogotá, Colombia
- Mohammad Mirza-Aghazadeh-Attari
- Russell H. Morgan Department of Radiology and Radiological Sciences, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Fariborz Faeghi
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Queensland, Australia
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
- Ali Abbasian Ardakani
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
6
Liu Z, Lv Q, Lee CH, Shen L. GSDA: Generative adversarial network-based semi-supervised data augmentation for ultrasound image classification. Heliyon 2023; 9:e19585. [PMID: 37809802; PMCID: PMC10558834; DOI: 10.1016/j.heliyon.2023.e19585]
Abstract
Medical Ultrasound (US) is one of the most widely used imaging modalities in clinical practice, but its usage presents unique challenges such as variable imaging quality. Deep Learning (DL) models can serve as advanced medical US image analysis tools, but their performance is greatly limited by the scarcity of large datasets. To solve the common data shortage, we develop GSDA, a Generative Adversarial Network (GAN)-based semi-supervised data augmentation method. GSDA consists of the GAN and Convolutional Neural Network (CNN). The GAN synthesizes and pseudo-labels high-resolution, high-quality US images, and both real and synthesized images are then leveraged to train the CNN. To address the training challenges of both GAN and CNN with limited data, we employ transfer learning techniques during their training. We also introduce a novel evaluation standard that balances classification accuracy with computational time. We evaluate our method on the BUSI dataset and GSDA outperforms existing state-of-the-art methods. With the high-resolution and high-quality images synthesized, GSDA achieves a 97.9% accuracy using merely 780 images. Given these promising results, we believe that GSDA holds potential as an auxiliary tool for medical US analysis.
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
7
Zafar A, Tanveer J, Ali MU, Lee SW. BU-DLNet: Breast Ultrasonography-Based Cancer Detection Using Deep-Learning Network Selection and Feature Optimization. Bioengineering (Basel) 2023; 10:825. [PMID: 37508852; PMCID: PMC10376009; DOI: 10.3390/bioengineering10070825]
Abstract
Early detection of breast lesions and distinguishing between malignant and benign lesions are critical for breast cancer (BC) prognosis. Breast ultrasonography (BU) is an important radiological imaging modality for the diagnosis of BC. This study proposes a BU image-based framework for the diagnosis of BC in women. Various pre-trained networks are used to extract deep features from the BU images. Ten wrapper-based optimization algorithms, including the marine predator algorithm, generalized normal distribution optimization, slime mold algorithm, equilibrium optimizer (EO), manta-ray foraging optimization, atom search optimization, Harris hawks optimization, Henry gas solubility optimization, pathfinder algorithm, and poor and rich optimization, were employed to compute the optimal subset of deep features using a support vector machine classifier. Furthermore, a network selection algorithm was employed to determine the best pre-trained network. An online BU dataset was used to test the proposed framework. After comprehensive testing and analysis, the EO algorithm produced the highest classification rate for each pre-trained model, reaching the highest accuracy of 96.79% with the ResNet-50 model while using a deep feature vector of only 562 elements. Inception-ResNet-v2 had the second-highest classification accuracy, 96.15%, also with the EO algorithm. Moreover, the results of the proposed framework are compared with those in the literature.
Affiliation(s)
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Jawad Tanveer
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
8
Mustafa E, Jadoon EK, Khaliq-Uz-Zaman S, Humayun MA, Maray M. An Ensembled Framework for Human Breast Cancer Survivability Prediction Using Deep Learning. Diagnostics (Basel) 2023; 13:1688. [PMID: 37238173; DOI: 10.3390/diagnostics13101688]
Abstract
Breast cancer is categorized as an aggressive disease, and it is one of the leading causes of death. Accurate survival predictions for both long-term and short-term survivors, when delivered on time, can help physicians make effective treatment decisions for their patients. Therefore, there is a dire need to design an efficient and rapid computational model for breast cancer prognosis. In this study, we propose an ensemble model for breast cancer survivability prediction (EBCSP) that utilizes multi-modal data and stacks the output of multiple neural networks. Specifically, we design a convolutional neural network (CNN) for clinical modalities, a deep neural network (DNN) for copy number variations (CNV), and a long short-term memory (LSTM) architecture for gene expression modalities to effectively handle multi-dimensional data. The independent models' results are then used for binary classification of survivability (long-term: > 5 years; short-term: < 5 years) using the random forest method. The EBCSP model outperforms both models that utilize a single data modality for prediction and existing benchmarks.
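The stacking idea, base models producing probabilities that a meta-learner combines, can be sketched generically. Here a logistic-regression meta-learner trained by gradient descent stands in for the paper's random forest, and the base-model outputs are simulated rather than coming from the CNN/DNN/LSTM; all names and numbers are illustrative.

```python
import numpy as np

def train_meta(base_probs, y, lr=0.5, steps=2000):
    """Logistic-regression meta-learner over stacked base-model outputs.

    base_probs: (n_samples, n_base_models) class-1 probabilities.
    """
    Z = np.hstack([base_probs, np.ones((len(y), 1))])   # add bias column
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w -= lr * Z.T @ (p - y) / len(y)                # gradient step
    return w

def predict_meta(base_probs, w):
    Z = np.hstack([base_probs, np.ones((len(base_probs), 1))])
    return (1.0 / (1.0 + np.exp(-Z @ w)) > 0.5).astype(int)
```

With one accurate simulated base model and one uninformative one, the meta-learner learns to rely on the accurate model's outputs.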
Affiliation(s)
- Ehzaz Mustafa
- Department of Computer Science, Comsats University Islamabad, Abbottabad Campus, Islamabad 22060, Pakistan
- Ehtisham Khan Jadoon
- Department of Computer Science, Comsats University Islamabad, Abbottabad Campus, Islamabad 22060, Pakistan
- Sardar Khaliq-Uz-Zaman
- Department of Computer Science, Comsats University Islamabad, Abbottabad Campus, Islamabad 22060, Pakistan
- Mohammad Ali Humayun
- Department of Computer Science, Information Technology University of the Punjab, Lahore 54590, Pakistan
- Mohammed Maray
- Department of Information Systems, King Khalid University, Abha 62529, Saudi Arabia
9
Zafar A, Dad Kallu K, Atif Yaqub M, Ali MU, Hyuk Byun J, Yoon M, Su Kim K. A Hybrid GCN and Filter-Based Framework for Channel and Feature Selection: An fNIRS-BCI Study. Int J Intell Syst 2023. [DOI: 10.1155/2023/8812844]
Abstract
In this study, a channel and feature selection methodology is devised for brain-computer interface (BCI) applications using functional near-infrared spectroscopy (fNIRS). A graph convolutional network (GCN) is employed to select the appropriate and correlated fNIRS channels. Furthermore, in the feature extraction phase, the performance of two filter-based feature selection algorithms, (i) minimum redundancy maximum relevance (mRMR) and (ii) ReliefF, is investigated. The five most commonly used temporal statistical features (mean, slope, maximum, skewness, and kurtosis) are used, and a conventional support vector machine (SVM) is utilized as the classifier for training and testing. The proposed methodology is validated using an available online dataset of motor imagery (left- and right-hand), mental arithmetic, and baseline tasks. First, its efficacy is shown for two-class BCI applications (left- vs. right-hand motor imagery and mental arithmetic vs. baseline); second, it is applied to four-class BCI applications (left- vs. right-hand motor imagery vs. mental arithmetic vs. baseline). The results show that the number of channels and features was significantly reduced while classification accuracy significantly increased for both two-class and four-class BCI applications. Furthermore, both mRMR (87.8% for motor imagery, 87.1% for mental arithmetic, and 78.7% for four-class) and ReliefF (90.7% for motor imagery, 93.7% for mental arithmetic, and 81.6% for four-class) yielded high average classification accuracy. However, the results of the ReliefF algorithm are more stable and significant.
10
Chen Y, Zhang X, Li D, Park H, Li X, Liu P, Jin J, Shen Y. Automatic segmentation of thyroid with the assistance of the devised boundary improvement based on multicomponent small dataset. Appl Intell 2023; 53:1-16. [PMID: 37363389; PMCID: PMC10015528; DOI: 10.1007/s10489-023-04540-5]
Abstract
Deep learning has been widely considered in medical image segmentation. However, the difficulty of acquiring medical images and labels can affect the accuracy of segmentation results for deep learning methods. In this paper, an automatic segmentation method is proposed by devising a multicomponent neighborhood extreme learning machine to improve the boundary attention region of the preliminary segmentation results. The neighborhood features are acquired by training U-Nets with a multicomponent small dataset consisting of original thyroid ultrasound images, Sobel edge images, and superpixel images. Afterward, the neighborhood features are selected by a min-redundancy and max-relevance filter in the designed extreme learning machine, and the selected features are used to train the extreme learning machine to obtain supplementary segmentation results. Finally, the accuracy of the segmentation results is improved by adjusting the boundary attention region of the preliminary segmentation results with the supplementary segmentation results. This method combines the advantages of deep learning and traditional machine learning, boosting thyroid segmentation accuracy with a small dataset in a multigroup test.
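The extreme learning machine at the core of this method is compact enough to sketch: a fixed random hidden layer followed by output weights solved in closed form by least squares, which is what makes ELMs fast to train on small datasets. The XOR toy problem and all sizes below are illustrative; the paper's multicomponent neighborhood features are not reproduced.

```python
import numpy as np

def train_elm(X, y, n_hidden=20, seed=0):
    """Extreme learning machine: random fixed hidden layer, output weights
    solved in one shot by least squares (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# XOR: not linearly separable, but solvable through the random hidden layer
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
W, b, beta = train_elm(X, y)
pred = (predict_elm(X, W, b, beta) > 0.5).astype(int)
```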
Affiliation(s)
- Yifei Chen
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Xin Zhang
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Dandan Li
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- HyunWook Park
- Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Xinran Li
- Mathematics, Harbin Institute of Technology, Harbin, 150001, China
- Peng Liu
- Heilongjiang Provincial Key Laboratory of Trace Elements and Human Health, Harbin Medical University, Harbin, 150081, China
- Jing Jin
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yi Shen
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
11
Akyol S, Yildirim M, Alatas B. Multi-feature fusion and improved BO and IGWO metaheuristics based models for automatically diagnosing the sleep disorders from sleep sounds. Comput Biol Med 2023; 157:106768. [PMID: 36907034; DOI: 10.1016/j.compbiomed.2023.106768]
Abstract
A night of regular and quality sleep is vital in human life. Sleep quality has a great impact on the daily life of people and those around them. Sounds such as snoring reduce not only the sleep quality of the person but also reduce the sleep quality of the partner. Sleep disorders can be eliminated by examining the sounds that people make at night. It is a very difficult process to follow and treat this process by experts. Therefore, this study, it is aimed to diagnose sleep disorders using computer-aided systems. In the study, the used data set contains seven hundred sound data which has seven different sound class such as cough, farting, laugh, scream, sneeze, sniffle, and snore. In the model proposed in the study, firstly, the feature maps of the sound signals in the data set were extracted. Three different methods were used in the feature extraction process. These methods are MFCC, Mel-spectrogram, and Chroma. The features extracted in these three methods are combined. Thanks to this method, the features of the same sound signal extracted in three different methods are used. This increases the performance of the proposed model. Later, the combined feature maps were analyzed using the proposed New Improved Gray Wolf Optimization (NI-GWO), which is the improved version of the Improved Gray Wolf Optimization (I-GWO) algorithm, and the proposed Improved Bonobo Optimizer (IBO) algorithm, which is the improved version of the Bonobo Optimizer (BO). In this way, it is aimed to run the models faster, reduce the number of features, and obtain the most optimum result. Finally, Support Vector Machine (SVM) and k-nearest neighbors (KNN) supervised shallow machine learning methods were used to calculate the metaheuristic algorithms' fitness values. Different types of metrics such as accuracy, sensitivity, F1 etc., were used for the performance comparison. 
Using the feature maps optimized by the proposed NI-GWO and IBO algorithms, the highest accuracy value was obtained from the SVM classifier with 99.28% for both metaheuristic algorithms.
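The three-way feature fusion described in this abstract (extracting MFCC, Mel-spectrogram, and Chroma maps from the same signal and concatenating them) can be sketched in miniature. The extractor functions below are hypothetical placeholders, not the authors' implementation; real MFCC, Mel-spectrogram, or Chroma extraction would use an audio library:

```python
def mfcc_features(signal):
    # Hypothetical stand-in for MFCC extraction: simple summary statistics.
    return [sum(signal) / len(signal), max(signal), min(signal)]

def mel_spectrogram_features(signal):
    # Hypothetical stand-in for Mel-spectrogram features: energy per half.
    half = len(signal) // 2
    return [sum(x * x for x in signal[:half]), sum(x * x for x in signal[half:])]

def chroma_features(signal):
    # Hypothetical stand-in for Chroma features: total absolute amplitude.
    return [sum(abs(x) for x in signal)]

def fused_features(signal):
    # Fusion step: concatenate the three feature maps of the same signal
    # into one combined feature vector, as in the cited study.
    return (mfcc_features(signal)
            + mel_spectrogram_features(signal)
            + chroma_features(signal))

vec = fused_features([0.1, -0.2, 0.4, 0.3])
print(len(vec))  # 3 + 2 + 1 = 6 combined features
```

The point of the fusion is only the concatenation: each extractor sees the same signal, and the classifier receives the joined vector.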
Affiliation(s)
- Sinem Akyol
- Department of Software Engineering, Firat University, 23100, Elazig, Turkey
- Muhammed Yildirim
- Department of Computer Engineering, Malatya Turgut Ozal University, 44200, Malatya, Turkey
- Bilal Alatas
- Department of Software Engineering, Firat University, 23100, Elazig, Turkey
|
12
|
Chaudhury S, Sau K. A BERT encoding with Recurrent Neural Network and Long-Short Term Memory for breast cancer image classification. Decision Analytics Journal 2023; 6:100177. [DOI: 10.1016/j.dajour.2023.100177]
|
13
|
A novel deep learning model for breast lesion classification using ultrasound Images: A multicenter data evaluation. Phys Med 2023; 107:102560. [PMID: 36878133] [DOI: 10.1016/j.ejmp.2023.102560]
Abstract
PURPOSE Breast cancer is one of the major causes of cancer death in women. Early diagnosis is critical for disease screening, control, and reducing mortality, and a robust diagnosis relies on the correct classification of breast lesions. While breast biopsy is regarded as the "gold standard" for assessing both the activity and degree of breast cancer, it is an invasive and time-consuming procedure. METHOD The primary objective of the current study was to develop a novel deep learning architecture based on the InceptionV3 network to classify ultrasound breast lesions. The main contributions of the proposed architecture were converting the InceptionV3 modules to residual inception modules, increasing their number, and altering the hyperparameters. In addition, we used a combination of five datasets (three public datasets and two prepared from different imaging centers) for training and evaluating the model. RESULTS The dataset was split into training (80%) and test (20%) groups. The model achieved 0.83, 0.77, 0.8, 0.81, 0.81, 0.18, and 0.77 for precision, recall, F1 score, accuracy, AUC, root mean squared error, and Cronbach's α in the test group, respectively. CONCLUSIONS This study illustrates that the improved InceptionV3 can robustly classify breast tumors, potentially reducing the need for biopsy in many cases.
|
14
|
High accuracy hybrid CNN classifiers for breast cancer detection using mammogram and ultrasound datasets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104292]
|
15
|
Srikantamurthy MM, Rallabandi VPS, Dudekula DB, Natarajan S, Park J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med Imaging 2023; 23:19. [PMID: 36717788] [PMCID: PMC9885590] [DOI: 10.1186/s12880-023-00964-0]
Abstract
BACKGROUND Grading cancer histopathology slides requires expert pathologists and clinicians, and examining whole-slide images manually is time consuming. An automated classification of histopathological breast cancer subtypes is therefore useful for clinical diagnosis and therapeutic responses. Recent deep learning methods for medical image analysis suggest the utility of automated radiologic imaging classification for relating disease characteristics, diagnosis, and patient stratification. METHODS We developed a hybrid model using a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM RNN) to classify four benign and four malignant breast cancer subtypes. The proposed CNN-LSTM, leveraging ImageNet, uses a transfer learning approach to classify and predict the four subtypes of each class. The proposed model was evaluated on the BreakHis dataset, which comprises 2480 benign and 5429 malignant cancer images acquired at magnifications of 40×, 100×, 200×, and 400×. RESULTS The proposed hybrid CNN-LSTM model was compared with existing CNN models used for breast histopathological image classification, such as VGG-16, ResNet50, and Inception. All models were built with three different optimizers, adaptive moment estimation (Adam), root mean square propagation (RMSProp), and stochastic gradient descent (SGD), over varying numbers of epochs. From the results, we observed that Adam was the best optimizer, with maximum accuracy and minimum model loss for both the training and validation sets. The proposed hybrid CNN-LSTM model showed the highest overall accuracy: 99% for binary classification of benign versus malignant cancer and 92.5% for multi-class classification of benign and malignant cancer subtypes.
CONCLUSION The proposed transfer learning approach outperformed state-of-the-art machine and deep learning models in classifying benign and malignant cancer subtypes. The method is also feasible for the classification of other cancers and diseases.
Affiliation(s)
- Dawood Babu Dudekula
- 3BIGS Omicscore Pvt. Ltd., 909 Lavelle Building, Richmond Circle, Bangalore, 560025 India
- Sathishkumar Natarajan
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
- Junhyung Park
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
|
16
|
Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network. Comput Intell Neurosci 2023; 2023:7717712. [PMID: 36909966] [PMCID: PMC9998154] [DOI: 10.1155/2023/7717712]
Abstract
Medical image analysis places a significant focus on breast cancer, which poses a major threat to women's health and contributes to many fatalities. Early and precise diagnosis of breast cancer through digital mammograms can substantially improve disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification to assist radiologists in accurately detecting breast lesions. However, early-stage mammographic cancer detection is difficult. The deep convolutional neural network has demonstrated exceptional results and is considered a highly effective tool in the field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To train on and classify the INbreast dataset into benign or malignant categories, the framework utilizes transfer learning from a ResNet-50 CNN pretrained on ImageNet. The results revealed that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This approach facilitates early diagnosis and classification of malignant and benign breast cancer, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results on various mammograms, along with the capacity to enhance medical tools by reducing the error rate in screening mammograms.
|
17
|
Flower XL, Poonguzhali S. Performance improvement and complexity reduction in the classification of EMG signals with mRMR-based CNN-KNN combined model. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-220811]
Abstract
For real-time applications, the performance in classifying movement should be as high as possible and the computational complexity low. This paper focuses on the classification of five upper arm movements that can serve as controls for human-machine interface (HMI) based applications. Conventional machine learning algorithms are used for classification with both time and frequency domain features, and k-nearest neighbor (KNN) outperforms the others. To further improve classification accuracy, pretrained CNN architectures are employed, which increases computational complexity and memory requirements. To overcome this, a deep convolutional neural network (CNN) model with three convolutional layers is introduced. To further improve performance, which is the key requirement of real-time applications, a hybrid CNN-KNN model is proposed. Even though its performance is high, the computational costs of the hybrid method are greater. Minimum redundancy maximum relevance (mRMR), a feature selection method, is therefore applied to reduce the feature dimensions. As a result, the proposed CNN-KNN with mRMR achieves better performance while reducing computational complexity and memory requirements, with a mean prediction accuracy of about 99.05±0.25% using 100 features.
Affiliation(s)
- X. Little Flower
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India
- S. Poonguzhali
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India
|
18
|
Performance analysis of seven Convolutional Neural Networks (CNNs) with transfer learning for Invasive Ductal Carcinoma (IDC) grading in breast histopathological images. Sci Rep 2022; 12:19200. [PMID: 36357456] [PMCID: PMC9649772] [DOI: 10.1038/s41598-022-21848-3]
Abstract
Computer-aided Invasive Ductal Carcinoma (IDC) grading classification systems based on deep learning have shown that deep learning may achieve reliable accuracy in IDC grade classification using histopathology images. However, there is a dearth of comprehensive performance comparisons of Convolutional Neural Network (CNN) designs on IDC in the literature. We therefore conducted a comparative analysis of the performance of seven selected CNN models with transfer learning: EfficientNetB0, EfficientNetV2B0, EfficientNetV2B0-21k, ResNetV1-50, ResNetV2-50, MobileNetV1, and MobileNetV2. To implement each pre-trained CNN architecture, we deployed the corresponding feature vector available from TensorFlow Hub, integrating it with dropout and dense layers to form a complete CNN model. Our findings indicated that EfficientNetV2B0-21k (0.72B floating-point operations and 7.1 M parameters) outperformed the other CNN models in the IDC grading task. Nevertheless, we discovered that practically all selected CNN models perform well on the IDC grading task, with an average balanced accuracy of 0.936 ± 0.0189 on the cross-validation set and 0.9308 ± 0.0211 on the test set.
|
19
|
Sun H, He Q, Qi S, Yao Y, Teng Y. Improving the level of autism discrimination with augmented data by GraphRNN. Comput Biol Med 2022; 150:106141. [PMID: 36191394] [DOI: 10.1016/j.compbiomed.2022.106141]
Abstract
Datasets are key to deep learning in autism research. However, due to the small number and heterogeneity of samples in current public datasets, such as the Autism Brain Imaging Data Exchange (ABIDE), recognition research has not been sufficiently effective. Previous studies primarily focused on optimizing feature selection methods and on data augmentation to improve recognition accuracy. This research follows the latter approach: it learns the edge distribution of a real brain network through the graph recurrent neural network (GraphRNN) and generates synthetic data that have an incentive effect on the discriminant model. Experimental results show that the synthetic data greatly improve the classification ability of subsequent classifiers; for example, they can improve the classification accuracy of a 50-layer ResNet by up to 30% compared with the case without synthetic data.
Affiliation(s)
- Haonan Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Qiang He
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07102, USA
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
|
20
|
Xie S, Zhang Y, Lv D, Chen X, Lu J, Liu J. A new improved maximal relevance and minimal redundancy method based on feature subset. J Supercomput 2022; 79:3157-3180. [PMID: 36060093] [PMCID: PMC9424812] [DOI: 10.1007/s11227-022-04763-2]
Abstract
Feature selection plays a significant role in the success of pattern recognition and data mining. Building on the maximal relevance and minimal redundancy (mRMR) method, this paper proposes an improved maximal relevance and minimal redundancy (ImRMR) feature selection method based on feature subsets. In ImRMR, the Pearson correlation coefficient and mutual information are first used to measure the relevance of a single feature to the sample category, and a factor is introduced to adjust the weights of the two measurement criteria. An equal grouping method is then exploited to generate candidate feature subsets according to the ranked features. Next, the relevance and redundancy of the candidate feature subsets are calculated, and an ordered sequence of these subsets is obtained by an incremental search method. Finally, the optimal feature subset is selected from this sequence by combining the sequential forward search method with a classification learning algorithm. Experiments are conducted on seven datasets. The results show that ImRMR can effectively remove irrelevant and redundant features, which not only reduces the dimension of the sample features and the time for model training and prediction, but also improves classification performance.
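The relevance-minus-redundancy selection at the heart of mRMR-style methods such as ImRMR can be sketched with a plain greedy loop. This illustration uses only the Pearson correlation term of the paper's criterion (the mutual-information term, weighting factor, and subset grouping are omitted), and the toy data and function names are assumptions for demonstration, not the authors' code:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def mrmr_select(columns, labels, k):
    # Greedily pick k feature columns maximizing |corr(feature, labels)|
    # minus the mean |corr(feature, already-selected features)|.
    selected, remaining = [], list(range(len(columns)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = abs(pearson(columns[i], labels))
            redundancy = (sum(abs(pearson(columns[i], columns[j])) for j in selected)
                          / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: column 0 tracks the labels closely, the others less so.
columns = [[1, 2, 3, 4], [1, 1, 2, 2], [2, 1, 4, 3]]
labels = [1.0, 2.0, 3.0, 4.2]
selected = mrmr_select(columns, labels, 2)
print(selected)  # the first pick is the most label-correlated column
```

The first pick is driven purely by relevance; later picks are penalized for correlating with what is already chosen, which is the "minimal redundancy" half of the criterion.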
Affiliation(s)
- Shanshan Xie
- College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming, 650224 China
- Yan Zhang
- College of Mathematics and Physics, Southwest Forestry University, Kunming, 650224 China
- Danjv Lv
- College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming, 650224 China
- Xu Chen
- College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming, 650224 China
- Jing Lu
- College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming, 650224 China
- Jiang Liu
- Research Institute of Forestry Policy and Information, Chinese Academy of Forestry, Beijing, 100091, China
|
21
|
Wang W, Jiang R, Cui N, Li Q, Yuan F, Xiao Z. Semi-supervised vision transformer with adaptive token sampling for breast cancer classification. Front Pharmacol 2022; 13:929755. [PMID: 35935827] [PMCID: PMC9353650] [DOI: 10.3389/fphar.2022.929755]
Abstract
Various imaging techniques combined with machine learning (ML) models have been used to build computer-aided diagnosis (CAD) systems for breast cancer (BC) detection and classification. The rise of deep learning models in recent years, represented by convolutional neural network (CNN) models, has pushed the accuracy of ML-based CAD systems to a new level that is comparable to human experts. Existing studies have explored the usage of a wide spectrum of CNN models for BC detection, and supervised learning has been the mainstream. In this study, we propose a semi-supervised learning framework based on the Vision Transformer (ViT). The ViT is a model that has been validated to outperform CNN models on numerous classification benchmarks but its application in BC detection has been rare. The proposed method offers a custom semi-supervised learning procedure that unifies both supervised and consistency training to enhance the robustness of the model. In addition, the method uses an adaptive token sampling technique that can strategically sample the most significant tokens from the input image, leading to an effective performance gain. We validate our method on two datasets with ultrasound and histopathology images. Results demonstrate that our method can consistently outperform the CNN baselines for both learning tasks. The code repository of the project is available at https://github.com/FeiYee/Breast-area-TWO.
Affiliation(s)
- Wei Wang
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Ran Jiang
- Department of Thyroid and Breast Surgery, Maternal and Child Health Hospital of Hubei Province, Wuhan, Hubei, China
- Ning Cui
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qian Li
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Feng Yuan
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Zhifeng Xiao
- School of Engineering, Penn State Erie, The Behrend College, Erie, PA, United States
- *Correspondence: Feng Yuan; Zhifeng Xiao
|
22
|
Ahmad S, Ullah T, Ahmad I, AL-Sharabi A, Ullah K, Khan RA, Rasheed S, Ullah I, Uddin MN, Ali MS. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. Comput Intell Neurosci 2022; 2022:8141530. [PMID: 35785076] [PMCID: PMC9249449] [DOI: 10.1155/2022/8141530]
Abstract
Cancer is a heterogeneous disease with various subtypes that abruptly destroys the body's normal cells. As a result, it is essential to detect and prognose distinct types of cancer, since early-stage treatment may help cancer survivors; patients must also be divided into high- and low-risk groups. Efficient cancer detection is frequently a time-consuming and exhausting task with a high possibility of pathologist error. Previous studies employed data mining and machine learning (ML) techniques to identify cancer, but these strategies rely on handcrafted feature extraction techniques that can result in incorrect classification. In contrast, deep learning (DL) is robust in feature extraction and has recently been widely used for classification and detection purposes. This research implemented a novel hybrid AlexNet-gated recurrent unit (AlexNet-GRU) model for lymph node (LN) breast cancer detection and classification, using the well-known Kaggle (PCam) data set to classify LN cancer samples. The study tested and compared three models: a convolutional neural network with GRU (CNN-GRU), a CNN with long short-term memory (CNN-LSTM), and the proposed AlexNet-GRU. The experimental results indicated that the performance metrics of the proposed model (accuracy, precision, sensitivity, and specificity of 99.50%, 98.10%, 98.90%, and 97.50%) can reduce pathologist errors of incorrect classification during the diagnosis process, with significantly better performance than the CNN-GRU and CNN-LSTM models. The proposed model was compared with other recent ML/DL algorithms to analyze its efficiency, revealing that the AlexNet-GRU model is computationally efficient and superior to state-of-the-art methods for LN breast cancer detection and classification.
Affiliation(s)
- Shahab Ahmad
- School of Management Science and Engineering, Chongqing University of Post and Telecommunication, Chongqing 400065, China
- Tahir Ullah
- Department of Electronics and Information Engineering, Xian Jiaotong University, Xian, China
- Ijaz Ahmad
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology, Bannu 28100, Pakistan
- Saim Rasheed
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University Jeddah, Saudi Arabia
- Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus, Nanjing 213022, China
- Md. Nasir Uddin
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
- Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
|
23
|
An optimized deep learning architecture for breast cancer diagnosis based on improved marine predators algorithm. Neural Comput Appl 2022; 34:18015-18033. [PMID: 35698722] [PMCID: PMC9175533] [DOI: 10.1007/s00521-022-07445-5]
Abstract
Breast cancer is the second leading cause of death in women; therefore, effective early detection can reduce its mortality rate. Detecting and classifying breast cancer in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have enhanced tumor detection and classification efficiency in medical imaging compared to traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, along with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the optimization algorithm used, improved with an opposition-based learning strategy to cope with the inherent weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network); this model is hybridized with the IMPA algorithm, resulting in an architecture called IMPA-ResNet50. Our evaluation is performed on two mammographic datasets, the Mammographic Image Analysis Society (MIAS) and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). The obtained results showed that the proposed model outperforms the compared state-of-the-art approaches in classification performance, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset.
To evaluate the performance of the IMPA in finding the optimal values for the hyperparameters of the ResNet50 architecture, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA algorithm. These counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicated that the proposed IMPA-ResNet50 achieved better performance than its counterparts.
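The opposition-based learning strategy used to improve the MPA has a simple core idea: for a candidate solution x in bounds [lb, ub], also evaluate the opposite point lb + ub - x, then keep the fitter candidates. A minimal sketch with a hypothetical fitness function, not the paper's IMPA implementation:

```python
import random

def opposite(solution, lower, upper):
    # Opposition-based learning: the opposite of x in [lb, ub] is lb + ub - x.
    return [lo + hi - x for x, lo, hi in zip(solution, lower, upper)]

def obl_population(pop_size, lower, upper, fitness):
    # Generate a random population, add its opposite population, and keep
    # the pop_size fittest candidates (lower fitness value = better).
    pop = [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
           for _ in range(pop_size)]
    pop += [opposite(p, lower, upper) for p in pop]
    pop.sort(key=fitness)
    return pop[:pop_size]

# Hypothetical objective: squared distance from the point (1, 1).
fitness = lambda p: sum((x - 1.0) ** 2 for x in p)
best = obl_population(5, [0.0, 0.0], [4.0, 4.0], fitness)
print(opposite([1.0, 0.0], [0.0, 0.0], [4.0, 4.0]))  # [3.0, 4.0]
```

Evaluating both a candidate and its opposite doubles the chance that at least one of them lands near the optimum, which is why the strategy is commonly used to strengthen population initialization in metaheuristics.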
|
24
|
Senturk ZK. Layer recurrent neural network-based diagnosis of Parkinson’s disease using voice features. Biomed Eng-Biomed Te 2022; 67:249-266. [DOI: 10.1515/bmt-2022-0022]
Abstract
Parkinson’s disease (PD), a slow-progressing neurological disease, affects a large percentage of the world’s elderly population, and this population is expected to grow over the next decade. Early detection is therefore crucial for taking proper safeguards and ensuring a less arduous treatment procedure. Recent research has begun to focus on the motor system deficits caused by PD. Because most PD patients suffer from voice abnormalities, researchers working on automated diagnostic systems investigate vocal impairments. In this paper, we undertake extensive experiments with features extracted from voice signals and propose a layer Recurrent Neural Network (RNN) based diagnosis for PD. To demonstrate the efficiency of the model, different network models are compared. To the best of our knowledge, several neural network topologies, namely RNN, Cascade Forward Neural Networks (CFNN), and Feed Forward Neural Networks (FFNN), are used and compared for voice-based PD detection for the first time. In addition, the impacts of data normalization and feature selection (FS) are thoroughly examined. The findings reveal that normalization increases classifier performance and that Laplacian-based FS performs best. The proposed RNN model with 300 voice features achieves 99.74% accuracy.
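The normalization step whose benefit this abstract reports can be illustrated with plain min-max scaling of feature columns. This is a generic sketch with made-up example values, not the paper's exact preprocessing:

```python
def min_max_normalize(columns):
    # Scale each feature column to the [0, 1] range so that features with
    # large numeric ranges do not dominate the classifier; constant
    # columns are mapped to 0.0 to avoid division by zero.
    scaled = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = hi - lo
        scaled.append([(x - lo) / span if span else 0.0 for x in col])
    return scaled

# Hypothetical voice features for three recordings: pitch (Hz) and jitter.
features = [[120.0, 180.0, 150.0], [0.01, 0.05, 0.03]]
print(min_max_normalize(features)[0])  # [0.0, 1.0, 0.5]
```

After scaling, both columns live on the same [0, 1] range, which is one common reason normalization improves distance- and gradient-based classifiers.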
Affiliation(s)
- Zehra Karapinar Senturk
- Computer Engineering Department, Faculty of Engineering, Duzce University, 81620, Duzce, Turkey
|
25
|
Boosting chameleon swarm algorithm with consumption AEO operator for global optimization and feature selection. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108743]
|
26
|
Eroğlu O, Eroğlu Y, Yıldırım M, Karlıdag T, Çınar A, Akyiğit A, Kaygusuz İ, Yıldırım H, Keleş E, Yalçın Ş. Is it useful to use computerized tomography image-based artificial intelligence modelling in the differential diagnosis of chronic otitis media with and without cholesteatoma? Am J Otolaryngol 2022; 43:103395. [PMID: 35241288] [DOI: 10.1016/j.amjoto.2022.103395]
Abstract
OBJECTIVE Cholesteatoma is an aggressive form of chronic otitis media (COM); it is therefore important to distinguish between COM with and without cholesteatoma. In this study, the role of artificial intelligence modelling in differentiating COM with and without cholesteatoma on computed tomography images was evaluated. METHODS The files of 200 patients who underwent mastoidectomy and/or tympanoplasty for COM in our clinic between January 2016 and January 2021 were retrospectively reviewed. According to the presence of cholesteatoma, the patients were divided into two groups: chronic otitis with cholesteatoma (n = 100) and chronic otitis without cholesteatoma (n = 100). The control group (n = 100) consisted of patients who had no previous ear disease and no active complaints about the ear. Temporal bone computed tomography (CT) images of all patients were analyzed. The distinction between cholesteatoma and COM was evaluated by using 80% of the obtained CT images for training the artificial intelligence model and the remaining 20% for testing. RESULTS The accuracy rate obtained with the hybrid model used in our study was 95.4%. The proposed model correctly predicted 2952 of 3093 CT images and mispredicted 141. It correctly predicted 936 (93.78%) of 998 images in the COM group with cholesteatoma, 835 (92.77%) of 900 images in the COM group without cholesteatoma, and 1181 (98.82%) of 1195 images in the normal group. CONCLUSION Our study shows that COM with and without cholesteatoma can be differentiated with highly accurate diagnosis rates by artificial intelligence modelling using CT images. With the proposed deep learning model, the highest correct diagnosis rate in the literature was obtained.
According to our results, we believe that using artificial intelligence in practice will allow earlier diagnosis of cholesteatoma, help in selecting the most appropriate treatment approach, and reduce complications.
Collapse
Affiliation(s)
- Orkun Eroğlu
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
- Yeşim Eroğlu
- Fırat University, School of Medicine, Department of Radiology, Elazig, Turkey
- Muhammed Yıldırım
- Malatya Turgut Ozal University, Faculty of Engineering and Natural Sciences, Department of Computer Engineering, Malatya, Turkey
- Turgut Karlıdag
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
- Ahmet Çınar
- Fırat University, School of Engineering, Department of Computer Engineering, Elazig, Turkey
- Abdulvahap Akyiğit
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
- İrfan Kaygusuz
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
- Hanefi Yıldırım
- Fırat University, School of Medicine, Department of Radiology, Elazig, Turkey
- Erol Keleş
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
- Şinasi Yalçın
- Fırat University, School of Medicine, Department of Otorhinolaryngology, Elazig, Turkey
27
Liu H, Cui G, Luo Y, Guo Y, Zhao L, Wang Y, Subasi A, Dogan S, Tuncer T. Artificial Intelligence-Based Breast Cancer Diagnosis Using Ultrasound Images and Grid-Based Deep Feature Generator. Int J Gen Med 2022; 15:2271-2282. [PMID: 35256855 PMCID: PMC8898057 DOI: 10.2147/ijgm.s347491]
Abstract
Purpose Breast cancer is a prominent cancer type with high mortality, and early detection could improve clinical outcomes. Ultrasonography is a digital imaging technique used to differentiate benign and malignant tumors, and several artificial intelligence techniques have been suggested in the literature for breast cancer detection using breast ultrasonography (BUS). Deep learning methods in particular have achieved high classification performance on biomedical images. Patients and Methods This work presents a new deep feature generation technique for breast cancer detection using BUS images. Sixteen widely known pre-trained CNN models are used in this framework as feature generators. In the feature generation phase, the input image is divided into rows and columns, and each deep feature generator (pre-trained model) is applied to each row and column; the method is therefore called a grid-based deep feature generator. The proposed generator calculates the error value of each deep feature generator and selects the best three feature vectors to form the final feature vector. In the feature selection phase, iterative neighborhood component analysis (INCA) selects an optimal subset of 980 features. Finally, these features are classified using a deep neural network (DNN). Results The developed grid-based deep feature generation image classification model reached 97.18% classification accuracy on the ultrasonic images for three classes: malignant, benign, and normal. Conclusion The findings indicate that the proposed grid-based deep feature generator with INCA-based feature selection successfully classifies breast ultrasonic images.
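The grid-based generation step (divide the image into row and column strips, extract features from each, concatenate) can be illustrated with a minimal NumPy sketch. A per-strip intensity histogram stands in for the pre-trained CNN backbones, and the grid size and bin count are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def strip_features(strip, bins=8):
    # Stand-in for a pre-trained CNN backbone: an intensity histogram per strip.
    hist, _ = np.histogram(strip, bins=bins, range=(0.0, 1.0))
    return hist / strip.size

def grid_features(image, n_rows=4, n_cols=4):
    # Divide the image into horizontal and vertical strips and extract
    # features from each, as in the grid-based generator described above.
    row_strips = np.array_split(image, n_rows, axis=0)
    col_strips = np.array_split(image, n_cols, axis=1)
    feats = [strip_features(s) for s in row_strips + col_strips]
    return np.concatenate(feats)

image = rng.random((64, 64))
vec = grid_features(image)
print(vec.shape)  # (64,) = (4 rows + 4 cols) * 8 histogram bins
```

In the paper, INCA would then trim such concatenated vectors down to the 980 most discriminative features before classification.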
Affiliation(s)
- Haixia Liu
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Guozhong Cui
- Department of Surgical Oncology, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yi Luo
- Medical Statistics Room, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yajie Guo
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Lianli Zhao
- Department of Internal Medicine Teaching and Research Group, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yueheng Wang
- Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, 050000, People's Republic of China
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland; Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
28
Misra S, Jeon S, Managuli R, Lee S, Kim G, Yoon C, Lee S, Barr RG, Kim C. Bi-Modal Transfer Learning for Classifying Breast Cancers via Combined B-Mode and Ultrasound Strain Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:222-232. [PMID: 34633928 DOI: 10.1109/tuffc.2021.3119251]
Abstract
Although accurate detection of breast cancer still poses significant challenges, deep learning (DL) can support more accurate image interpretation. In this study, we develop a highly robust DL model based on combined B-mode ultrasound (B-mode) and strain elastography ultrasound (SE) images for classifying benign and malignant breast tumors. The study retrospectively included 85 patients, 42 with benign lesions and 43 with malignancies, all confirmed by biopsy. Two deep neural network models, AlexNet and ResNet, were separately trained on a combined set of 205 B-mode and 205 SE images (80% for training and 20% for validation) from 67 patients with benign and malignant lesions. The two models were then configured to work as an ensemble, using both image-wise and layer-wise combination, and tested on a dataset of 56 images from the remaining 18 patients. The ensemble model captures the diverse features present in the B-mode and SE images and combines semantic features from the AlexNet and ResNet models to distinguish benign from malignant tumors. The experimental results demonstrate that the proposed ensemble model achieves 90% accuracy, better than the individual models and than models trained on B-mode or SE images alone. Moreover, some patients misclassified by the traditional methods were correctly classified by the proposed ensemble method. The proposed ensemble DL model should enable radiologists to achieve superior detection efficiency owing to its enhanced classification accuracy for breast cancers in ultrasound (US) images.
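The image-wise ensembling idea, averaging per-image class probabilities from two backbones, can be sketched as follows; the logits are made-up illustrative numbers, not outputs of the paper's models:

```python
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical logits from two branches (e.g. an AlexNet-style and a
# ResNet-style model) for 4 images and 2 classes (0 = benign, 1 = malignant).
logits_a = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 0.9], [0.1, 2.2]])
logits_b = np.array([[1.5, 0.3], [0.1, 1.8], [0.4, 1.2], [0.3, 1.9]])

# Image-wise ensembling: average the per-image class probabilities.
probs = (softmax(logits_a) + softmax(logits_b)) / 2
pred = probs.argmax(axis=1)
print(pred)  # [0 1 1 1]
```

Note the third image: the two branches disagree, and the averaged probabilities resolve the disagreement in favor of the more confident branch.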
29
Assari Z, Mahloojifar A, Ahmadinejad N. A bimodal BI-RADS-guided GoogLeNet-based CAD system for solid breast masses discrimination using transfer learning. Comput Biol Med 2021; 142:105160. [PMID: 34995955 DOI: 10.1016/j.compbiomed.2021.105160]
Abstract
Numerous solid breast masses require sophisticated analysis to establish a differential diagnosis. Consequently, complementary modalities such as ultrasound imaging are frequently required to further evaluate mammographically detected masses. Radiologists mentally integrate complementary information from images acquired of the same patient to reach a more conclusive and effective diagnosis, but doing so has always been a challenging task. This paper details a novel bimodal GoogLeNet-based CAD system that addresses the challenges of combining information from mammographic and sonographic images for solid breast mass classification. In the proposed framework, each modality is initially trained with a distinct monomodal model; a bimodal model is then trained on the high-level feature maps extracted from both modalities. To fully exploit the BI-RADS descriptors, different image content representations of each mass are obtained and used as input images. In addition, a two-step transfer learning strategy is proposed, using an ImageNet pre-trained GoogLeNet model, two publicly available databases, and our collected dataset. Our bimodal model achieves the best recognition results, with sensitivity, specificity, F1-score, Matthews Correlation Coefficient, area under the receiver operating characteristic curve, and accuracy of 90.91%, 89.87%, 90.32%, 80.78%, 95.82%, and 90.38%, respectively. These promising results indicate that the proposed CAD system can facilitate bimodal analysis of suspicious masses and thus contribute significantly to improving breast cancer diagnostic performance.
Affiliation(s)
- Zahra Assari
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Ali Mahloojifar
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Nasrin Ahmadinejad
- Medical Imaging Center, Cancer Research Institute, Imam Khomeini Hospital, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran
30
Li CC, Wu MY, Sun YC, Chen HH, Wu HM, Fang ST, Chung WY, Guo WY, Lu HHS. Ensemble classification and segmentation for intracranial metastatic tumors on MRI images based on 2D U-nets. Sci Rep 2021; 11:20634. [PMID: 34667233 PMCID: PMC8526612 DOI: 10.1038/s41598-021-99984-5]
Abstract
The extraction of brain tumor tissues in 3D brain Magnetic Resonance Imaging (MRI) plays an important role in diagnosis before gamma knife radiosurgery (GKRS). In this article, post-contrast T1 whole-brain MRI images collected by Taipei Veterans General Hospital (TVGH) and stored in DICOM format (dated from 1999 to 2018) were used. The proposed method starts with the active contour model to obtain the region of interest (ROI) automatically and enhance the image contrast. The segmentation models are trained on MRI images containing tumors to avoid the imbalanced-data problem during model construction. A two-step ensemble approach is used to establish the diagnosis: first, classify whether there is any tumor in the image, and second, segment the intracranial metastatic tumors with ensemble neural networks based on the 2D U-Net architecture. Ensembling for classification and segmentation simultaneously also improves segmentation accuracy. The classification result achieves an F1-measure of [Formula: see text], while the segmentation result achieves an IoU of [Formula: see text] and a DICE score of [Formula: see text]. The approach significantly reduces the time for manual labeling, from 30 min to 18 s per patient.
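The IoU and DICE metrics used to score the segmentations are standard and easy to state precisely; a minimal NumPy version for binary masks (the toy masks below are illustrative, not data from the study):

```python
import numpy as np

def iou(pred, target):
    # Intersection over Union for binary masks: |A ∩ B| / |A ∪ B|.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    # DICE score: 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True      # 16 px
target = np.zeros((8, 8), dtype=bool); target[3:7, 3:7] = True  # 16 px
# The overlap is the 3x3 block [3:6, 3:6] = 9 px.
print(iou(pred, target), dice(pred, target))  # 0.391304... 0.5625
```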
Affiliation(s)
- Cheng-Chung Li
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Meng-Yun Wu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ying-Chou Sun
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Hung-Hsun Chen
- Center of Teaching and Learning Development, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Hsiu-Mei Wu
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Ssu-Ting Fang
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Wen-Yuh Chung
- Department of Neurosurgery, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Wan-Yuo Guo
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- Henry Horng-Shing Lu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
31
Eroglu Y, Yildirim K, Çinar A, Yildirim M. Diagnosis and grading of vesicoureteral reflux on voiding cystourethrography images in children using a deep hybrid model. Comput Methods Programs Biomed 2021; 210:106369. [PMID: 34474195 DOI: 10.1016/j.cmpb.2021.106369]
Abstract
BACKGROUND AND OBJECTIVE Vesicoureteral reflux is the leakage of urine from the bladder into the ureter; as a result, urinary tract infections and kidney scarring can occur in children. Voiding cystourethrography (VCUG) is the primary radiological imaging method used to diagnose vesicoureteral reflux in children with a history of recurrent urinary tract infection, and reflux is also graded with voiding cystourethrography. In this study, we aimed to diagnose and grade vesicoureteral reflux on voiding cystourethrography images using a hybrid CNN-based deep learning model. METHODS Images of pediatric patients diagnosed with VUR in our hospital (Firat University Hospital) between 2016 and 2021 were graded according to the international vesicoureteral reflux radiographic grading system. VCUG images were available for 236 normal pediatric patients and 992 patients with vesicoureteral reflux. A total of six classes were created: normal and grades 1-5. RESULTS In this study, a hybrid CNN (Convolutional Neural Network) model with mRMR (Minimum Redundancy Maximum Relevance) feature selection is developed for the diagnosis and grading of vesicoureteral reflux on voiding cystourethrography images. GoogLeNet, MobileNetV2, and DenseNet201 are used as parts of the hybrid architecture. The features obtained from these architectures are concatenated, optimized with the mRMR method, and then classified with machine learning classifiers. Among the models used in the study, the proposed model achieved the highest accuracy, 96.9%. CONCLUSIONS Our findings show that the developed hybrid model can be used for the diagnosis and grading of vesicoureteral reflux on voiding cystourethrography images.
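The mRMR selection step can be sketched with a simple greedy criterion. Absolute Pearson correlation is used here as a stand-in for the mutual-information terms, and the synthetic data is made up, so this illustrates the idea (keep features relevant to the label, penalize features redundant with those already chosen) rather than the paper's exact implementation:

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy minimum-Redundancy Maximum-Relevance selection using
    absolute Pearson correlation as a simple stand-in for mutual
    information (the paper's exact criterion may differ)."""
    n_feat = X.shape[1]
    # Relevance: how strongly each feature tracks the label.
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(relevance.argmax())]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Redundancy: mean correlation with already-selected features.
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)
informative = y + 0.1 * rng.standard_normal(200)
duplicate = informative + 0.01 * rng.standard_normal(200)  # redundant copy
noise = rng.standard_normal((200, 3))
X = np.column_stack([informative, duplicate, noise])

selected = mrmr(X, y, 2)
print(selected)
```

The first pick is the most label-correlated feature; the redundancy penalty then discourages picking its near-duplicate next.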
Affiliation(s)
- Yesim Eroglu
- Department of Radiology, Firat University School of Medicine, Elazig, Turkey
- Kadir Yildirim
- Department of Urology, Turgut Ozal University, Malatya, Turkey
- Ahmet Çinar
- Department of Computer Engineering, Firat University, Elazig, Turkey
- Muhammed Yildirim
- Department of Computer Engineering, Firat University, Elazig, Turkey
32
Liu Z, Ni S, Yang C, Sun W, Huang D, Su H, Shu J, Qin N. Axillary lymph node metastasis prediction by contrast-enhanced computed tomography images for breast cancer patients based on deep learning. Comput Biol Med 2021; 136:104715. [PMID: 34388460 DOI: 10.1016/j.compbiomed.2021.104715]
Abstract
When doctors use contrast-enhanced computed tomography (CECT) images to predict axillary lymph node (ALN) metastasis in breast cancer patients, prediction performance can be degraded by subjective factors such as experience, psychological state, and degree of fatigue. This study aims to exploit efficient deep learning schemes to predict ALN metastasis automatically from CECT images. A new construction called the deformable sampling module (DSM) was designed as a plug-and-play sampling module in the proposed deformable attention VGG19 (DA-VGG19). A dataset of 800 samples labeled from 800 CECT images of 401 breast cancer patients retrospectively enrolled over the last three years was adopted to train, validate, and test the deep convolutional neural network models. The performance of the proposed model is analyzed in detail by comparing accuracy, positive predictive value, negative predictive value, sensitivity, and specificity. The best-performing DA-VGG19 model achieved an accuracy of 0.9088, higher than that of the other classification neural networks. The proposed intelligent diagnosis algorithm can thus provide doctors with daily diagnostic assistance and advice and reduce their workload. The source code mentioned in this article will be released later.
Affiliation(s)
- Ziyi Liu
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Sijie Ni
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Chunmei Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Weihao Sun
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Deqing Huang
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Hu Su
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Na Qin
- Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China