1. Vijayan S, Panneerselvam R, Roshini TV. Hybrid machine learning-based breast cancer segmentation framework using ultrasound images with optimal weighted features. Cell Biochem Funct 2024; 42:e4054. PMID: 38783623. DOI: 10.1002/cbf.4054.
Abstract
Breast cancer is one of the most dangerous conditions encountered in clinical practice because it can affect every aspect of a woman's life. Existing techniques for diagnosing breast cancer, however, are complicated, expensive, and often inaccurate. Many interdisciplinary, computerized systems have recently been created to reduce human error in both quantification and diagnosis. Ultrasonography is a crucial imaging technique for cancer detection, so it is essential to develop a system that enables the healthcare sector to detect breast cancer rapidly and effectively. Machine learning is widely employed for categorizing breast cancer patterns because of its ability to identify crucial features in complicated breast cancer datasets, but the performance of machine learning models is limited by the absence of an effective feature enhancement strategy. To address the issues of traditional breast cancer detection methods, a novel detection model is designed based on machine learning approaches applied to ultrasound images. First, ultrasound images acquired from benchmark resources are passed to a preprocessing phase, where filtering and contrast enhancement are applied. The preprocessed images are then segmented using Fuzzy C-Means, active contour, and watershed algorithms. Next, the segmented images are passed to a pixel selection phase, where pixels are selected by the developed hybrid Conglomerated Aphid with Galactic Swarm Optimization (CAGSO) algorithm to obtain the final segmented pixels. The selected pixels are fed into a feature extraction phase to obtain shape and texture features. The acquired features are then passed to an optimal weighted feature selection phase, where their weights are tuned by the developed CAGSO. Finally, the optimal weighted features are passed to the breast cancer detection phase. Throughout the experimental analysis, the developed breast cancer detection model achieved a higher performance rate than classical approaches.
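Of the segmentation algorithms named above, Fuzzy C-Means (FCM) assigns each pixel a soft membership to every cluster rather than a hard label. A minimal NumPy sketch of FCM on grayscale intensities (the cluster count, fuzziness exponent `m`, and iteration budget are illustrative choices, not the paper's settings):

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Cluster 1-D pixel intensities with soft (fuzzy) memberships."""
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=float).ravel()
    # Random initial membership matrix U (n_pixels x n_clusters); rows sum to 1.
    u = rng.random((x.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Cluster centers: membership-weighted means of the intensities.
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        # Distance of every pixel to every center (offset avoids division by zero).
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, u

# Two well-separated intensity groups; hard labels come from argmax membership.
pixels = np.array([0.10, 0.12, 0.11, 0.90, 0.88, 0.91])
centers, u = fuzzy_c_means(pixels, n_clusters=2)
labels = u.argmax(axis=1)
```

On a real ultrasound image the same routine would run on the flattened intensity array, with the membership map reshaped back to image dimensions.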
Affiliation(s)
- Sudharsana Vijayan: Department of Electronics and Communication Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chengalpattu, Tamil Nadu, India
- Radhika Panneerselvam: Department of Electronics and Communication Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chengalpattu, Tamil Nadu, India
- Thundi Valappil Roshini: Department of Electronics and Communication Engineering, Vimal Jyothi Engineering College, Chemperi, Kannur, Kerala, India
2. Heikal A, El-Ghamry A, Elmougy S, Rashad MZ. Fine tuning deep learning models for breast tumor classification. Sci Rep 2024; 14:10753. PMID: 38730248. PMCID: PMC11087494. DOI: 10.1038/s41598-024-60245-w.
Abstract
This paper proposes an approach to enhance the differentiation between benign and malignant breast tumors (BT) using histopathology images from the BreakHis dataset. The main stages are preprocessing, which encompasses image resizing and data partitioning into training and testing sets, followed by data augmentation. Both feature extraction and classification are performed by a custom CNN. The experimental results show that the proposed approach using the custom CNN model achieves better performance, with an accuracy of 84%, than the same approach applied with other pretrained models, including MobileNetV3, EfficientNetB0, VGG16, and ResNet50V2, which yield relatively lower accuracies ranging from 74% to 82%; these four models are used as both feature extractors and classifiers. To increase the accuracy and other performance metrics, the Grey Wolf Optimization (GWO) and Modified Gorilla Troops Optimization (MGTO) metaheuristic optimizers are applied to each model separately for hyperparameter tuning. In this case, the experimental results show that the custom CNN model, refined with MGTO optimization, reaches an accuracy of 93.13% in just 10 iterations, outperforming the other state-of-the-art methods and the four pretrained models on the BreakHis dataset.
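Grey Wolf Optimization, one of the two metaheuristics applied above for hyperparameter tuning, iteratively pulls a population of candidate solutions toward the three best "wolves" found so far. A minimal NumPy sketch on a toy objective (the population size, iteration count, and sphere test function are illustrative, not the paper's configuration):

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimize `objective` over a box [lo, hi]^dim with Grey Wolf Optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.apply_along_axis(objective, 1, wolves)
    for t in range(n_iter):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves lead the pack
        a = 2.0 - 2.0 * t / n_iter               # control parameter decays 2 -> 0
        for i in range(n_wolves):
            candidate = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a             # exploration/exploitation factor
                C = 2.0 * r2
                D = np.abs(C * leader - wolves[i])
                candidate += (leader - A * D) / 3.0   # average of the three pulls
            wolves[i] = np.clip(candidate, lo, hi)
            fitness[i] = objective(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Toy run: minimize the 2-D sphere function.
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=2, bounds=(-5.0, 5.0))
```

For hyperparameter tuning, each "wolf" would encode a point in the search space (learning rate, dropout, layer sizes) and the objective would be validation loss.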
Affiliation(s)
- Abeer Heikal: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt; Department of Computer Science, Misr Higher Institute for Commerce and Computers, Mansoura, 35511, Egypt
- Amir El-Ghamry: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Samir Elmougy: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- M Z Rashad: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
3. Subhashini R, Velswamy R, Sree Rathna Lakshmi NVS, Sivanandam C. An innovative breast cancer detection framework using multiscale dilated DenseNet with attention mechanism. Network (Bristol, England) 2024:1-37. PMID: 38648017. DOI: 10.1080/0954898X.2024.2343348.
Abstract
Cancer-related deadly diseases affect both developed and underdeveloped nations worldwide. Effective network learning is crucial for reliably identifying and categorizing breast carcinoma in vast and unbalanced image datasets, and the absence of early cancer symptoms makes early identification challenging. From the perspectives of diagnosis, prevention, and therapy, cancer therefore remains a healthcare concern that numerous researchers work to advance. Considering the complications of classical techniques, it is essential to design an innovative breast cancer detection model. Initially, breast cancer images are gathered from online sources and subjected to segmentation. The images are segmented using an Adaptive Trans-Dense-Unet (A-TDUNet), whose parameters are tuned by the developed Modified Sheep Flock Optimization Algorithm (MSFOA). The segmented images are then passed to the detection stage, where effective breast cancer detection is performed by a Multiscale Dilated DenseNet with Attention Mechanism (MDD-AM). In the result validation, the Negative Predictive Value (NPV) and accuracy rate of the designed approach are 96.719% and 93.494%, respectively. Hence, the implemented breast cancer detection model achieved better efficacy than the baseline detection methods under diverse experimental conditions.
Affiliation(s)
- R Subhashini: Department of Information Technology, Sona College of Technology, Salem, Tamil Nadu, India
- Rajasekar Velswamy: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- N V S Sree Rathna Lakshmi: Department of Electronics and Communication Engineering, Agni College of Technology, Thazhambur, Tamil Nadu, India
- Chakaravarthi Sivanandam: Department of Computer Science and Engineering, Panimalar Engineering College, Poonamallee, Chennai, Tamil Nadu, India
4. Zhao Y, Zhou X, Pan T, Gao S, Zhang W. Correspondence-based Generative Bayesian Deep Learning for semi-supervised volumetric medical image segmentation. Comput Med Imaging Graph 2024; 113:102352. PMID: 38341947. DOI: 10.1016/j.compmedimag.2024.102352.
Abstract
Automated medical image segmentation plays a crucial role in diverse clinical applications. The high annotation cost of fully supervised medical segmentation methods has spurred growing interest in semi-supervised methods. Existing semi-supervised medical segmentation methods train a teacher segmentation network on labeled data to establish pseudo labels for unlabeled data. The quality of these pseudo labels is constrained because such methods fail to effectively address the significant bias in the data distribution learned from the limited labeled data. To address these challenges, this paper introduces a Correspondence-based Generative Bayesian Deep Learning (C-GBDL) model. Built upon the teacher-student architecture, we design a multi-scale semantic correspondence method to help the teacher model generate high-quality pseudo labels. Specifically, our teacher model, embedded with the multi-scale semantic correspondence, learns a better-generalized data distribution from input volumes by feature matching with reference volumes. Additionally, a double uncertainty estimation scheme is proposed to further rectify noisy pseudo labels: the predictive entropy serves as the first uncertainty estimate, and the structural similarity between the input volume and its corresponding reference volumes serves as the second. Four groups of comparative experiments on two public medical datasets demonstrate the effectiveness and superior performance of the proposed model. Our code is available at https://github.com/yumjoo/C-GBDL.
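The first of the two uncertainty estimates above, predictive entropy, can be computed directly from per-voxel class probabilities; higher entropy marks voxels whose pseudo labels deserve less trust. A minimal sketch (the array shapes and the normalization to [0, 1] are illustrative conventions, not taken from the paper):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Per-voxel entropy of softmax probabilities.

    probs: array of shape (..., n_classes), each last-axis slice summing to 1.
    Returns entropy normalized to [0, 1] by dividing by log(n_classes).
    """
    p = np.clip(probs, eps, 1.0)          # guard against log(0)
    h = -(p * np.log(p)).sum(axis=-1)     # Shannon entropy per voxel
    return h / np.log(probs.shape[-1])    # 1.0 == maximally uncertain

# Confident voxel vs. maximally uncertain voxel (binary segmentation).
confident = predictive_entropy(np.array([0.99, 0.01]))
uncertain = predictive_entropy(np.array([0.50, 0.50]))
```

In a teacher-student pipeline, voxels whose entropy exceeds a threshold would be down-weighted or excluded when the student is trained on pseudo labels.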
Affiliation(s)
- Yuzhou Zhao: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Xinyu Zhou: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Tongxin Pan: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Shuyong Gao: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Wenqiang Zhang: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China; Shanghai Engineering Research Center of AI & Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
5. Kumar V, Prabha C, Sharma P, Mittal N, Askar SS, Abouhawwash M. Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images. BMC Med Imaging 2024; 24:63. PMID: 38500083. PMCID: PMC10946139. DOI: 10.1186/s12880-024-01241-4.
Abstract
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors developed a support system that uses three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, together with transfer learning, to predict lung cancer, thereby contributing to health and reducing the mortality rate associated with this condition. The system uses a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, with each image assigned to one of four categories. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve generalization, the authors implemented a data augmentation strategy. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer, promoting better health outcomes and potentially lowering the mortality rate.
Affiliation(s)
- Vinod Kumar: Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Chander Prabha: Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Preeti Sharma: Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Nitin Mittal: Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, Haryana, India
- S S Askar: Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash: Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
6. Selvakanmani S, Dharani Devi G, Rekha V, Jeyalakshmi J. Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach. J Imaging Inform Med 2024. PMID: 38424280. DOI: 10.1007/s10278-024-01035-8.
Abstract
Breast cancer is a deadly disease that causes a considerable number of fatalities among women worldwide. To improve patient outcomes and survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to address the limitations of scarce labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, mammography and MRI images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train a global model while keeping data decentralized. The proposed methodology uses a pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet on breast cancer datasets from diverse medical centers, the model learns specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets such as ImageNet. To overcome the domain shift caused by variations in data distributions across medical centers, domain adversarial training is introduced: the model learns to minimize the domain discrepancy while maximizing classification accuracy, yielding domain-invariant features. Extensive experiments were conducted on diverse breast cancer datasets obtained from multiple medical centers, and the proposed approach was compared against traditional standalone training and against federated learning without domain adaptation. Compared with traditional models, the proposed model achieved a classification accuracy of 98.8% and a computational time of 12.22 s. The results show promising improvements in classification accuracy and model generalization, underscoring the method's potential to improve breast cancer classification performance while upholding data privacy in a federated healthcare environment.
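Federated learning as described above keeps raw images at each medical center and shares only model parameters; a central server aggregates the locally updated weights, typically weighting each center by its sample count (FedAvg-style). A minimal sketch of one communication round (the center count, weight vectors, and sample sizes are illustrative, not the study's data):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: sample-count-weighted average of per-client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)        # shape (n_clients, n_params)
    shares = sizes / sizes.sum()              # each client's contribution weight
    return (shares[:, None] * stacked).sum(axis=0)

# One round: three hypothetical centers return locally fine-tuned parameters.
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]                       # images held at each center
global_weights = federated_average(local, sizes)
```

In a full system this averaged vector would be broadcast back to the centers as the new global model, and the round would repeat; no patient images ever leave a center.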
Affiliation(s)
- Selvakanmani S: Department of Information Technology, R.M.K Engineering College, Chennai, Tamil Nadu, India
- G Dharani Devi: Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Rekha V: Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, Tamil Nadu, India
- J Jeyalakshmi: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidhyapeetham, Chennai, India
7. Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024; 52:41-60. PMID: 38780105. DOI: 10.1615/CritRevBiomedEng.2024051166.
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. Breast masses are notably common in women aged 20 to 60. According to the Breast Imaging-Reporting and Data System (BI-RADS) standard, these masses are classified into categories such as fibroadenoma, breast cysts, and benign and malignant masses. Imaging plays a vital role in the diagnosis of breast disorders, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, identifying breast diseases from mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management and ultimately reduces mortality rates. To address this challenge, advancements in image processing techniques, specifically those utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems that assist radiologists in accurately identifying and classifying breast disorders. This paper reviews studies in which diverse machine learning approaches have been applied to digital mammograms to identify breast masses and classify them into subclasses such as normal, benign, and malignant. The paper also highlights both the advantages and limitations of existing techniques, offering valuable insights for future research in this critical area of medical imaging and breast health.
Affiliation(s)
- N Shankari: NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
- Vidya Kudva: School of Information Sciences, Manipal Academy of Higher Education, Manipal 576104, India; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte 574110, India
- Roopa B Hegde: NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
8. Vanmathi P, Jose D. An ensemble-based serial cascaded attention network and improved variational auto encoder for breast cancer prognosis prediction using data. Comput Methods Biomech Biomed Engin 2024; 27:98-115. PMID: 38006210. DOI: 10.1080/10255842.2023.2280883.
Abstract
Breast cancer is one of the most common types of cancer in women and accounts for a large number of deaths worldwide. Early recognition lessens its impact: it can convince patients to receive surgical therapy, which significantly improves the chance of recovery, whereas late recognition of breast cancer can lead to death. Machine learning techniques use this information to find links within the data and inform predictions about new cases. An accurate predictive framework for breast cancer prognosis is urgently needed in the current era. To accomplish this objective, an adaptive ensemble model is proposed for breast cancer prognosis prediction using data. Initially, the raw data are fetched from benchmark datasets, followed by data cleaning and preprocessing. Subsequently, the preprocessed data are fed into the Improved Variational Autoencoder (IVAE), where deep features are extracted. Finally, the resulting features are given as input to the Ensemble-based Serial Cascaded Attention Network (ESCANet), which is built with a Deep Temporal Convolution Network (DTCN), Bi-directional Long Short-Term Memory (BiLSTM), and a Recurrent Neural Network (RNN). The effectiveness of the model is validated and compared with conventional methodologies; the results show that the proposed methodology achieves strong performance and increases the system's efficiency.
Affiliation(s)
- P Vanmathi: Full-time Research Scholar, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
- Deepa Jose: Professor, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
9. Labrada A, Barkana BD. A Comprehensive Review of Computer-Aided Models for Breast Cancer Diagnosis Using Histopathology Images. Bioengineering (Basel) 2023; 10:1289. PMID: 38002413. PMCID: PMC10669627. DOI: 10.3390/bioengineering10111289.
Abstract
Breast cancer is the second most common cancer in women, who are mainly middle-aged and older. The American Cancer Society reported that a woman's average lifetime risk of developing breast cancer is about 13%, and this incidence rate has increased by 0.5% per year in recent years. A biopsy is done when screening tests and imaging results show suspicious breast changes. Advancements in computer-aided system capabilities and performance have fueled research using histopathology images in cancer diagnosis, and advances in machine learning and deep neural networks have tremendously increased the number of studies developing computerized detection and classification models. The dataset-dependent nature and trial-and-error approach of deep network tuning have produced varying results in the literature. This work comprehensively reviews the studies published between 2010 and 2022 regarding commonly used public-domain datasets and the methodologies used in preprocessing, segmentation, feature engineering, machine-learning approaches, classifiers, and performance metrics.
Affiliation(s)
- Alberto Labrada: Department of Electrical Engineering, The University of Bridgeport, Bridgeport, CT 06604, USA
- Buket D. Barkana: Department of Biomedical Engineering, The University of Akron, Akron, OH 44325, USA
10. AlMulla J, Islam MT, Al-Absi HRH, Alam T. SoccerNet: A Gated Recurrent Unit-based model to predict soccer match winners. PLoS One 2023; 18:e0288933. PMID: 37527260. PMCID: PMC10393150. DOI: 10.1371/journal.pone.0288933.
Abstract
Winning matches is the major goal of all football clubs in the world. Football being the most popular game in the world, many studies have analyzed and predicted match winners based on players' physical and technical performance. In this study, we analyzed matches from Qatar's professional football league, the Qatar Stars League (QSL), covering the last ten seasons (2011 to 2022), and propose SoccerNet, a Gated Recurrent Unit (GRU)-based deep learning model that predicts match winners with over 80% accuracy. We considered match- and player-related information captured by the STATS platform in 15-minute time slots and analyzed players' performance at different positions on the field at different stages of the match. Our results indicate that in QSL the defenders' role is more dominant than that of midfielders and forwards. Moreover, our analysis suggests that the last 15-30 minutes of QSL matches have a more significant impact on the match result than other segments. To the best of our knowledge, the proposed model is the first deep-learning-based model for predicting match winners in any professional football league in the Middle East and North Africa (MENA) region. We believe the results will support the coaching staff and team management of QSL clubs in designing game strategies and improving the overall quality of player performance.
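The GRU at the heart of SoccerNet gates how much of the previous hidden state carries over into each new time slot of the match. A minimal NumPy sketch of a single GRU cell step (the dimensions, random weights, and zero initial state are illustrative, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: input x of shape (d_in,), previous hidden state h of shape (d_h,)."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate: how much to overwrite
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde         # blend old state with candidate

# Tiny demo: run a 3-step feature sequence through a 4-unit cell.
rng = np.random.default_rng(0)
d_in, d_h = 2, 4
params = [rng.normal(scale=0.1, size=(d_h, d_in)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for x in rng.normal(size=(3, d_in)):           # e.g. per-slot match statistics
    h = gru_step(x, h, *params)
```

In the full model, each 15-minute slot's match and player statistics would form one input vector `x`, and the final hidden state would feed a classifier over the match outcome.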
Affiliation(s)
- Jassim AlMulla: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mohammad Tariqul Islam: Computer Science Department, Southern Connecticut State University, New Haven, CT, United States of America
- Hamada R H Al-Absi: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Tanvir Alam: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
11. Ferreira ACBH, Ferreira DD, Barbosa BHG, Aline de Oliveira U, Aparecida Padua E, Oliveira Chiarini F, Baena de Moraes Lopes MH. Neural network-based method to stratify people at risk for developing diabetic foot: A support system for health professionals. PLoS One 2023; 18:e0288466. PMID: 37440514. DOI: 10.1371/journal.pone.0288466.
Abstract
BACKGROUND AND OBJECTIVE: Diabetes Mellitus (DM) is a chronic disease with a high worldwide prevalence. Diabetic foot is one of the complications of DM and compromises health and quality of life due to the risk of lower limb amputation. This work aimed to build a risk classification system for the evolution of diabetic foot using Artificial Neural Networks (ANN).
METHODS: This methodological study used two databases: one for system design (training and validation) containing 250 participants with DM, and another for testing, containing 141 participants. Each subject answered a questionnaire with 54 questions about foot care and sociodemographic information. Participants in both databases were classified by specialists as at high or low risk for diabetic foot. Supervised ANN (multi-layer perceptron, MLP) models were explored and a smartphone app was built. The app returns a personalized report indicating self-care for each user. The System Usability Scale (SUS) was used for the usability evaluation.
RESULTS: MLP models were built and, following the principle of parsimony, the simplest model was chosen for implementation in the application. On the test data, the model achieved accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 76%, 91%, 89%, and 79%, respectively. The app presented good usability (93.33 points on a scale from 0 to 100).
CONCLUSIONS: The study showed that the proposed model has satisfactory performance and is simple, requiring only 10 variables. This simplicity facilitates its use by health professionals and patients with diabetes.
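The five figures reported above all derive from the test set's confusion matrix, taking the high-risk class as positive. A minimal sketch of the calculations (the counts below are illustrative, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on the high-risk (positive) class
        "specificity": tn / (tn + fp),   # recall on the low-risk (negative) class
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a 135-subject test set.
m = screening_metrics(tp=38, fp=5, tn=80, fn=12)
```

Reporting all five together matters in screening: a model can score high accuracy on an imbalanced cohort while its sensitivity, the cost of missed high-risk patients, stays low.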
Affiliation(s)
- Ana Cláudia Barbosa Honório Ferreira: School of Nursing, Universidade Estadual de Campinas, Campinas, São Paulo, Brazil; University Center of Lavras, Unilavras, Lavras, Minas Gerais, Brazil
12. Zakareya S, Izadkhah H, Karimpour J. A New Deep-Learning-Based Model for Breast Cancer Diagnosis from Medical Images. Diagnostics (Basel) 2023; 13:1944. PMID: 37296796. DOI: 10.3390/diagnostics13111944.
Abstract
Breast cancer is one of the most prevalent cancers among women worldwide, and early detection of the disease can be lifesaving: detecting breast cancer early allows treatment to begin sooner, increasing the chances of a successful outcome. Machine learning helps with early detection of breast cancer even in places without access to a specialist doctor. The rapid advancement of machine learning, and particularly deep learning, has increased the medical imaging community's interest in applying these techniques to improve the accuracy of cancer screening. However, disease-related data are often scarce, whereas deep-learning models need large amounts of data to learn well; as a result, existing deep-learning models perform worse on medical images than on other image types. To overcome this limitation and improve breast cancer classification, this paper proposes a new deep model inspired by two state-of-the-art deep networks, GoogLeNet and the residual block, together with several new features. Granular computing, shortcut connections, two learnable activation functions in place of traditional activation functions, and an attention mechanism are expected to improve the accuracy of diagnosis and consequently decrease the load on doctors. Granular computing, in particular, can improve diagnostic accuracy by capturing more detailed and fine-grained information about cancer images. The proposed model's superiority is demonstrated by comparing it to several state-of-the-art deep models and existing works in two case studies: it achieved an accuracy of 93% on ultrasound images and 95% on breast histopathology images.
Affiliation(s)
- Salman Zakareya
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
- Habib Izadkhah
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
- Research Department of Computational Algorithms and Mathematical Models, University of Tabriz, Tabriz 5166616471, Iran
- Jaber Karimpour
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran

13
Iqbal U, Imtiaz R, Saudagar AKJ, Alam KA. CRV-NET: Robust Intensity Recognition of Coronavirus in Lung Computerized Tomography Scan Images. Diagnostics (Basel) 2023; 13:diagnostics13101783. [PMID: 37238266 DOI: 10.3390/diagnostics13101783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2023] [Revised: 05/01/2023] [Accepted: 05/10/2023] [Indexed: 05/28/2023] Open
Abstract
The early diagnosis of infectious diseases is demanded by digital healthcare systems. Currently, the detection of the new coronavirus disease (COVID-19) is a major clinical requirement. Deep learning models are used for COVID-19 detection in various studies, but their robustness remains limited. In recent years, deep learning models have grown in popularity in almost every area, particularly in medical image processing and analysis. Visualization of the human body's internal structure is critical in medical analysis, and many imaging techniques are in use for this job; a computerized tomography (CT) scan is one of them and has been widely used for non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts time and reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset of 221 training images and their ground truth, which was labeled by an expert. The proposed model is tested on 100 test images, and the results show that the model segments COVID-19 with a satisfactory level of accuracy. Moreover, a comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network (CNN) models, including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (a low epoch value in detection and the smallest training data size).
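A standard way to score segmentations against expert ground truth, as in the study above, is overlap with the labeled mask; the Dice coefficient is the usual metric (several related entries in this list report it). A small NumPy sketch of the metric, not the paper's evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) on binary masks: 1.0 for a perfect
    overlap, 0.0 for none (eps guards against empty masks)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])  # predicted lesion mask (toy)
target = np.array([[1, 0, 0], [0, 1, 1]])  # expert ground truth (toy)
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```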
Affiliation(s)
- Uzair Iqbal
- Department of Artificial Intelligence and Data Science, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan
- Romil Imtiaz
- Information and Communication Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Khubaib Amjad Alam
- Department of Software Engineering, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan

14
Mondol RK, Millar EKA, Graham PH, Browne L, Sowmya A, Meijering E. hist2RNA: An Efficient Deep Learning Architecture to Predict Gene Expression from Breast Cancer Histopathology Images. Cancers (Basel) 2023; 15:cancers15092569. [PMID: 37174035 PMCID: PMC10177559 DOI: 10.3390/cancers15092569] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 04/23/2023] [Accepted: 04/28/2023] [Indexed: 05/15/2023] Open
Abstract
Gene expression can be used to subtype breast cancer with improved prediction of risk of recurrence and treatment responsiveness over that obtained using routine immunohistochemistry (IHC). However, in the clinic, molecular profiling is primarily used for ER+ breast cancer, which is costly, tissue destructive, requires specialised platforms, and takes several weeks to obtain a result. Deep learning algorithms can effectively extract morphological patterns in digital histopathology images to predict molecular phenotypes quickly and cost-effectively. We propose a new, computationally efficient approach called hist2RNA inspired by bulk RNA sequencing techniques to predict the expression of 138 genes (incorporated from 6 commercially available molecular profiling tests), including luminal PAM50 subtype, from hematoxylin and eosin (H&E)-stained whole slide images (WSIs). The training phase involves the aggregation of extracted features for each patient from a pretrained model to predict gene expression at the patient level using annotated H&E images from The Cancer Genome Atlas (TCGA, n = 335). We demonstrate successful gene prediction on a held-out test set (n = 160, corr = 0.82 across patients, corr = 0.29 across genes) and perform exploratory analysis on an external tissue microarray (TMA) dataset (n = 498) with known IHC and survival information. Our model is able to predict gene expression and luminal PAM50 subtype (Luminal A versus Luminal B) on the TMA dataset with prognostic significance for overall survival in univariate analysis (c-index = 0.56, hazard ratio = 2.16 (95% CI 1.12-3.06), p < 5 × 10-3), and independent significance in multivariate analysis incorporating standard clinicopathological variables (c-index = 0.65, hazard ratio = 1.87 (95% CI 1.30-2.68), p < 5 × 10-3). The proposed strategy achieves superior performance while requiring less training time, resulting in less energy consumption and computational cost compared to patch-based models. 
Additionally, hist2RNA predicts gene expression that has potential to determine luminal molecular subtypes which correlates with overall survival, without the need for expensive molecular testing.
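The two correlations reported above (corr = 0.82 across patients, corr = 0.29 across genes) can both be computed from a patients × genes prediction matrix by averaging row-wise or column-wise Pearson correlations. A toy NumPy sketch with synthetic data, not TCGA values or the hist2RNA evaluation code:

```python
import numpy as np

def mean_corr(pred, true, axis):
    """Mean Pearson correlation: axis=1 correlates each patient's row across
    genes; axis=0 correlates each gene's column across patients."""
    if axis == 0:
        pred, true = pred.T, true.T
    corrs = [np.corrcoef(p, t)[0, 1] for p, t in zip(pred, true)]
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
true = rng.normal(size=(5, 8))                   # 5 patients x 8 genes (synthetic)
pred = true + 0.1 * rng.normal(size=true.shape)  # noisy "predictions"
print(mean_corr(pred, true, axis=1))             # across-patients correlation
print(mean_corr(pred, true, axis=0))             # across-genes correlation
```

The gap between the two numbers in the paper reflects that per-patient expression profiles are easier to rank than the variation of a single gene across patients.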
Affiliation(s)
- Raktim Kumar Mondol
- School of Computer Science and Engineering, UNSW Sydney, Kensington, NSW 2052, Australia
- Ewan K A Millar
- Department of Anatomical Pathology, NSW Health Pathology, St. George Hospital, Kogarah, NSW 2217, Australia
- St. George and Sutherland Clinical School, UNSW Sydney, Kensington, NSW 2052, Australia
- Faculty of Medicine and Health Sciences, Western Sydney University, Campbelltown, NSW 2560, Australia
- University of Technology Sydney, Ultimo, NSW 2007, Australia
- Peter H Graham
- St. George and Sutherland Clinical School, UNSW Sydney, Kensington, NSW 2052, Australia
- Cancer Care Centre, St George Hospital, Sydney, NSW 2217, Australia
- Lois Browne
- Cancer Care Centre, St George Hospital, Sydney, NSW 2217, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, UNSW Sydney, Kensington, NSW 2052, Australia
- Erik Meijering
- School of Computer Science and Engineering, UNSW Sydney, Kensington, NSW 2052, Australia

15
Chanchal AK, Lal S, Kumar R, Kwak JT, Kini J. A novel dataset and efficient deep learning framework for automated grading of renal cell carcinoma from kidney histopathology images. Sci Rep 2023; 13:5728. [PMID: 37029115 PMCID: PMC10082027 DOI: 10.1038/s41598-023-31275-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 03/09/2023] [Indexed: 04/09/2023] Open
Abstract
Kidney cancer cases worldwide are expected to increase persistently, which motivates updating the traditional diagnosis system to respond to future challenges. Renal Cell Carcinoma (RCC) is the most common kidney cancer and is responsible for 80-85% of all renal tumors. This study proposed a robust and computationally efficient, fully automated Renal Cell Carcinoma Grading Network (RCCGNet) for kidney histopathology images. The proposed RCCGNet contains a shared channel residual (SCR) block, which allows the network to learn feature maps associated with different versions of the input via two parallel paths. The SCR block shares information between two different layers and operates on the shared data separately, so the two paths provide beneficial supplements to each other. As part of this study, we also introduced a new dataset for the grading of RCC with five different grades. We obtained 722 Hematoxylin & Eosin (H&E)-stained slides of different patients, with associated grades, from the Department of Pathology, Kasturba Medical College (KMC), Mangalore, India. We performed comparative experiments that include deep learning models trained from scratch as well as transfer learning techniques using pre-trained ImageNet weights. To show that the proposed model generalizes and is independent of the dataset, we also experimented with an additional well-established dataset, BreakHis, for eight-class classification. The experimental results show that the proposed RCCGNet is superior to the eight most recent classification methods on both the proposed dataset and the BreakHis dataset in terms of prediction accuracy and computational complexity.
Affiliation(s)
- Amit Kumar Chanchal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India
- Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India
- Ranjeet Kumar
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Jin Tae Kwak
- School of Electrical Engineering, Korea University, Seoul, Korea
- Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, India
- Manipal Academy of Higher Education, Manipal, India

16
Lal KN. A lung sound recognition model to diagnoses the respiratory diseases by using transfer learning. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 82:1-17. [PMID: 37362727 PMCID: PMC10050810 DOI: 10.1007/s11042-023-14727-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 09/29/2022] [Accepted: 02/05/2023] [Indexed: 06/28/2023]
Abstract
Respiratory disease is one of the leading causes of death in the world. Through advances in artificial intelligence, it appears possible to move past the days of misdiagnosis and of treating respiratory disease symptoms rather than their root cause. The traditional convolutional neural network cannot extract the temporal features of lung sounds. To solve this problem, a lung sound recognition algorithm based on VGGish-stacked BiGRU is proposed, which combines the VGGish network with a stacked bidirectional gated recurrent unit (BiGRU) neural network. The pre-trained VGGish model serves as a feature extractor for transfer learning: the target model is built with the same structure as the source VGGish model, and parameters are transferred from the source model to the target model. A multi-layer BiGRU stack is used to enhance the extracted features, while the transferred VGGish parameters are kept frozen during fine-tuning, which improves the model. The experimental results show that the proposed algorithm improves the recognition accuracy of lung sounds and of respiratory diseases.
Affiliation(s)
- Kumari Nidhi Lal
- Department of Computer Science Engineering, Visvesvaraya National Institute of Technology (VNIT Nagpur), Nagpur, Maharashtra, India

17
Feng Q, Liu S, Peng JX, Yan T, Zhu H, Zheng ZJ, Feng HC. Deep learning-based automatic sella turcica segmentation and morphology measurement in X-ray images. BMC Med Imaging 2023; 23:41. [PMID: 36964517 PMCID: PMC10039601 DOI: 10.1186/s12880-023-00998-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 03/14/2023] [Indexed: 03/26/2023] Open
Abstract
BACKGROUND Although the morphological changes of sella turcica have been drawing increasing attention, the acquirement of linear parameters of sella turcica relies on manual measurement. Manual measurement is laborious, time-consuming, and may introduce subjective bias. This paper aims to develop and evaluate a deep learning-based model for automatic segmentation and measurement of sella turcica in cephalometric radiographs. METHODS 1129 images were used to develop a deep learning-based segmentation network for automatic sella turcica segmentation. Besides, 50 images were used to test the generalization ability of the model. The performance of the segmented network was evaluated by the dice coefficient. Images in the test datasets were segmented by the trained segmentation network, and the segmentation results were saved in binary images. Then the extremum points and corner points were detected by calling the function in the OpenCV library to obtain the coordinates of the four landmarks of the sella turcica. Finally, the length, diameter, and depth of the sella turcica can be obtained by calculating the distance between the two points and the distance from the point to the straight line. Meanwhile, images were measured manually using Digimizer. Intraclass correlation coefficients (ICCs) and Bland-Altman plots were used to analyze the consistency between automatic and manual measurements to evaluate the reliability of the proposed methodology. RESULTS The dice coefficient of the segmentation network is 92.84%. For the measurement of sella turcica, there is excellent agreement between the automatic measurement and the manual measurement. In Test1, the ICCs of length, diameter and depth are 0.954, 0.953, and 0.912, respectively. In Test2, ICCs of length, diameter and depth are 0.906, 0.921, and 0.915, respectively. 
In addition, Bland-Altman plots showed the excellent reliability of the automated measurement method, with the majority of measurement differences falling within the ±1.96 SD interval around the mean difference, and no bias was apparent. CONCLUSIONS Our experimental results indicated that the proposed methodology could complete the automatic segmentation of the sella turcica efficiently and reliably predict the length, diameter, and depth of the sella turcica. Moreover, the proposed method has generalization ability, as shown by its excellent performance on Test2.
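The linear measurements described above reduce to two geometric primitives: the distance between two landmark points, and the perpendicular distance from a landmark to a line. A minimal stdlib sketch with made-up coordinates (not real sella turcica landmarks or the paper's code):

```python
import math

def point_distance(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b,
    via the 2D cross-product formula |AB x AP| / |AB|."""
    ab = (b[0] - a[0], b[1] - a[1])
    ap = (p[0] - a[0], p[1] - a[1])
    cross = ab[0] * ap[1] - ab[1] * ap[0]
    return abs(cross) / math.hypot(ab[0], ab[1])

# Toy coordinates only:
print(point_distance((0, 0), (3, 4)))                  # 5.0
print(point_to_line_distance((0, 2), (0, 0), (4, 0)))  # 2.0
```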
Affiliation(s)
- Qi Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Shu Liu
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ju-Xiang Peng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ting Yan
- Department of Radiology, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong Zhu
- Department of Medical Information, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Zhi-Jun Zheng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong-Chao Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Department of Oral and Maxillofacial Surgery, Guiyang Hospital of Stomatology, Guiyang, 550002, China

18
Cardone D, Trevisi G, Perpetuini D, Filippini C, Merla A, Mangiola A. Intraoperative thermal infrared imaging in neurosurgery: machine learning approaches for advanced segmentation of tumors. Phys Eng Sci Med 2023; 46:325-337. [PMID: 36715852 PMCID: PMC10030394 DOI: 10.1007/s13246-023-01222-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Accepted: 01/17/2023] [Indexed: 01/31/2023]
Abstract
Surgical resection is one of the most relevant practices in neurosurgery. Finding the correct surgical extent of the tumor is a key question, and so far several techniques have been employed to assist the neurosurgeon in preserving the maximum amount of healthy tissue. Some of these methods are invasive for patients and do not always allow high precision in the detection of the tumor area. The aim of this study is to overcome these limitations by developing machine learning based models relying on features obtained from a contactless and non-invasive technique, thermal infrared (IR) imaging. The thermal IR videos of thirteen patients with heterogeneous tumors were recorded in the intraoperative context. Time (TD)- and frequency (FD)-domain features were extracted and fed into different machine learning models. Models relying on FD features proved to be the best solutions for the optimal detection of the tumor area (Average Accuracy = 90.45%; Average Sensitivity = 84.64%; Average Specificity = 93.74%). The obtained results highlight the possibility of accurately detecting the tumor lesion boundary with a completely non-invasive, contactless, and portable technology, revealing thermal IR imaging as a very promising tool for the neurosurgeon.
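One plausible frequency-domain (FD) feature for a pixel's temperature time course is its spectral power in a frequency band; the abstract does not list the exact features used, so the following NumPy sketch is only an assumed illustration of the FD-feature idea:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz, computed from its FFT
    (a generic FD feature, not the study's actual feature set)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum() / len(signal)

fs = 100.0                          # sampling rate in Hz (toy value)
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t)     # synthetic 5 Hz "thermal oscillation"
in_band = band_power(sig, fs, 4, 6)
out_band = band_power(sig, fs, 20, 30)
print(in_band > out_band)           # True: the energy sits near 5 Hz
```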
Affiliation(s)
- Daniela Cardone
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy
- Gianluca Trevisi
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- David Perpetuini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Chiara Filippini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Arcangelo Merla
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy
- Annunziato Mangiola
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy

19
Ahmad M, Sanawar S, Alfandi O, Qadri SF, Saeed IA, Khan S, Hayat B, Ahmad A. Facial expression recognition using lightweight deep learning modeling. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:8208-8225. [PMID: 37161193 DOI: 10.3934/mbe.2023357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Facial expression is a type of communication and is useful in many areas of computer vision, including intelligent visual surveillance, human-robot interaction and human behavior analysis. A deep learning approach is presented to classify happy, sad, angry, fearful, contemptuous, surprised and disgusted expressions. Accurate detection and classification of human facial expression is a critical task in image processing due to the inconsistencies amid the complexity, including change in illumination, occlusion, noise and the over-fitting problem. A stacked sparse auto-encoder for facial expression recognition (SSAE-FER) is used for unsupervised pre-training and supervised fine-tuning. SSAE-FER automatically extracts features from input images, and the softmax classifier is used to classify the expressions. Our method achieved an accuracy of 92.50% on the JAFFE dataset and 99.30% on the CK+ dataset. SSAE-FER performs well compared to the other comparative methods in the same domain.
Affiliation(s)
- Mubashir Ahmad
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Tobe Camp, Abbottabad-22060, Pakistan
- Department of Computer Science, The University of Lahore, Sargodha Campus 40100, Pakistan
- Saira Sanawar
- Department of Computer Science, The University of Lahore, Sargodha Campus 40100, Pakistan
- Omar Alfandi
- College of Technological Innovation, Zayed University, Abu Dhabi, UAE
- Syed Furqan Qadri
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Iftikhar Ahmed Saeed
- Department of Computer Science, The University of Lahore, Sargodha Campus 40100, Pakistan
- Salabat Khan
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Bashir Hayat
- Department of Computer Science, Institute of Management Sciences, Peshawar, Pakistan
- Arshad Ahmad
- Department of IT & CS, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur 22620, Pakistan

20
Buvaneswari B, Vijayaraj J, Satheesh Kumar B. Histopathological image-based breast cancer detection employing 3D-convolutional neural network feature extraction and Stochastic Diffusion Kernel Recursive Neural Networks classification. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2022.2161148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- B. Buvaneswari
- Department of Information Technology, Panimalar Engineering College, Chennai, India
- J. Vijayaraj
- Department of Artificial Intelligence and Data Science, Easwari Engineering College, Chennai, India
- B. Satheesh Kumar
- Department of Computer Science and Engineering, School of Computing Science and Engineering, Galgotias University, Greater Noida, India

21
Using Deep Learning with Bayesian–Gaussian Inspired Convolutional Neural Architectural Search for Cancer Recognition and Classification from Histopathological Image Frames. JOURNAL OF HEALTHCARE ENGINEERING 2023. [DOI: 10.1155/2023/4597445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
We propose a neural architectural search model which examines histopathological images to detect the presence of cancer in both lung and colon tissues. In recent times, deep artificial neural networks have made tremendous impacts in healthcare. However, obtaining an optimal artificial neural network model that yields excellent performance during training, evaluation, and inference has been a bottleneck for researchers. Our method uses a Bayesian convolutional neural architectural search algorithm in collaboration with Gaussian processes to provide an efficient neural network architecture for colon and lung cancer classification and recognition. The proposed model learns by using the Gaussian process to estimate the required optimal architectural values, choosing a set of model parameters through the exploitation of expected improvement (EI) values, thereby minimizing the number of sampled trials and suggesting the best model architecture. Several experiments were conducted, and landmark performance was obtained on both validation and test data through the evaluation of the proposed model on a dataset consisting of 25,000 images of five different classes, using convergence and F1-score metrics.
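The expected improvement (EI) acquisition mentioned above has a closed form under a Gaussian posterior. A stdlib-only sketch for minimization, where `mu` and `sigma` are the GP posterior mean and standard deviation at a candidate architecture and `best` is the incumbent best score (a generic illustration of EI, not the paper's search code):

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Closed-form EI for minimization: expected amount by which a candidate
    with posterior N(mu, sigma^2) beats `best` (xi adds mild exploration)."""
    if sigma == 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu - xi) * cdf + sigma * pdf

# A candidate predicted to score better (lower mu) earns a larger EI:
print(expected_improvement(mu=0.5, sigma=0.2, best=1.0) >
      expected_improvement(mu=0.9, sigma=0.2, best=1.0))  # True
```

Sampling the candidate with the highest EI at each step is what lets the search minimize the number of sampled trials.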
22
Srikantamurthy MM, Rallabandi VPS, Dudekula DB, Natarajan S, Park J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med Imaging 2023; 23:19. [PMID: 36717788 PMCID: PMC9885590 DOI: 10.1186/s12880-023-00964-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 01/12/2023] [Indexed: 01/31/2023] Open
Abstract
BACKGROUND Grading cancer histopathology slides requires pathologists and expert clinicians, and it is time consuming to examine whole-slide images manually. Hence, automated classification of histopathological breast cancer sub-types is useful for clinical diagnosis and therapeutic responses. Recent deep learning methods for medical image analysis suggest the utility of automated radiologic imaging classification for relating disease characteristics or diagnosis and patient stratification. METHODS We developed a hybrid model using a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM RNN) to classify four benign and four malignant breast cancer subtypes. The proposed CNN-LSTM, leveraging ImageNet, uses a transfer learning approach to classify and predict the four subtypes of each class. The proposed model was evaluated on the BreakHis dataset, comprising 2480 benign and 5429 malignant cancer images acquired at magnifications of 40×, 100×, 200×, and 400×. RESULTS The proposed hybrid CNN-LSTM model was compared with existing CNN models used for breast histopathological image classification, such as VGG-16, ResNet50, and Inception. All models were built with three different optimizers, adaptive moment estimation (Adam), root mean square propagation (RMSProp), and stochastic gradient descent (SGD), for varying numbers of epochs. From the results, we noticed that Adam was the best optimizer, with maximum accuracy and minimum model loss for both the training and validation sets. The proposed hybrid CNN-LSTM model showed the highest overall accuracy of 99% for binary classification of benign and malignant cancer, and 92.5% for multi-class classification of the benign and malignant cancer subtypes.
CONCLUSION The proposed transfer learning approach outperformed the state-of-the-art machine and deep learning models in classifying benign and malignant cancer subtypes. The proposed method is feasible for the classification of other cancers and diseases as well.
Affiliation(s)
- Dawood Babu Dudekula
- 3BIGS Omicscore Pvt. Ltd., 909 Lavelle Building, Richmond Circle, Bangalore, 560025 India
- Sathishkumar Natarajan
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
- Junhyung Park
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea

23
Automated Detection of Broncho-Arterial Pairs Using CT Scans Employing Different Approaches to Classify Lung Diseases. Biomedicines 2023; 11:biomedicines11010133. [PMID: 36672641 PMCID: PMC9855445 DOI: 10.3390/biomedicines11010133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 12/23/2022] [Accepted: 12/29/2022] [Indexed: 01/06/2023] Open
Abstract
Current research indicates that for the identification of lung disorders, comprising pneumonia and COVID-19, structural distortions of bronchi and arteries (BA) should be taken into account. CT scans are an effective modality to detect lung anomalies. However, anomalies in bronchi and arteries can be difficult to detect. Therefore, in this study, alterations of bronchi and arteries are considered in the classification of lung diseases. Four approaches to highlight these are introduced: (a) a Hessian-based approach, (b) a region-growing algorithm, (c) a clustering-based approach, and (d) a color-coding-based approach. Prior to this, the lungs are segmented, employing several image preprocessing algorithms. The utilized COVID-19 Lung CT scan dataset contains three classes named Non-COVID, COVID, and community-acquired pneumonia, having 6983, 7593, and 2618 samples, respectively. To classify the CT scans into three classes, two deep learning architectures, (a) a convolutional neural network (CNN) and (b) a CNN with long short-term memory (LSTM) and an attention mechanism, are considered. Both these models are trained with the four datasets achieved from the four approaches. Results show that the CNN model achieved test accuracies of 88.52%, 87.14%, 92.36%, and 95.84% for the Hessian, the region-growing, the color-coding, and the clustering-based approaches, respectively. The CNN with LSTM and an attention mechanism model results in an increase in overall accuracy for all approaches with an 89.61%, 88.28%, 94.61%, and 97.12% test accuracy for the Hessian, region-growing, color-coding, and clustering-based approaches, respectively. To assess overfitting, the accuracy and loss curves and k-fold cross-validation technique are employed. The Hessian-based and region-growing algorithm-based approaches produced nearly equivalent outcomes. 
Our proposed method outperforms state-of-the-art studies, indicating that it may be worthwhile to pay more attention to BA features in lung disease classification based on CT images.
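Of the four highlighting approaches above, region growing is the simplest to sketch: starting from a seed pixel, 4-connected neighbours are absorbed while their intensity stays within a tolerance of the seed's. A minimal pure-Python illustration of the idea, not the study's implementation:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` (row, col), accepting 4-connected neighbours
    whose intensity is within `tol` of the seed value (BFS flood fill)."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - seed_val) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [[10, 11, 50],
       [10, 12, 52],
       [55, 53, 51]]
print(sorted(region_grow(img, (0, 0), tol=3)))
# the bright values (50+) stay outside the grown region
```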
24
Thalakottor LA, Shirwaikar RD, Pothamsetti PT, Mathews LM. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit Rev Biomed Eng 2023; 51:41-62. [PMID: 37581350 DOI: 10.1615/critrevbiomedeng.2023047793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Cancer, a leading cause of mortality, is distinguished by the multi-stage conversion of healthy cells into cancer cells. Discovering the disease early can significantly enhance the possibility of survival. Histology is a procedure in which the tissue of interest is first surgically removed from a patient and cut into thin slices. A pathologist then mounts these slices on glass slides, stains them with specialized dyes like hematoxylin and eosin (H&E), and inspects the slides under a microscope. Unfortunately, manual analysis of histopathology images during breast cancer biopsy is time consuming. The literature suggests that automated techniques based on deep learning algorithms can increase the speed and accuracy of detecting abnormalities within the histopathological specimens obtained from breast cancer patients. This paper highlights some recent work on such algorithms and provides a comparative study of various deep learning methods. For the present study, the breast cancer histopathological database (BreakHis) is used. These images are processed to enhance the inherent features, classified, and then evaluated with respect to the accuracy of the algorithm. Three convolutional neural network (CNN) models, visual geometry group (VGG19), densely connected convolutional networks (DenseNet201), and residual neural network (ResNet50V2), were employed in analyzing the images. Of these, the DenseNet201 model performed best, attaining an accuracy of 91.3%. The paper includes a review of different classification techniques based on machine learning methods, including CNN-based models, some of which may replace manual breast cancer diagnosis and detection.
Collapse
Affiliation(s)
- Louie Antony Thalakottor
  - Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Rudresh Deepak Shirwaikar
  - Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa, India, 403507
- Pavan Teja Pothamsetti
  - Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Lincy Meera Mathews
  - Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
25
Improved Bald Eagle Search Optimization with Synergic Deep Learning-Based Classification on Breast Cancer Imaging. Cancers (Basel) 2022; 14:cancers14246159. [PMID: 36551644 PMCID: PMC9776477 DOI: 10.3390/cancers14246159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/01/2022] [Revised: 11/24/2022] [Accepted: 11/26/2022] [Indexed: 12/15/2022] Open
Abstract
Medical imaging has attracted growing interest in healthcare with regard to breast cancer (BC). Globally, BC is a major cause of mortality amongst women. The examination of histopathology images is now the medical gold standard for cancer diagnosis. However, manual microscopic inspection is a laborious task, and its results can be misleading due to human error. A computer-aided diagnosis (CAD) system can therefore be used to detect cancer accurately within essential time constraints, as earlier diagnosis is the key to curing cancer. The classification and diagnosis of BC using deep learning algorithms has gained considerable attention. This article presents an improved bald eagle search optimization with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The proposed IBESSDL-BCHI model concentrates on the identification and classification of BC from histopathological images (HIs). To do so, the model first preprocesses the images with a median filtering (MF) technique. Feature extraction is then carried out with a synergic deep learning (SDL) model, whose hyperparameters are tuned with the IBES model. Lastly, a long short-term memory (LSTM) network categorizes the HIs into two major classes, benign and malignant. The performance of the IBESSDL-BCHI system was validated on a benchmark dataset, and the results demonstrate that it offers better overall efficiency for BC classification.
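The preprocessing step named in this abstract is median filtering, which replaces each pixel with the median of its neighbourhood to suppress impulse noise while preserving edges. A minimal pure-Python sketch of that idea (illustrative only, not the authors' implementation; the function name and border handling are my own choices):

```python
def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D image given as a list of lists.
    Border pixels use the median of the available (in-bounds) neighbours."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A salt-noise pixel (255) in a flat region is removed by the filter:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # -> 10
```

In practice one would use an optimized routine over full-resolution images, but the sliding-window median is the same operation.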
26
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computing hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze research on the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, the study is divided into three directions based on the types of results reported: detection, segmentation, and classification. We introduce and analyze models that performed well in these three directions and summarize related work from recent years. The results demonstrate the significant capability of deep learning applied to breast cancer pathological images. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of research on breast cancer pathological imaging and offers reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
  - Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
  - Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
27
Amyar A, Guo R, Cai X, Assana S, Chow K, Rodriguez J, Yankama T, Cirillo J, Pierce P, Goddu B, Ngo L, Nezafat R. Impact of deep learning architectures on accelerated cardiac T1 mapping using MyoMapNet. NMR in Biomedicine 2022; 35:e4794. [PMID: 35767308 PMCID: PMC9532368 DOI: 10.1002/nbm.4794] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 01/05/2022] [Revised: 05/19/2022] [Accepted: 06/25/2022] [Indexed: 05/10/2023]
Abstract
The objective of the current study was to investigate the performance of various deep learning (DL) architectures for MyoMapNet, a DL model for T1 estimation using accelerated cardiac T1 mapping from four T1-weighted images collected after a single inversion pulse (Look-Locker 4 [LL4]). We implemented and tested three DL architectures for MyoMapNet: (a) a fully connected neural network (FC), (b) convolutional neural networks (VGG19, ResNet50), and (c) encoder-decoder networks with skip connections (ResUNet, U-Net). Modified Look-Locker inversion recovery (MOLLI) images from 749 patients at 3 T were used for training, validation, and testing. The first four T1-weighted images from MOLLI5(3)3 and/or MOLLI4(1)3(1)2 protocols were extracted to create accelerated cardiac T1 mapping data. We also prospectively collected data from 28 subjects using MOLLI and LL4 to further evaluate model performance. Despite rigorous training, conventional VGG19 and ResNet50 models failed to produce anatomically correct T1 maps, and T1 values had significant errors. While ResUNet yielded good quality maps, it significantly underestimated T1. Both FC and U-Net, however, yielded excellent image quality with good T1 accuracy for both native (FC/U-Net/MOLLI = 1217 ± 64/1208 ± 61/1199 ± 61 ms, all p < 0.05) and postcontrast myocardial T1 (FC/U-Net/MOLLI = 578 ± 57/567 ± 54/574 ± 55 ms, all p < 0.05). In terms of precision, the U-Net model yielded better T1 precision compared with the FC architecture (standard deviation of 61 vs. 67 ms for the myocardium for native [p < 0.05], and 31 vs. 38 ms [p < 0.05], for postcontrast). Similar findings were observed in prospectively collected LL4 data. It was concluded that U-Net and FC DL models in MyoMapNet enable fast myocardial T1 mapping using only four T1-weighted images collected from a single LL sequence with comparable accuracy. U-Net also provides a slight improvement in precision.
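For context, the conventional fitting that MyoMapNet replaces uses the three-parameter Look-Locker signal model S(t) = A - B·exp(-t/T1*) with the standard apparent-T1 correction T1 = T1*·(B/A - 1). A small sketch of those two formulas (the parameter values below are illustrative, not from the paper):

```python
import math

def ll_signal(t, A, B, t1_star):
    """Three-parameter Look-Locker signal model S(t) = A - B*exp(-t/T1*)."""
    return A - B * math.exp(-t / t1_star)

def ll_correct(A, B, t1_star):
    """Standard Look-Locker correction of the apparent T1*: T1 = T1*(B/A - 1)."""
    return t1_star * (B / A - 1.0)

# Synthetic myocardial example (illustrative values):
A, B, t1_star = 100.0, 195.0, 632.0
t1 = ll_correct(A, B, t1_star)
print(round(t1, 1))  # 600.4 (ms)
```

MyoMapNet learns to map the four T1-weighted signal samples directly to T1, sidestepping the per-pixel nonlinear fit of this model.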
Affiliation(s)
- Amine Amyar
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Rui Guo
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Xiaoying Cai
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
  - Siemens Medical Solutions USA, Inc., Boston, Massachusetts, USA
- Salah Assana
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Kelvin Chow
  - Siemens Medical Solutions USA, Inc., Chicago, Illinois, USA
- Jennifer Rodriguez
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Tuyen Yankama
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Julia Cirillo
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Patrick Pierce
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Beth Goddu
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Long Ngo
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Reza Nezafat
  - Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
28
Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning. Computational Intelligence and Neuroscience 2022; 2022:8904768. [PMID: 36262621 PMCID: PMC9576400 DOI: 10.1155/2022/8904768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/09/2022] [Revised: 06/19/2022] [Accepted: 07/30/2022] [Indexed: 11/22/2022]
Abstract
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreement among experts. Although deep learning methods have achieved excellent performance in classification tasks, including for breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit because they extract features from in-distribution images. The contribution of this paper is twofold. First, we perform a short survey of deep-learning-based models for classifying histopathological images to identify the most popular and best-performing training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved with a ratio of 80%:20% on an identical dataset. Second, we propose DenTnet, a method for classifying breast cancer histopathological images. DenTnet applies the principle of transfer learning, with DenseNet as the backbone model, to address the problem of extracting features from the same distribution. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%:20%), generalization ability, and computational speed. DenTnet thereby mitigates limitations of existing methods, including high computational requirements and reliance on a single feature distribution.
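Since this survey's main quantitative finding concerns training-testing ratios (70%:30% vs. 80%:20%), here is a minimal sketch of a seeded ratio-based split; the function name and signature are illustrative, not from the paper:

```python
import random

def train_test_split(samples, train_ratio=0.8, seed=42):
    """Shuffle and split a dataset by a training ratio (e.g. 0.8 for 80%:20%)."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)      # deterministic shuffle for reproducibility
    cut = int(len(samples) * train_ratio)
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

data = list(range(100))
train, test = train_test_split(data, train_ratio=0.8)
print(len(train), len(test))  # 80 20
```

For class-imbalanced histopathology datasets such as BreaKHis, a stratified split (equal class proportions in both partitions) would normally be preferred over this plain shuffle.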
29
AAQAL: A Machine Learning-Based Tool for Performance Optimization of Parallel SpMV Computations Using Block CSR. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12147073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/01/2023]
Abstract
The sparse matrix-vector product (SpMV), considered one of the seven dwarfs (numerical methods of significance), is essential in high-performance scientific and analytical applications that require the solution of large sparse linear systems, where SpMV is a key computing operation. Because the sparsity pattern of a matrix is unknown before runtime, we applied machine learning-based performance optimization to the SpMV kernel, exploiting the structure of sparse matrices through the Block Compressed Sparse Row (BCSR) storage format. As matrix structure varies across application domains, optimizing the block size is important for reducing overall execution time, and manual selection of block sizes is error prone and time consuming. We therefore propose AAQAL, a data-driven, machine learning-based tool that automates data distribution and the selection of near-optimal block sizes based on the structure of the matrix. We trained and tested the tool using several machine learning methods (decision tree, random forest, gradient boosting, ridge regression, and AdaBoost) on nearly 700 real-world matrices from 43 application domains, including computer vision, robotics, and computational fluid dynamics. AAQAL achieved 93.47% of the maximum attainable performance, a substantial improvement over the manual or random selection of block sizes used in practice. To our knowledge, this is the first attempt to exploit matrix structure via BCSR to select optimal block sizes for SpMV computations using machine learning techniques.
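To make the BCSR format concrete, here is a reference-style SpMV over r×c dense blocks; the layout (flattened row-major blocks, block-column indices, block-row pointers) matches the usual BSR convention, but the code itself is an illustrative sketch, not part of AAQAL:

```python
def bcsr_spmv(values, col_idx, row_ptr, r, c, x):
    """Compute y = A @ x for a matrix A stored in Block CSR with r x c blocks.

    values:  list of flattened (row-major) r*c dense blocks
    col_idx: block-column index of each block
    row_ptr: offset of the first block of each block-row (len = n_block_rows+1)
    """
    n_block_rows = len(row_ptr) - 1
    y = [0.0] * (n_block_rows * r)
    for br in range(n_block_rows):                      # each block-row
        for b in range(row_ptr[br], row_ptr[br + 1]):   # each stored block
            block, bc = values[b], col_idx[b]
            for i in range(r):                          # dense r x c multiply
                for j in range(c):
                    y[br * r + i] += block[i * c + j] * x[bc * c + j]
    return y

# 4x4 block-diagonal matrix [[1,2,0,0],[3,4,0,0],[0,0,5,6],[0,0,7,8]] as 2x2 blocks:
y = bcsr_spmv([[1, 2, 3, 4], [5, 6, 7, 8]], [0, 1], [0, 1, 2], 2, 2, [1, 1, 1, 1])
print(y)  # [3.0, 7.0, 11.0, 15.0]
```

The block size (r, c) is exactly the tunable parameter AAQAL learns to select: larger blocks amortize index storage and improve locality, but waste work on explicit zeros when the matrix structure does not match.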
30
Histopathological Tissue Segmentation of Lung Cancer with Bilinear CNN and Soft Attention. BioMed Research International 2022; 2022:7966553. [PMID: 35845926 PMCID: PMC9283032 DOI: 10.1155/2022/7966553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/21/2022] [Revised: 05/15/2022] [Accepted: 06/10/2022] [Indexed: 11/18/2022]
Abstract
Automatic tissue segmentation in whole-slide images (WSIs) is a critical task for accurate diagnosis and risk stratification of lung cancer from hematoxylin and eosin- (H&E-) stained histopathological images. Classifying patches and stitching the classification results enables fast tissue segmentation of WSIs. However, due to tumour heterogeneity, large intraclass variability and small interclass variability make this classification task challenging. In this paper, we propose a novel bilinear convolutional neural network- (Bilinear-CNN-) based model with a bilinear convolutional module and a soft attention module to tackle this problem. The method exploits intraclass semantic correspondence and focuses on the more distinguishable features, so that feature output variations between classes become relatively large. The performance of the Bilinear-CNN-based model is compared with other state-of-the-art methods on a histopathological classification dataset consisting of 107.7 k patches of lung cancer, and we further evaluate the proposed algorithm on an additional dataset from colorectal cancer. Extensive experiments show that our method outperforms previous state-of-the-art ones, and its interpretability is demonstrated with Grad-CAM.
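The core of a bilinear CNN is bilinear pooling: at each spatial location, the outer product of two feature vectors captures pairwise feature interactions, and the products are sum-pooled over locations. A toy pure-Python sketch of that pooling step (not the paper's network; the function names are my own):

```python
def bilinear_pool(fa, fb):
    """Flattened outer product fa ⊗ fb: all pairwise feature interactions."""
    return [a * b for a in fa for b in fb]

def bilinear_features(feat_a, feat_b):
    """Sum-pool per-location outer products over a feature map.

    feat_a, feat_b: lists of per-location feature vectors (same number of
    locations), as produced by the two CNN streams of a bilinear model."""
    d = len(feat_a[0]) * len(feat_b[0])
    pooled = [0.0] * d
    for fa, fb in zip(feat_a, feat_b):
        for k, v in enumerate(bilinear_pool(fa, fb)):
            pooled[k] += v
    return pooled

print(bilinear_pool([1, 2], [3, 4]))  # [3, 4, 6, 8]
```

In the full model, the pooled vector is typically signed-square-root and L2 normalized before classification; the soft attention module would reweight locations before this pooling.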
31
Advanced Analysis of 3D Kinect Data: Supervised Classification of Facial Nerve Function via Parallel Convolutional Neural Networks. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12125902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/17/2022]
Abstract
In this paper, we designed a methodology to classify facial nerve function after head and neck surgery. It is important to be able to observe the rehabilitation process objectively after specific brain surgeries, when patients are often affected by facial palsy. The dataset used for the classification problems in this study contains only 236 measurements of 127 patients, graded with the most commonly used House–Brackmann (HB) scale, which is based on the subjective opinion of the physician. Although several traditional evaluation methods for measuring facial paralysis exist, they ignore facial movement information, which plays an important role in the analysis of facial paralysis and limits the selection of useful facial features for its evaluation. We present a triple-path convolutional neural network (TPCNN) to evaluate the problem of mimetic muscle rehabilitation, observed with a Kinect stereovision camera. A system of three modules for facial landmark measure computation and facial paralysis classification, based on a parallel convolutional neural network structure, quantitatively assesses facial nerve paralysis by considering region-based facial features and the temporal variation of facial landmark sequences. The proposed deep network analyzes both the global and local facial movement features of a patient's face, and these extracted high-level representations are fused for the final evaluation of facial paralysis. The experimental results verify the better performance of the TPCNN compared to state-of-the-art deep learning networks.
32
Shankar K, Dutta AK, Kumar S, Joshi GP, Doo IC. Chaotic Sparrow Search Algorithm with Deep Transfer Learning Enabled Breast Cancer Classification on Histopathological Images. Cancers (Basel) 2022; 14:cancers14112770. [PMID: 35681749 PMCID: PMC9179470 DOI: 10.3390/cancers14112770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/18/2022] [Revised: 05/30/2022] [Accepted: 05/30/2022] [Indexed: 11/16/2022] Open
Abstract
Simple Summary
Cancer is considered the most significant public health issue and severely threatens people's health. The occurrence and mortality rates of breast cancer have been growing consistently, and an initial precise diagnosis is a primary factor in improving patients' survival rate. Although there are several means of identifying breast cancer, histopathological diagnosis is now considered the gold standard in cancer diagnosis. However, the difficulty of histopathological images and the rapid rise in workload make this process time-consuming, and the outcomes may be affected by pathologists' subjectivity. Hence, the development of a precise and automatic histopathological image analysis method is essential for the field. Recently, deep learning methods for breast cancer pathological image classification have made significant progress and have become mainstream in this field. In this work, we therefore focused on the design of a metaheuristics-with-deep-learning-based breast cancer classification process. The proposed model is found to be an effective tool to assist physicians in the decision-making process.
Abstract
Breast cancer is a major cause of death among women worldwide and is responsible for several deaths each year. Although there are several means of identifying breast cancer, histopathological diagnosis is now considered the gold standard in cancer diagnosis. However, the difficulty of histopathological images and the rapid rise in workload make this process time-consuming, and the outcomes may be affected by pathologists' subjectivity. Hence, the development of a precise and automatic histopathological image analysis method is essential for the field. Recently, deep learning methods for breast cancer pathological image classification have made significant progress and have become mainstream in this field. This study introduces a novel chaotic sparrow search algorithm with deep transfer learning-enabled breast cancer classification (CSSADTL-BCC) model for histopathological images. The presented CSSADTL-BCC model mainly focuses on the recognition and classification of breast cancer. To accomplish this, it first applies a Gaussian filtering (GF) approach to eradicate noise. In addition, a MixNet-based feature extraction model is employed to generate a useful set of feature vectors, and a stacked gated recurrent unit (SGRU) classification approach is exploited to assign class labels. Furthermore, the CSSA is applied to optimally modify the hyperparameters of the SGRU model. No earlier work has utilized a hyperparameter-tuned SGRU model for breast cancer classification on HIs, and the design of the CSSA for optimal hyperparameter tuning of the SGRU model demonstrates the novelty of this work. The performance of the CSSADTL-BCC model was validated on a benchmark dataset, and the results reported its superiority over recent state-of-the-art approaches.
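The denoising step named in this abstract is Gaussian filtering, i.e. convolution with a normalized 2D Gaussian kernel. A minimal sketch of kernel construction (illustrative only, not the authors' code):

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2D Gaussian kernel for image smoothing (entries sum to 1)."""
    r = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)] for y in range(-r, r + 1)]
    s = sum(map(sum, k))                       # normalize so intensities are preserved
    return [[v / s for v in row] for row in k]

k = gaussian_kernel(3, 1.0)
print(round(sum(map(sum, k)), 6))  # 1.0
```

Convolving the image with this kernel (largest weight at the center, decaying with distance) suppresses high-frequency noise at the cost of slight blurring, in contrast to the edge-preserving median filter used by other works above.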
Affiliation(s)
- K. Shankar
  - Big Data and Machine Learning Laboratory, South Ural State University, 454080 Chelyabinsk, Russia
- Ashit Kumar Dutta
  - Department of Computer Science and Information System, College of Applied Sciences, AlMaarefa University, Riyadh 11597, Saudi Arabia
- Sachin Kumar
  - Big Data and Machine Learning Laboratory, South Ural State University, 454080 Chelyabinsk, Russia
- Gyanendra Prasad Joshi
  - Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
  - Correspondence: (G.P.J.); (I.C.D.)
- Ill Chul Doo
  - Artificial Intelligence Education, Hankuk University of Foreign Studies, Dongdaemun-gu, Seoul 02450, Korea
  - Correspondence: (G.P.J.); (I.C.D.)
33
Robustness Analysis of DCE-MRI-Derived Radiomic Features in Breast Masses: Assessing Quantization Levels and Segmentation Agreement. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12115512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/16/2022]
Abstract
Machine learning models based on radiomic features allow us to obtain biomarkers that are capable of modeling the disease and supporting the clinical routine. Recent studies have shown that it is fundamental for the computed features to be robust and reproducible. Although several initiatives to standardize the definition and extraction of biomarkers are ongoing, comprehensive guidelines are lacking; no standardized procedures are available for ROI selection, feature extraction, and processing, with the risk of undermining the effective use of radiomic models in clinical routine. In this study, we assess the impact that different segmentation methods and the quantization level (defined by the number of bins used in the feature-extraction phase) may have on the robustness of radiomic features. In particular, the robustness of texture features extracted by PyRadiomics, belonging to five categories (GLCM, GLRLM, GLSZM, GLDM, and NGTDM), was evaluated using the intra-class correlation coefficient (ICC) and mean differences between segmentation raters. In addition to the robustness of each single feature, an overall index was quantified for each feature category. The analysis showed that the level of quantization (i.e., the binCount parameter) plays a key role in defining robust features: in our study, focused on a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) dataset of 111 breast masses, sets of 34 and 43 robust features were obtained with binCount values of 256 and 32, respectively. Moreover, both manual segmentation methods demonstrated good reliability and agreement, while automated segmentation achieved lower ICC values. Given the dependence on the quantization level, taking only the intersection subset across all binCount values could be the best selection strategy. Among the radiomic feature categories, GLCM, GLRLM, and GLDM showed the best overall robustness across segmentation methods.
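The binCount setting discretizes ROI intensities into a fixed number of grey levels before texture matrices (GLCM, GLRLM, etc.) are computed. The sketch below approximates that fixed-bin-count discretization in pure Python; it is an illustration of the concept, not PyRadiomics' exact implementation:

```python
def quantize(values, bin_count=32):
    """Quantize intensities into grey levels 1..bin_count by equal-width bins,
    approximating a fixed-bin-count discretization step."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bin_count or 1.0   # guard: constant ROI gets width 1.0
    return [min(int((v - lo) / width) + 1, bin_count) for v in values]

print(quantize([0.0, 0.5, 1.0], bin_count=2))  # [1, 2, 2]
```

Because the texture matrices are indexed by these grey levels, changing bin_count changes the matrices themselves, which is why the paper finds feature robustness to depend so strongly on the quantization level.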
34
Ahmad M, Qadri SF, Ashraf MU, Subhi K, Khan S, Zareen SS, Qadri S. Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning. Computational Intelligence and Neuroscience 2022; 2022:2665283. [PMID: 35634046 PMCID: PMC9132625 DOI: 10.1155/2022/2665283] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 12/31/2021] [Accepted: 04/06/2022] [Indexed: 12/11/2022]
Abstract
Segmentation of the liver in computed tomography (CT) images is an important step toward quantitative biomarkers for computer-aided decision support and precise medical diagnosis. To overcome the difficulties of liver segmentation caused by fuzzy boundaries, a stacked autoencoder (SAE) is applied to learn the features that best discriminate the liver from other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for liver segmentation from CT images using an SAE. Unlike traditional machine learning methods, instead of pixel-by-pixel learning, our algorithm uses patches to learn representations and identify the liver area. We preprocessed the whole dataset to obtain enhanced images and converted each image into many overlapping patches. These patches are given as input to the SAE for unsupervised feature learning. Finally, the learned features are fine-tuned with the image labels, and classification is performed in a supervised way to produce the probability map. Experimental results demonstrate that the proposed algorithm performs satisfactorily on test images, achieving a 96.47% dice similarity coefficient (DSC), which is better than other methods in the same domain.
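The patch-extraction step described here, sliding a window over the image with overlap and flattening each window, can be sketched as follows (an illustration of the idea; patch size and stride below are my own example values, not the paper's):

```python
def extract_patches(img, patch=4, stride=2):
    """Slide a patch x patch window over a 2D image with the given stride and
    return the flattened patches (inputs for unsupervised SAE pretraining)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append([img[y + j][x + i]
                        for j in range(patch) for i in range(patch)])
    return out

img = [[r * 8 + c for c in range(8)] for r in range(8)]
print(len(extract_patches(img)))  # 9 overlapping 4x4 patches from an 8x8 image
```

A stride smaller than the patch size produces overlapping patches, which multiplies the amount of unlabeled training data available for the autoencoder.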
Affiliation(s)
- Mubashir Ahmad
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
  - Department of Computer Science and IT, The University of Lahore, Sargodha Campus, 40100, Lahore, Pakistan
- Syed Furqan Qadri
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- M. Usman Ashraf
  - Department of Computer Science, GC Women University, Sialkot 51310, Pakistan
- Khalid Subhi
  - Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Salabat Khan
  - College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- Syeda Shamaila Zareen
  - Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Salman Qadri
  - Department of Computer Science, MNS University of Agriculture, Multan 60650, Pakistan
35
M. A, Govindharaju K, A. J, Mohan S, Ahmadian A, Ciano T. A hybrid learning approach for the stage-wise classification and prediction of COVID-19 X-ray images. Expert Systems 2022; 39. [DOI: 10.1111/exsy.12884] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 07/08/2021] [Accepted: 10/13/2021] [Indexed: 09/15/2023]
Abstract
Background: The COVID-19 pandemic has precipitated global apprehension about increased fatalities and raised concerns about gaps in healthcare infrastructure and accessibility the world over. Consequently, the importance of timely prediction and treatment of the disease to reduce transmission and mortality rates cannot be emphasized enough. Various symptoms of the disease have been identified as it progresses from the time it is contracted. COVID-19 has been found to internally affect the lungs, and the four progressive stages of the infection can be categorized as mild, moderate, severe, and critical. Therefore, an accurate analysis of the current stage of the disease that can help predict its progression has become critical. X-ray imaging has been found to be an effective screening procedure for predicting the various stages of this epidemic. Although many approaches using machine learning as well as deep learning have been utilized to predict and classify diseases in general, to date no such approach has used X-ray imaging to identify and classify the various stages of COVID-19.
Materials and method: The proposed hybrid method was implemented on three public datasets, with extensive images used for testing and training. Dataset 1 consists of 1200 COVID-19 and 1200 non-COVID-19 images, dataset 2 of 700 COVID-19 and 700 non-COVID-19 images, and dataset 3 of 1900 COVID-19 and 1900 non-COVID-19 images. Preprocessing was performed using textural and morphological features; segmentation and prediction of COVID-19 versus non-COVID-19 images were undertaken using VGG-16 with LightGBM, for better prediction and handling of large datasets; and classification of the various stages of COVID-19 was performed using a Deep Belief Network.
Results: The outcomes of the proposed work were subjected to several iterations and compared using parameters such as accuracy, specificity, and sensitivity. The prediction and grouping of the various stages of COVID-19 from affected images reached 99.2%, 99.4%, and 99.5%, respectively; the bacterial pneumonia prediction rates were 98.5%, 99.4%, and 98.3%; the average stage classification rates were 98.1%, 98.6%, and 98.3%; and the combined multi-classification prediction rates were 98.6%, 99.1%, and 98.7%, respectively.
Affiliation(s)
- Adimoolam M.
  - Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Karthi Govindharaju
  - Department of Artificial Intelligence and Data Science, Saveetha Engineering College, Chennai, India
- John A.
  - School of Computer Science and Engineering, Galgotias University, Greater Noida, India
- Senthilkumar Mohan
  - School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Ali Ahmadian
  - Institute of IR 4.0, The National University of Malaysia, UKM Bangi, Malaysia
  - Department of Mathematics, Near East University, Nicosia, TRNC, Mersin 10, Turkey
- Tiziana Ciano
  - Faculty of Business and Law, University of Portsmouth, Portsmouth, UK
36
A Lightweight Convolutional Neural Network Model for Liver Segmentation in Medical Diagnosis. Computational Intelligence and Neuroscience 2022; 2022:7954333. [PMID: 35755754 PMCID: PMC9225858 DOI: 10.1155/2022/7954333] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 01/22/2022] [Revised: 02/15/2022] [Accepted: 02/21/2022] [Indexed: 12/24/2022]
Abstract
Liver segmentation and recognition from computed tomography (CT) images is an active topic in image processing that is helpful for doctors and practitioners. Many current deep learning methods for liver segmentation take a long time to train, which makes the task challenging and limits it to larger hardware resources. In this research, we propose a very lightweight convolutional neural network (CNN) to extract the liver region from CT scan images. The suggested CNN consists of 3 convolutional and 2 fully connected layers, with softmax used to discriminate the liver from the background. A random Gaussian distribution is used for weight initialization, which achieves a distance-preserving embedding of the information. The proposed network is called Ga-CNN (Gaussian-weight initialization of CNN). Experiments are performed on three benchmark datasets, including MICCAI SLiver'07, 3Dircadb01, and LiTS17, and the results show that the proposed method performs well on each of them.
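The network's distinguishing choice is drawing initial weights from a Gaussian distribution. A minimal sketch of seeded Gaussian weight initialization for one layer (illustrative; the function name, std value, and seeding are my own, not the paper's):

```python
import random

def gaussian_init(fan_in, fan_out, std=0.01, seed=0):
    """Draw a fan_in x fan_out weight matrix from N(0, std^2)."""
    rng = random.Random(seed)   # seeded for reproducible initialization
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

w = gaussian_init(3, 4)
print(len(w), len(w[0]))  # 3 4
```

Small zero-mean Gaussian weights keep early activations in the responsive range of the nonlinearity; variance-scaled variants (e.g. scaling std by fan-in) are the common refinement of this scheme.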
37
SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. Mathematics 2022. [DOI: 10.3390/math10050796]
Abstract
Precise vertebrae segmentation is essential for image-based analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnosis, treatment, and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is hampered by difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate these challenges, we propose stacked sparse autoencoder-based patch classification modeling for vertebrae segmentation (SVseg) from computed tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way; supervised learning then refines the feature representation to improve the discriminability of the learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model, and a sigmoid output layer discriminates vertebrae patches from non-vertebrae patches by selecting the class with the highest probability. We validated the proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, SVseg achieved strong performance: 87.39% Dice Similarity Coefficient (DSC), 77.60% Jaccard Similarity Coefficient (JSC), 91.53% precision (PRE), and 90.88% sensitivity (SEN). The experimental results demonstrate the method's efficiency and its potential for diagnosing and treating spinal diseases.
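The first concrete step in the SVseg pipeline, extracting overlapping patches from CT slices as training inputs, can be sketched in a few lines of numpy. This is a generic illustration of overlapping patch extraction, not the authors' code; the patch size and stride below are arbitrary toy values.

```python
import numpy as np

def extract_patches(image, size, stride):
    """Slide a size x size window over the image with the given stride,
    returning each overlapping patch flattened to a row vector."""
    H, W = image.shape
    patches = [
        image[y:y + size, x:x + size].ravel()
        for y in range(0, H - size + 1, stride)
        for x in range(0, W - size + 1, stride)
    ]
    return np.stack(patches)

# Toy 8x8 "CT slice": 4x4 patches with stride 2 overlap by half a window.
slice_ = np.arange(64, dtype=float).reshape(8, 8)
patches = extract_patches(slice_, size=4, stride=2)  # 9 patches of 16 pixels
```

Each row of `patches` would then be fed to the autoencoder as one unlabeled training example.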
38
Yang Y, Guan C. Classification of histopathological images of breast cancer using an improved convolutional neural network model. Journal of X-Ray Science and Technology 2022; 30:33-44. [PMID: 34719472] [DOI: 10.3233/xst-210982]
Abstract
The accurate automatic classification of medical pathological images has always been an important problem in deep learning. Traditional manual feature extraction and image classification usually require in-depth domain knowledge and experienced researchers to extract and compute high-quality image features; this generally takes a lot of time, and the classification results are often not ideal. To solve these problems, this study proposes and tests an improved network model, DenseNet-201-MSD, for the classification of medical pathological images of breast cancer. First, the image is preprocessed, and the traditional pooling layer is replaced by multiple scaling decomposition to prevent overfitting due to the large dimension of the image dataset. Second, a batch normalization (BN) layer is added before the softmax activation function, and the Adam optimizer is used to improve the performance and image recognition accuracy of the network model. Verified on the BreakHis dataset, the new deep learning model yields image classification accuracies of 99.4%, 98.8%, 98.2%, and 99.4% on four different magnifications of pathological images, respectively. The results demonstrate that this classification method and deep learning model can effectively improve the accuracy of pathological image classification, indicating its potential value in future clinical application.
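The batch normalization step the abstract inserts before the softmax can be sketched in numpy to show what it does to a batch of activations: each feature is centered and rescaled to roughly zero mean and unit variance before any learned scale/shift. This is a generic BN sketch under toy values, not the DenseNet-201-MSD implementation.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# Raw pre-softmax activations with an arbitrary mean and spread.
rng = np.random.default_rng(0)
logits = rng.normal(5.0, 3.0, size=(32, 4))  # batch of 32, 4 classes
normed = batch_norm(logits)  # per-feature mean ~0, std ~1
```

In training, `gamma` and `beta` are learned per feature, and running statistics replace the batch statistics at inference time.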
Affiliation(s)
- Yunfeng Yang
- Department of Mathematics and Statistics, Northeast Petroleum University, Daqing, China
- Chen Guan
- Department of Mathematics and Statistics, Northeast Petroleum University, Daqing, China
39
Deep Learning on Histopathology Images for Breast Cancer Classification: A Bibliometric Analysis. Healthcare (Basel) 2021; 10:10. [PMID: 35052174] [PMCID: PMC8775465] [DOI: 10.3390/healthcare10010010]
Abstract
Medical imaging is gaining significant attention in healthcare, including for breast cancer. Breast cancer is the most common cause of cancer-related death among women worldwide. Currently, histopathology image analysis is the clinical gold standard in cancer diagnosis. However, manual microscopic examination is laborious and can be misleading due to human error. Therefore, this study explored the research status and development trends of deep learning for breast cancer image classification using bibliometric analysis. Relevant literature was obtained from the Scopus database between 2014 and 2021. The VOSviewer and Bibliometrix tools were used for analysis through various visualization forms. The study covers annual publication trends and co-authorship networks among countries, authors, and scientific journals, and the co-occurrence network of the authors' keywords was analyzed for potential future directions of the field. Authors started to contribute publications in 2016, and the research domain has maintained its growth rate since. The United States and China show strong research collaboration. Only a few studies use bibliometric analysis in this research area. This study provides a recent review of this fast-growing field, highlighting its status and trends through scientific visualization; it is hoped the findings will assist researchers in identifying and exploring emerging areas in the related field.
40
Iterative principal component analysis method for improvised classification of breast cancer disease using blood sample analysis. Med Biol Eng Comput 2021; 59:1973-1989. [PMID: 34331636] [DOI: 10.1007/s11517-021-02405-y]
Abstract
Breast cancer is the most common cancer occurring in women worldwide. Procedures used to diagnose it include mammography, breast ultrasound, biopsy, breast magnetic resonance imaging, and blood tests such as the complete blood count. Detecting breast cancer at an early stage plays an important role in diagnostic and curative procedures. This paper aims to develop a predictive model for detecting breast cancer from blood sample data containing age, body mass index (BMI), glucose, insulin, homeostasis model assessment (HOMA), leptin, adiponectin, resistin, and the chemokine monocyte chemoattractant protein 1 (MCP-1). The two main challenges encountered in this process are the identification of biomarkers and the precision of disease prediction. The proposed methodology employs principal component analysis in a distinctive approach, followed by a random forest prediction model, to discriminate between healthy subjects and breast cancer patients. The approach systematically extracts principal axis elements, linear combinations of the input attributes with high communalities. The iteratively extracted principal axis elements, combined with a minimum number of input attributes, predict the disease with higher classification accuracy and increased sensitivity and specificity scores. The results show that the proposed approach achieves higher predictive performance than previously reported results by selecting the relevant extracted principal axis elements and attributes that best serve the classifier.
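The core PCA step, projecting the blood-marker attributes onto their leading principal axes before classification, can be sketched with an eigendecomposition of the covariance matrix. This is a minimal generic PCA sketch on a toy matrix, not the paper's iterative variant; the patient/attribute dimensions below are illustrative assumptions.

```python
import numpy as np

def principal_axes(X, k):
    """Project centered data onto its top-k principal axes."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]  # keep the k largest
    return Xc @ vecs[:, order], vals[order]

# Toy blood-marker matrix: 6 patients x 4 attributes (e.g. BMI, glucose, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
scores, variances = principal_axes(X, k=2)  # reduced features for a classifier
```

The resulting `scores` would then feed a random forest classifier in place of the raw attributes, which is the discrimination step the abstract describes.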