1. Wang M, Liu R, Luttrell IV J, Zhang C, Xie J. Detection of Masses in Mammogram Images Based on the Enhanced RetinaNet Network With INbreast Dataset. J Multidiscip Healthc 2025; 18:675-695. [PMID: 39935433] [PMCID: PMC11812562] [DOI: 10.2147/jmdh.s493873]
Abstract
Purpose Breast cancer is one of the most serious public health problems affecting women worldwide. Analyzing mammogram images remains the main method doctors use to diagnose and detect breast cancer; however, this process depends heavily on the experience of radiologists and is very time consuming. Patients and Methods We introduce deep learning technology into this process to facilitate computer-aided diagnosis (CAD) and to address the challenges of class imbalance, enhance the detection of small masses and multiple targets, and reduce false positives and negatives in mammogram analysis. To this end, we adopted and enhanced RetinaNet to detect masses in mammogram images. Specifically, we introduced a novel modification to the network structure in which the feature map M5 is processed by the ReLU function prior to the original convolution kernel; this adjustment is designed to prevent the loss of resolution for small mass features. Additionally, we introduced transfer learning into the training process by leveraging pre-trained weights from other RetinaNet applications and fine-tuned the improved model on the INbreast dataset. Results These innovations yield superior performance of the enhanced RetinaNet model on the public INbreast dataset, as evidenced by a mAP (mean average precision) of 1.0000 and a TPR (true positive rate) of 1.00 at 0.00 FPPI (false positives per image). Conclusion The experimental results demonstrate that our enhanced RetinaNet model outperforms existing models, generalizes better than other published studies, and can be applied to other patient cohorts to assist doctors in making a proper diagnosis.
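As a rough illustration of the modification described above, the following minimal PyTorch sketch applies ReLU to the lateral feature map M5 before the 3x3 convolution that produces P5 in a feature pyramid; the channel sizes, layer names, and exact placement are assumptions rather than the paper's configuration.

import torch
import torch.nn as nn

class FPNTopLevel(nn.Module):
    """Sketch of the described change: M5 is passed through ReLU before the
    3x3 smoothing convolution that produces P5. Channel sizes are illustrative."""

    def __init__(self, c5_channels: int = 2048, fpn_channels: int = 256):
        super().__init__()
        self.lateral5 = nn.Conv2d(c5_channels, fpn_channels, kernel_size=1)  # C5 -> M5
        self.relu = nn.ReLU(inplace=True)                                    # inserted activation
        self.smooth5 = nn.Conv2d(fpn_channels, fpn_channels, kernel_size=3, padding=1)  # M5 -> P5

    def forward(self, c5: torch.Tensor) -> torch.Tensor:
        m5 = self.lateral5(c5)   # 1x1 lateral projection
        m5 = self.relu(m5)       # ReLU applied to M5 prior to the original conv kernel
        return self.smooth5(m5)  # 3x3 convolution producing P5

# usage sketch on a dummy C5 feature map
p5 = FPNTopLevel()(torch.randn(1, 2048, 20, 20))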
Affiliation(s)
- Mingzhao Wang
- School of Computer Science, Shaanxi Normal University, Xian, People’s Republic of China
- Ran Liu
- School of Computer Science, Shaanxi Normal University, Xian, People’s Republic of China
- Joseph Luttrell IV
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, USA
- Chaoyang Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, USA
- Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xian, People’s Republic of China
2. Rai HM, Yoo J, Agarwal S, Agarwal N. LightweightUNet: Multimodal Deep Learning with GAN-Augmented Imaging Data for Efficient Breast Cancer Detection. Bioengineering (Basel) 2025; 12:73. [PMID: 39851348] [PMCID: PMC11761908] [DOI: 10.3390/bioengineering12010073]
Abstract
Breast cancer ranks as the second most prevalent cancer globally and is the most frequently diagnosed cancer among women; therefore, early, automated, and precise detection is essential. Most AI-based techniques for breast cancer detection are complex and have high computational costs. To overcome this challenge, we present the innovative LightweightUNet hybrid deep learning (DL) classifier for the accurate classification of breast cancer. The proposed model has a low computational cost due to the small number of layers in its architecture, and its adaptive nature stems from its use of depth-wise separable convolution. We employed a multimodal approach to validate the model's performance, using 13,000 images from two distinct modalities: mammogram imaging (MGI) and ultrasound imaging (USI). We collected the multimodal imaging datasets from seven different sources, including the benchmark datasets DDSM, MIAS, INbreast, BrEaST, BUSI, Thammasat, and HMSS. Because the datasets come from various sources, we resized them to a uniform size of 256 × 256 pixels and normalized them using the Box-Cox transformation. Since the USI dataset is smaller, we applied the StyleGAN3 model to generate 10,000 synthetic ultrasound images. We performed two separate experiments: the first on the real dataset without augmentation and the second on the real + GAN-augmented dataset using the proposed method. Using 5-fold cross-validation, the proposed model obtained good results on the real dataset (87.16% precision, 86.87% recall, 86.84% F1-score, and 86.87% accuracy) without adding any extra data, and better performance on the real + GAN-augmented dataset (96.36% precision, 96.35% recall, 96.35% F1-score, and 96.35% accuracy). The multimodal approach with LightweightUNet thus improves performance on the combined dataset by 9.20% in precision, 9.48% in recall, 9.51% in F1-score, and 9.48% in accuracy. The proposed LightweightUNet model performs well owing to its compact network design, GAN-based synthetic augmentation, and multimodal training strategy, and these results indicate considerable potential for use in clinical settings.
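A minimal PyTorch sketch of the depth-wise separable convolution credited above for the model's low computational cost; the channel counts, normalization, and activation choices are assumptions, not the LightweightUNet specification.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise
    convolution, which needs far fewer parameters than a standard conv."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# usage sketch on a 256x256 single-channel mammogram or ultrasound patch
y = DepthwiseSeparableConv(1, 32)(torch.randn(1, 1, 256, 256))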
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Saurabh Agarwal
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Neha Agarwal
- School of Chemical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
3. Pérez-Núñez JR, Rodríguez C, Vásquez-Serpa LJ, Navarro C. The Challenge of Deep Learning for the Prevention and Automatic Diagnosis of Breast Cancer: A Systematic Review. Diagnostics (Basel) 2024; 14:2896. [PMID: 39767257] [PMCID: PMC11675111] [DOI: 10.3390/diagnostics14242896]
Abstract
OBJECTIVES This review aims to evaluate several convolutional neural network (CNN) models applied to breast cancer detection, to identify and categorize CNN variants in recent studies, and to analyze their specific strengths, limitations, and challenges. METHODS Using PRISMA methodology, this review examines studies that focus on deep learning techniques, specifically CNN, for breast cancer detection. Inclusion criteria encompassed studies from the past five years, with duplicates and those unrelated to breast cancer excluded. A total of 62 articles from the IEEE, SCOPUS, and PubMed databases were analyzed, exploring CNN architectures and their applicability in detecting this pathology. RESULTS The review found that CNN models with advanced architecture and greater depth exhibit high accuracy and sensitivity in image processing and feature extraction for breast cancer detection. CNN variants that integrate transfer learning proved particularly effective, allowing the use of pre-trained models with less training data required. However, challenges include the need for large, labeled datasets and significant computational resources. CONCLUSIONS CNNs represent a promising tool in breast cancer detection, although future research should aim to create models that are more resource-efficient and maintain accuracy while reducing data requirements, thus improving clinical applicability.
Affiliation(s)
- Jhelly-Reynaluz Pérez-Núñez
- Facultad de Ingeniería de Sistemas e Informática, Universidad Nacional Mayor de San Marcos (UNMSM), Lima 15081, Peru
4. Braveen M, Nachiyappan S, Seetha R, Anusha K, Ahilan A, Prasanth A, Jeyam A. RETRACTED ARTICLE: ALBAE feature extraction based lung pneumonia and cancer classification. Soft comput 2024; 28:589. [PMID: 37362264] [PMCID: PMC10187954] [DOI: 10.1007/s00500-023-08453-w]
Abstract
Lung cancer is a deadly disease marked by uncontrolled proliferation of malignant cells in the lungs. If lung cancer is detected at an early stage, it can be treated before it becomes critical. In recent years, new technologies have gained much attention in the healthcare industry; however, the unpredictable appearance of tumors, detecting their presence, determining their shape and size, and the high variability of medical images remain challenging tasks. To overcome these issues, a novel Ant lion-based Autoencoder (ALbAE) model is proposed for efficient classification of lung cancer and pneumonia. Initially, computed tomography (CT) images are pre-processed using median filters to remove noise artifacts and improve image quality. Subsequently, relevant features such as image edges, pixel rates of the images, and blood clots are extracted by the ant lion-based autoencoder (ALbAE) technique. Finally, in the classification stage, the lung CT images are classified into three categories, normal lung, cancer-affected lung, and pneumonia-affected lung, using the random forest technique. The effectiveness of the implemented design is assessed by different parameters such as precision, recall, accuracy, and F1-measure. The proposed approach attains 97% accuracy, 98% recall, 98% F-measure, and 96% precision. Experimental outcomes show that the proposed model performs better than existing SVM, ELM, and MLP models in classifying lung cancer and pneumonia.
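The pre-processing and final classification stages described above (median filtering of CT slices, random forest classification) can be sketched as follows; the feature extractor below is a simple placeholder because the ant lion-based autoencoder is not specified here, and all data are synthetic.

import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import RandomForestClassifier

def preprocess(ct_slice: np.ndarray) -> np.ndarray:
    """Median filtering to suppress noise artifacts, as in the pre-processing stage."""
    return median_filter(ct_slice, size=3)

def extract_features(ct_slice: np.ndarray) -> np.ndarray:
    """Placeholder: simple intensity statistics stand in for the autoencoder features."""
    return np.array([ct_slice.mean(), ct_slice.std(), np.percentile(ct_slice, 90)])

# toy data standing in for CT slices labelled 0=normal, 1=cancer, 2=pneumonia
rng = np.random.default_rng(0)
X = np.stack([extract_features(preprocess(rng.normal(size=(64, 64)))) for _ in range(30)])
y = rng.integers(0, 3, size=30)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))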
Affiliation(s)
- M. Braveen
- Assistant Professor Senior, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- S. Nachiyappan
- Associate Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- R. Seetha
- Associate Professor, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- K. Anusha
- Associate Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- A. Ahilan
- Associate Professor, Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, India
- A. Prasanth
- Assistant Professor, Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Sriperumbudur, India
- A. Jeyam
- Assistant Professor, Computer Science and Engineering, Lord Jegannath College of Engineering and Technology, Kanyakumari, Tamil Nadu 629402, India
5. Sannasi Chakravarthy SR, Bharanidharan N, Vinothini C, Vinoth Kumar V, Mahesh TR, Guluwadi S. Adaptive Mish activation and ranger optimizer-based SEA-ResNet50 model with explainable AI for multiclass classification of COVID-19 chest X-ray images. BMC Med Imaging 2024; 24:206. [PMID: 39123118] [PMCID: PMC11313131] [DOI: 10.1186/s12880-024-01394-2]
Abstract
COVID-19 is a significant global health crisis that has profoundly affected lifestyles. Detecting such diseases among similar thoracic anomalies in medical images is a challenging task, so an end-to-end automated system is highly desirable in clinical practice. To this end, the work proposes a Squeeze-and-Excitation Attention-based ResNet50 (SEA-ResNet50) model for detecting COVID-19 from chest X-ray data. The idea lies in improving the residual units of ResNet50 using the squeeze-and-excitation attention mechanism. For further enhancement, the Ranger optimizer and an adaptive Mish activation function are employed to improve the feature learning of the SEA-ResNet50 model. For evaluation, two publicly available COVID-19 radiographic datasets are utilized. The chest X-ray input images are augmented during experimentation for robust evaluation against four output classes: normal, pneumonia, lung opacity, and COVID-19. A comparative study is then conducted for the SEA-ResNet50 model against the VGG-16, Xception, ResNet18, ResNet50, and DenseNet121 architectures. The proposed SEA-ResNet50 framework, together with the Ranger optimizer and adaptive Mish activation, provided maximum classification accuracies of 98.38% (multiclass) and 99.29% (binary classification) compared with the existing CNN architectures, and achieved the highest Kappa validation scores of 0.975 (multiclass) and 0.98 (binary classification). Furthermore, saliency maps of the abnormal regions are visualized using an explainable artificial intelligence (XAI) model, thereby enhancing interpretability in disease diagnosis.
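A minimal PyTorch sketch of a squeeze-and-excitation attention block of the kind used to enhance the ResNet50 residual units, with a Mish activation inside the block; the reduction ratio and exact placement are assumptions rather than the SEA-ResNet50 configuration.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation attention: globally pooled channel statistics are
    squeezed, re-expanded, and used to reweight the feature map channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.Mish(),                                 # Mish-style activation inside the block
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel attention weights
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excitation: reweight channels

# attaching the block to the output of a residual stage (shape is illustrative)
out = SEBlock(256)(torch.randn(2, 256, 28, 28))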
Affiliation(s)
- S R Sannasi Chakravarthy
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India
- N Bharanidharan
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- C Vinothini
- Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India
- Venkatesan Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- T R Mahesh
- Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Suresh Guluwadi
- Adama Science and Technology University, Adama, 302120, Ethiopia
6. Kalpana P, Selvy PT. A novel machine learning model for breast cancer detection using mammogram images. Med Biol Eng Comput 2024; 62:2247-2264. [PMID: 38575824] [DOI: 10.1007/s11517-024-03057-4]
Abstract
Breast cancer is currently among the most fatal diseases affecting women worldwide. Early detection of breast cancer enhances the likelihood of a full recovery and lowers mortality. Researchers around the world are developing breast cancer screening technologies based on medical imaging, and deep learning algorithms have attracted considerable interest in this field due to their rapid progress. This research proposes a novel method for mammogram image feature extraction with classification and optimization using machine learning for breast cancer detection. The input image is processed for noise removal, smoothing, and normalization. Image features are extracted using probabilistic principal component analysis to detect the presence of tumors in mammogram images. The extracted tumor region is classified using a Naïve Bayes classifier and transfer integrated convolutional neural networks. The classified output is optimized using firefly binary grey optimization and metaheuristic moth flame lion optimization. The experimental analysis is carried out in terms of different parameters on several datasets. The proposed framework uses an ensemble model for breast cancer that combines the proposed Bayes + FBGO and TCNN + MMFLO classifiers and optimizers across diverse mammography image datasets. On the INbreast dataset, the proposed Bayes + FBGO and TCNN + MMFLO classifiers achieved 95% and 98% accuracy, respectively.
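A minimal scikit-learn sketch of the feature-extraction and classification stages described above, with ordinary PCA standing in for probabilistic PCA and a Gaussian Naïve Bayes classifier; the data, component count, and labels are synthetic placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Flattened 32x32 mammogram patches (toy data) and binary tumour labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 2, size=200)   # 0 = no tumour, 1 = tumour present

# PCA projection feeding a Naive Bayes classifier.
model = make_pipeline(PCA(n_components=20), GaussianNB())
model.fit(X, y)
print(model.predict(X[:5]))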
Affiliation(s)
- P Kalpana
- Department of Computer Science and Engineering, Sri Krishna College of Technology, Coimbatore, 641042, India
- P Tamije Selvy
- Department of Computer Science and Engineering, Hindusthan College of Engineering and Technology, Coimbatore, 641032, India
7. Sannasi Chakravarthy SR, Bharanidharan N, Vinoth Kumar V, Mahesh TR, Alqahtani MS, Guluwadi S. Deep transfer learning with fuzzy ensemble approach for the early detection of breast cancer. BMC Med Imaging 2024; 24:82. [PMID: 38589813] [PMCID: PMC11389118] [DOI: 10.1186/s12880-024-01267-8]
Abstract
Breast cancer is a significant global health challenge, affecting women with higher mortality than other cancer types, so timely detection is crucial; recent research employing deep learning techniques shows promise for earlier detection. This work focuses on the early detection of such tumors from mammogram images using deep-learning models. The paper utilizes four public databases, with a balanced set of 986 mammograms per class (normal, benign, malignant) taken for evaluation. Three deep CNN models, VGG-11, Inception v3, and ResNet50, are employed as base classifiers. The research adopts an ensemble method in which a modified Gompertz function is used to build a fuzzy ranking of the base classification models, and their decision scores are integrated adaptively to construct the final prediction. The classification results of the proposed fuzzy ensemble approach outperform the transfer learning models and other ensemble approaches such as weighted average and Sugeno integral techniques. The proposed ResNet50 ensemble network using the modified Gompertz function-based fuzzy ranking approach provides a superior classification accuracy of 98.986%.
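A rough sketch of Gompertz-based fuzzy rank fusion of base-classifier decision scores; the standard Gompertz curve and the simple weighting below are assumptions, not the authors' modified function or adaptive integration scheme.

import numpy as np

def gompertz(x, a=1.0, b=1.0, c=1.0):
    """Standard Gompertz curve a*exp(-b*exp(-c*x)); the paper uses a modified variant."""
    return a * np.exp(-b * np.exp(-c * x))

def fuzzy_rank_fusion(score_list):
    """Weight each model's class-confidence scores with a Gompertz-shaped fuzzy
    rank and sum them; a generic sketch of rank-based fusion."""
    fused = np.zeros_like(score_list[0])
    for scores in score_list:          # scores: (n_samples, n_classes) softmax outputs
        ranks = gompertz(scores)       # fuzzy rank per class confidence
        fused += ranks * scores        # weight decision scores by their fuzzy rank
    return fused.argmax(axis=1)        # final class prediction

# toy softmax outputs from three base CNNs (e.g. VGG-11, Inception v3, ResNet50)
rng = np.random.default_rng(0)
base_scores = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(fuzzy_rank_fusion(base_scores))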
Affiliation(s)
- S R Sannasi Chakravarthy
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India
- N Bharanidharan
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- T R Mahesh
- Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha, 61421, Saudi Arabia
- Suresh Guluwadi
- Adama Science and Technology University, Adama, 302120, Ethiopia
8. Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024; 52:3000605241237867. [PMID: 38663911] [PMCID: PMC11047257] [DOI: 10.1177/03000605241237867]
Abstract
Breast cancer (BC) is the most prominent form of cancer among females worldwide. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography, and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine for their high efficiency in detection and intervention. Imaging features and mathematical analyses can be used to generate ML models that stratify, differentiate, and detect benign and malignant breast lesions; given these marked advantages, radiomics is a frequently used tool in recent research and clinical practice. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds, and language, and performs precise clinical image stratification, medical record analyses, and tumour diagnosis. This review summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, ML, and DL, aiming to provide guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang
- Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu
- Phase I Clinical Research Centre, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi
- Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Trans-Plantation Surgery, Medical Faculty University Hospital Magdeburg, Otto-von Guericke University, Magdeburg, Germany
9. Jabeen K, Khan MA, Hameed MA, Alqahtani O, Alouane MTH, Masood A. A novel fusion framework of deep bottleneck residual convolutional neural network for breast cancer classification from mammogram images. Front Oncol 2024; 14:1347856. [PMID: 38454931] [PMCID: PMC10917916] [DOI: 10.3389/fonc.2024.1347856]
Abstract
With over 2.1 million new cases of breast cancer diagnosed annually, the incidence and mortality rate of this disease pose severe global health issues for women, and early identification is the only practical way to lessen its impact. Numerous research works have developed automated methods using different medical imaging modalities to identify breast cancer, but the precision of each strategy differs based on the available resources, the nature of the problem, and the dataset used. We propose a novel deep bottleneck convolutional neural network with a quantum optimization algorithm for breast cancer classification and diagnosis from mammogram images. Two novel deep architectures, a three-residual-block bottleneck and a four-residual-block bottleneck, are proposed with parallel and single paths. Bayesian Optimization (BO) is employed to initialize hyperparameter values and train the architectures on the selected dataset. Deep features are extracted from the global average pooling layer of both models. A kernel-based canonical correlation analysis and entropy technique is then proposed for fusing the extracted deep features. The fused feature set is further refined using an optimization technique named quantum generalized normal distribution optimization. The selected features are finally classified using several neural network classifiers, such as bi-layered and wide neural networks. The experimental process was conducted on the publicly available INbreast mammogram dataset, and a maximum accuracy of 96.5% was obtained, with a sensitivity of 96.45%, a precision of 96.5%, an F1 score of 96.64%, an MCC of 92.97%, and a Kappa of 92.97%. The proposed architectures are further utilized for the diagnosis of infected regions. In addition, a detailed comparison with several recent techniques shows the proposed framework's higher accuracy and precision.
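A minimal sketch of the deep-feature stage described above, in which features are taken from the global average pooling layer of two backbone CNNs and fused. Off-the-shelf ResNet backbones and plain concatenation stand in for the paper's custom bottleneck architectures and its kernel CCA + entropy fusion.

import torch
import torch.nn as nn
from torchvision import models

# Replace each classifier head with Identity so the forward pass returns the
# global-average-pooled feature vector (512-d for these backbones).
backbone_a = models.resnet18(weights=None)
backbone_b = models.resnet34(weights=None)
backbone_a.fc = nn.Identity()
backbone_b.fc = nn.Identity()

x = torch.randn(4, 3, 224, 224)   # batch of mammogram patches (toy input)
with torch.no_grad():
    feats = torch.cat([backbone_a(x), backbone_b(x)], dim=1)  # fused 1024-d features
print(feats.shape)                # torch.Size([4, 1024])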
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Mohamed Abdel Hameed
- Department of Computer Science, Faculty of Computers and Information, Luxor University, Luxor, Egypt
- Omar Alqahtani
- College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
10. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
11. Aymaz S. A new framework for early diagnosis of breast cancer using mammography images. Neural Comput Appl 2024; 36:1665-1680. [DOI: 10.1007/s00521-023-09156-x]
12. Umar H, Aliyu MR, Usman AG, Ghali UM, Abba SI, Ozsahin DU. Prediction of cell migration potential on human breast cancer cells treated with Albizia lebbeck ethanolic extract using extreme machine learning. Sci Rep 2023; 13:22242. [PMID: 38097683] [PMCID: PMC10721884] [DOI: 10.1038/s41598-023-49363-z]
Abstract
Cancer is one of the major causes of death in the modern world, and its incidence varies considerably based on race, ethnicity, and region. Cancer treatments such as surgery and immunotherapy are often ineffective and expensive. In this situation, ion channels responsible for cell migration have emerged as promising targets for cancer treatment. This research presents findings on the organic compounds present in Albizia lebbeck ethanolic extract (ALEE), as well as their impact on the anti-migratory, anti-proliferative, and cytotoxic potentials of MDA-MB 231 and MCF-7 human breast cancer cell lines. In addition, artificial intelligence (AI)-based models, multilayer perceptron (MLP), extreme gradient boosting (XGB), and extreme learning machine (ELM), were used to predict in vitro cancer cell migration on both cell lines based on our experimental data. The organic compound composition of the ALEE was studied using gas chromatography-mass spectrometry (GC-MS) analysis, and the cytotoxicity, anti-proliferative, and anti-migratory activities of the extract were assessed using Trypan Blue, MTT, and wound healing assays, respectively. Among the concentrations of ALEE tested in our study (2.5-200 μg/mL), 2.5-10 μg/mL revealed anti-migratory potential that increased with concentration, without affecting the proliferation of the cells (P < 0.05; n ≥ 3). Furthermore, the three data-driven models, MLP, XGB, and ELM, predict the migration-modulating ability of the extract on the treated cells based on our experimental data. Overall, the extract concentrations that do not affect the proliferation of the cell types used demonstrated promising effects in reducing cell migration. XGB outperformed the MLP and ELM models, exceeding their performance efficiency by up to 3% and 1% for MCF-7 and 1% and 2% for MDA-MB 231, respectively, in the testing phase.
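A minimal sketch of the data-driven modelling step above, fitting MLP and XGBoost regressors to tabular concentration/migration data; the synthetic inputs, targets, and hyperparameters are assumptions, and the ELM model is omitted because scikit-learn has no standard implementation of it.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Synthetic stand-in for the experimental data: extract concentration (ug/mL)
# and incubation time as inputs, relative wound closure as the target.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(2.5, 10, 200), rng.uniform(0, 48, 200)])
y = 0.8 - 0.03 * X[:, 0] + 0.005 * X[:, 1] + rng.normal(0, 0.02, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1).fit(X_tr, y_tr)
xgb = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1).fit(X_tr, y_tr)

print("MLP R^2:", mlp.score(X_te, y_te))
print("XGB R^2:", xgb.score(X_te, y_te))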
Affiliation(s)
- Huzaifa Umar
- Near East University, Operational Research Centre in Healthcare, TRNC Mersin 10, 99138, Nicosia, Turkey
- Maryam Rabiu Aliyu
- Department of Energy System Engineering, Cyprus International University, Northern Cyprus via Mersin 10, 99258, Nicosia, Turkey
- Abdullahi Garba Usman
- Near East University, Operational Research Centre in Healthcare, TRNC Mersin 10, 99138, Nicosia, Turkey
- Department of Analytical Chemistry, Faculty of Pharmacy, Near East University, TRNC, Mersin 10, 99138, Nicosia, Turkey
- Umar Muhammad Ghali
- Department of Chemistry, Faculty of Natural and Applied Sciences, Firat University, Merkezi, 23199, Elazig, Turkey
- Sani Isah Abba
- Interdisciplinary Research Centre for Membranes and Water Security, King Fahd University of Petroleum and Minerals, 31261, Dhahran, Saudi Arabia
- Dilber Uzun Ozsahin
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, P.O. Box 27272, Sharjah, United Arab Emirates
- Research Institute for Medical and Health Sciences, University of Sharjah, P.O. Box 27272, Sharjah, United Arab Emirates
13. Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254] [DOI: 10.1007/s00432-023-05216-w]
Abstract
PURPOSE Millions of people lose their lives to several types of fatal disease. Cancer, one of the most fatal, may result from obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth inside the body that may spread to body parts other than where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment; because manual diagnosis and diagnostic error may cause the death of many patients, much research focuses on the automatic and accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of the diagnosis of, and recent advancements in the detection of, various cancer types using traditional machine learning (ML) and deep learning (DL) models. The study covers four types of cancer, brain, lung, skin, and breast, and their detection using ML and DL techniques. The extensive review includes a total of 130 pieces of literature, of which 56 concern ML-based and 74 concern DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent five-year span (2018-2023) are included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques for cancer detection separately and used accuracy as the performance evaluation metric to maintain homogeneity when comparing classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented, and a comparative analysis between the best- and worst-performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various parameters, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed for the widespread implementation of these techniques in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
14. Basheri M. Intelligent Breast Mass Classification Approach Using Archimedes Optimization Algorithm with Deep Learning on Digital Mammograms. Biomimetics (Basel) 2023; 8:463. [PMID: 37887593] [PMCID: PMC10604039] [DOI: 10.3390/biomimetics8060463]
Abstract
Breast cancer (BC) has affected many women around the world. To accomplish the classification and detection of BC, several computer-aided diagnosis (CAD) systems have been introduced for the analysis of mammogram images, because analysis by a human radiologist is a complex and time-consuming task. Although CAD systems are used to analyze the disease and suggest the best therapy, it is still essential to enhance present CAD systems by integrating novel approaches and technologies in order to improve performance. Presently, deep learning (DL) systems are delivering promising outcomes in the early detection of BC through CAD systems built on convolutional neural networks (CNNs). This article presents an Intelligent Breast Mass Classification Approach using the Archimedes Optimization Algorithm with Deep Learning (BMCA-AOADL) technique on digital mammograms. The major aim of the BMCA-AOADL technique is to exploit a DL model with a bio-inspired algorithm for breast mass classification. In the BMCA-AOADL approach, median filtering (MF)-based noise removal and U-Net segmentation take place as pre-processing steps. For feature extraction, the BMCA-AOADL technique utilizes the SqueezeNet model with AOA as a hyperparameter tuning approach. To detect and classify the breast mass, the BMCA-AOADL technique applies a deep belief network (DBN). The performance of the BMCA-AOADL system was studied on the MIAS dataset from the Kaggle repository. The experimental values showcase the significant outcomes of the BMCA-AOADL technique compared to other DL algorithms, with a maximum accuracy of 96.48%.
Affiliation(s)
- Mohammed Basheri
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
15. Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. Multimedia Tools and Applications 2023. [DOI: 10.1007/s11042-023-16520-5]
16. Chaudhury S, Sau K. A BERT encoding with Recurrent Neural Network and Long-Short Term Memory for breast cancer image classification. Decision Analytics Journal 2023; 6:100177. [DOI: 10.1016/j.dajour.2023.100177]
17. Elkorany AS, Elsharkawy ZF. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci Rep 2023; 13:2663. [PMID: 36792720] [PMCID: PMC9932150] [DOI: 10.1038/s41598-023-29875-4]
Abstract
Breast cancer (BC) is becoming more widespread every day, so a patient's life can be saved by its early discovery. Mammography is frequently used to diagnose BC, and the classification of mammography region of interest (ROI) patches (i.e., normal, malignant, or benign) is the most crucial phase in this process, since it helps medical professionals identify BC. In this paper, a hybrid technique that carries out a quick and precise classification appropriate for a BC diagnosis system is proposed and tested. Three different Deep Learning (DL) Convolutional Neural Network (CNN) models, namely Inception-V3, ResNet50, and AlexNet, are used as feature extractors. To extract useful features from each CNN model, the suggested method uses the Term Variance (TV) feature selection algorithm. The TV-selected features from each CNN model are combined, and a further selection is performed to obtain the most useful features, which are then sent to a multiclass support vector machine (MSVM) classifier. The Mammographic Image Analysis Society (MIAS) image database was used to test the effectiveness of the suggested method: the mammogram's ROI is retrieved and image patches are extracted from it. Based on the results of testing several TV feature subsets, the 600-feature subset gave the highest classification performance. Higher classification accuracy (CA) is attained when compared to previously published work: the average CA is 97.81% with 70% of the data used for training, 98% with 80%, and reaches its optimal value with 90%. Finally, an ablation analysis is performed to highlight the role of the proposed network's key parameters.
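A minimal sketch of the selection and classification stages: the highest-variance features of a pooled CNN feature matrix are kept (a simple stand-in for the Term Variance criterion) and passed to a multiclass SVM. The feature matrix and labels below are synthetic.

import numpy as np
from sklearn.svm import SVC

# Concatenated deep features from the three backbones (toy data) and class
# labels for the ROI patches: 0 normal, 1 benign, 2 malignant.
rng = np.random.default_rng(7)
deep_features = rng.normal(size=(300, 3 * 1024))
labels = rng.integers(0, 3, size=300)

# Keep the 600 highest-variance features, mirroring the reported subset size.
variances = deep_features.var(axis=0)
top_idx = np.argsort(variances)[-600:]
selected = deep_features[:, top_idx]

msvm = SVC(kernel="rbf", decision_function_shape="ovr").fit(selected, labels)
print(msvm.predict(selected[:5]))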
Affiliation(s)
- Ahmed S. Elkorany
- Department of Electronics and Electrical Comm. Eng., Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt
- Zeinab F. Elsharkawy
- Engineering Department, Nuclear Research Center, Egyptian Atomic Energy Authority, Cairo, Egypt
18. Applying Explainable Machine Learning Models for Detection of Breast Cancer Lymph Node Metastasis in Patients Eligible for Neoadjuvant Treatment. Cancers (Basel) 2023; 15:634. [PMID: 36765592] [PMCID: PMC9913601] [DOI: 10.3390/cancers15030634]
Abstract
BACKGROUND Due to recent changes in breast cancer treatment strategy, significantly more patients are treated with neoadjuvant systemic therapy (NST). Radiological methods do not precisely determine axillary lymph node status, with up to 30% of patients being misdiagnosed. Hence, supplementary methods for lymph node status assessment are needed. This study aimed to apply and evaluate machine learning models on clinicopathological data, with a focus on patients meeting NST criteria, for lymph node metastasis prediction. METHODS From the total breast cancer patient data (n = 8381), 719 patients were identified as eligible for NST. Machine learning models were applied for the NST-criteria group and the total study population. Model explainability was obtained by calculating Shapley values. RESULTS In the NST-criteria group, random forest achieved the highest performance (AUC: 0.793 [0.713, 0.865]), while in the total study population, XGBoost performed the best (AUC: 0.762 [0.726, 0.795]). Shapley values identified tumor size, Ki-67, and patient age as the most important predictors. CONCLUSION Tree-based models achieve a good performance in assessing lymph node status. Such models can lead to more accurate disease stage prediction and consecutively better treatment selection, especially for NST patients where radiological and clinical findings are often the only way of lymph node assessment.
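A minimal sketch of the tabular workflow described above: a random forest predicts lymph node status from clinicopathological variables and SHAP values explain the predictions. The feature set, data, and thresholds are synthetic placeholders, not the study's cohort or variables.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
import shap

# Synthetic clinicopathological features and a binary lymph-node-status label.
rng = np.random.default_rng(3)
X = pd.DataFrame({
    "tumor_size_mm": rng.normal(22, 8, 500),
    "ki67_percent": rng.uniform(1, 80, 500),
    "age_years": rng.normal(58, 11, 500),
})
y = (0.04 * X["tumor_size_mm"] + 0.01 * X["ki67_percent"] + rng.normal(0, 1, 500) > 1.2).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=3).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Shapley-value explanations of the per-feature contributions.
shap_values = shap.TreeExplainer(model).shap_values(X)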
19. Development of an Artificial Intelligence-Based Breast Cancer Detection Model by Combining Mammograms and Medical Health Records. Diagnostics (Basel) 2023; 13:346. [PMID: 36766450] [PMCID: PMC9913958] [DOI: 10.3390/diagnostics13030346]
Abstract
BACKGROUND Artificial intelligence (AI)-based computational models that analyze breast cancer have been developed for decades. The present study investigated the accuracy and efficiency of combining mammography images and clinical records for breast cancer detection using machine learning and deep learning classifiers. METHODS The study used 731 images from 357 women who underwent at least one mammogram and had clinical records for at least six months before mammography. The model was trained on mammograms and clinical variables to discriminate benign and malignant lesions. Multiple pre-trained deep CNN models, including Xception, VGG16, ResNet-v2, ResNet50, and CNN3, were employed to detect cancer in mammograms. Machine learning models were constructed on the clinical dataset using k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), artificial neural network (ANN), and gradient boosting machine (GBM). RESULTS The combined model obtained an accuracy of 84.5%, with a specificity of 78.1%, a sensitivity of 89.7%, and an AUC of 0.88. When trained on mammography image data alone, the model achieved a lower score than the combined model (accuracy, 72.5% vs. 84.5%, respectively). CONCLUSIONS A breast cancer detection model combining machine learning and deep learning was developed in this study with satisfactory results, and this model has potential clinical applications.
20. Sun J, Liu Q, Wang Y, Wang L, Song X, Zhao X. Five-year prognosis model of esophageal cancer based on genetic algorithm improved deep neural network. Ing Rech Biomed 2023. [DOI: 10.1016/j.irbm.2022.100748]
21. Sannasi Chakravarthy S, Bharanidharan N, Rajaguru H. Deep Learning-based Metaheuristic Weighted K-Nearest Neighbor Algorithm for the Severity Classification of Breast Cancer. Ing Rech Biomed 2023. [DOI: 10.1016/j.irbm.2022.100749]
22. Al-Hejri AM, Al-Tam RM, Fazea M, Sable AH, Lee S, Al-antari MA. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images. Diagnostics (Basel) 2022; 13:89. [PMID: 36611382] [PMCID: PMC9818801] [DOI: 10.3390/diagnostics13010089]
Abstract
Early detection of breast cancer is an essential procedure to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed by fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of the vision transformer encoder (ViT). Accurate and precise high-level deep features are generated via the backbone ensemble network, while the transformer encoder is used to diagnose breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-classification). To build the proposed CAD system, the benchmark public multi-class INbreast dataset is used, while private real breast cancer images are collected and annotated by expert radiologists to validate the prediction performance of the proposed ETECADx framework. Promising evaluation results are achieved on the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for binary and 4.6% for multi-class approaches, and the hybrid ETECADx shows further improvement when the ViT-based ensemble backbone network is used, by 8.1% and 6.2% for binary and multi-class diagnosis, respectively. For validation on the real breast images, the proposed CAD system provides encouraging prediction accuracies of 97.16% for binary and 89.40% for multi-class approaches. ETECADx can predict the breast lesions of a single mammogram in an average of 0.048 s. Such promising performance could be useful in practical CAD applications, providing a second supporting opinion for distinguishing various breast cancer malignancies.
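A minimal PyTorch sketch of a self-attention (transformer) encoder applied to tokens built from ensemble CNN features, with a class token used for diagnosis; the embedding size, depth, and class count are assumptions, not the ETECADx configuration.

import torch
import torch.nn as nn

class EnsembleTransformerHead(nn.Module):
    """Self-attention encoder over one token per backbone's feature vector,
    classifying from a learnable class token."""

    def __init__(self, feat_dim: int = 512, n_classes: int = 3):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_backbones, feat_dim), one token per backbone's features
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        encoded = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(encoded[:, 0])   # classify from the class token

# toy features from three backbone CNNs for a batch of four mammograms
logits = EnsembleTransformerHead()(torch.randn(4, 3, 512))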
Affiliation(s)
- Aymen M. Al-Hejri
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Riyadh M. Al-Tam
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Muneer Fazea
- Department of Radiology, Al-Ma’amon Diagnostic Center, Sana’a, Yemen
- Department of Radiology, School of Medicine, Ibb University of Medical Sciences, Ibb, Yemen
- Archana Harsing Sable
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Soojeong Lee
- Department of Computer Engineering, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
23. Duong LT, Chu CQ, Nguyen PT, Nguyen ST, Tran BQ. Edge detection and graph neural networks to classify mammograms: A case study with a dataset from Vietnamese patients. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109974]
24. Thawkar S. Feature selection and classification in mammography using hybrid crow search algorithm with Harris hawks optimization. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.09.001]
25. Rib Fracture Detection with Dual-Attention Enhanced U-Net. Computational and Mathematical Methods in Medicine 2022; 2022:8945423. [PMID: 36035283] [PMCID: PMC9410867] [DOI: 10.1155/2022/8945423]
Abstract
Rib fractures are common injuries caused by chest trauma and may have serious consequences, so it is essential to diagnose them accurately. Low-dose thoracic computed tomography (CT) is commonly used for rib fracture diagnosis, and convolutional neural network (CNN)-based methods have assisted doctors in rib fracture diagnosis in recent years. However, due to the lack of rib fracture data and the irregular, varied shapes of rib fractures, it is difficult for CNN-based methods to extract rib fracture features, and as a result they cannot achieve satisfactory accuracy and sensitivity in detecting rib fractures. Inspired by the attention mechanism, we propose the CFSG U-Net for rib fracture detection. The CFSG U-Net uses the U-Net architecture enhanced by a dual-attention module comprising a channel-wise fusion attention module (CFAM) and a spatial-wise group attention module (SGAM). CFAM uses the channel attention mechanism to reweight the feature map along the channel dimension and refine the U-Net's skip connections. SGAM uses a grouping technique to generate spatial attention that adjusts feature maps in the spatial dimension, allowing the spatial attention module to capture more fine-grained semantic information. To evaluate the effectiveness of the proposed methods, we established a rib fracture dataset in our research. The experimental results on our dataset show that the maximum sensitivity of the proposed method is 89.58% and the average FROC score is 81.28%, outperforming existing rib fracture detection methods and attention modules.
26. Altameem A, Mahanty C, Poonia RC, Saudagar AKJ, Kumar R. Breast Cancer Detection in Mammography Images Using Deep Convolutional Neural Networks and Fuzzy Ensemble Modeling Techniques. Diagnostics (Basel) 2022; 12:1812. [PMID: 36010164] [PMCID: PMC9406655] [DOI: 10.3390/diagnostics12081812]
Abstract
Breast cancer has become the most lethal illness impacting women all over the globe. Breast cancer detected early reduces mortality and increases the chances of a full recovery, and researchers around the world are working on breast cancer screening tools based on medical imaging, with deep learning approaches attracting much attention in this field due to their rapid progress. In this research, mammography pictures were utilized to detect breast cancer. We used four mammography imaging datasets containing a comparable number of 1145 normal, benign, and malignant pictures, with various deep CNN models (Inception V4, ResNet-164, VGG-11, and DenseNet121) as base classifiers. The proposed technique employs an ensemble approach in which the Gompertz function is used to build fuzzy rankings of the base classification techniques, and the decision scores of the base models are adaptively combined to construct the final predictions. The proposed fuzzy ensemble technique outperforms each individual transfer learning methodology as well as multiple advanced ensemble strategies (weighted average, Sugeno integral) with reference to prediction and accuracy. The suggested Inception V4 ensemble model with fuzzy rank-based Gompertz function has a 99.32% accuracy rate. We believe that the suggested approach will be of tremendous value to healthcare practitioners in identifying breast cancer patients early on, perhaps leading to an immediate diagnosis.
Affiliation(s)
- Ayman Altameem
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, Riyadh 11533, Saudi Arabia
- Chandrakanta Mahanty
- Department of Computer Science and Engineering, GIET University, Odisha 765022, India
- Ramesh Chandra Poonia
- Department of Computer Science, CHRIST (Deemed to be University), Bangalore 560029, India
- Raghvendra Kumar
- Department of Computer Science and Engineering, GIET University, Odisha 765022, India
27. An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. [PMID: 35851592] [PMCID: PMC9293883] [DOI: 10.1038/s41598-022-15632-6]
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification that are integrated sequentially into one framework to assist the radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e. ResNet50V2, ResNet101V2, and ResNet152V2). The work presents the task of classifying the detected and segmented breast masses into malignant or benign, and diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6 and the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) Pathology classification with an accuracy of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification with an accuracy of 85.38%, 99%, and 96.08% respectively on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that our proposed integrated framework could benefit from all automated stages to outperform the latest deep learning methodologies.
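A minimal sketch of the stacking idea mentioned above: class probabilities from the base ResNet variants (random placeholders here) are concatenated per mass and fed to an XGBoost meta-classifier. All arrays and hyperparameters are illustrative, not the paper's configuration.

import numpy as np
from xgboost import XGBClassifier

# Placeholder probabilities standing in for the outputs of ResNet50V2,
# ResNet101V2, and ResNet152V2 on each detected mass.
rng = np.random.default_rng(5)
n_samples = 400
p50, p101, p152 = [rng.dirichlet(np.ones(2), size=n_samples) for _ in range(3)]
stacked = np.hstack([p50, p101, p152])          # (n_samples, 6) meta-features
labels = rng.integers(0, 2, size=n_samples)     # 0 benign, 1 malignant

# XGBoost meta-classifier producing the final pathology call.
meta = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
meta.fit(stacked, labels)
print(meta.predict_proba(stacked[:3]))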
28. An Improved Classification of Pork Adulteration in Beef Based on Electronic Nose Using Modified Deep Extreme Learning with Principal Component Analysis as Feature Learning. Food Anal Method 2022. [DOI: 10.1007/s12161-022-02361-9]
29. Tiryaki V, Kaplanoğlu V. Deep Learning-Based Multi-Label Tissue Segmentation and Density Assessment from Mammograms. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.004]
30. Breast Cancer Mammograms Classification Using Deep Neural Network and Entropy-Controlled Whale Optimization Algorithm. Diagnostics (Basel) 2022; 12:557. [PMID: 35204646] [PMCID: PMC8871265] [DOI: 10.3390/diagnostics12020557]
Abstract
Breast cancer has affected many women worldwide. Many computer-aided diagnosis (CAD) systems have been established to perform detection and classification of breast cancer, because inspection of mammogram images by a radiologist is a difficult and time-consuming task. To diagnose the disease early and provide better treatment, many CAD systems were established, yet there is still a need to improve them by incorporating new methods and technologies in order to provide more precise results. This paper aims to investigate ways to prevent the disease as well as to provide new methods of classification in order to reduce the risk of breast cancer in women's lives. The best feature optimization is performed to classify the results accurately, and the CAD system's accuracy is improved by reducing the false-positive rate. The Modified Entropy Whale Optimization Algorithm (MEWOA) is proposed, based on fusion for deep feature extraction, to perform the classification. In the proposed method, the fine-tuned MobileNetV2 and NasNet Mobile are applied for simulation; the features are extracted and optimized, and the optimized features are fused and further optimized using MEWOA. Finally, using the optimized deep features, machine learning classifiers are applied to classify the breast cancer images. To extract the features and perform the classification, three publicly available datasets are used: INbreast, MIAS, and CBIS-DDSM. The maximum accuracy achieved is 99.7% on INbreast, 99.8% on MIAS, and 93.8% on CBIS-DDSM. Finally, a comparison with other existing methods demonstrates that the proposed algorithm outperforms these approaches.
31. Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225] [PMCID: PMC8656730] [DOI: 10.3390/cancers13236116]
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy, for predicting the growth of cancer cells from medical imaging modalities. As of yet, few review studies on breast cancer diagnosis are available that summarize existing work, and those studies did not address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea