1
Yan D, Zhao Z, Duan J, Qu J, Shi L, Wang Q, Zhang H. Deep learning-based immunohistochemical estimation of breast cancer via ultrasound image applications. Front Oncol 2024; 13:1263685. PMID: 38264739; PMCID: PMC10803514; DOI: 10.3389/fonc.2023.1263685.
Abstract
Background Breast cancer is a leading global threat to women's health and ranks first among cancers in female mortality. Reducing its mortality through early diagnosis is a mainstream goal of medical research. Immunohistochemical examination is a critical link in the breast cancer treatment process, and its results directly affect physicians' decisions on follow-up treatment. Purpose This study aims to develop a computer-aided diagnosis (CAD) method based on deep learning to classify breast ultrasound (BUS) images according to immunohistochemical results. Methods A new deep learning framework guided by BUS image data analysis was proposed for classifying breast cancer nodes in BUS images. The proposed CAD classification network comprises three main innovations. First, a multilevel feature distillation network (MFD-Net) based on a CNN was designed to extract feature layers at different scales. Second, image features extracted at different depths were fused to achieve multilevel feature distillation, using depthwise separable convolution and reverse depthwise separable convolution to increase convolution depth. Finally, a new attention module containing two independent submodules, the channel attention module (CAM) and the spatial attention module (SAM), was introduced to improve the model's classification ability in the channel and spatial dimensions. Results A total of 500 axial BUS images were retrieved from 294 patients who underwent BUS examination. These images were detected and cropped to produce breast cancer node BUS image datasets, which were labeled according to immunohistochemical findings. The datasets were randomly split into a training set (70%) and a test set (30%), and results for the four immune indices were output simultaneously during training and testing in the model comparison experiment.
Taking the ER immune indicator as an example, the proposed model achieved a precision of 0.8933, a recall of 0.7563, an F1 score of 0.8191, and an accuracy of 0.8386, significantly outperforming the other models. The ablation experiments also showed that the proposed multilevel feature distillation structure and attention module were key to improving accuracy. Conclusion Extensive experiments verify the efficiency of the proposed method. To the authors' knowledge, it is the first to classify breast cancer images by immunohistochemical results, and it provides an effective aid for postoperative breast cancer treatment, reduces diagnostic difficulty for doctors, and improves work efficiency.
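The reported F1 score can be checked directly from the stated precision and recall, since F1 is their harmonic mean. A minimal sketch (the metric values are taken from the abstract above; the underlying confusion-matrix counts are not given in the source):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the ER immune indicator in the abstract above.
precision, recall = 0.8933, 0.7563
f1 = f1_score(precision, recall)
print(round(f1, 4))  # 0.8191, matching the reported F1
```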
Affiliation(s)
- Ding Yan: School of Control Science and Engineering, Shandong University, Jinan, China
- Zijian Zhao: School of Control Science and Engineering, Shandong University, Jinan, China
- Jiajun Duan: School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW, Australia
- Jia Qu: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China; Department of Ultrasound, Shandong Provincial Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Linlin Shi: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Qian Wang: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Huawei Zhang: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
2
Montaha S, Azam S, Bhuiyan MRI, Chowa SS, Mukta MSH, Jonkman M. Malignancy pattern analysis of breast ultrasound images using clinical features and a graph convolutional network. Digit Health 2024; 10:20552076241251660. PMID: 38817843; PMCID: PMC11138200; DOI: 10.1177/20552076241251660.
Abstract
Objective Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty of the study lies in considering medical features based on radiologists' diagnoses. Methods Using the clinical markers, a graph is generated in which each feature is represented by a node and the connection between features is represented by an edge, derived through Pearson's correlation method. A graph convolutional network (GCN) model is proposed to classify breast tumors as benign or malignant using the graph data. Several statistical tests are performed to assess the importance of the proposed features. The performance of the GCN model is improved by experimenting with different layer configurations and hyperparameter settings. Results The proposed model achieves a test accuracy of 98.73%. Its performance is compared with a graph attention network, a one-dimensional convolutional neural network, five transfer learning models, ten machine learning models, and three ensemble learning models. The model was further assessed on three supplementary breast cancer ultrasound image datasets, achieving accuracies of 91.03%, 94.37%, and 89.62% on Dataset A, Dataset B, and Dataset C (the combination of Datasets A and B), respectively. Overfitting is assessed through k-fold cross-validation. Conclusion Several variants are used to present a rigorous and fair evaluation of the work, particularly the importance of extracting clinically relevant features. A GCN model operating on graph data is a promising solution for automated feature-based breast image classification.
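The graph construction and GCN propagation described above can be sketched in outline: each clinical feature becomes a node, an edge is added where the absolute Pearson correlation exceeds a threshold, and a GCN layer propagates node features through the normalized adjacency. The threshold, feature dimensions, and random data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-sample clinical-marker matrix: rows = samples, cols = features
# (e.g. shape, orientation, margin; dimensions are stand-ins).
X = rng.normal(size=(200, 6))
X[:, 3] = X[:, 0] * 0.9 + rng.normal(scale=0.1, size=200)  # correlated pair

# Edge where |Pearson correlation| exceeds a threshold (0.5 is an assumption).
corr = np.corrcoef(X, rowvar=False)
A = (np.abs(corr) > 0.5).astype(float)
np.fill_diagonal(A, 0.0)

# One GCN propagation step (Kipf & Welling):
#   H' = relu(D^-1/2 (A + I) D^-1/2 H W)
A_hat = A + np.eye(A.shape[0])
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

H = rng.normal(size=(A.shape[0], 4))   # initial node features
W = rng.normal(size=(4, 2))            # learnable weights (random here)
H_next = np.maximum(A_norm @ H @ W, 0.0)
print(H_next.shape)  # (6, 2)
```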
Affiliation(s)
- Sidratul Montaha: Department of Computer Science, University of Calgary, Calgary, Canada
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
3
Zhou G, Mosadegh B. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound. Acad Radiol 2024; 31:104-120. PMID: 37666747; DOI: 10.1016/j.acra.2023.08.006.
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning model for the automated classification of breast ultrasound images as benign or malignant; specifically, to explore vision transformers, ensemble learning, and knowledge distillation for breast ultrasound classification. MATERIALS AND METHODS Single-view, B-mode ultrasound images were curated from the publicly available Breast Ultrasound Image (BUSI) dataset, which has categorical ground-truth labels (benign vs. malignant) assigned by radiologists, with malignant cases confirmed by biopsy. The performance of vision transformers (ViTs) is compared to that of convolutional neural networks (CNNs), followed by a comparison between supervised, self-supervised, and randomly initialized ViTs. Subsequently, an ensemble of 10 independently trained ViTs, whose output is the unweighted average of the individual model outputs, is compared to the performance of each ViT alone. Finally, a single ViT is trained to emulate the ensembled ViTs using knowledge distillation. RESULTS On this dataset, trained with five-fold cross-validation, ViTs outperform CNNs, and self-supervised ViTs outperform supervised and randomly initialized ViTs. The ensemble model achieves an area under the receiver operating characteristic curve (AuROC) of 0.977 and an area under the precision-recall curve (AuPRC) of 0.965 on the test set, outperforming the average AuROC and AuPRC of the independently trained ViTs (0.958 ± 0.05 and 0.931 ± 0.016). The distilled ViT achieves an AuROC of 0.972 and an AuPRC of 0.960. CONCLUSION Transfer learning and ensemble learning each offer increased performance independently and can be combined sequentially to improve the final model. Furthermore, a single vision transformer can be trained to match the performance of an ensemble of vision transformers using knowledge distillation.
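The unweighted ensemble and the distillation target can be sketched as follows: average the per-model probabilities, then train a student against that averaged distribution with a cross-entropy (KL-style) loss. The logits are random stand-ins and the temperature is an assumption, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Logits from 10 hypothetical independently trained ViTs on 5 images,
# 2 classes (benign / malignant); values are random stand-ins.
logits = rng.normal(size=(10, 5, 2))

# Unweighted ensemble: average the per-model output distributions.
teacher_probs = softmax(logits).mean(axis=0)

# Knowledge distillation: the student is trained to match the teacher's
# distribution; here one cross-entropy evaluation of an untrained student.
student_logits = rng.normal(size=(5, 2))
T = 2.0  # temperature is an assumption
student_probs = softmax(student_logits, T=T)
kd_loss = -(teacher_probs * np.log(student_probs + 1e-12)).sum(axis=1).mean()
print(kd_loss > 0)  # cross-entropy between distributions is positive
```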
Affiliation(s)
- Bobak Mosadegh: Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, New York
4
Karimzadeh M, Vakanski A, Xian M, Zhang B. Post-hoc explainability of BI-RADS descriptors in a multi-task framework for breast cancer detection and segmentation. IEEE International Workshop on Machine Learning for Signal Processing (MLSP) 2023. PMID: 38572141; PMCID: PMC10989244; DOI: 10.1109/mlsp55844.2023.10286006.
Abstract
Despite recent medical advancements, breast cancer remains one of the most prevalent and deadly diseases among women. Although machine learning-based Computer-Aided Diagnosis (CAD) systems have shown potential to assist radiologists in analyzing medical images, the opaque nature of the best-performing CAD systems has raised concerns about their trustworthiness and interpretability. This paper proposes MT-BI-RADS, a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images. The approach offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy. Firstly, the proposed model outputs the BI-RADS categories used for BUS image analysis by radiologists. Secondly, the model employs multitask learning to concurrently segment regions in images that correspond to tumors. Thirdly, the proposed approach outputs quantified contributions of each BI-RADS descriptor toward predicting the benign or malignant class using post-hoc explanations with Shapley Values.
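The Shapley-value attribution mentioned above can be illustrated exactly for a small descriptor set: each descriptor's contribution is its marginal effect on the model output, averaged over all orderings. The value function below is a made-up toy, not the paper's model:

```python
from itertools import combinations
from math import factorial

# Toy value function over BI-RADS-style descriptors: v(S) = model output when
# only the descriptors in S are "revealed". Values are made up for illustration.
v = {
    frozenset(): 0.10,
    frozenset({"shape"}): 0.40,
    frozenset({"margin"}): 0.30,
    frozenset({"shape", "margin"}): 0.80,
}

def shapley(player, players, v):
    """Exact Shapley value: weighted marginal contribution over all subsets."""
    others = [p for p in players if p != player]
    n, total = len(players), 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            S = frozenset(S)
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (v[S | {player}] - v[S])
    return total

players = ["shape", "margin"]
phi = {p: shapley(p, players, v) for p in players}
print(phi)  # "shape" has the larger contribution
# Efficiency property: contributions sum to v(all) - v(empty).
print(round(sum(phi.values()), 10))  # 0.7
```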
Affiliation(s)
- Min Xian: Department of Computer Science, University of Idaho, Idaho Falls, USA
- Boyu Zhang: Department of Computer Science, University of Idaho, Idaho Falls, USA
5
Zou X, Zhai J, Qian S, Li A, Tian F, Cao X, Wang R. Improved breast ultrasound tumor classification using dual-input CNN with GAP-guided attention loss. Math Biosci Eng 2023; 20:15244-15264. PMID: 37679179; DOI: 10.3934/mbe.2023682.
Abstract
Ultrasonography is a widely used medical imaging technique for detecting breast cancer. While manual diagnosis is variable and time-consuming, computer-aided diagnosis (CAD) methods have proven more efficient. However, current CAD approaches neglect the impact of noise and artifacts on the accuracy of image analysis. To enhance the precision of breast ultrasound image analysis for identifying tissues, organs, and lesions, we propose a novel approach for improved tumor classification through a dual-input model and a global average pooling (GAP)-guided attention loss function. Our approach combines a convolutional neural network with a transformer architecture and modifies a single-input model for dual input. A fusion module and the GAP-guided attention loss function jointly supervise the extraction of effective features from the target region and mitigate the effect of information loss or redundancy on misclassification. The proposed method has three key features: (i) ResNet and MobileViT are combined to enhance local and global information extraction, and a dual-input channel is designed to include both attention images and original breast ultrasound images, mitigating the impact of noise and artifacts in ultrasound images. (ii) A fusion module and GAP-guided attention loss function are proposed to improve the fusion of dual-channel feature information and to supervise and constrain the weight of the attention mechanism on the fused focus region. (iii) A collected uterine fibroid ultrasound dataset is used to pre-train ResNet18, whose weights are then loaded; experiments on the BUSI and BUSC public datasets demonstrate that the proposed method outperforms several state-of-the-art methods. The code will be publicly released at https://github.com/425877/Improved-Breast-Ultrasound-Tumor-Classification.
Affiliation(s)
- Xiao Zou: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Jintao Zhai: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Shengyou Qian: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Ang Li: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Feng Tian: School of Physics and Electronics, Hunan Normal University, Changsha 410081, China
- Xiaofei Cao: College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
- Runmin Wang: College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
6
Sasikala S, Arun Kumar S, Ezhilarasi M. Improved breast cancer detection using fusion of bimodal sonographic features through binary firefly algorithm. Imaging Sci J 2023. DOI: 10.1080/13682199.2023.2164944.
Affiliation(s)
- S. Sasikala: Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- S. Arun Kumar: Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- M. Ezhilarasi: Department of Electronics & Instrumentation Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
7
Yang K, Suzuki A, Ye J, Nosato H, Izumori A, Sakanashi H. CTG-Net: Cross-task guided network for breast ultrasound diagnosis. PLoS One 2022; 17:e0271106. PMID: 35951606; PMCID: PMC9371312; DOI: 10.1371/journal.pone.0271106.
Abstract
Deep learning techniques have achieved remarkable success in lesion segmentation and in classification between benign and malignant tumors in breast ultrasound images. However, existing studies predominantly focus on devising efficient neural network structures that tackle each task individually. By contrast, in clinical practice, sonographers perform segmentation and classification as a whole: they investigate the border contours of the tissue while detecting abnormal masses and performing diagnostic analysis. Performing multiple cognitive tasks simultaneously in this manner facilitates exploitation of the commonalities and differences between tasks. Inspired by this unified recognition process, this study proposes a novel learning scheme, the cross-task guided network (CTG-Net), for efficient ultrasound breast image understanding. CTG-Net integrates the two most significant tasks in computerized breast lesion analysis: lesion segmentation and tumor classification. It enables the learning of efficient feature representations across tasks, along with the task-specific discriminative features that facilitate lesion detection. This is achieved using task-specific attention models to share prediction results between tasks. Following the guidance of task-specific attention soft masks, the joint feature responses are calibrated through iterative model training. Finally, a simple feature fusion scheme aggregates the attention-guided features for ultrasound pattern analysis. We performed extensive experimental comparisons on multiple ultrasound datasets. Compared to state-of-the-art multi-task learning approaches, the proposed approach improves the Dice coefficient, true-positive rate of segmentation, AUC, and sensitivity of classification by 11%, 17%, 2%, and 6%, respectively.
The results demonstrate that the proposed cross-task guided feature learning framework effectively fuses the complementary information of the ultrasound image segmentation and classification tasks to achieve accurate tumor localization, and can thus aid sonographers in detecting and diagnosing breast cancer.
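The cross-task guidance can be sketched as soft-mask calibration: the segmentation branch's predicted soft mask re-weights the classification branch's feature map (and vice versa). The residual form below is a common choice and an assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feature map from the classification branch: (channels, height, width).
features = rng.normal(size=(8, 16, 16))

# Soft attention mask predicted by the segmentation branch, values in (0, 1).
mask_logits = rng.normal(size=(16, 16))
soft_mask = sigmoid(mask_logits)

# Residual calibration: keep the original response, amplify masked regions.
calibrated = features * (1.0 + soft_mask[None, :, :])

print(calibrated.shape)  # (8, 16, 16)
```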
Affiliation(s)
- Kaiwen Yang: Graduate School of Science and Technology, University of Tsukuba, Tsukuba, Japan; National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Aiga Suzuki: National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Jiaxing Ye: National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Hirokazu Nosato: National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
- Hidenori Sakanashi: Graduate School of Science and Technology, University of Tsukuba, Tsukuba, Japan; National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
8
Belhaj Soulami K, Kaabouch N, Nabil Saidi M. Breast cancer: Classification of suspicious regions in digital mammograms based on capsule network. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103696.
9
Non-Zero Crossing Point Detection in a Distorted Sinusoidal Signal Using Logistic Regression Model. Computers 2022. DOI: 10.3390/computers11060094.
Abstract
Non-zero crossing point detection in a sinusoidal signal is essential in various power system and power electronics applications, such as power system protection and power converter controller design. In this paper, 96 datasets are created from a distorted sinusoidal signal via MATLAB simulation; the distorted signals are generated with various noise and harmonic levels. A logistic regression model is used to predict the non-zero crossing point in a distorted signal from input features such as slope, intercept, correlation, and RMSE. The model is trained and tested in the Google Colab environment. Simulation results show that the logistic regression model is able to predict all non-zero crossing points in a distorted signal.
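A logistic regression classifier over features like slope and intercept can be sketched with plain batch gradient descent. The synthetic data and labeling rule below are stand-ins; the paper's 96 MATLAB-generated datasets are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in features (e.g. slope, intercept) and binary labels
# indicating whether a window contains a non-zero crossing point.
X = rng.normal(size=(96, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # separable toy rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fitted by batch gradient descent on the log-loss.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == (y == 1)).mean()
print(accuracy)  # high on this linearly separable toy data
```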
10
A gated convolutional neural network for classification of breast lesions in ultrasound images. Soft Comput 2022. DOI: 10.1007/s00500-022-07024-9.
11
Ragab M, Albukhari A, Alyami J, Mansour RF. Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification on Ultrasound Images. Biology 2022; 11:439. PMID: 35336813; PMCID: PMC8945718; DOI: 10.3390/biology11030439.
Abstract
Simple Summary: The literature contains many works on the detection and classification of breast cancer, but only a few focus on classification from ultrasound scan images. Although deep transfer learning models perform well in breast cancer classification, image pre-processing and segmentation techniques remain essential. In this context, the current study developed a new Ensemble Deep-Learning-Enabled Clinical Decision Support System for the diagnosis and classification of breast cancer using ultrasound images. An optimal multi-level thresholding-based image segmentation technique was designed to identify tumor-affected regions, together with an ensemble of three deep learning models for feature extraction and an optimal machine learning classifier for breast cancer detection. The study offers a means of assisting radiologists and healthcare professionals in the breast cancer classification process.

Abstract: Clinical Decision Support Systems (CDSS) provide an efficient way to diagnose diseases such as breast cancer using ultrasound images (USIs). Globally, breast cancer is one of the major causes of increased mortality among women. Computer-Aided Diagnosis (CAD) models are widely employed in the detection and classification of tumors in USIs; they are designed to provide recommendations that help radiologists diagnose breast tumors and assess disease prognosis. The accuracy of the classification process depends on image quality and the radiologist's experience. Deep Learning (DL) models have been found effective for breast cancer classification.
In the current study, an Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification (EDLCDS-BCDC) technique was developed to identify the existence of breast cancer from USIs. In this technique, USIs first undergo two pre-processing stages, Wiener filtering and contrast enhancement. The Chaotic Krill Herd Algorithm (CKHA) is then applied with Kapur's entropy (KE) for image segmentation. Next, an ensemble of three deep learning models, VGG-16, VGG-19, and SqueezeNet, is used for feature extraction. Finally, Cat Swarm Optimization (CSO) with a Multilayer Perceptron (MLP) classifies the images according to whether breast cancer is present. A wide range of simulations on benchmark databases highlights the better outcomes of the proposed EDLCDS-BCDC technique over recent methods.
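Kapur's entropy, the segmentation objective that CKHA optimizes here, scores a threshold by the summed entropies of the background and foreground histogram classes. A single-threshold sketch on toy data, with exhaustive search standing in for the chaotic krill herd optimizer:

```python
import numpy as np

def kapur_entropy(hist, t):
    """Sum of background/foreground entropies for threshold t (Kapur et al.)."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return -np.inf
    p0, p1 = p[:t] / w0, p[t:] / w1
    h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
    h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
    return h0 + h1

rng = np.random.default_rng(4)
# Toy bimodal "image": dark background plus a brighter lesion-like region.
pixels = np.concatenate([
    rng.normal(60, 10, 4000),
    rng.normal(180, 15, 1000),
]).clip(0, 255).astype(np.uint8)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))

# Exhaustive search over single thresholds; a metaheuristic such as CKHA
# replaces this loop when several thresholds are optimized jointly.
best_t = max(range(1, 256), key=lambda t: kapur_entropy(hist, t))
print(best_t)  # expected to fall between the two intensity modes
```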
Affiliation(s)
- Mahmoud Ragab: Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Mathematics Department, Faculty of Science, Al-Azhar University, Cairo 11884, Egypt
- Ashwag Albukhari: Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Biochemistry Department, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Jaber Alyami: Diagnostic Radiology Department, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Romany F. Mansour: Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt
12
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. PMID: 35036531; PMCID: PMC8725669; DOI: 10.7717/peerj-cs.805.
Abstract
Breast cancer is one of the leading causes of death in women worldwide, and its rapid increase has motivated more accessible diagnostic resources. The ultrasonic breast cancer modality is relatively cost-effective and valuable for diagnosis. Lesion isolation in ultrasonic images is challenging because of noise and the intensity similarity between lesions and surrounding tissue; accurate detection of breast lesions in ultrasonic images can reduce death rates. In this research, a quantization-assisted U-Net approach for segmentation of breast lesions is proposed. It comprises two steps: (1) U-Net segmentation and (2) quantization, where quantization assists the U-Net-based segmentation in isolating the exact lesion areas from sonography images. Independent Component Analysis (ICA) is then applied to the isolated lesions to extract features, which are fused with deep automatic features. Public ultrasonic-modality datasets, the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD), are used for evaluation and comparison. The same features were extracted from the OASBUD data, but classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
Affiliation(s)
- Talha Meraj: Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
- Wael Alosaimi: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Bader Alouffi: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Hafiz Tayyab Rauf: Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
- Swarn Avinash Kumar: Department of Information Technology, Indian Institute of Information Technology, Uttar Pradesh, Jhalwa, Prayagraj, India
- Hashem Alyami: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia