1
Azam S, Montaha S, Raiaan MAK, Rafid AKMRH, Mukta SH, Jonkman M. An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images. Journal of Imaging Informatics in Medicine 2024; 37:45-59. [PMID: 38343240 DOI: 10.1007/s10278-023-00925-7]
Abstract
An automated computer-aided approach might aid radiologists in diagnosing breast cancer at an early stage. This study proposes a novel decision support system to classify breast tumors as benign or malignant based on clinically important features extracted from ultrasound images. Nine handcrafted features, which align with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of ultrasound images. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, yielding test accuracies in the range of 96-99%. In addition, four feature selection techniques are explored, each eliminating two features according to its feature ranking scores. The Random Forest classifier is trained with the resulting four feature sets. Results indicate that even when only two features are eliminated, the performance of the model is reduced for each feature selection technique. These experiments validate the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature in order to find a threshold range that distinguishes benign from malignant tumors. Based on the threshold ranges of the individual features, the decision support system is designed so that if at least eight out of nine features fall within the threshold range, the image is marked as correctly predicted. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, meaning that the decision support system outperforms all the previously trained ML models. Moreover, after calculating class-wise test accuracies, a test accuracy of 99.31% is attained for the benign class, with only three benign instances misclassified out of 437, and 99.52% for the malignant class, with only one malignant instance misclassified out of 210. This system is robust, time-effective, and reliable, as radiologists' criteria are followed, and may aid specialists in making a diagnosis.
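For readers who want a concrete picture of the 8-of-9 decision rule described above, the following minimal Python sketch shows one way such a threshold-voting classifier could look. The feature names and threshold ranges are hypothetical placeholders, not the values derived from the paper's PDF graphs, and the code is not the authors' implementation.

```python
# Hypothetical threshold ranges for nine handcrafted features (illustrative
# values only -- the paper derives its ranges from per-feature PDF graphs).
MALIGNANT_RANGES = {
    "irregularity":             (0.45, 1.00),
    "aspect_ratio":             (1.00, 5.00),
    "margin_indistinctness":    (0.50, 1.00),
    "hypoechogenicity":         (0.60, 1.00),
    "posterior_shadowing":      (0.40, 1.00),
    "microlobulation":          (0.30, 1.00),
    "angularity":               (0.35, 1.00),
    "heterogeneity":            (0.50, 1.00),
    "non_parallel_orientation": (0.50, 1.00),
}

def classify_mass(features: dict, min_votes: int = 8) -> str:
    """Count how many features fall inside their malignant range; require at
    least `min_votes` of the nine votes, mirroring the paper's 8-of-9 rule."""
    votes = sum(lo <= features[name] <= hi
                for name, (lo, hi) in MALIGNANT_RANGES.items())
    return "malignant" if votes >= min_votes else "benign"

# Example with made-up feature values for one lesion ROI.
lesion = {name: 0.7 for name in MALIGNANT_RANGES}
lesion["aspect_ratio"] = 1.8
print(classify_mass(lesion))  # -> "malignant"
```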
Affiliation(s)
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia.
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
2
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). Journal of Healthcare Informatics Research 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This is a narrative review of studies covering five different image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, which distinguishes it from other reviews that cover fewer modalities. The goal is to have the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, readily available in one place for future studies. Each modality has pros and cons; for example, mammograms may give a high false-positive rate for radiographically dense breasts, ultrasound's low soft-tissue contrast can lead to false detections at an early stage, and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity was 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracy, around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared with other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multi-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
3
Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023; 10:867-895. [PMID: 38187901 PMCID: PMC10764974 DOI: 10.3934/publichealth.2023057]
Abstract
Breast cancer remains a significant public health issue, being a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize the recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality and data mining are the three major research hotspots in breast cancer management. Moreover, this paper discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
4
Dalal S, Onyema EM, Kumar P, Maryann DC, Roselyn AO, Obichili MI. A hybrid machine learning model for timely prediction of breast cancer. International Journal of Modeling, Simulation, and Scientific Computing 2023; 14. [DOI: 10.1142/s1793962323410234]
Abstract
Breast cancer is one of the leading causes of untimely death among women in various countries across the world. This can be attributed to many factors, including late detection, which often increases its severity. Detecting the disease early would therefore help mitigate its mortality rate and the other risks associated with it. This study developed a hybrid machine learning model for the timely prediction of breast cancer to help combat the disease. A dataset from Kaggle was adopted to predict breast tumor growth and size using random tree classification, logistic regression, an XGBoost tree, and a multilayer perceptron. The implementation of these machine learning algorithms and the visualization of the results were done using Python. The model achieved a high accuracy (99.65%) on the training and testing datasets, which is far better than traditional means. The predictive model has good potential to enhance the early detection and diagnosis of breast cancer and to improve treatment outcomes. It could also assist patients in dealing with their condition or life patterns in a timely way to support their recovery or survival.
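The abstract names four classifier families (random tree, logistic regression, XGBoost, multilayer perceptron) trained on a tabular Kaggle dataset. A hedged scikit-learn sketch of that kind of side-by-side comparison is shown below; the built-in Wisconsin breast-cancer data and GradientBoostingClassifier stand in for the Kaggle dataset and XGBoost, so the numbers it prints are not those reported in the study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in tabular data; the study used a Kaggle breast-cancer dataset.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "gradient boosting": GradientBoostingClassifier(random_state=0),  # stand-in for XGBoost
    "mlp": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)),
}

# Train each classifier and report its held-out accuracy.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(X_te)):.4f}")
```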
Affiliation(s)
- Surjeet Dalal
- College of Computing Science and IT, Teerthanker Mahaveer University, Moradabad, UP, India
- Edeh Michael Onyema
- Department of Mathematics and Computer Science, Coal City University, Enugu, Nigeria
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Pawan Kumar
- College of Computing Science and IT, Teerthanker Mahaveer University, Moradabad, UP, India
- Mercy Ifeyinwa Obichili
- Department of Mass Communication, Alex Ekwueme Federal University, Ndufu-Alike Ikwo, Ebonyi State, Nigeria
5
Pati A, Parhi M, Pattanayak BK, Singh D, Singh V, Kadry S, Nam Y, Kang BG. Breast Cancer Diagnosis Based on IoT and Deep Transfer Learning Enabled by Fog Computing. Diagnostics (Basel) 2023; 13:2191. [PMID: 37443585 DOI: 10.3390/diagnostics13132191]
Abstract
Across all countries, both developing and developed, women face the greatest risk of breast cancer. Patients whose breast cancer is diagnosed and staged early have a better chance of receiving treatment before the disease spreads. The automatic analysis and classification of medical images are made possible by today's technology, allowing for quicker and more accurate data processing. The Internet of Things (IoT) is now crucial for the early and remote diagnosis of chronic diseases. In this study, mammography images from the publicly available online repository The Cancer Imaging Archive (TCIA) were used to train a deep transfer learning (DTL) model for an autonomous breast cancer diagnostic system. The data were pre-processed before being fed into the model. A popular deep learning (DL) technique, convolutional neural networks (CNNs), was combined with transfer learning (TL) architectures such as ResNet50, InceptionV3, AlexNet, VGG16, and VGG19, along with a support vector machine (SVM) classifier, to boost prediction accuracy. Extensive simulations were analyzed by employing a variety of performance and network metrics to demonstrate the viability of the proposed paradigm. Outperforming some current works based on mammogram images, the experimental accuracy, precision, sensitivity, specificity, and F1-scores reached 97.99%, 99.51%, 98.43%, 80.08%, and 98.97%, respectively, on the large dataset of mammography images categorized as benign and malignant. By incorporating fog computing technologies, this model safeguards the privacy and security of patient data, reduces the load on centralized servers, and increases the output.
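The general pattern described here, a pre-trained CNN backbone used as a feature extractor in front of an SVM, can be sketched as follows. The ResNet50 weights, input size, and preprocessing are illustrative assumptions and do not reproduce the paper's exact configuration; `train_images` and `train_labels` are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# Pre-trained ResNet50 with its classification head removed, used as a fixed
# feature extractor (ResNet50 is one of the backbones named in the paper).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()  # 2048-d features from the global pooling layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Turn a list of PIL images into a (N, 2048) feature matrix."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Hypothetical usage: train_images / train_labels would come from the
# pre-processed mammograms; the SVM then classifies benign vs. malignant.
# svm = SVC(kernel="rbf", probability=True).fit(extract_features(train_images), train_labels)
```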
Affiliation(s)
- Abhilash Pati
- Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Manoranjan Parhi
- Centre for Data Sciences, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Binod Kumar Pattanayak
- Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Debabrata Singh
- Department of Computer Applications, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Vijendra Singh
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- Yunyoung Nam
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Byeong-Gwon Kang
- Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
6
Ma Q, Shen C, Gao Y, Duan Y, Li W, Lu G, Qin X, Zhang C, Wang J. Radiomics Analysis of Breast Lesions in Combination with Coronal Plane of ABVS and Strain Elastography. Breast Cancer (Dove Medical Press) 2023; 15:381-390. [PMID: 37260586 PMCID: PMC10228588 DOI: 10.2147/bctt.s410356]
Abstract
Background: Breast cancer is the most common tumor globally. The Automated Breast Volume Scanner (ABVS) and strain elastography (SE) can provide additional useful information about the breast, and the use of radiomics combined with ABVS and SE images to predict breast cancer has become a new focus. This study therefore developed and validated a radiomics analysis of breast lesions combining the coronal plane of ABVS and SE to improve the differential diagnosis of benign and malignant breast diseases. Patients and Methods: A total of 620 pathologically confirmed breast lesions from January 2017 to August 2021 were retrospectively analyzed and randomly divided into a training set (n=434) and a validation set (n=186). Radiomic features of the lesions were extracted from ABVS, B-mode ultrasound, and SE images, and then filtered by a Gradient Boosted Decision Tree (GBDT) and multiple logistic regression. The ABVS model is based on coronal-plane features of the breast, the B+SE model is based on features of B-mode ultrasound and SE, and the multimodal model is based on the features of all three examinations. The predictive performance of the three models was evaluated using receiver operating characteristic (ROC) and decision curve analysis (DCA). Results: The area under the curve, accuracy, specificity, and sensitivity of the multimodal model were 0.975 (95% CI: 0.959-0.991), 93.78%, 92.02%, and 96.49% in the training set, and 0.946 (95% CI: 0.913-0.978), 87.63%, 83.93%, and 93.24% in the validation set, respectively. The multimodal model outperformed the ABVS model and the B+SE model in both the training set (P < 0.001 and P = 0.002, respectively) and the validation set (P < 0.001 and P = 0.034, respectively). Conclusion: Radiomics from the coronal plane of a breast lesion provide valuable information for identification. A multimodal model combining radiomics from ABVS, B-mode ultrasound, and SE could improve the diagnostic efficacy for breast masses.
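The two-stage modelling step described above (GBDT feature ranking followed by multiple logistic regression) follows a common radiomics pattern. A hedged scikit-learn sketch is given below; the feature matrices, label vectors, and the number of retained features are placeholders, not the study's actual pipeline settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def gbdt_then_logistic(X_train, y_train, X_val, y_val, n_keep=20):
    """Rank radiomic features with a GBDT, keep the top `n_keep`, then fit a
    logistic-regression signature and report validation AUC (sketch only)."""
    gbdt = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    top = np.argsort(gbdt.feature_importances_)[::-1][:n_keep]   # highest-importance features
    logit = LogisticRegression(max_iter=5000).fit(X_train[:, top], y_train)
    auc = roc_auc_score(y_val, logit.predict_proba(X_val[:, top])[:, 1])
    return top, logit, auc
```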
Affiliation(s)
- Qianqing Ma
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, People’s Republic of China
- Chunyun Shen
- Department of Ultrasound, Wuhu No. 2 People’s Hospital, Wuhu, People’s Republic of China
- Yankun Gao
- Department of Radiology, the First Affiliated Hospital of Anhui Medical University, Hefei, People’s Republic of China
- Yayang Duan
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, People’s Republic of China
- Wanyan Li
- Department of Ultrasound, Linquan County People’s Hospital, Fuyang, People’s Republic of China
- Gensheng Lu
- Department of Pathology, Wuhu No. 2 People’s Hospital, Wuhu, People’s Republic of China
- Xiachuan Qin
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, People’s Republic of China
- Chaoxue Zhang
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, People’s Republic of China
- Junli Wang
- Department of Ultrasound, Wuhu No. 2 People’s Hospital, Wuhu, People’s Republic of China
7
Alsubai S, Alqahtani A, Sha M. Genetic hyperparameter optimization with Modified Scalable-Neighbourhood Component Analysis for breast cancer prognostication. Neural Netw 2023; 162:240-257. [PMID: 36913821 DOI: 10.1016/j.neunet.2023.02.035]
Abstract
Breast cancer is common among women and leads to mortality when left untreated. Early detection is vital so that suitable treatment can keep the cancer from spreading further and save lives. The traditional way of detection is a time-consuming process. With the development of data mining (DM), the healthcare industry can benefit from disease prediction, as it permits physicians to determine the significant attributes for diagnosis. Although conventional techniques have used DM-based methods to identify breast cancer, they have fallen short in terms of prediction rate. Moreover, parametric Softmax classifiers have been a common choice in conventional works with fixed classes, particularly when a large amount of labelled data is present during training. Nevertheless, this becomes an issue for open-set cases, where new classes are encountered with only a few instances from which to learn a generalized parametric classifier. Thus, the present study aims to implement a non-parametric strategy by optimizing a feature embedding rather than parametric classifiers. This research utilizes a Deep Convolutional Neural Network (Deep CNN) and Inception V3 to learn visual features that preserve the neighbourhood outline in semantic space, relying on the Neighbourhood Component Analysis (NCA) criterion. Constrained by its bottleneck, the study proposes Modified Scalable-Neighbourhood Component Analysis (MS-NCA), which relies on a non-linear objective function to perform feature fusion by optimizing the distance-learning objective, thereby gaining the ability to compute inner feature products without performing the mapping, which increases the scalability of MS-NCA. Finally, Genetic-Hyper-parameter Optimization (G-HPO) is proposed, in which an additional stage in the algorithm extends the chromosome length to encode several hyperparameters of the subsequent XGBoost (eXtreme Gradient Boosting), NB (Naïve Bayes), and RF (Random Forest) models used to identify normal and affected cases of breast cancer, and for which optimized hyperparameter values are determined. This process helps improve the classification rate, which is confirmed through the analytical results.
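As an illustration of the NCA criterion that MS-NCA builds on, the sketch below applies scikit-learn's standard NeighborhoodComponentsAnalysis to pre-extracted deep features and classifies with k-NN. It does not implement the authors' modified non-linear objective or the genetic hyperparameter stage, and the embedding size, neighbour count, and feature variables are assumptions.

```python
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Plain NCA (the criterion MS-NCA extends): learn a linear embedding that
# favours correct nearest-neighbour classification, then classify with k-NN.
# `deep_features_*` would be CNN / Inception-V3 activations; they are
# hypothetical placeholders here.
nca_knn = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
    KNeighborsClassifier(n_neighbors=5),
)
# nca_knn.fit(deep_features_train, labels_train)
# predictions = nca_knn.predict(deep_features_test)
```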
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia.
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia.
- Mohemmed Sha
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia.
8
Xie L, Liu Z, Pei C, Liu X, Cui YY, He NA, Hu L. Convolutional neural network based on automatic segmentation of peritumoral shear-wave elastography images for predicting breast cancer. Front Oncol 2023; 13:1099650. [PMID: 36865812 PMCID: PMC9970986 DOI: 10.3389/fonc.2023.1099650]
Abstract
Objective: Our aim was to develop dual-modal CNN models that combine conventional ultrasound (US) images and shear-wave elastography (SWE) of the peritumoral region to improve the prediction of breast cancer. Methods: We retrospectively collected US images and SWE data of 1271 ACR BI-RADS 4 breast lesions from 1116 female patients (mean age ± standard deviation, 45.40 ± 9.65 years). The lesions were divided into three subgroups based on the maximum diameter (MD): ≤15 mm; >15 mm and ≤25 mm; and >25 mm. We recorded the lesion stiffness (SWV1) and the 5-point average stiffness of the peritumoral tissue (SWV5). The CNN models were built based on segmentations of different widths of peritumoral tissue (0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm) and the internal SWE image of the lesions. All single-parameter CNN models, dual-modal CNN models, and quantitative SWE parameters in the training cohort (971 lesions) and the validation cohort (300 lesions) were assessed by receiver operating characteristic (ROC) curve analysis. Results: The US + 1.0 mm SWE model achieved the highest area under the ROC curve (AUC) in the subgroup of lesions with MD ≤15 mm in both the training (0.94) and the validation cohorts (0.91). In the subgroups with MD between 15 mm and 25 mm and above 25 mm, the US + 2.0 mm SWE model achieved the highest AUCs in both the training cohort (0.96 and 0.95, respectively) and the validation cohort (0.93 and 0.91, respectively). Conclusion: Dual-modal CNN models based on the combination of US and peritumoral-region SWE images allow accurate prediction of breast cancer.
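A dual-modal CNN of the kind described, one branch for the B-mode US image and one for the peritumoral SWE crop, fused before the benign/malignant head, could be sketched in PyTorch as below. The ResNet-18 backbones, input size, and fusion by concatenation are assumptions chosen for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualModalNet(nn.Module):
    """Two-branch sketch: one backbone per modality, fused by concatenation."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.us_branch = models.resnet18(weights=None)   # B-mode ultrasound crop
        self.swe_branch = models.resnet18(weights=None)  # peritumoral SWE crop
        self.us_branch.fc = nn.Identity()                # 512-d features each
        self.swe_branch.fc = nn.Identity()
        self.head = nn.Sequential(nn.Linear(512 * 2, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, us_img, swe_img):
        fused = torch.cat([self.us_branch(us_img), self.swe_branch(swe_img)], dim=1)
        return self.head(fused)

# Smoke test with random tensors standing in for 224x224 RGB crops.
model = DualModalNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```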
Affiliation(s)
- Li Xie
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Zhen Liu
- Department of Computing, Hebin Intelligent Robots Co., LTD., Hefei, China
- Chong Pei
- Department of Respiratory and Critical Care Medicine, The First People’s Hospital of Hefei City, The Third Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiao Liu
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Ya-yun Cui
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Nian-an He
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Lei Hu
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
9
Sasikala S, Arun Kumar S, Ezhilarasi M. Improved breast cancer detection using fusion of bimodal sonographic features through binary firefly algorithm. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2164944]
Affiliation(s)
- S. Sasikala
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- S. Arun Kumar
- Department of Electronics & Communication Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
- M. Ezhilarasi
- Department of Electronics & Instrumentation Engineering, Kumaraguru College of Technology, Coimbatore, Tamil Nadu, India
10
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022; 8:e1054. [PMID: 36092017 PMCID: PMC9454783 DOI: 10.7717/peerj-cs.1054]
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Breast cancer survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is difficult, time-consuming, routine, and repetitive, and medical image analysis could be a useful method for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks, among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined to apply the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments recorded performance parameters including accuracy, AUC, precision, F1-score, sensitivity, dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.
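The hyperparameter-optimization loop described above can be pictured with the sketch below, which uses a plain random search as a stand-in where MRFO's chain, cyclone, and somersault foraging updates would actually propose candidates. The search space and the dummy fitness function are hypothetical; the real fitness would be the validation accuracy of the fine-tuned CNN.

```python
import random

def sample_candidate():
    """Draw a hypothetical hyperparameter vector for fine-tuning a pre-trained CNN."""
    return {
        "learning_rate": 10 ** random.uniform(-5, -2),
        "dropout": random.uniform(0.1, 0.6),
        "batch_size": random.choice([16, 32, 64]),
    }

def fitness(params):
    """Stand-in for the expensive step: fine-tune the chosen CNN with `params`
    and return its validation accuracy. A random number keeps the sketch runnable."""
    return random.random()

def optimize(n_iterations=30):
    """Plain random search, used only to mark where a metaheuristic such as
    MRFO would generate and refine candidate hyperparameter vectors instead."""
    best, best_fit = None, float("-inf")
    for _ in range(n_iterations):
        candidate = sample_candidate()
        score = fitness(candidate)
        if score > best_fit:
            best, best_fit = candidate, score
    return best, best_fit

print(optimize())
```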
Affiliation(s)
- Nadiah A. Baghdadi
- College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
- Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
11
Abstract
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging applications started several years ago but the scientific interest in this issue has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields, medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology (e.g., detection, segmentation, and/or classification) adopted, while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed.
12
Hassanien MA, Singh VK, Puig D, Abdel-Nasser M. Predicting Breast Tumor Malignancy Using Deep ConvNeXt Radiomics and Quality-Based Score Pooling in Ultrasound Sequences. Diagnostics (Basel) 2022; 12:1053. [PMID: 35626208 PMCID: PMC9139635 DOI: 10.3390/diagnostics12051053]
Abstract
Breast cancer needs to be detected early to reduce the mortality rate. Ultrasound imaging (US) can significantly enhance the diagnosis of cases with dense breasts. Most existing computer-aided diagnosis (CAD) systems employ a single ultrasound image of the breast tumor to extract features and classify it as benign or malignant. However, the accuracy of such CAD systems is limited by the large variation in tumor size and shape, irregular and ambiguous tumor boundaries, the low signal-to-noise ratio of inherently noisy ultrasound images, and the significant similarity between normal and abnormal tissues. To handle these issues, this paper proposes a deep-learning-based radiomics method built on breast US sequences. The proposed approach involves three main components: radiomic feature extraction based on a deep learning network, the so-called ConvNeXt; a malignancy score pooling mechanism; and visual interpretations. Specifically, we employ the ConvNeXt network, a deep convolutional neural network (CNN) trained in the vision-transformer style. We also propose an efficient pooling mechanism that fuses the malignancy scores of each breast US sequence frame based on image-quality statistics. The ablation study and experimental results demonstrate that our method achieves competitive results compared to other CNN-based methods.
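The quality-based score pooling idea, fusing per-frame malignancy scores weighted by image-quality statistics, can be illustrated with the short sketch below. The weighting rule here is a simple normalized weighted average chosen for illustration; the paper defines its own quality statistics and pooling mechanism.

```python
import numpy as np

def quality_weighted_score(frame_scores, frame_qualities):
    """Pool per-frame malignancy scores into one sequence-level score, weighting
    each frame by an image-quality statistic (illustrative pooling rule only)."""
    scores = np.asarray(frame_scores, dtype=float)
    weights = np.asarray(frame_qualities, dtype=float)
    weights = weights / weights.sum()          # normalize so the weights sum to 1
    return float(np.dot(weights, scores))

# Example: three frames, where sharper / higher-quality frames dominate the pooled score.
print(quality_weighted_score([0.92, 0.40, 0.85], frame_qualities=[0.9, 0.2, 0.7]))
```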
Affiliation(s)
- Mohamed A. Hassanien
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, 43007 Tarragona, Spain
- Vivek Kumar Singh
- Precision Medicine Centre of Excellence, School of Medicine, Dentistry and Biomedical Sciences, Queen’s University Belfast, Belfast BT7 1NN, UK
- Domenec Puig
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, 43007 Tarragona, Spain
- Mohamed Abdel-Nasser
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, 43007 Tarragona, Spain
- Electrical Engineering Department, Aswan University, Aswan 81528, Egypt
13
Ukwuoma CC, Urama GC, Qin Z, Bin Heyat MB, Mohammed Khan H, Akhtar F, Masadeh MS, Ibegbulam CS, Delali FL, AlShorman O. Boosting Breast Cancer Classification from Microscopic Images Using Attention Mechanism. 2022 International Conference on Decision Aid Sciences and Applications (DASA) 2022. [DOI: 10.1109/dasa54658.2022.9765013]
Affiliation(s)
- Chiagoziem C. Ukwuoma
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Gilbert C. Urama
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Zhiguang Qin
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Md Belal Bin Heyat
- Sichuan University, West China Hospital, Department of Orthopedics Surgery, Chengdu, Sichuan, China
- Haider Mohammed Khan
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Faijan Akhtar
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Mahmoud S. Masadeh
- Yarmouk University, Hijjawi Faculty for Engineering, Computer Engineering Department, Irbid, Jordan
- Chukwuemeka S. Ibegbulam
- Federal University of Technology, Department of Polymer and Textile Engineering, Owerri, Imo State, Nigeria
- Fiasam Linda Delali
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Omar AlShorman
- Najran University, Faculty of Engineering and AlShrouk Traiding Company, Najran, KSA
14
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors 2022; 22:807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
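Step (v), the serial fusion of the two selected feature sets before classification, is illustrated below with a plain concatenation-style fusion; the probability-based weighting used in the paper is not reproduced, and the array shapes, variable names, and final classifier are toy assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def serial_fuse(features_a, features_b):
    """Serial fusion: place the two selected feature sets side by side so the
    classifier sees one longer vector per image (a simplification -- the paper
    weights this fusion with class probabilities)."""
    return np.concatenate([features_a, features_b], axis=1)

# Toy example: 4 images, 100 features kept by one selector and 120 by the other.
feats_a = np.random.rand(4, 100)
feats_b = np.random.rand(4, 120)
print(serial_fuse(feats_a, feats_b).shape)  # (4, 220)

# Hypothetical usage: the fused matrix would then feed a standard ML classifier.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_fused, y_train)
```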
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania