1
Selvakanmani S, Dharani Devi G, Rekha V, Jeyalakshmi J. Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach. J Imaging Inform Med 2024; 37:1488-1504. [PMID: 38424280] [PMCID: PMC11300768] [DOI: 10.1007/s10278-024-01035-8]
Abstract
Breast cancer is a deadly disease causing a considerable number of fatalities among women worldwide. To enhance patient outcomes and survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to address the limitations of scarce labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, mammography and MRI images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train a global model while keeping their data decentralized. Our proposed methodology capitalizes on a pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet on breast cancer datasets from diverse medical centers, we enable the model to learn specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets such as ImageNet. To overcome domain shift caused by variations in data distributions across medical centers, we introduce domain adversarial training: the model learns to minimize the domain discrepancy while maximizing classification accuracy, facilitating the acquisition of domain-invariant features. We conducted extensive experiments on diverse breast cancer datasets obtained from multiple medical centers. Comparative analysis was performed to evaluate the proposed approach against traditional standalone training and federated learning without domain adaptation.
When compared with traditional models, our proposed model showed a classification accuracy of 98.8% and a computational time of 12.22 s. The results showcase promising enhancements in classification accuracy and model generalization, underscoring the potential of our method in improving breast cancer classification performance while upholding data privacy in a federated healthcare environment.
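The federated training loop this abstract describes can be illustrated with a minimal FedAvg-style sketch. This is a hypothetical numpy stand-in, not the authors' implementation: a logistic-regression "model" replaces the fine-tuned ResNet, three synthetic clients replace the medical centers, and the domain adversarial component is omitted. The point it shows is that only model weights, never raw images, travel between clients and server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local training: logistic regression via gradient descent.
    Stands in for fine-tuning the higher layers of a pre-trained network."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)          # cross-entropy gradient step
    return w

def fedavg(global_w, clients, rounds=10):
    """Server loop: clients train locally, the server averages the weights.
    Raw data stays on each client; only weight vectors are shared."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        global_w = np.average(local_ws, axis=0, weights=sizes)  # size-weighted mean
    return global_w

rng = np.random.default_rng(0)
# three synthetic "medical centers" sharing one underlying decision rule
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    clients.append((X, y))

w = fedavg(np.zeros(3), clients)
acc = np.mean([(((X @ w) > 0) == y).mean() for X, y in clients])
print(round(acc, 2))
```

In the paper's setting, the averaged object would be the fine-tuned ResNet layer weights, and each local update would additionally include the domain-adversarial loss term.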
Affiliation(s)
- Selvakanmani S
- Department of Information Technology, R.M.K Engineering College, Chennai, Tamil Nadu, India.
- G Dharani Devi
- Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Rekha V
- Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, Tamil Nadu, India
- J Jeyalakshmi
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidhyapeetham, Chennai, India
2
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modification of the Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Results of reported studies (area under the curve [AUC] of the receiver operating characteristic [ROC] curve, sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 fit the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, reaching 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were achieved when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting.
Conclusion While deep learning holds much promise in mammography interpretation, evaluation in a reproducible clinical setting and explainable networks are the need of the hour.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
3
Ru J, Zhu Z, Shi J. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach. BMC Med Imaging 2024; 24:133. [PMID: 38840240] [PMCID: PMC11155188] [DOI: 10.1186/s12880-024-01307-3]
Abstract
BACKGROUND Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Deep learning techniques are now applied as auxiliary tools that provide predictive results to help doctors decide whether to pursue further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at a spatial level and from graphs at a geometric level. The input images are small-sized and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves labor and memory. RESULTS The classification AUROC of our proposed method is 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, with balanced accuracies of 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS Our hybrid approach can learn inter-center features across multi-center data as well as intra-center features of local data. It shows potential in aiding doctors in early breast tumor classification from ultrasound.
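The unsupervised graph-generation step mentioned in the methods can be sketched as a k-nearest-neighbour adjacency over per-patch feature vectors, followed by one mean-aggregation GNN layer. This is an assumed, generic recipe in numpy; the paper's exact graph construction and GNN architecture may differ.

```python
import numpy as np

def knn_adjacency(feats, k=3):
    """Build a symmetric k-NN adjacency matrix from per-patch feature vectors.
    No labels are needed, so the graph is generated fully unsupervised."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-loops
    A = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:           # k closest patches
            A[i, j] = A[j, i] = 1.0              # symmetrize
    return A

def gnn_layer(A, H, W):
    """One mean-aggregation message-passing layer: average neighbour
    features, project with W, apply ReLU."""
    deg = A.sum(1, keepdims=True) + 1e-9
    return np.maximum((A @ H) / deg @ W, 0.0)

rng = np.random.default_rng(1)
patches = rng.normal(size=(16, 8))               # 16 patches, 8-dim features
A = knn_adjacency(patches, k=3)
H = gnn_layer(A, patches, rng.normal(size=(8, 4)))
print(H.shape)
```

In the paper's hybrid model, these geometric features would be combined with CNN features extracted from the same image before the final benign/malignant decision.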
Affiliation(s)
- Jintao Ru
- Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China.
- Zili Zhu
- Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China
- Jialin Shi
- Rehabilitation Medicine Institute, Zhejiang Rehabilitation Medical Center, Hangzhou, Zhejiang, People's Republic of China
4
Nalla V, Pouriyeh S, Parizi RM, Trivedi H, Sheng QZ, Hwang I, Seyyed-Kalantari L, Woo M. Deep learning for computer-aided abnormalities classification in digital mammogram: A data-centric perspective. Curr Probl Diagn Radiol 2024; 53:346-352. [PMID: 38302303] [DOI: 10.1067/j.cpradiol.2024.01.007]
Abstract
Breast cancer is the most common type of cancer in women, and early abnormality detection using mammography can significantly improve breast cancer survival rates. Diverse datasets are required to improve the training and validation of deep learning (DL) systems for autonomous breast cancer diagnosis. However, only a small number of mammography datasets are publicly available. This constraint has created challenges when comparing different DL models using the same dataset. The primary contribution of this study is the comprehensive description of a selection of currently available public mammography datasets. The information available on publicly accessible datasets is summarized and their usability reviewed to enable more effective models to be developed for breast cancer detection and to improve understanding of existing models trained using these datasets. This study aims to bridge the existing knowledge gap by offering researchers and practitioners a valuable resource to develop and assess DL models in breast cancer diagnosis.
Affiliation(s)
- Vineela Nalla
- Department of Information Technology, Kennesaw State University, Kennesaw, Georgia, USA
- Seyedamin Pouriyeh
- Department of Information Technology, Kennesaw State University, Kennesaw, Georgia, USA
- Reza M Parizi
- Decentralized Science Lab, Kennesaw State University, Marietta, GA, USA
- Hari Trivedi
- Department of Radiology and Imaging Services, Emory University, Atlanta, Georgia, USA
- Quan Z Sheng
- School of Computing, Macquarie University, Sydney, Australia
- Inchan Hwang
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, Georgia, USA
- Laleh Seyyed-Kalantari
- Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada
- MinJae Woo
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, Georgia, USA.
5
Safdar Ali Khan M, Husen A, Nisar S, Ahmed H, Shah Muhammad S, Aftab S. Offloading the computational complexity of transfer learning with generic features. PeerJ Comput Sci 2024; 10:e1938. [PMID: 38660182] [PMCID: PMC11041970] [DOI: 10.7717/peerj-cs.1938]
Abstract
Deep learning approaches are generally complex, requiring extensive computational resources and exhibiting high time complexity. Transfer learning is a state-of-the-art approach that reduces these requirements by using pre-trained models without compromising accuracy and performance. In conventional studies, pre-trained models are trained on datasets from different but similar domains with many domain-specific features. The computational requirements of transfer learning depend directly on the number of features, which include both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography and evaluated with metrics such as precision, accuracy, recall, F1-score, and computational requirements. Discarding domain-specific features up to a certain limit provides significant performance improvements and minimizes the computational requirements in terms of training time (reduced by approx. 12%), processor utilization (reduced by approx. 25%), and memory usage (reduced by approx. 22%). The proposed transfer learning strategy increases accuracy (by approx. 7%) while offloading computational complexity expeditiously.
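The core idea, discarding domain-specific features from a pre-trained extractor, can be sketched as truncating the feature channels the downstream classifier head consumes, which shrinks every downstream weight matrix. The keep_ratio parameter and the "earlier channels are more generic" heuristic below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def truncate_features(feature_maps, keep_ratio=0.6):
    """Keep only the first fraction of feature channels from a pretrained
    extractor, treating the leading ones as generic and the tail as
    domain-specific (an assumption for illustration)."""
    k = int(feature_maps.shape[1] * keep_ratio)
    return feature_maps[:, :k]

rng = np.random.default_rng(2)
feats = rng.normal(size=(32, 512))               # 32 images, 512 pretrained features
generic = truncate_features(feats, keep_ratio=0.6)

# classifier-head parameter count before vs. after, for a 2-class head
full_params = feats.shape[1] * 2
reduced_params = generic.shape[1] * 2
print(generic.shape, 1 - reduced_params / full_params)
```

The ~40% reduction in head parameters here mirrors, in spirit, the training-time and memory savings the abstract reports, though the paper measures them on real hardware.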
Affiliation(s)
- Muhammad Safdar Ali Khan
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Arif Husen
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Punjab, Pakistan
- Shafaq Nisar
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Hasnain Ahmed
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Syed Shah Muhammad
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Shabib Aftab
- Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
6
Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024; 14:6914. [PMID: 38519513] [PMCID: PMC10959971] [DOI: 10.1038/s41598-024-56820-w]
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis; however, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for CRC diagnosis. Nevertheless, most previous CAD systems obtained features from a single CNN, and these features are of huge dimension; they also relied on spatial information only to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation, which is then used to further select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for NCT-CRC-HE-100K and 96.8% for Kather_texture_2016_image_tiles. DCT and ANOVA have successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, as its performance surpasses that of the most recent advancements.
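The two reduction steps named in this abstract, DCT-based spectral compaction and ANOVA feature selection, can be sketched in numpy as follows. The basis size, the number of retained low-frequency coefficients, and the two-class ANOVA F-score are simplified assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * x + 1) / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def anova_f(X, y):
    """Per-feature one-way ANOVA F-score for a binary label vector y."""
    g0, g1 = X[y == 0], X[y == 1]
    grand = X.mean(0)
    between = (len(g0) * (g0.mean(0) - grand) ** 2
               + len(g1) * (g1.mean(0) - grand) ** 2)        # 1 dof for 2 groups
    within = (g0.var(0, ddof=1) * (len(g0) - 1)
              + g1.var(0, ddof=1) * (len(g1) - 1)) / (len(X) - 2)
    return between / (within + 1e-12)

rng = np.random.default_rng(3)
deep_feats = rng.normal(size=(100, 64))           # stand-in for CNN deep features
y = (rng.random(100) > 0.5).astype(int)
deep_feats[y == 1, 5] += 3.0                      # feature 5 carries the class signal

C = dct2_matrix(64)
spectral = deep_feats @ C.T                        # DCT of each feature vector
reduced = spectral[:, :16]                         # keep low-frequency coefficients
scores = anova_f(deep_feats, y)
top = np.argsort(scores)[::-1][:8]                 # 8 most discriminative features
print(reduced.shape, 5 in top)
```

The selected features (or retained coefficients) would then feed a conventional machine learning classifier, as the abstract describes.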
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt.
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
7
Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. [PMID: 37914431] [DOI: 10.1016/j.ejmp.2023.103138]
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection plays a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for screening breast cancer. Very few comprehensive surveys have been presented that briefly analyze methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML) and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also mentioned. RESULTS After deliberate investigation, we find that most works classify breast tumors either as normal-abnormal or malignant-benign rather than into three classes. Furthermore, DL-based features are more significant than hand-crafted features. However, transfer learning is preferred over other approaches, as it yields better performance on small datasets, unlike classical DL techniques. SIGNIFICANCE AND CONCLUSION In this article, we have attempted to present recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are mentioned, which will help researchers identify further scopes of research in this field.
Affiliation(s)
- Adyasha Sahu
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
- Pradeep Kumar Das
- School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India.
- Sukadev Meher
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
8
Thirumalaisamy S, Thangavilou K, Rajadurai H, Saidani O, Alturki N, Mathivanan SK, Jayagopal P, Gochhait S. Breast Cancer Classification Using Synthesized Deep Learning Model with Metaheuristic Optimization Algorithm. Diagnostics (Basel) 2023; 13:2925. [PMID: 37761292] [PMCID: PMC10528264] [DOI: 10.3390/diagnostics13182925]
Abstract
Breast cancer is the second leading cause of mortality among women. Early and accurate detection plays a crucial role in lowering its mortality rate. Timely detection and classification of breast cancer enable the most effective treatment. Convolutional neural networks (CNNs) have significantly improved the accuracy of tumor detection and classification in medical imaging compared to traditional methods. This study proposes a comprehensive classification technique for identifying breast cancer, utilizing a synthesized CNN, an enhanced optimization algorithm, and transfer learning. The primary goal is to assist radiologists in rapidly identifying anomalies. To overcome inherent limitations, we modified the Ant Colony Optimization (ACO) technique with opposition-based learning (OBL). The Enhanced Ant Colony Optimization (EACO) methodology was then employed to determine the optimal hyperparameter values for the CNN architecture. Our proposed framework combines the Residual Network-101 (ResNet101) CNN architecture with the EACO algorithm, resulting in a new model dubbed EACO-ResNet101. Experimental analysis was conducted on the MIAS and DDSM (CBIS-DDSM) mammographic datasets. Compared to conventional methods, our proposed model achieved an impressive accuracy of 98.63%, sensitivity of 98.76%, and specificity of 98.89% on the CBIS-DDSM dataset. On the MIAS dataset, the proposed model achieved a classification accuracy of 99.15%, a sensitivity of 97.86%, and a specificity of 98.88%. These results demonstrate the superiority of the proposed EACO-ResNet101 over current methodologies.
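The opposition-based learning (OBL) ingredient used to enhance ACO in this abstract can be sketched generically: for every candidate solution x within bounds [lo, hi], its opposite lo + hi - x is also evaluated, and the better of each pair survives. The toy fitness function, bounds, and assumed optimum below are illustrative; in the paper, OBL operates inside EACO to search CNN hyperparameter values.

```python
import numpy as np

def obl_step(population, fitness, lo, hi):
    """Opposition-based learning: for every candidate x, also evaluate its
    opposite lo + hi - x, then keep the better (lower-fitness) of each pair."""
    opposite = lo + hi - population
    f_pop = np.array([fitness(x) for x in population])
    f_opp = np.array([fitness(x) for x in opposite])
    keep = f_opp < f_pop                          # minimization
    return np.where(keep[:, None], opposite, population)

# toy hyperparameter search: minimize squared distance to an assumed optimum
optimum = np.array([0.8, 0.2])
fitness = lambda x: np.sum((x - optimum) ** 2)

rng = np.random.default_rng(4)
lo, hi = np.zeros(2), np.ones(2)
pop = rng.random((10, 2))                         # 10 candidates in [0, 1]^2
improved = obl_step(pop, fitness, lo, hi)
best_before = min(fitness(x) for x in pop)
best_after = min(fitness(x) for x in improved)
print(best_after <= best_before)
```

Because each candidate is replaced only when its opposite scores better, the population's best fitness can never get worse, which is why OBL is a cheap add-on to metaheuristics like ACO.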
Affiliation(s)
- Selvakumar Thirumalaisamy
- Department of Artificial Intelligence & Data Science, Dr. Mahalingam College of Engineering and Technology, Pollachi 642003, India
- Kamaleshwar Thangavilou
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Hariharan Rajadurai
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal–Indore Highway Kothrikalan, Sehore 466114, India
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Prabhu Jayagopal
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore 632014, India
- Saikat Gochhait
- Symbiosis Institute of Digital and Telecom Management, Constituent of Symbiosis International Deemed University, Pune 412115, India
- Neuroscience Research Institute, Samara State Medical University, 443001 Samara, Russia
9
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608]
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Given the rapid development of deep learning methods, the very high accuracy and timeliness that cancer diagnosis requires, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained deep neural network models have the potential to be improved, and special attention should be paid to research on multimodal data fusion and supervised paradigms. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
10
Alhussan AA, Abdelhamid AA, Towfek SK, Ibrahim A, Abualigah L, Khodadadi N, Khafaga DS, Al-Otaibi S, Ahmed AE. Classification of Breast Cancer Using Transfer Learning and Advanced Al-Biruni Earth Radius Optimization. Biomimetics (Basel) 2023; 8:270. [PMID: 37504158] [PMCID: PMC10377265] [DOI: 10.3390/biomimetics8030270]
Abstract
Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022 and 43,250 female deaths attributed to this malignancy. The high death rate associated with this type of cancer can be reduced with early detection. Nonetheless, a skilled professional is always necessary to manually diagnose this malignancy from mammography images. Many researchers have proposed approaches based on artificial intelligence; however, these still face several obstacles, such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequately trained models. In this paper, we developed a novel, computationally automated biological mechanism for categorizing breast cancer. Using a new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) optimization algorithm, the classification of breast cancer cases is boosted. The stages of the proposed framework include data augmentation, feature extraction using AlexNet based on transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved the accuracy compared with recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, and the average classification accuracy is 97.95%. To establish the statistical significance of the differences between the proposed methodology and current methods, additional tests such as analysis of variance (ANOVA) and the Wilcoxon test are conducted, alongside various statistical analysis metrics. The results of these tests emphasized the effectiveness and statistical difference of the proposed methodology compared to current methods.
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abdelaziz A Abdelhamid
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
- S K Towfek
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
- Nima Khodadadi
- Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL 33146, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaha Al-Otaibi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ayman Em Ahmed
- Faculty of Engineering, King Salman International University, El-Tor 8701301, Egypt
11
Mračko A, Vanovčanová L, Cimrák I. Mammography Datasets for Neural Networks-Survey. J Imaging 2023; 9:95. [PMID: 37233314] [DOI: 10.3390/jimaging9050095]
Abstract
Deep neural networks have gained popularity in the field of mammography. Data play an integral role in training these models, as training algorithms require a large amount of data to capture the general relationship between the model's input and output. Open-access databases are the most accessible source of mammography data for training neural networks. Our work focuses on conducting a comprehensive survey of mammography databases that contain images with defined abnormal areas of interest. The survey includes databases such as INbreast, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), the OPTIMAM Medical Image Database (OMI-DB), and the Mammographic Image Analysis Society Digital Mammogram Database (MIAS). Additionally, we survey recent studies that have utilized these databases in conjunction with neural networks and the results they have achieved. From these databases, it is possible to obtain at least 3801 unique images with 4125 described findings from approximately 1842 patients. The number of patients with important findings can be increased to approximately 14,474, depending on the type of agreement with the OPTIMAM team. Furthermore, we provide a description of the annotation process for mammography images to enhance the understanding of the information gained from these datasets.
Affiliation(s)
- Adam Mračko
- Faculty of Management Science and Informatics, University of Žilina, 010 26 Žilina, Slovakia
- Research Centre, University of Žilina, 010 26 Žilina, Slovakia
- Lucia Vanovčanová
- 2nd Radiology Department, Faculty of Medicine, Comenius University in Bratislava, 813 72 Bratislava, Slovakia
- St. Elizabeth Cancer Institute, 812 50 Bratislava, Slovakia
- Ivan Cimrák
- Faculty of Management Science and Informatics, University of Žilina, 010 26 Žilina, Slovakia
- Research Centre, University of Žilina, 010 26 Žilina, Slovakia
12
Razali NF, Isa IS, Sulaiman SN, A. Karim NK, Osman MK. CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
13
Urban spaces as a positive catalyst during pandemics: Assessing the community’s well-being by using artificial intelligence techniques. AIN SHAMS ENGINEERING JOURNAL 2023; 14:102084. [PMCID: PMC9901912 DOI: 10.1016/j.asej.2022.102084] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 11/14/2022] [Accepted: 12/10/2022] [Indexed: 11/03/2023]
Abstract
Amid the COVID-19 pandemic, lifestyles changed completely. This new normality harms human psychology and mental health. Hence, new approaches must be considered when shaping public spaces to accommodate pandemic life. This paper aims to show the importance of exploiting outdoor spaces to protect people's mental health. Accordingly, an online survey was conducted and analyzed with the Statistical Package for the Social Sciences (SPSS) for more precise answers. Afterward, the most important public spaces during the pandemic were extracted, and a second questionnaire was held to validate these items; its responses were run through a machine learning technique to classify and categorize users' preferences across three situations. It was found that 85.17% of the sample declared the importance of outdoor public spaces. However, future research is needed to rethink the design of urban spaces and to relocate activities from indoor public spaces to the outdoors to maintain human mental health.
14
Kong X, Zhou M, Bian K, Lai W, Hu F, Dai R, Yan J. Research on SPDTRS-PNN based intelligent assistant diagnosis for breast cancer. Sci Rep 2023; 13:4386. [PMID: 36928059 PMCID: PMC10020448 DOI: 10.1038/s41598-023-28316-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Accepted: 01/17/2023] [Indexed: 03/18/2023] Open
Abstract
Breast cancer is the second most dangerous cancer in the world. Breast cancer data often contain redundant information, which makes auxiliary diagnosis less accurate and more time-consuming. A dimension-reduction algorithm combined with machine learning can solve these problems well. This paper proposes the single-parameter decision-theoretic rough set (SPDTRS) combined with a probabilistic neural network (PNN) for breast cancer diagnosis. We find that when the SPDTRS parameter value is 2.5 and the SPREAD value is 0.75, the 30 attributes of the original breast cancer data drop to 12, the training-set accuracy of the SPDTRS-PNN model is 99.25%, the test-set accuracy is 97.04%, and the test time is 0.093 s. The experimental results show that the SPDTRS-PNN model can improve the accuracy of breast cancer recognition and reduce the time required for diagnosis.
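A PNN of the kind this entry describes is essentially a Parzen-window classifier; a minimal sketch (using hypothetical toy data, not the paper's 30-attribute dataset, and only the standard PNN layers rather than the SPDTRS stage) might look like:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, spread=0.75):
    """Probabilistic neural network: a Gaussian kernel is centered on every
    training exemplar (pattern layer), responses are averaged per class
    (summation layer), and the largest class score wins (output layer)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = ((X_train - x) ** 2).sum(axis=1)     # squared distances to exemplars
        k = np.exp(-d2 / (2 * spread ** 2))       # kernel responses, width = spread
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

The `spread` argument plays the role of the SPREAD smoothing parameter the abstract tunes (0.75 in the reported experiments).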
Affiliation(s)
- Xixi Kong
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Mengran Zhou
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Kai Bian
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Wenhao Lai
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Feng Hu
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Rongying Dai
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
- Jingjing Yan
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, China
15
Lu SY, Wang SH, Zhang YD. BCDNet: An Optimized Deep Network for Ultrasound Breast Cancer Detection. Ing Rech Biomed 2023. [DOI: 10.1016/j.irbm.2023.100774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
16
Alsubai S, Alqahtani A, Sha M. Genetic hyperparameter optimization with Modified Scalable-Neighbourhood Component Analysis for breast cancer prognostication. Neural Netw 2023; 162:240-257. [PMID: 36913821 DOI: 10.1016/j.neunet.2023.02.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 12/30/2022] [Accepted: 02/23/2023] [Indexed: 03/02/2023]
Abstract
Breast cancer is common among women and results in mortality when left untreated. Early detection is vital so that suitable treatment can stop the cancer from spreading further and save lives. The traditional way of detection is a time-consuming process. With the evolution of data mining (DM), the healthcare industry can benefit from disease prediction, as it permits physicians to determine the significant attributes for diagnosis. Although conventional techniques have used DM-based methods to identify breast cancer, they lacked in terms of prediction rate. Moreover, parametric Softmax classifiers have been the general option in conventional works with fixed classes, particularly when huge labelled datasets are available during training. Nevertheless, this becomes an issue in open-set cases where new classes are encountered with only a few instances from which to learn a generalized parametric classifier. Thus, the present study aims to implement a non-parametric strategy by optimizing the feature embedding rather than a parametric classifier. This research utilizes a Deep CNN (Deep Convolutional Neural Network) and Inception V3 for learning visual features that preserve the neighbourhood outline in semantic space, relying on the NCA (Neighbourhood Component Analysis) criterion. Because NCA is limited by its scalability bottleneck, the study proposes MS-NCA (Modified Scalable-Neighbourhood Component Analysis), which relies on a non-linear objective function to perform feature fusion by optimizing the distance-learning objective; it thereby gains the capability of computing inner feature products without performing mapping, which increases the scalability of MS-NCA. Finally, G-HPO (Genetic Hyper-parameter Optimization) is proposed. Here, the new stage in the algorithm extends the chromosome length to encode several hyperparameters of the subsequent XGBoost, NB, and RF models, which have numerous layers for identifying normal and affected cases of breast cancer; optimized hyper-parameter values of RF (Random Forest), NB (Naïve Bayes), and XGBoost (eXtreme Gradient Boosting) are thereby determined. This process helps improve the classification rate, which is confirmed through the analytical results.
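The G-HPO idea, a chromosome whose genes are hyper-parameter values evolved by selection, crossover, and mutation, can be illustrated with a deliberately tiny genetic search. The operators and settings below are illustrative assumptions, not the paper's actual G-HPO stage:

```python
import random

def ga_tune(fitness, bounds, pop=12, gens=20, seed=0):
    """Minimal genetic hyper-parameter search: each chromosome holds one real
    value per hyper-parameter; the fitter half survives each generation, and
    offspring are averaged-crossover parents plus clamped Gaussian mutation."""
    rng = random.Random(seed)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)       # maximize fitness
        parents = popn[: pop // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            child = [min(hi, max(lo, (x + y) / 2 + rng.gauss(0, 0.1 * (hi - lo))))
                     for (x, y), (lo, hi) in zip(zip(a, b), bounds)]
            children.append(child)
        popn = parents + children
    return max(popn, key=fitness)
```

In practice `fitness` would be, e.g., cross-validated accuracy of an RF/NB/XGBoost model built from the chromosome's hyper-parameter values; here any callable scoring a chromosome works.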
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Mohemmed Sha
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
17
Hassan AM, Yahya A, Aboshosha A. A framework for classifying breast cancer based on deep features integration and selection. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08341-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/19/2023]
Abstract
Deep convolutional neural networks (DCNNs) are among the most advanced techniques for classifying images in a range of applications. One of the most prevalent cancers that cause death in women is breast cancer. For survival rates to increase, early detection and treatment of breast cancer are essential. Deep learning (DL) can help radiologists diagnose and classify breast cancer lesions. This paper proposes a computer-aided system based on DL techniques for automatically classifying breast cancer tumors in histopathological images. Nine DCNN architectures are used in this work. Four schemes are performed in the proposed framework to find the best approach. The first scheme consists of pre-trained DCNNs based on the transfer learning concept. The second scheme performs feature extraction from the DCNN architectures and uses a support vector machine (SVM) classifier for evaluation. The third performs feature integration to show how the integrated deep features may enhance the SVM classifier's accuracy. Finally, in the fourth scheme, the Chi-square (χ2) feature selection method is applied to reduce the large feature size produced by the feature integration step. The results of the proposed system show promising performance for breast cancer classification, with an accuracy of 99.24%. The system's performance shows that the proposed tool is suitable to assist radiologists in diagnosing breast cancer tumors.
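The χ² filter in the fourth scheme scores each non-negative feature by how far its per-class distribution departs from the class-independent expectation; a small sketch of that scoring (mirroring what, e.g., scikit-learn's `chi2` computes, on made-up data rather than the paper's deep features) might be:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic of each non-negative feature against the class
    labels, for filter-style feature selection: a higher score indicates a
    stronger dependence between the feature and the class."""
    classes = np.unique(y)
    # observed: per-class sums of each feature (one row per class)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    # expected under independence: class prior times the feature's total
    expected = np.outer(np.array([(y == c).mean() for c in classes]),
                        X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)
```

Selecting the top-k feature indices by this score is then a one-liner, e.g. `np.argsort(chi2_scores(X, y))[-k:]`.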
18
Auto-MyIn: Automatic diagnosis of myocardial infarction via multiple GLCMs, CNNs, and SVMs. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
19
Zheng D, He X, Jing J. Overview of Artificial Intelligence in Breast Cancer Medical Imaging. J Clin Med 2023; 12:419. [PMID: 36675348 PMCID: PMC9864608 DOI: 10.3390/jcm12020419] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 12/26/2022] [Accepted: 12/30/2022] [Indexed: 01/07/2023] Open
Abstract
The heavy global burden and mortality of breast cancer emphasize the importance of early diagnosis and treatment. Imaging detection is one of the main tools used in clinical practice for screening, diagnosis, and treatment-efficacy evaluation, and can visualize changes in tumor size and texture before and after treatment. The overwhelming number of images, which leads to a heavy workload for radiologists and a sluggish reporting period, suggests the need for computer-aided detection techniques and platforms. In addition, complex and changeable image features, heterogeneous image quality, and inconsistent interpretation by different radiologists and medical institutions constitute the primary difficulties in breast cancer screening and imaging diagnosis. The advancement of imaging-based artificial intelligence (AI)-assisted tumor diagnosis is an ideal strategy for improving the efficiency and accuracy of imaging diagnosis. By learning from image data and constructing algorithmic models, AI is able to recognize, segment, and diagnose tumor lesions automatically, showing promising application prospects. Furthermore, the rapid advancement of "omics" promotes a deeper and more comprehensive recognition of the nature of cancer. The fascinating relationship between tumor images and molecular characteristics has drawn attention to radiomics and radiogenomics, which allow analysis and detection at the molecular level with no need for invasive operations. In this review, we integrate the current developments in AI-assisted imaging diagnosis and discuss the advances of AI-based precise breast cancer diagnosis from a clinical point of view. Although AI-assisted breast cancer imaging screening and detection is an emerging field that draws much attention, the clinical application of AI in tumor lesion recognition, segmentation, and diagnosis is still limited to research or to small patient cohorts. Randomized clinical trials based on large, high-quality cohorts are lacking. This review aims to describe the progress of imaging-based AI applications in breast cancer screening and diagnosis for clinicians.
Affiliation(s)
- Jing Jing
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu 610041, China
20
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. [PMID: 36685963 PMCID: PMC9846574 DOI: 10.3389/fgene.2022.1097207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023] Open
Abstract
Introduction: Of all the cancers that afflict women, breast cancer (BC) has the second-highest mortality rate, and it is the most common cancer affecting women globally. There are two types of breast tumors: benign (less harmful and unlikely to become cancerous) and malignant (very dangerous, producing aberrant cells that can result in cancer). Methods: To find breast abnormalities like masses and micro-calcifications, competent and trained radiologists often examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer. It aims to compare and examine the performance of the proposed shallow convolutional neural network architectures, with different specifications, against pre-trained deep convolutional neural network architectures trained on mammography images. In the first approach, mammogram images are pre-processed to carry out the automatic identification of BC, and the resulting data are fed to three different types of shallow convolutional neural networks with representational differences. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments on two datasets, the accuracies are 80.4% and 89.2% for CBIS-DDSM and 87.8% and 95.1% for INbreast. Discussion: It can be concluded from the experimental findings that the deep-network-based approach with precise tuning outperforms all other state-of-the-art techniques on both datasets.
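Training only a new classification head on frozen backbone features is the lightest point on the transfer-learning spectrum this entry describes (fine-tuning proper would also update backbone weights). A framework-free sketch, with stand-in 2-D "deep features" instead of real CNN activations, might be:

```python
import numpy as np

def train_head(features, labels, lr=0.5, epochs=200):
    """Logistic-regression classification head trained on frozen deep
    features by plain gradient descent on the log-loss. This is the
    'feature extraction' end of transfer learning; fine-tuning would
    additionally backpropagate into the feature extractor."""
    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.01, features.shape[1])   # small random init
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))             # sigmoid probabilities
        grad = p - labels                        # d(log-loss)/dz
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b
```

In a real pipeline `features` would be the pooled activations of a pre-trained network such as ResNet50 applied to the mammograms; here any numeric array works.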
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
21
Mosa DT, Mahmoud A, Zaki J, Sorour SE, El-Sappagh S, Abuhmed T. Henry gas solubility optimization double machine learning classifier for neurosurgical patients. PLoS One 2023; 18:e0285455. [PMID: 37167226 PMCID: PMC10174516 DOI: 10.1371/journal.pone.0285455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Accepted: 04/24/2023] [Indexed: 05/13/2023] Open
Abstract
This study aims to predict head-trauma outcomes for neurosurgical patients among children, adults, and the elderly. As machine learning (ML) algorithms are helpful in the healthcare field, a comparative study of various ML techniques is developed. Several algorithms are utilized, such as k-nearest neighbor, Random Forest (RF), C4.5, Artificial Neural Network, and Support Vector Machine (SVM). Their performance is assessed using anonymized patient data. Then, a proposed double classifier based on Henry Gas Solubility Optimization (HGSO) is developed with the Aquila Optimizer (AQO). It is applied for feature selection to classify patients' outcome status into four states: mortality, morbidity, improved, or unchanged. The double classifiers are evaluated via various performance metrics, including recall, precision, F-measure, accuracy, and sensitivity. Another contribution of this research is the original use of a hybrid technique based on RF-SVM and HGSO to predict patient outcome status with high accuracy; it determines the relationship of outcome status with age and mode of trauma. The algorithm is tested on data from more than 1000 anonymized patients from a neurosurgical unit of Mansoura International Hospital, Egypt. Experimental results show that the proposed method achieves the highest accuracy, 99.2% (with population size = 30), compared with the other classifiers.
Affiliation(s)
- Diana T Mosa
- Department of Information Systems, Faculty of Computers and Information, Kafrelsheikh University, Kafr El-Shaikh, Egypt
- Amena Mahmoud
- Department of Computer Sciences, Faculty of Computers and Information, Kafrelsheikh University, Kafr El-Shaikh, Egypt
- John Zaki
- Department of Computer and Systems, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Shaymaa E Sorour
- Preparation- Computer Science and Education, Faculty of Specific Education, Kafrelsheikh University, Kafr El-Shaikh, Egypt
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez, Egypt
- Faculty of Computers & Artificial Intelligence, Benha University, Banha, Egypt
- College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
- Tamer Abuhmed
- College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
22
Al-Hejri AM, Al-Tam RM, Fazea M, Sable AH, Lee S, Al-antari MA. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images. Diagnostics (Basel) 2022; 13:diagnostics13010089. [PMID: 36611382 PMCID: PMC9818801 DOI: 10.3390/diagnostics13010089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/22/2022] [Accepted: 12/24/2022] [Indexed: 12/29/2022] Open
Abstract
Early detection of breast cancer is an essential procedure for reducing the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed by fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of the vision transformer encoder (ViT). Accurate and precise high-level deep features are generated via the backbone ensemble network, while the transformer encoder is used to predict breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-classification). To build the proposed CAD system, the benchmark public multi-class INbreast dataset is used. Meanwhile, private real breast cancer images are collected and annotated by expert radiologists to validate the prediction performance of the proposed ETECADx framework. Promising evaluation results are achieved on the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for the binary and 4.6% for the multi-class approach. The proposed hybrid ETECADx shows further prediction improvement when the ViT-based ensemble backbone network is used: 8.1% and 6.2% for binary and multi-class diagnosis, respectively. For validation on the real breast images, the proposed CAD system provides encouraging prediction accuracies of 97.16% for the binary and 89.40% for the multi-class approach. ETECADx can predict the breast lesions of a single mammogram in an average of 0.048 s. Such promising performance could be useful in assisting practical CAD framework applications by providing a second supporting opinion for distinguishing various breast cancer malignancies.
Affiliation(s)
- Aymen M. Al-Hejri
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Riyadh M. Al-Tam
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Muneer Fazea
- Department of Radiology, Al-Ma’amon Diagnostic Center, Sana’a, Yemen
- Department of Radiology, School of Medicine, Ibb University of Medical Sciences, Ibb, Yemen
- Archana Harsing Sable
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Correspondence: (A.H.S.); (M.A.A.-a.)
- Soojeong Lee
- Department of Computer Engineering, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (A.H.S.); (M.A.A.-a.)
23
Alshamrani K, Alshamrani HA, Alqahtani FF, Almutairi BS. Enhancement of Mammographic Images Using Histogram-Based Techniques for Their Classification Using CNN. SENSORS (BASEL, SWITZERLAND) 2022; 23:235. [PMID: 36616832 PMCID: PMC9824687 DOI: 10.3390/s23010235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/21/2022] [Accepted: 12/22/2022] [Indexed: 06/17/2023]
Abstract
In the world, one in eight women will develop breast cancer. Men can also develop it, but less frequently. This condition starts with uncontrolled cell division brought on by a change in the genes that regulate cell division and growth, which leads to the development of a nodule or tumour. These tumours can be either benign, which pose no health risk, or malignant, also known as cancerous, which put patients' lives in jeopardy and have the potential to spread. The most common way to diagnose this problem is via mammograms. This kind of examination enables the detection of abnormalities in breast tissue, such as masses and microcalcifications, which are thought to be indicators of the presence of disease. This study aims to determine how histogram-based image enhancement methods affect the classification of mammograms into five groups: benign calcifications, benign masses, malignant calcifications, malignant masses, and healthy tissue, as determined by a CAD system for automatic mammography classification using convolutional neural networks. Both Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Histogram Intensity Windowing (HIW) are used. These procedures modify the mammography histogram by improving the contrast between the image's background, fibrous tissue, dense tissue, and diseased tissue, which includes microcalcifications and masses. The contrast is increased to make it easier to distinguish between the various types of tissue and thereby help the neural networks learn; the proportion of correctly classified images could rise with this technique. Using deep convolutional neural networks, a model was developed that classifies the different types of lesions. The model achieved an accuracy of 62% on mini-MIAS data. The final goal of the project is the creation of an updated algorithm that will be incorporated into the CAD system and will enhance the automatic identification and categorization of microcalcifications and masses. As a result, it would be possible to increase the chance of early disease identification, which is important because early discovery raises the likelihood of a cure to almost 100%.
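The contrast stretching behind these methods is easiest to see in plain global histogram equalization; CLAHE differs by working per tile and clipping each tile's histogram before building the CDF. The sketch below is therefore the global variant, not CLAHE itself:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization of an 8-bit grayscale image: map each
    intensity through the normalized cumulative histogram so the occupied
    levels spread over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]            # CDF at first occupied bin
    lut = np.clip((cdf - cdf_min) * 255 // max(cdf[-1] - cdf_min, 1), 0, 255)
    return lut.astype(np.uint8)[img]                 # apply lookup table
```

For actual CLAHE on mammograms one would normally reach for a library routine (e.g. OpenCV's `createCLAHE`), which adds the tiling, clip limit, and bilinear blending this sketch omits.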
Affiliation(s)
- Khalaf Alshamrani
- PhD Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Hassan A. Alshamrani
- PhD Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Fawaz F. Alqahtani
- PhD Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Bander S. Almutairi
- Radiology Department, King Abdulaziz University Hospital, Jeddah 3646, Saudi Arabia
24
Shaban WM. Insight into breast cancer detection: new hybrid feature selection method. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-08062-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Breast cancer, the leading cause of cancer death among women, is one of the most common forms of the disease affecting females all over the world. Discovering breast cancer at an early stage is extremely important because it allows selecting an appropriate treatment protocol and thus stops the development of cancer cells. In this paper, a new patient detection strategy is presented to identify patients with the disease earlier. The proposed strategy is composed of two parts: a data preprocessing phase and a patient detection phase (PDP). The purpose of this study is to introduce a feature selection methodology for determining the most efficient and significant features for identifying breast cancer patients. This method is known as the new hybrid feature selection method (NHFSM). NHFSM is made up of two modules: a quick selection module that uses information gain, and a feature selection module that uses a hybrid of the bat algorithm and particle swarm optimization. Consequently, NHFSM is a hybrid method that combines the advantages of the bat algorithm and particle swarm optimization with a filter method to eliminate drawbacks such as becoming stuck in a local optimum and unbalanced exploitation. The preprocessed data are then used during PDP to enable quick and accurate detection of patients. Based on the experimental results, the proposed NHFSM improves patient classification in comparison with state-of-the-art feature selection approaches, reaching roughly 0.97, 0.76, 0.75, and 0.716 in terms of accuracy, precision, sensitivity/recall, and F-measure, respectively. It also has the lowest error-rate value, 0.03.
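The "quick selection" filter stage can be made concrete: the information gain of a discrete feature is the drop in label entropy after conditioning on that feature (the toy data below is illustrative, not from the paper):

```python
import numpy as np

def information_gain(x, y):
    """Information gain IG = H(y) - H(y|x) for a discrete feature x and
    labels y: how many bits of label uncertainty the feature removes."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())
    # conditional entropy: weight each feature value by its frequency
    cond = sum((x == v).mean() * entropy(y[x == v]) for v in np.unique(x))
    return entropy(y) - cond
```

Ranking features by this score and keeping the top ones is the filter step; the bat/PSO hybrid then performs the wrapper search over that reduced subset.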
25
Garrucho L, Kushibar K, Jouide S, Diaz O, Igual L, Lekadir K. Domain generalization in deep learning based mass detection in mammography: A large-scale multi-center study. Artif Intell Med 2022; 132:102386. [PMID: 36207090 DOI: 10.1016/j.artmed.2022.102386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 08/07/2022] [Accepted: 08/19/2022] [Indexed: 11/02/2022]
Abstract
Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization in artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this study, we explored the domain generalization of deep learning methods for mass detection in digital mammography and analyzed in depth the sources of domain shift in a large-scale multi-center setting. To this end, we compared the performance of eight state-of-the-art detection methods, including Transformer-based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline was designed to improve domain generalization without requiring images from the new domain. The results show that our workflow generalized better than state-of-the-art transfer-learning-based approaches in four out of five domains while reducing the domain shift caused by different acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis was performed to identify the covariate shifts with the greatest effects on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep learning-based breast cancer detection.
Affiliation(s)
- Lidia Garrucho, Kaisar Kushibar, Socayna Jouide, Oliver Diaz, Laura Igual, Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
26
Deep and dense convolutional neural network for multi category classification of magnification specific and magnification independent breast cancer histopathological images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
27
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 223:106951. [PMID: 35767911 DOI: 10.1016/j.cmpb.2022.106951] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/25/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Cancer-related deaths burden developed and developing countries alike. In particular, the rate of breast cancer in women continues to rise, partly due to a lack of awareness and to cases that go undiagnosed at early stages. Proper first-line breast cancer treatment can only be provided when the cancer is adequately detected and classified early in its development. Medical image analysis techniques and computer-aided diagnosis can help accelerate and automate both cancer detection and classification, while also training and supporting less experienced physicians. On large medical image datasets, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS This article presents a novel computer-aided diagnosis method for binary and multi-class breast cancer classification, combining deep neural networks (ResNet-18, ShuffleNet, and Inception-V3Net) with transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS The proposed method achieved the best average binary (benign vs. malignant) classification accuracies of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3Net, and ShuffleNet, respectively. Average multi-class accuracies were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3Net, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
28
An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. [PMID: 35851592 PMCID: PMC9293883 DOI: 10.1038/s41598-022-15632-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 06/27/2022] [Indexed: 11/16/2022] Open
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification, integrated sequentially into one framework, to assist radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e., ResNet50V2, ResNet101V2, and ResNet152V2). The work presents the task of classifying the detected and segmented breast masses as malignant or benign, assigning a Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and classifying the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and on an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) pathology classification, with accuracies of 95.13%, 99.20%, and 95.88%, and (2) BI-RADS category classification, with accuracies of 85.38%, 99%, and 96.08%, on CBIS-DDSM, INbreast, and the private dataset, respectively; and (3) shape classification, with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that the proposed integrated framework benefits from all automated stages to outperform the latest deep learning methodologies.
29
Kumar P, Kumar A, Srivastava S, Padma Sai Y. A novel bi-modal extended Huber loss function based refined mask RCNN approach for automatic multi instance detection and localization of breast cancer. Proc Inst Mech Eng H 2022; 236:1036-1053. [DOI: 10.1177/09544119221095416] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is an extremely aggressive cancer in women, and its abnormalities appear as masses, calcifications, and lumps. Early detection is needed to reduce mortality among women. The present paper proposes a novel bi-modal, extended-Huber-loss-based refined mask regional convolutional neural network for automatic multi-instance detection and localization of breast cancer. Three changes are made to refine the method and increase its efficacy. First, a preprocessing step is applied to mammogram and ultrasound breast images. Second, the features of the region proposal network are mapped separately for accurate regions of interest. Third, to reduce overfitting and speed up convergence, an extended Huber loss replaces Smooth L1(x) in the boundary loss. To extend the functionality of the Huber loss, the delta parameter is set automatically using the median absolute deviation together with a grid search algorithm, which provides an optimal value of delta instead of a user-supplied one. The proposed method is compared with pre-existing methods in terms of accuracy, true positive rate, true negative rate, precision, F-score, balanced classification rate, Youden's index, Jaccard index, and Dice coefficient on the CBIS-DDSM and an ultrasound database. The experimental results show that the proposed method is well suited for multi-instance detection, localization, and classification of breast cancer. It can serve as a diagnostic aid in clinical practice, leading to a more precise diagnosis of breast cancer abnormalities.
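As a rough illustration of the loss described above, the following is a minimal NumPy sketch of a Huber loss whose delta is seeded from the median absolute deviation (MAD) of the residuals. The function names and the MAD scale constant are our own assumptions; the paper additionally refines delta with a grid search, which is omitted here.

```python
import numpy as np

def huber_loss(residuals, delta):
    """Classic Huber loss: quadratic for |r| <= delta, linear beyond it."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

def mad_delta(residuals, scale=1.4826):
    """Seed delta from the median absolute deviation of the residuals,
    a robust stand-in for hand-tuning the quadratic/linear cut-over."""
    med = np.median(residuals)
    return scale * np.median(np.abs(residuals - med))

# Toy residuals with one outlier: the outlier falls in the linear regime,
# so it is penalized less harshly than under a squared loss.
residuals = np.array([-0.2, 0.1, 0.05, 3.0, -0.15])
delta = mad_delta(residuals)
loss = huber_loss(residuals, delta).mean()
```

The MAD-based delta adapts to the spread of typical residuals, so outliers do not inflate the cut-over point the way a variance-based estimate would.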
Affiliation(s)
- Pradeep Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Abhinav Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Subodh Srivastava
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Yarlagadda Padma Sai
- Department of Electronics and Communication Engineering, VNR VJIET, Hyderabad, Telangana, India
30
Attallah O, Samir A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl Soft Comput 2022; 128:109401. [PMID: 35919069 PMCID: PMC9335861 DOI: 10.1016/j.asoc.2022.109401] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 05/20/2022] [Accepted: 07/25/2022] [Indexed: 12/30/2022]
Abstract
The quick diagnosis of the novel coronavirus (COVID-19) disease is vital to prevent its propagation and improve therapeutic outcomes. Computed tomography (CT) is believed to be an effective tool for diagnosing COVID-19; however, a CT scan contains hundreds of slices that are complex to analyze, which can delay diagnosis. Artificial intelligence (AI), especially deep learning (DL), can facilitate and speed up COVID-19 diagnosis from such scans. Several studies employed DL approaches based on 2D CT images from a single view; nevertheless, 3D multiview CT slices have demonstrated an excellent ability to enhance the efficiency of COVID-19 diagnosis. The majority of DL-based studies used the spatial information of the original CT images to train their models, though using spectral–temporal information could improve COVID-19 detection. This article proposes a DL-based pipeline called CoviWavNet for the automatic diagnosis of COVID-19. CoviWavNet uses a 3D multiview dataset called OMNIAHCOV. Initially, it analyzes the CT slices using multilevel discrete wavelet decomposition (DWT) and then uses the heatmaps of the approximation levels to train three ResNet CNN models, which use the spectral–temporal information of these images to perform classification. Subsequently, it investigates whether combining spatial information with spectral–temporal information improves diagnostic accuracy. For this purpose, it extracts deep spectral–temporal features from the ResNets using transfer learning and integrates them with deep spatial features extracted from the same ResNets trained on the original CT slices. It then applies a feature selection step to reduce the dimension of the integrated features and uses them as inputs to three support vector machine (SVM) classifiers. To further validate the performance of CoviWavNet, a publicly available benchmark dataset called SARS-COV-2-CT-Scan is employed.
The results demonstrate that using the spectral–temporal information of the DWT heatmap images to train the ResNets is superior to using the spatial information of the original CT images. Furthermore, integrating deep spectral–temporal features with deep spatial features enhanced the classification accuracy of the three SVM classifiers, reaching final accuracies of 99.33% and 99.7% on the OMNIAHCOV and SARS-COV-2-CT-Scan datasets, respectively. These accuracies verify the outstanding performance of CoviWavNet compared to related studies. Thus, CoviWavNet can help radiologists reach a rapid and accurate diagnosis of COVID-19.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Ahmed Samir
- Department of Radiodiagnosis, Faculty of Medicine, University of Alexandria, Egypt
31
Samee NA, Alhussan AA, Ghoneim VF, Atteia G, Alkanhel R, Al-antari MA, Kadah YM. A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms. SENSORS 2022; 22:s22134938. [PMID: 35808433 PMCID: PMC9269713 DOI: 10.3390/s22134938] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 06/22/2022] [Accepted: 06/27/2022] [Indexed: 12/16/2022]
Abstract
One of the most promising research directions in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, such as investigating the independence of the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on logistic regression (LR) and principal component analysis (PCA), called LR-PCA, is presented. This process helps select the significant principal components (PCs) for further use in classification. The proposed CAD system has been examined using two public benchmark datasets, INbreast and mini-MIAS.
The proposed CAD system achieved the highest accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
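The multicollinearity-removal idea behind LR-PCA can be illustrated with a minimal PCA projection. This sketch (function and variable names are ours) shows only the decorrelation step applied to highly correlated deep features; the logistic-regression-based selection of significant PCs described in the abstract is not reproduced here.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project feature vectors onto their top principal components.

    The resulting score columns are mutually uncorrelated, which removes
    the multicollinearity present among the raw feature columns.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]               # principal axes (rows)
    return Xc @ components.T                     # (n_samples, n_components)

# Toy "deep features": three dimensions, two of them perfectly correlated,
# mimicking the redundant high-level features the abstract describes.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
X = np.hstack([base, 2.0 * base, rng.normal(size=(100, 1))])
Z = pca_transform(X, n_components=2)
```

After projection, the two retained components carry the signal and noise directions separately, so a downstream classifier no longer sees redundant, collinear inputs.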
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amel A. Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Reem Alkanhel
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Korea
- Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
32
Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12126230] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems in the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using a similar imaging method. However, clinicians question not only the black-box nature of these CNN models in the healthcare domain but also morphology-based cancer classification itself. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. The authors exploit eight pretrained CNN architectures for the classification task on previously extracted region-of-interest images containing the lesions. Additionally, the study opens up the black-box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, t-SNE and UMAP, are employed to investigate the pretrained models' behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework, yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of both Grad-CAM and LIME, which can provide useful insights towards explainable CAD systems.
33
An optimized deep learning architecture for breast cancer diagnosis based on improved marine predators algorithm. Neural Comput Appl 2022; 34:18015-18033. [PMID: 35698722 PMCID: PMC9175533 DOI: 10.1007/s00521-022-07445-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 05/14/2022] [Indexed: 11/12/2022]
Abstract
Breast cancer is the second leading cause of death in women; therefore, effective early detection of this cancer can reduce its mortality rate. Breast cancer detection and classification in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have enhanced tumor detection and classification efficiency in medical imaging compared to traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, along with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the optimization algorithm we used, and we improve it using the opposition-based learning strategy to cope with the implied weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network). This model is hybridized with the IMPA algorithm, resulting in an architecture called IMPA-ResNet50. Our evaluation is performed on two mammographic datasets, the mammographic image analysis society (MIAS) and curated breast imaging subset of DDSM (CBIS-DDSM) datasets. The proposed model was compared with other state-of-the-art approaches. The obtained results showed that the proposed model outperforms the compared state-of-the-art approaches, which are beneficial to classification performance, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset. 
To evaluate the performance of IMPA in finding optimal values for the hyperparameters of the ResNet50 architecture, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA. The counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicated that the proposed IMPA-ResNet50 achieved better performance than its counterparts.
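The opposition-based learning strategy used to strengthen MPA is simple to sketch: for each candidate solution x in [lb, ub], the opposite point lb + ub - x is also evaluated, and the better half of the joint pool is kept. This toy NumPy sketch shows only that step; the function and variable names are ours, and the fitness function is a stand-in, not the CNN-training objective the paper optimizes.

```python
import numpy as np

def obl_select(fitness, pop, lb, ub):
    """Opposition-based learning: score each candidate and its opposite
    lb + ub - x, then keep the best half of the joint pool (minimization)."""
    opp = lb + ub - pop                          # opposite solutions
    joint = np.vstack([pop, opp])
    scores = np.array([fitness(x) for x in joint])
    best = np.argsort(scores)[: len(pop)]        # indices of the best half
    return joint[best]

# Toy hyperparameter search: minimize squared distance to a "good" setting.
rng = np.random.default_rng(1)
lb, ub = np.zeros(2), np.ones(2)
pop = rng.uniform(lb, ub, size=(6, 2))
target = np.array([0.9, 0.1])
pop = obl_select(lambda x: np.sum((x - target) ** 2), pop, lb, ub)
```

Because the joint pool always contains the original candidates, the selected half is never worse than the starting population, which is why OBL is a cheap way to counter weak initial exploration in the original MPA.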
34
Abstract
Melanoma skin cancer is one of the most dangerous types of skin cancer and, if not diagnosed early, may lead to death. Therefore, an accurate diagnosis is needed to detect melanoma. Traditionally, a dermatologist uses a microscope to inspect a biopsy and then reports on it for diagnosis; however, this process is not easy and requires experience. Hence, there is a need to facilitate the diagnosis process while still yielding an accurate result. For this purpose, artificial intelligence techniques can assist the dermatologist in carrying out diagnosis. In this study, we considered the detection of melanoma through deep learning based on cutaneous image processing. We tested several convolutional neural network (CNN) architectures, including DenseNet201, MobileNetV2, ResNet50V2, ResNet152V2, Xception, VGG16, VGG19, and GoogLeNet, and evaluated the associated deep learning models on graphics processing units (GPUs). A dataset consisting of 7146 images was processed using these models, and we compared the obtained results. The experimental results showed that GoogLeNet obtained the highest accuracy on both the training and test sets (74.91% and 76.08%, respectively).
35
Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022; 17:e0263126. [PMID: 35085352 PMCID: PMC8794221 DOI: 10.1371/journal.pone.0263126] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 01/12/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is one of the worst illnesses, with a high fatality rate among women globally. Its detection requires accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved the detection, localization, risk assessment, and categorization of breast lesions. This study proposes a novel deep convolutional neural network (ConvNet) that significantly reduces human error in diagnosing malignant breast tissue. Our methodology is effective at eliciting task-specific features, as feature learning is coupled with the classification task to achieve higher performance in automatically classifying suspicious regions in mammograms as benign or malignant. To evaluate the model's validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, intensity information, and indications of likely malignancy. Both datasets were substantially improved through preprocessing, synthetic data augmentation, and transfer learning techniques to capture a distinctive combination of breast tumors. The experimental findings indicate that the proposed approach achieved a remarkable training accuracy of 0.98, a test accuracy of 0.97, a high sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model helps the clinician with speedy mammography computation, breast mass diagnosis, treatment planning, and follow-up of disease progression. Moreover, it has immense potential over retrospective approaches for consistent feature extraction and precise lesion classification.
Affiliation(s)
- Tariq Mahmood
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Mujeeb Ur Rehman
- Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
- Shahbaz Hassan Wasti
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
36
Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022; 142:105210. [PMID: 35026574 PMCID: PMC8730786 DOI: 10.1016/j.compbiomed.2022.105210] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 01/01/2022] [Accepted: 01/01/2022] [Indexed: 12/29/2022]
Abstract
The accurate and speedy detection of COVID-19 is essential to avert the fast propagation of the virus, alleviate lockdown constraints, and diminish the burden on health organizations. The methods currently used to diagnose COVID-19 have several limitations, so new techniques need to be investigated to improve diagnosis. Given the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of using ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design and extracts two levels of features from two different layers of each model. Features mined from higher layers are fused using the discrete wavelet transform and then integrated with lower-layer features. Afterward, a feature selection approach is applied. Finally, an ensemble classification system is built to merge the predictions of three machine learning classifiers. ECG-BiCoNet performs two classification tasks, binary and multiclass, and shows promising COVID-19 performance, with accuracies of 98.8% and 91.73% for the binary and multiclass categories, respectively. These results verify that ECG data may be used to diagnose COVID-19, helping clinicians automate diagnosis and overcome the limitations of manual diagnosis.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 1029, Egypt
37
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024 PMCID: PMC9005822 DOI: 10.1177/20552076221092543] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/21/2022] [Indexed: 12/14/2022] Open
Abstract
The accurate and rapid detection of the novel coronavirus (COVID-19) infection is very important to prevent the fast spread of the disease and thereby reduce the negative effects that have hit many sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, can help in the fast and precise diagnosis of COVID-19 from computed tomography images. Most artificial-intelligence-based studies used the original computed tomography images to build their models; however, integrating texture-based radiomics images with deep learning techniques could improve diagnostic accuracy. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three residual network (ResNet) deep learning models with two kinds of texture-based radiomics images, the discrete wavelet transform and the gray-level co-occurrence matrix, instead of the original computed tomography images. It then fuses the texture-based radiomics deep feature sets extracted from each model using the discrete cosine transform, and further combines the fused features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are used for classification. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 computed tomography image dataset. The accuracies attained indicate that using texture-based radiomics (gray-level co-occurrence matrix, discrete wavelet transform) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original computed tomography images (70.34%, 76.51%, and 73.42% for ResNet-18, ResNet-50, and ResNet-101, respectively).
Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved by the proposed framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining texture-based radiomics deep features from the three ResNets boosts performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using a single radiomics approach with a single convolutional neural network. This performance allows the framework to be used by radiologists to reach a fast and accurate diagnosis.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
|
38
|
Assari Z, Mahloojifar A, Ahmadinejad N. A bimodal BI-RADS-guided GoogLeNet-based CAD system for solid breast masses discrimination using transfer learning. Comput Biol Med 2021; 142:105160. [PMID: 34995955 DOI: 10.1016/j.compbiomed.2021.105160] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 12/14/2021] [Accepted: 12/18/2021] [Indexed: 12/14/2022]
Abstract
Numerous solid breast masses require sophisticated analysis to establish a differential diagnosis. Consequently, complementary modalities such as ultrasound imaging are frequently required to further evaluate mammographically detected masses. Radiologists mentally integrate complementary information from images acquired of the same patient to reach a more conclusive and effective diagnosis; however, this has always been a challenging task. This paper details a novel bimodal GoogLeNet-based CAD system that addresses the challenges associated with combining information from mammographic and sonographic images for solid breast mass classification. In the proposed framework, each modality is initially trained using two distinct monomodal models. Then, a bimodal model is trained using the high-level feature maps extracted from both modalities. In order to fully exploit the BI-RADS descriptors, different image content representations of each mass are obtained and used as input images. In addition, a two-step transfer learning strategy has been proposed using an ImageNet pre-trained GoogLeNet model, two publicly available databases, and our collected dataset. Our bimodal model achieves the best recognition results, with sensitivity, specificity, F1-score, Matthews Correlation Coefficient, area under the receiver operating characteristic curve, and accuracy of 90.91%, 89.87%, 90.32%, 80.78%, 95.82%, and 90.38%, respectively. The promising results indicate that the proposed CAD system can facilitate bimodal suspicious mass analysis and thus contribute significantly to improving breast cancer diagnostic performance.
Affiliation(s)
- Zahra Assari
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Ali Mahloojifar
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Nasrin Ahmadinejad
- Medical Imaging Center, Cancer Research Institute, Imam Khomeini Hospital Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran

39
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy, for predicting the growth of cancer cells using medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available, and they summarize only some of the existing work; moreover, these studies were unable to address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea

40
Towards non-data-hungry and fully-automated diagnosis of breast cancer from mammographic images. Comput Biol Med 2021; 139:105011. [PMID: 34753080 DOI: 10.1016/j.compbiomed.2021.105011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 10/12/2021] [Accepted: 10/31/2021] [Indexed: 11/21/2022]
Abstract
Analysing local texture and generating features are two key issues for automatic cancer detection in mammographic images. Recent research has shown that deep neural networks provide a promising alternative to hand-crafted features, which suffer from the curse of dimensionality and low accuracy rates. However, large and balanced training sets are a foremost requirement for deep learning-based models, and such data are not always publicly available. In this work, we propose a fully-automated method for breast cancer diagnosis that trains on small sets of data. Feature extraction from mammographic images is performed using a genetic-programming-based descriptor that exploits statistics of a local binary pattern-like distribution defined at each pixel. The effectiveness of the suggested method is demonstrated on two challenging datasets, (1) the Digital Database for Screening Mammography and (2) the Mammographic Image Analysis Society digital mammogram database, for content-based image retrieval as well as for abnormality/malignancy classification. The experimental results show that the proposed method outperforms, or achieves results comparable to, deep learning-based methods, even those using transfer learning and/or data augmentation.
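The local-binary-pattern-like distribution mentioned above can be illustrated with the classic 8-neighbour LBP code: each neighbour at least as bright as the centre pixel contributes one bit. This is a generic sketch, not the paper's genetic-programming-based descriptor, and the toy image is made up.

```python
def lbp_code(img, r, c):
    """Classic 8-neighbour local binary pattern code for pixel (r, c).
    Neighbours >= centre set a 1-bit, ordered clockwise from top-left."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

# Toy 3x3 grey-level patch; the descriptor is computed at the centre pixel
img = [
    [5, 9, 1],
    [4, 6, 7],
    [2, 6, 3],
]
print(lbp_code(img, 1, 1))  # → 42 (bits 1, 3, 5 set)
```

A texture descriptor for a whole image is typically the histogram of these codes over all interior pixels.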
41
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380 PMCID: PMC8620568 DOI: 10.3390/diagnostics11112034] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 09/24/2021] [Accepted: 11/01/2021] [Indexed: 12/12/2022] Open
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques, called DIAROP, to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Networks (CNNs) using transfer learning and then applying the Fast Walsh-Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best integrated features extracted from the CNNs that influence its diagnostic capability. The results indicate that DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, DIAROP's performance is compared with recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
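A minimal sketch of the Fast Walsh-Hadamard Transform used for feature integration: the butterfly loop below is the standard unordered FWHT, not DIAROP's implementation, and the toy CNN feature vectors are hypothetical.

```python
def fwht(x):
    """Fast Walsh-Hadamard Transform (unordered); length must be a power of two."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly step
        h *= 2
    return x

# Integrate two CNN feature vectors by concatenation, then transform
cnn_a = [1.0, 0.0, 1.0, 0.0]
cnn_b = [0.0, 1.0, 0.0, 1.0]
print(fwht(cnn_a + cnn_b))  # → [4.0, 0.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0]
```

The FWHT runs in O(n log n) with only additions and subtractions, which is why it is attractive for cheaply compacting large concatenated feature vectors.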
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt

42
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. CONTRAST MEDIA & MOLECULAR IMAGING 2021; 2021:7192016. [PMID: 34621146 PMCID: PMC8457955 DOI: 10.1155/2021/7192016] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 08/20/2021] [Accepted: 09/01/2021] [Indexed: 02/06/2023]
Abstract
The rates of skin cancer (SC) are rising every year, making it a critical health issue worldwide. Early and accurate diagnosis of SC is the key to reducing these rates and improving survivability. However, manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need for automated dermatologist tools that can accurately classify SC subclasses. Recently, artificial intelligence (AI) techniques, including machine learning (ML) and deep learning (DL), have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools rely on features that are either high-level features based on DL methods or low-level features based on handcrafted operations, and most were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. This tool incorporates manifold radiomics feature categories, involving high-level features extracted with ResNet-50, DenseNet-201, and DarkNet-53 and low-level features including the discrete wavelet transform (DWT) and local binary pattern (LBP). The results of the proposed intelligent tool prove that merging manifold features of different categories has a high influence on classification accuracy. Moreover, these results are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed intelligent tool can be used by dermatologists to help them accurately diagnose the SC subcategory. It can also overcome the limitations of manual diagnosis, reduce the rates of infection, and enhance survival rates.
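The low-level DWT features mentioned above can be illustrated with one level of the Haar wavelet transform, the simplest DWT: each pair of samples is split into an approximation (smooth) and a detail (texture) coefficient. This is a generic sketch under that assumption, not the tool's actual feature extractor, and the sample row of pixel values is made up.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists; the input
    length is assumed to be even."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# A made-up row of pixel intensities from a lesion image
row = [4.0, 4.0, 2.0, 0.0]
approx, detail = haar_dwt(row)
print(approx, detail)
```

For 2-D images the same filter pair is applied along rows and then columns, yielding the four sub-bands whose statistics are commonly used as texture features.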
43
Mahmood T, Li J, Pei Y, Akhtar F. An Automated In-Depth Feature Learning Algorithm for Breast Abnormality Prognosis and Robust Characterization from Mammography Images Using Deep Transfer Learning. BIOLOGY 2021; 10:859. [PMID: 34571736 PMCID: PMC8468800 DOI: 10.3390/biology10090859] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 08/25/2021] [Accepted: 08/27/2021] [Indexed: 01/17/2023]
Abstract
BACKGROUND Diagnosing breast cancer masses and calcification clusters has paramount significance in mammography, aiding in mitigating the disease's complexities and curing it at early stages. However, a wrong mammogram interpretation may lead to an unnecessary biopsy of false-positive findings, which reduces the patient's survival chances. Consequently, approaches that learn to discern breast masses can reduce the number of misconceptions and incorrect diagnoses. Conventionally used classification models focus on feature extraction techniques specific to a particular problem, based on domain information. Deep learning strategies are becoming promising alternatives that solve many of the challenges of feature-based approaches. METHODS This study introduces a convolutional neural network (ConvNet)-based deep learning method to extract features at varying densities and discern normal and suspected regions in mammography. Two different experiments were carried out to achieve accurate diagnosis and classification. The first experiment consisted of five end-to-end pre-trained and fine-tuned deep convolutional neural networks (DCNNs); DCNNs are among the most frequently used image interpretation and classification methods and include VGGNet, GoogLeNet, MobileNet, ResNet, and DenseNet. In the second experiment, the in-depth features extracted from the ConvNet are used to train a support vector machine algorithm, achieving excellent performance. Moreover, this study covers data cleaning, preprocessing, and data augmentation to improve mass recognition accuracy. The efficacy of all models is evaluated by training and testing on three mammography datasets, with remarkable results.
RESULTS Our deep learning ConvNet+SVM model obtained a discriminative training accuracy of 97.7% and a validation accuracy of 97.8%; in contrast, VGGNet16 yielded 90.2%, VGGNet19 93.5%, GoogLeNet 63.4%, MobileNetV2 82.9%, ResNet50 75.1%, and DenseNet121 72.9%. CONCLUSIONS The proposed model's improvement and validation make it appropriate for conventional pathological practice, where it could conceivably reduce the pathologist's strain in predicting clinical outcomes from patients' mammography images.
Affiliation(s)
- Tariq Mahmood
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Division of Science and Technology, University of Education, Lahore 54000, Pakistan
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan

44
A Cascade Deep Forest Model for Breast Cancer Subtype Classification Using Multi-Omics Data. MATHEMATICS 2021. [DOI: 10.3390/math9131574] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Automated diagnosis systems aim to reduce the cost of diagnosis while maintaining the same efficiency. Many methods have been used for breast cancer subtype classification. Some use a single data source, while others integrate many data sources, which improves accuracy at the cost of computational performance. Breast cancer data, especially biological data, is known for its imbalance and for lacking extensive amounts of histopathological images. Recent studies have shown that the cascade Deep Forest ensemble model achieves competitive classification accuracy compared with alternatives such as general ensemble learning methods and conventional deep neural networks (DNNs), especially on imbalanced training sets, by learning hyper-representations through cascaded ensembles of decision trees. In this work, a cascade Deep Forest is employed to classify breast cancer subtypes, IntClust and Pam50, using multi-omics datasets and different configurations. The results recorded an accuracy of 83.45% for 5 subtypes and 77.55% for 10 subtypes. The significance of this work is in showing that using gene expression data alone with the cascade Deep Forest classifier achieves accuracy comparable to other techniques with higher computational performance: the time recorded is about 5 s for 10 subtypes and 7 s for 5 subtypes.
45
Attallah O. MB-AI-His: Histopathological Diagnosis of Pediatric Medulloblastoma and its Subtypes via AI. Diagnostics (Basel) 2021; 11:359. [PMID: 33672752 PMCID: PMC7924641 DOI: 10.3390/diagnostics11020359] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Revised: 02/11/2021] [Accepted: 02/11/2021] [Indexed: 12/17/2022] Open
Abstract
Medulloblastoma (MB) is a dangerous malignant pediatric brain tumor that can lead to death; it is considered the most common cancerous pediatric brain tumor. Precise and timely diagnosis of pediatric MB and its four subtypes (defined by the World Health Organization (WHO)) is essential to decide the appropriate follow-up plan and suitable treatments to prevent its progression and reduce mortality rates. Histopathology is the gold-standard modality for the diagnosis of MB and its subtypes, but manual diagnosis by a pathologist is very complicated, needs excessive time, and is subject to the pathologist's expertise and skills, which may lead to variability in the diagnosis or to misdiagnosis. The main purpose of this paper is to propose a time-efficient and reliable computer-aided diagnosis (CADx) system, namely MB-AI-His, for the automatic diagnosis of pediatric MB and its subtypes from histopathological images. The main challenges in this work are the lack of datasets available for the diagnosis of pediatric MB and its four subtypes and the limited related work. Related studies are based on either textural analysis or deep learning (DL) feature extraction methods and used individual features to perform the classification task. MB-AI-His, in contrast, combines the benefits of DL techniques and textural-analysis feature extraction methods in a cascaded manner. First, it uses three DL convolutional neural networks (CNNs), DenseNet-201, MobileNet, and ResNet-50, to extract spatial DL features. Next, it extracts time-frequency features from the spatial DL features using the discrete wavelet transform (DWT), a textural analysis method. Finally, MB-AI-His fuses the three spatial-time-frequency feature sets generated from the three CNNs and the DWT using the discrete cosine transform (DCT) and principal component analysis (PCA) to produce a time-efficient CADx system. MB-AI-His thus merges the advantages of different CNN architectures.
MB-AI-His has a binary classification level for distinguishing normal from abnormal MB images, and a multi-classification level for classifying among the four subtypes of MB. The results show that MB-AI-His is accurate and reliable at both the binary and multi-class classification levels. It is also a time-efficient system, as both the PCA and DCT methods efficiently reduce the training execution time. The performance of MB-AI-His was compared with related CADx systems, and the comparison verified its power and superior results. Therefore, it can support pathologists in the accurate and reliable diagnosis of MB and its subtypes from histopathological images. It can also reduce the time and cost of the diagnosis procedure, which will correspondingly lead to lower death rates.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt

46
Attallah O, Anwar F, Ghanem NM, Ismail MA. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput Sci 2021; 7:e493. [PMID: 33987459 PMCID: PMC8093954 DOI: 10.7717/peerj-cs.493] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/26/2021] [Indexed: 05/06/2023]
Abstract
Breast cancer (BC) is one of the most common types of cancer affecting females worldwide. It may lead to irreversible complications and even death when diagnosed and treated late. Pathological analysis is considered the gold standard for BC detection, but it is a challenging task. Automatic diagnosis of BC could reduce death rates by providing a computer-aided diagnosis (CADx) system capable of accurately identifying BC at an early stage while decreasing the time pathologists spend on examinations. This paper proposes a novel CADx system named Histo-CADx for the automatic diagnosis of BC. Most related studies are based on individual deep learning methods; they did not examine the influence of fusing features from multiple CNNs with handcrafted features, nor did they investigate the best combination of fused features for CADx performance. Therefore, Histo-CADx is based on two fusion stages. The first stage investigates the impact of fusing several deep learning (DL) techniques with handcrafted feature extraction methods using the auto-encoder DL method; it also searches for a suitable set of fused features that could improve the performance of Histo-CADx. The second fusion stage constructs a multiple classifier system (MCS) that fuses the outputs of three classifiers to further improve the accuracy of the proposed Histo-CADx. The performance of Histo-CADx is evaluated on two public datasets, the BreakHis and ICIAR 2018 datasets. The results from both datasets verified that the two fusion stages of Histo-CADx successfully improved accuracy compared with CADx systems built on individual features. Furthermore, using the auto-encoder for the fusion process reduced the computation cost of the system.
Moreover, the results after the two fusion stages confirmed that Histo-CADx is reliable and classifies BC more accurately than other recent studies. Consequently, it can be used by pathologists to help them diagnose BC accurately, and it can decrease the time and effort needed by medical experts during examinations.
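The second fusion stage's multiple classifier system can be illustrated with the simplest output-fusion rule, majority voting over per-sample labels. This is a hedged sketch: the classifier names below are hypothetical, and the paper's MCS may use a different combination rule.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-sample class labels from several classifiers by majority vote.
    `predictions` is a list of label lists, one list per classifier."""
    fused = []
    for sample_votes in zip(*predictions):
        # Most common label among the classifiers for this sample
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Hypothetical outputs of three base classifiers on three samples
svm_preds = ["benign", "malignant", "malignant"]
knn_preds = ["benign", "benign", "malignant"]
mlp_preds = ["malignant", "malignant", "malignant"]
print(majority_vote([svm_preds, knn_preds, mlp_preds]))
# → ['benign', 'malignant', 'malignant']
```

With an odd number of classifiers over two classes, ties cannot occur; weighted voting or stacking are common refinements when classifier accuracies differ.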
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Fatma Anwar
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Nagia M. Ghanem
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Mohamed A. Ismail
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt