1. Seo JW, Kim YJ, Kim KG. Leveraging paired mammogram views with deep learning for comprehensive breast cancer detection. Sci Rep 2025; 15:4406. PMID: 39910228; PMCID: PMC11799187; DOI: 10.1038/s41598-025-88907-3.
Abstract
Employing two standard mammography views is crucial for radiologists, providing comprehensive insights for reliable clinical evaluations. This study introduces the paired mammogram view-based network (PMVnet), a novel algorithm designed to enhance breast lesion detection by integrating relational information from paired whole mammograms, addressing the limitations of current methods. Utilizing 1,636 private mammograms, PMVnet combines cosine similarity and the squeeze-and-excitation method within a U-shaped architecture to leverage correlated information. Performance comparisons with single-view models using VGGnet16, Resnet50, and EfficientnetB5 as encoders revealed PMVnet's superior capability. Using VGGnet16, PMVnet achieved a Dice similarity coefficient (DSC) of 0.709 in segmentation and a recall of 0.950 at 0.156 false positives per image (FPPI) in detection, outperforming the single-view model, which had a DSC of 0.579 and a recall of 0.813 at 0.188 FPPI. These findings demonstrate PMVnet's effectiveness in reducing false positives and avoiding missed true positives, suggesting its potential as a practical tool in computer-aided diagnosis systems. PMVnet can significantly enhance breast lesion detection, aiding radiologists in making more precise evaluations and improving patient outcomes in clinical settings.
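The two building blocks named in this abstract, cosine similarity between paired views and squeeze-and-excitation recalibration, can be illustrated with a minimal numerical sketch. This is a generic illustration of the two operations, not PMVnet's actual architecture; the function names and the plain sigmoid gate are assumptions of this example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (one per mammogram view)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def squeeze_excite(channels):
    """Squeeze-and-excitation in miniature: global-average 'squeeze' per channel,
    sigmoid 'excite' gate, then channel-wise rescaling of the feature values."""
    squeezed = [sum(c) / len(c) for c in channels]          # squeeze
    gates = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]  # excite
    return [[v * g for v in c] for c, g in zip(channels, gates)]

# Paired views with similar lesion features score near 1, dissimilar near 0.
cc_view = [0.9, 0.1, 0.4]
mlo_view = [0.8, 0.2, 0.5]
sim = cosine_similarity(cc_view, mlo_view)
```

In a paired-view network, a similarity score like `sim` can weight how strongly features from one view modulate the other before the SE gate rescales channels.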
Affiliation(s)
- Jae Won Seo
- Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon, 21999, Republic of Korea
- Young Jae Kim
- Department of Gachon Biomedical & Convergence Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Kwang Gi Kim
- Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon, 21999, Republic of Korea.
- Department of Biomedical Engineering, College of IT Convergence, Gachon University, Seongnam-si, 13120, Republic of Korea.
2. Muduli D, Kumari R, Akhunzada A, Cengiz K, Sharma SK, Kumar RR, Sah DK. Retinal imaging based glaucoma detection using modified pelican optimization based extreme learning machine. Sci Rep 2024; 14:29660. PMID: 39613799; DOI: 10.1038/s41598-024-79710-7.
Abstract
Glaucoma is defined as a progressive optic neuropathy that damages the structural appearance of the optic nerve head and can lead to permanent blindness. For mass fundus image-based glaucoma screening, an improved automated computer-aided diagnosis (CAD) model performing binary classification (glaucoma or healthy) allows ophthalmologists to detect glaucoma correctly in less computational time. We proposed a feature extraction technique called the fast discrete curvelet transform with wrapping (FDCT-WRP) to extract curve-like features and create the feature set. Two combined feature reduction techniques, principal component analysis and linear discriminant analysis, were applied to generate prominent features and decrease the feature vector dimension. Lastly, a newly improved learning algorithm combining a modified pelican optimization algorithm (MOD-POA) and an extreme learning machine (ELM) was used for classification. In this MOD-POA+ELM algorithm, MOD-POA optimizes the parameters of the ELM's hidden neurons. Effectiveness was evaluated on two standard datasets, G1020 and ORIGA, with [Formula: see text]-fold stratified cross-validation to ensure reliable evaluation. The scheme achieved the best results on both datasets, with accuracies of 93.25% (G1020) and 96.75% (ORIGA). Furthermore, seven explainable AI methodologies were utilized for interpretability examination: Vanilla Gradients (VG), Guided Backpropagation (GBP), Integrated Gradients (IG), Guided Integrated Gradients (GIG), SmoothGrad, Gradient-weighted Class Activation Mapping (GCAM), and Guided Grad-CAM (GGCAM), aiding the advancement of dependable and credible automated glaucoma detection in healthcare.
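The ELM component described above can be sketched in a few lines: hidden-layer weights are random and fixed, and only the output weights are solved in closed form, which is exactly why the hidden-neuron parameters are a natural target for an optimizer such as MOD-POA. The sketch below is a generic ELM on toy data, not the paper's MOD-POA-tuned version; the ridge regularization and Gaussian-elimination solver are implementation choices of this example.

```python
import math
import random

def _solve(A, c):
    """Gaussian elimination with partial pivoting for A x = c."""
    n = len(A)
    M = [row[:] + [ci] for row, ci in zip(A, c)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for col in range(i, n + 1):
                M[r][col] -= f * M[i][col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_train(X, y, hidden=20, ridge=1e-6, seed=0):
    """Extreme learning machine: random fixed hidden layer, least-squares output."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(W[h][j] * x[j] for j in range(d)) + b[h])
          for h in range(hidden)] for x in X]
    # Output weights from the ridge-regularized normal equations.
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (ridge if i == j else 0.0)
          for j in range(hidden)] for i in range(hidden)]
    c = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(hidden)]
    return W, b, _solve(A, c)

def elm_predict(model, X):
    W, b, beta = model
    preds = []
    for x in X:
        h = [math.tanh(sum(wj * xj for wj, xj in zip(w, x)) + bi)
             for w, bi in zip(W, b)]
        preds.append(sum(hi * bt for hi, bt in zip(h, beta)))
    return preds
```

A metaheuristic such as MOD-POA would wrap `elm_train`, searching over `W` and `b` by validation score instead of drawing them once at random.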
Affiliation(s)
- Debendra Muduli
- Department of Computer Science and Engineering, C.V. Raman Global University, Bhubaneswar, 751012, India
- Rani Kumari
- Department of Computer Science, Birla Institute of Technology, Ranchi, Jharkhand, 847226, India
- Department of Information Technology, Vi3, Image Analysis, Uppsala University, Uppsala, Sweden
- Adnan Akhunzada
- College of Computing and IT, Department of Data and Cybersecurity, University of Doha for Science and Technology, Doha, Qatar
- Korhan Cengiz
- Department of Electrical-Electronics Engineering, Istinye University, 34010, Istanbul, Turkey
- Santosh Kumar Sharma
- Department of Computer Science and Engineering, C.V. Raman Global University, Bhubaneswar, 751012, India
- Rakesh Ranjan Kumar
- Department of Computer Science and Engineering, C.V. Raman Global University, Bhubaneswar, 751012, India
- Dinesh Kumar Sah
- Department of Computer Science and Engineering, Indian Institute of Technology, Dhanbad, 826001, India.
- Division of Networked and Embedded Systems, Mälardalen University, 721 23, Västerås, Sweden.
3. Tian R, Lu G, Zhao N, Qian W, Ma H, Yang W. Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images. J Imaging Inform Med 2024; 37:1386-1400. PMID: 38381383; PMCID: PMC11300407; DOI: 10.1007/s10278-024-01036-7.
Abstract
The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors in DBT-CC and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors in DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [CI: 0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
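The stacking strategy described here, base classifiers whose out-of-fold predictions feed a meta-learner, can be sketched with toy components. Below, one-feature threshold classifiers stand in for SVM/KNN/RF, and an accuracy-weighted vote stands in for the SVM meta-classifier; everything is a generic illustration of stacking, not the study's pipeline.

```python
import statistics

class ThresholdClf:
    """Toy base learner: thresholds one feature at the midpoint of the class means."""
    def __init__(self, feat):
        self.feat = feat
        self.thr = 0.0
    def fit(self, X, y):
        pos = [x[self.feat] for x, t in zip(X, y) if t == 1]
        neg = [x[self.feat] for x, t in zip(X, y) if t == 0]
        self.thr = (statistics.mean(pos) + statistics.mean(neg)) / 2
        return self
    def predict(self, X):
        return [1 if x[self.feat] > self.thr else 0 for x in X]

def stack_predict(models, X, y, X_test, folds=5):
    """Stacking: out-of-fold base predictions train the meta level; here the
    meta-learner is an accuracy-weighted vote (a stand-in for an SVM)."""
    n = len(X)
    oof = [[0] * len(models) for _ in range(n)]
    for f in range(folds):
        tr = [i for i in range(n) if i % folds != f]
        va = [i for i in range(n) if i % folds == f]
        for m, model in enumerate(models):
            model.fit([X[i] for i in tr], [y[i] for i in tr])
            for i, p in zip(va, model.predict([X[i] for i in va])):
                oof[i][m] = p
    # Meta level: weight each base model by its out-of-fold accuracy.
    w = [sum(oof[i][m] == y[i] for i in range(n)) / n for m in range(len(models))]
    for model in models:
        model.fit(X, y)  # refit base models on all training data
    rows = zip(*[m.predict(X_test) for m in models])
    return [1 if sum(wi * p for wi, p in zip(w, row)) >= sum(w) / 2 else 0
            for row in rows]
```

Because the meta weights come from out-of-fold predictions, a base model that merely memorizes the training data gains no extra influence, which is the point of stacking over simple averaging.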
Affiliation(s)
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Department of Nuclear Medicine, General Hospital of Northern Theatre Command, No. 83 Wenhua Road, Shenhe District, Shenyang, 110016, Liaoning Province, China
- Nannan Zhao
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China.
4. Abd Elaziz M, Dahou A, Aseeri AO, Ewees AA, Al-Qaness MAA, Ibrahim RA. Cross vision transformer with enhanced Growth Optimizer for breast cancer detection in IoMT environment. Comput Biol Chem 2024; 111:108110. PMID: 38815500; DOI: 10.1016/j.compbiolchem.2024.108110.
Abstract
The recent advances in modern artificial intelligence approaches can play vital roles in the Internet of Medical Things (IoMT). Automatic diagnosis is one of the most important topics in the IoMT, including cancer diagnosis. Breast cancer is one of the top causes of death among women. Accurate diagnosis and early detection of breast cancer can improve the survival rate of patients. Deep learning models have demonstrated outstanding potential in accurately detecting and diagnosing breast cancer. This paper proposes a novel technology for breast cancer detection using CrossViT as the deep learning model and an enhanced version of the Growth Optimizer algorithm (MGO) as the feature selection method. CrossViT is a hybrid deep learning model that combines the strengths of both convolutional neural networks (CNNs) and transformers. The MGO is a meta-heuristic algorithm that selects the most relevant features from a large pool to enhance the performance of the model. The developed approach was evaluated on three publicly available breast cancer datasets and achieved competitive performance compared to other state-of-the-art methods. The results show that the combination of CrossViT and the MGO can effectively identify the most informative features for breast cancer detection, potentially assisting clinicians in making accurate diagnoses and improving patient outcomes. The MGO algorithm improves accuracy by approximately 1.59% on INbreast, 5.00% on MIAS, and 0.79% on MiniDDSM compared to other methods on each respective dataset. The developed approach can also be utilized to improve the Quality of Service (QoS) in the healthcare system as a deployable IoT-based intelligent solution or a decision-making assistance service, enhancing the efficiency and precision of diagnosis.
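The role of a feature-selection metaheuristic such as MGO can be conveyed with a much simpler wrapper: a bit-mask over features scored by accuracy minus a size penalty, improved by random bit flips. This greedy sketch only illustrates the search-over-masks idea; the real Growth Optimizer uses population-based updates, and the fitness function, penalty weight, and toy evaluator below are assumptions of this example.

```python
import random

def select_features(evaluate, n_features, iters=300, penalty=0.01, seed=0):
    """Wrapper feature selection over a binary mask: keep a random bit flip
    whenever it does not hurt the penalized fitness (accuracy minus size)."""
    rng = random.Random(seed)
    mask = [1] * n_features

    def fitness(m):
        return evaluate(m) - penalty * sum(m)

    best = fitness(mask)
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= 1   # flip one feature bit
        if sum(cand) == 0:
            continue                            # never empty the mask
        f = fitness(cand)
        if f >= best:
            mask, best = cand, f
    return mask

# Hypothetical evaluator: only features 0 and 1 carry signal.
def toy_accuracy(mask):
    return 1.0 if mask[0] and mask[1] else 0.5

selected = select_features(toy_accuracy, n_features=10)
```

In practice `evaluate` would train and validate the downstream classifier on the masked feature subset, which is what makes wrapper selection expensive and metaheuristics attractive.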
Affiliation(s)
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt; Faculty of Computer Science and Engineering, Galala University, Suze 435611, Egypt; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates; MEU Research Unit, Middle East University, Amman 11831, Jordan.
- Abdelghani Dahou
- Mathematics and Computer Science Department, University of Ahmed DRAIA, 01000, Adrar, Algeria; LDDI Laboratory, Faculty of Science and Technology, University of Ahmed DRAIA, 01000, Adrar, Algeria.
- Ahmad O Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia.
- Ahmed A Ewees
- Department of Computer, Damietta University, Damietta 34517, Egypt.
- Mohammed A A Al-Qaness
- College of Physics and Electronic Information Engineering, Zhejiang Normal University, Jinhua 321004, China; Zhejiang Optoelectronics Research Institute, Jinhua 321004, China; College of Engineering and Information Technology, Emirates International University, Sana'a 16881, Yemen.
- Rehab Ali Ibrahim
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt.
5. Lu G, Tian R, Yang W, Liu R, Liu D, Xiang Z, Zhang G. Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours. Front Med (Lausanne) 2024; 11:1402967. PMID: 39036101; PMCID: PMC11257849; DOI: 10.3389/fmed.2024.1402967.
Abstract
Objectives This study aimed to develop a deep learning radiomic model using multimodal imaging to differentiate benign and malignant breast tumours. Methods Multimodality imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign breast tumours and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on multimodal imaging, the experiment was divided into three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers, namely, SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and LR, on different feature models. Through feature fusion using ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours. Results In terms of traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity, with values of 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy with US, MG, and MRI achieved the highest sensitivity of 0.952 [0.887-1.000]. In terms of deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity, with values of 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies of US+MRI and US+MG achieved the highest specificity of 0.954 [0.867-1.000]. In terms of feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy of 0.968. In addition, stacking achieved the highest AUC and specificity, which were 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity of 1.000 [0.999-1.000] under the early fusion strategy.
Conclusion This study demonstrated the potential of integrating deep learning and radiomic features with multimodal images. As a single modality, MRI based on radiomic features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-mode or radiomic models. The traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provided more valuable information for differentiation between benign and malignant breast tumours.
Affiliation(s)
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, Liaoning, China
- Ruibo Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Dongmei Liu
- Department of Ultrasound, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zijie Xiang
- Biomedical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Guoxu Zhang
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
6. Sannasi Chakravarthy SR, Bharanidharan N, Vinoth Kumar V, Mahesh TR, Alqahtani MS, Guluwadi S. Deep transfer learning with fuzzy ensemble approach for the early detection of breast cancer. BMC Med Imaging 2024; 24:82. PMID: 38589813; PMCID: PMC11389118; DOI: 10.1186/s12880-024-01267-8.
Abstract
Breast cancer is a significant global health challenge, particularly affecting women, with higher mortality than many other cancer types. Timely detection is crucial, and recent research employing deep learning techniques shows promise for earlier detection. This research focuses on the early detection of such tumors in mammogram images using deep learning models. The paper utilized four public databases, from which 986 mammograms were taken for each of three classes (normal, benign, malignant) for evaluation. Three deep CNN models, VGG-11, Inception v3, and ResNet50, are employed as base classifiers. The research adopts an ensemble method in which the proposed approach uses a modified Gompertz function to build a fuzzy ranking of the base classification models, and their decision scores are integrated adaptively to construct the final prediction. The classification results of the proposed fuzzy ensemble approach outperform transfer learning models and other ensemble approaches such as weighted average and Sugeno integral techniques. The proposed ResNet50 ensemble network using the modified Gompertz function-based fuzzy ranking approach provides a superior classification accuracy of 98.986%.
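The fuzzy-ranking idea, passing each base model's class confidences through a Gompertz-type curve before fusing them, can be sketched as follows. The standard Gompertz function and the product aggregation below are generic stand-ins; the paper's modified function and adaptive integration differ, so treat this purely as an illustration of the mechanism.

```python
import math

def gompertz(score, a=1.0, b=1.0, c=1.0):
    """Standard Gompertz growth curve a*exp(-b*exp(-c*x)); monotone in score."""
    return a * math.exp(-b * math.exp(-c * score))

def fuzzy_ensemble(decision_scores):
    """decision_scores: one list of per-class confidences per base classifier.
    Re-rank each classifier's scores through the Gompertz curve, fuse by
    product across classifiers, and return the winning class index."""
    n_classes = len(decision_scores[0])
    fused = [1.0] * n_classes
    for scores in decision_scores:
        fused = [f * gompertz(s) for f, s in zip(fused, scores)]
    return max(range(n_classes), key=fused.__getitem__)

# Three hypothetical base CNNs scoring (normal, benign, malignant):
scores = [
    [0.1, 0.2, 0.7],
    [0.2, 0.3, 0.5],
    [0.1, 0.1, 0.8],
]
winner = fuzzy_ensemble(scores)
```

Because the Gompertz curve is monotone but nonlinear, it compresses differences among low-confidence scores while preserving the ordering, which is the usual motivation for rank-based fusion over raw averaging.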
Affiliation(s)
- S R Sannasi Chakravarthy
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India
- N Bharanidharan
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- T R Mahesh
- Department of Computer Science and Engineering, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, Abha, 61421, Saudi Arabia
- Suresh Guluwadi
- Adama Science and Technology University, Adama, 302120, Ethiopia.
7. Aguerchi K, Jabrane Y, Habba M, El Hassani AH. A CNN Hyperparameters Optimization Based on Particle Swarm Optimization for Mammography Breast Cancer Classification. J Imaging 2024; 10:30. PMID: 38392079; PMCID: PMC10889268; DOI: 10.3390/jimaging10020030.
Abstract
Breast cancer is considered one of the most common types of cancer among females in the world, with a high mortality rate. Medical imaging is still one of the most reliable tools to detect breast cancer; unfortunately, manual image reading is time-consuming. This paper proposes a new deep learning method based on Convolutional Neural Networks (CNNs). CNNs are widely used for image classification, but determining accurate hyperparameters and architectures remains a challenging task. In this work, a highly accurate CNN model to detect breast cancer by mammography was developed. The proposed method uses the Particle Swarm Optimization (PSO) algorithm to search for suitable hyperparameters and an architecture for the CNN model. The CNN model using PSO achieved success rates of 98.23% and 97.98% on the DDSM and MIAS datasets, respectively. The experimental results showed that the proposed CNN model gave the best accuracy values in comparison with other studies in the field. As a result, CNN models for mammography classification can now be created automatically. The proposed method can be considered a powerful technique for breast cancer prediction.
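The PSO loop that drives such a hyperparameter search fits in a few lines. Below it minimizes a smooth surrogate standing in for "validation error as a function of two hyperparameters"; in the paper the objective would instead train and score a candidate CNN, and the inertia/acceleration constants here are common textbook defaults, not the authors' settings.

```python
import random

def pso(objective, lo, hi, particles=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Canonical particle swarm: velocities blend inertia, a pull toward each
    particle's personal best, and a pull toward the global best; positions
    are clamped to the search box [lo, hi]."""
    rng = random.Random(seed)
    dim = len(lo)
    X = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    pfit = [objective(x) for x in X]
    g = min(range(particles), key=pfit.__getitem__)
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi[d], max(lo[d], X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f < gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit

# Hypothetical surrogate for validation error over (log10 learning rate, dropout),
# with the optimum at (-3, 0.5).
def surrogate(h):
    return (h[0] + 3.0) ** 2 + (h[1] - 0.5) ** 2

best, err = pso(surrogate, lo=[-6.0, 0.0], hi=[0.0, 1.0])
```

Each objective call is one full CNN training run in the real setting, which is why swarm size and iteration count are kept small in such studies.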
Affiliation(s)
- Younes Jabrane
- MSC Laboratory, Cadi Ayyad University, Marrakech 40000, Morocco
- Maryam Habba
- National School of Applied Sciences of Safi, Cadi Ayyad University, Safi 46000, Morocco
- Amir Hajjam El Hassani
- Nanomedicine Imagery & Therapeutics Laboratory, EA4662-Bourgogne-Franche-Comté University, 90010 Belfort, France
8. Oyelade ON, Irunokhai EA, Wang H. A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification. Sci Rep 2024; 14:692. PMID: 38184742; PMCID: PMC10771515; DOI: 10.1038/s41598-024-51329-8.
Abstract
There is wide application of deep learning techniques to unimodal medical image analysis, with significant classification accuracy observed. However, real-world diagnosis of some chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy represent a few of the multimodal visual streams considered by physicians in isolating cases of breast cancer. Unfortunately, most studies applying deep learning techniques to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable considering the challenging nature of multimodal image abnormality classification, where the fusion of high-dimensional heterogeneous learned features must be projected into a common representation space. This paper presents a novel deep learning approach combining a dual/twin convolutional neural network (TwinCNN) framework to address the challenge of breast cancer image classification from multiple modalities. First, modality-based feature learning is achieved by extracting both low- and high-level features using the networks embedded within TwinCNN. Secondly, to address the notorious problem of high dimensionality associated with the extracted features, a binary optimization method is adapted to effectively eliminate non-discriminant features in the search space. Furthermore, a novel feature fusion method is applied that computationally leverages the ground-truth and predicted labels for each sample to enable multimodality classification. To evaluate the proposed method, digital mammography images and digital histopathology breast biopsy samples from the benchmark MIAS and BreakHis datasets, respectively, were used.
Experimental results showed that classification accuracy and area under the curve (AUC) for the single modalities were 0.755 and 0.861871 for histology, and 0.791 and 0.638 for mammography. The study further investigated the classification accuracy of the fused feature method, obtaining 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirm that multimodal image classification based on the combination of image features and predicted labels improves performance. In addition, the study shows that feature dimensionality reduction based on a binary optimizer supports the elimination of non-discriminant features capable of bottlenecking the classifier.
Affiliation(s)
- Olaide N Oyelade
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK.
- Hui Wang
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK
9. Li W, Gou F, Wu J. Artificial intelligence auxiliary diagnosis and treatment system for breast cancer in developing countries. J Xray Sci Technol 2024; 32:395-413. PMID: 38189731; DOI: 10.3233/xst-230194.
Abstract
BACKGROUND In many developing countries, a significant number of breast cancer patients are unable to receive timely treatment due to a large population base, high patient numbers, and limited medical resources. OBJECTIVE This paper proposes a breast cancer assisted diagnosis system based on electronic medical records. The goal of this system is to address the limitations of existing systems, which primarily rely on structured electronic records and may miss crucial information stored in unstructured records. METHODS The proposed system utilizes breast cancer enhanced convolutional neural networks with semantic initialization filters (BC-INIT-CNN). It extracts highly relevant tumor markers from unstructured medical records to aid in breast cancer staging diagnosis and effectively utilizes the important information present in unstructured records. RESULTS The model's performance is assessed using various evaluation metrics, such as accuracy, ROC curves, and precision-recall curves. Comparative analysis demonstrates that the BC-INIT-CNN model outperforms several existing methods in terms of accuracy and computational efficiency. CONCLUSIONS The proposed breast cancer assisted diagnosis system based on BC-INIT-CNN showcases the potential to address the challenges faced by developing countries in providing timely treatment to breast cancer patients. By leveraging unstructured medical records and extracting relevant tumor markers, the system enables accurate staging diagnosis and enhances the utilization of valuable information.
Affiliation(s)
- Wenxiu Li
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton VIC, Australia
10. Fatima M, Khan MA, Shaheen S, Almujally NA, Wang S. B2C3NetF2: Breast cancer classification using an end-to-end deep learning feature fusion and satin bowerbird optimization controlled Newton Raphson feature selection. CAAI Trans Intell Technol 2023; 8:1374-1390. DOI: 10.1049/cit2.12219.
Abstract
Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks such as skin cancer, colorectal cancer, brain tumour, cardiac disease, and breast cancer (BrC). Manual diagnosis of medical issues always requires an expert and is also expensive, so developing computer diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence; it is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is low; hence, early detection is essential, increasing the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) a pre-trained ResNet-101 model, modified according to the selected dataset classes, (iv) deep transfer learning based model training for feature extraction, (v) fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton Raphson algorithm, with final classification using 10 machine learning classifiers. Experiments were carried out on the widely used, publicly available CBIS-DDSM dataset, obtaining a best accuracy of 94.5% along with improved computation time. The comparison depicts that the presented method surpasses current state-of-the-art approaches.
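The Newton-Raphson update that the feature-selection scheme in this entry is named after is the classic root-finding iteration x_new = x - f(x)/f'(x). A minimal generic version (not the paper's controlled variant) is:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Classic Newton-Raphson root finding: follow the tangent line to zero."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:   # converged when the update is negligible
            break
    return x

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Its quadratic convergence near a root is what makes it attractive as a refinement step inside slower global searches such as the Satin Bowerbird metaheuristic.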
Affiliation(s)
- Mamuna Fatima
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Saima Shaheen
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Shui-Hua Wang
- Department of Mathematics, University of Leicester, Leicester, UK
11. Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023; 15:5216. PMID: 37958390; PMCID: PMC10650187; DOI: 10.3390/cancers15215216.
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt; (G.A.S.)
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt; (G.A.S.)
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt (A.E.T.)
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA;
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt;
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates; (M.G.)
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates; (M.G.)
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates; (M.G.)
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt (A.E.T.)
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt (A.E.T.)
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt (A.E.T.)
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
12
Oliveira-Saraiva D, Mendes J, Leote J, Gonzalez FA, Garcia N, Ferreira HA, Matela N. Make It Less Complex: Autoencoder for Speckle Noise Removal-Application to Breast and Lung Ultrasound. J Imaging 2023; 9:217. [PMID: 37888324 PMCID: PMC10607564 DOI: 10.3390/jimaging9100217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Revised: 09/28/2023] [Accepted: 10/07/2023] [Indexed: 10/28/2023] Open
Abstract
Ultrasound (US) imaging is used in the diagnosis and monitoring of COVID-19 and breast cancer. The presence of Speckle Noise (SN) is a downside to its usage, since it decreases lesion conspicuity. Filters can be used to remove SN, but they involve time-consuming computation and parameter tuning. Several researchers have developed complex Deep Learning (DL) models (150,000-500,000 parameters) for the removal of simulated, added SN, without focusing on the real-world application of removing naturally occurring SN from original US images. Here, a simpler (<30,000 parameters) Convolutional Neural Network Autoencoder (CNN-AE) to remove SN from US images of the breast and lung is proposed. To this end, simulated SN was added to such US images at four different noise levels (σ = 0.05, 0.1, 0.2, 0.5). The original US images (N = 1227, breast + lung) were given as targets, while the noised US images served as the input. The Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were used to compare the output of the CNN-AE and of the Median and Lee filters with the original US images; the CNN-AE outperformed these classic filters at every noise level. To see how well the model removed naturally occurring SN from the original US images and to test its real-world applicability, a CNN model that differentiates malignant from benign breast lesions was developed. Several inputs were used to train the model (original, CNN-AE-denoised, filter-denoised, and noised US images). The original US images yielded the highest Matthews Correlation Coefficient (MCC) and accuracy values, while for sensitivity and negative predictive value, the CNN-AE-denoised US images (for higher σ values) achieved the best results.
Our results demonstrate that the application of a simpler DL model for SN removal results in fewer misclassifications of malignant breast lesions in comparison to the use of original US images and the application of the Median filter. This shows that the use of a less-complex model and the focus on clinical practice applicability are relevant and should be considered in future studies.
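The noise simulation and PSNR evaluation described above can be sketched in a few lines of NumPy. The multiplicative-Gaussian speckle model and the synthetic image are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def add_speckle(img, sigma, rng):
    """Multiplicative speckle: I_noisy = I * (1 + n), with n ~ N(0, sigma)."""
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0.0, 1.0)

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(max_val^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

rng = np.random.default_rng(42)
img = rng.random((64, 64))                      # synthetic stand-in for a US frame
p_low = psnr(img, add_speckle(img, 0.05, rng))  # mild noise -> high PSNR
p_high = psnr(img, add_speckle(img, 0.5, rng))  # severe noise -> low PSNR
```

A denoiser trained on such (noised, original) pairs is judged good when its output's PSNR against the original exceeds that of the noised input.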
Affiliation(s)
- Duarte Oliveira-Saraiva
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal (N.M.)
- LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal;
- João Mendes
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal (N.M.)
- LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal;
- João Leote
- Critical Care Department, Hospital Garcia de Orta E.P.E, 2805-267 Almada, Portugal
- Nuno Garcia
- LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal;
- Hugo Alexandre Ferreira
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal (N.M.)
- Nuno Matela
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal (N.M.)
13
Luo Y, Lu Z, Liu L, Huang Q. Deep fusion of human-machine knowledge with attention mechanism for breast cancer diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/13/2023]
14
Cruz-Ramos C, García-Avila O, Almaraz-Damian JA, Ponomaryov V, Reyes-Reyes R, Sadovnychiy S. Benign and Malignant Breast Tumor Classification in Ultrasound and Mammography Images via Fusion of Deep Learning and Handcraft Features. Entropy (Basel) 2023; 25:991. [PMID: 37509938 PMCID: PMC10378567 DOI: 10.3390/e25070991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/15/2023] [Accepted: 06/26/2023] [Indexed: 07/30/2023]
Abstract
Breast cancer is a disease that affects women in countries around the world. Its root cause is particularly challenging to determine, and early detection is necessary to reduce the death rate, given the high risks associated with the disease; treatment in the early period can increase women's life expectancy and quality of life. Computer-Aided Diagnosis (CAD) systems can classify benign and malignant breast cancer lesions using image processing technologies and tools, giving specialist doctors a second opinion and a more precise view with fewer steps in their diagnosis. This study presents a novel CAD system for automated breast cancer diagnosis. The proposed method consists of several stages. In the preprocessing stage, an image is segmented and a mask of the lesion is obtained; in the next stage, deep learning features are extracted by a CNN, specifically DenseNet 201. Additionally, handcrafted features (Histogram of Oriented Gradients (HOG)-based, ULBP-based, perimeter, area, eccentricity, and circularity) are obtained from the image. The designed hybrid system thus uses a CNN architecture for deep learning features alongside traditional methods that compute several handcrafted features reflecting the medical properties of the disease, with later fusion via proposed statistical criteria. In the fusion stage, where deep learning and handcrafted features are analyzed, genetic algorithms and a mutual information selection algorithm, followed by several classifiers (XGBoost, AdaBoost, Multilayer Perceptron (MLP)) based on stochastic measures, are applied to choose the most informative group of features. In the experimental validation of the two modalities of the CAD design, corresponding to two types of medical study, mammography (MG) and ultrasound (US), the mini-DDSM (Digital Database for Screening Mammography) and BUSI (Breast Ultrasound Images Dataset) databases were used. The novel CAD systems were evaluated against recent state-of-the-art systems and demonstrated better performance on commonly used criteria, obtaining ACC of 97.6%, PRE of 98%, Recall of 98%, F1-Score of 98%, and IBA of 95% for the abovementioned datasets.
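The geometric handcrafted descriptors named above (area, perimeter, circularity) can be computed directly from a binary lesion mask. This is a minimal NumPy sketch with a square toy mask; the 4-neighbour boundary definition is an assumption for illustration:

```python
import numpy as np

def shape_features(mask):
    """Simple handcrafted shape descriptors from a binary lesion mask."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    # boundary pixels: in-mask pixels with at least one 4-neighbour outside
    padded = np.pad(mask, 1)
    core = (padded[:-2, 1:-1] & padded[2:, 1:-1]
            & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~core).sum())
    # circularity = 4*pi*A / P^2 (1.0 for a perfect circle, lower otherwise)
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circularity}

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True  # 10x10 square "lesion"
feats = shape_features(mask)
```

Such descriptors would then be concatenated with the CNN features before the fusion and selection stages.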
Affiliation(s)
- Clara Cruz-Ramos
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Oscar García-Avila
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Jose-Agustin Almaraz-Damian
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Volodymyr Ponomaryov
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Rogelio Reyes-Reyes
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Sergiy Sadovnychiy
- Instituto Mexicano del Petroleo, Lazaro Cardenas Ave. # 152, Mexico City 07730, Mexico
15
Muthamilselvan S, Palaniappan A. BrcaDx: precise identification of breast cancer from expression data using a minimal set of features. Front Bioinform 2023; 3:1103493. [PMID: 37287543 PMCID: PMC10242386 DOI: 10.3389/fbinf.2023.1103493] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 05/15/2023] [Indexed: 06/09/2023] Open
Abstract
Background: Breast cancer is the foremost cancer in worldwide incidence, surpassing lung cancer notwithstanding the gender bias. One in four cancer cases among women is attributable to cancers of the breast, which are also the leading cause of cancer death in women. Reliable options for the early detection of breast cancer are needed. Methods: Using public-domain datasets, we screened transcriptomic profiles of breast cancer samples and identified progression-significant linear and ordinal model genes using stage-informed models. We then applied a sequence of machine learning techniques, namely feature selection, principal components analysis, and k-means clustering, to train a learner to discriminate "cancer" from "normal" based on expression levels of the identified biomarkers. Results: Our computational pipeline yielded an optimal set of nine biomarker features for training the learner, namely NEK2, PKMYT1, MMP11, CPA1, COL10A1, HSD17B13, CA4, MYOC, and LYVE1. Validation of the learned model on an independent test dataset yielded 99.5% accuracy, and blind validation on an out-of-domain external dataset yielded a balanced accuracy of 95.5%, demonstrating that the model has effectively reduced the dimensionality of the problem and learnt the solution. The model was rebuilt using the full dataset and deployed as a web app for non-profit purposes at: https://apalania.shinyapps.io/brcadx/. To our knowledge, this is the best-performing freely available tool for the high-confidence diagnosis of breast cancer, and it represents a promising aid to medical diagnosis.
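The "feature selection → PCA → learner" sequence maps naturally onto a scikit-learn pipeline. The sketch below uses synthetic expression data and a logistic-regression stand-in for the trained learner (the paper's k-means step is omitted, and the nine-gene panel is mimicked by nine informative columns):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)                 # "cancer" vs "normal"
# 50 synthetic "genes": the first 9 carry class signal, the rest are noise
X = rng.normal(size=(n, 50))
X[:, :9] += 1.5 * y[:, None]

pipe = make_pipeline(
    SelectKBest(f_classif, k=9),   # keep a small biomarker panel
    PCA(n_components=3),           # compress the correlated markers
    LogisticRegression(),          # stand-in for the trained learner
)
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

Wrapping the steps in one pipeline ensures selection and PCA are refit inside each cross-validation fold, avoiding information leakage.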
16
Iqbal S, N. Qureshi A, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Arch Comput Methods Eng 2023; 30:3173-3233. [PMID: 37260910 PMCID: PMC10071480 DOI: 10.1007/s11831-023-09899-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/19/2023] [Indexed: 06/02/2023]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, especially object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages during augmentation of the data. Recently, interesting and inspiring ideas in Deep Learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs, and innovations in internal architecture and representational style have improved performance further. This survey focuses on the internal taxonomy of deep learning and different convolutional neural network models, especially their depth and width, in addition to CNN components, applications, and current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
17
Jabeen K, Khan MA, Balili J, Alhaisoni M, Almujally NA, Alrashidi H, Tariq U, Cha JH. BC2NetRF: Breast Cancer Classification from Mammogram Images Using Enhanced Deep Learning Features and Equilibrium-Jaya Controlled Regula Falsi-Based Features Selection. Diagnostics (Basel) 2023; 13:1238. [PMID: 37046456 PMCID: PMC10093018 DOI: 10.3390/diagnostics13071238] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 03/13/2023] [Accepted: 03/23/2023] [Indexed: 03/29/2023] Open
Abstract
One of the most frequent cancers in women is breast cancer; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died from the disease. Early diagnosis can help reduce this mortality rate. However, the manual diagnosis of this cancer from mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature, but they still face challenges such as similarities between cancerous and non-cancerous regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then employed for dataset augmentation, a step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained EfficientNet-b0 model is fine-tuned by adding a few new layers. The fine-tuned model is trained separately on original and enhanced images using deep transfer learning concepts with static hyperparameter initialization. Deep features are extracted from the average pooling layer and fused using a new serial-based approach. The fused features are then optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features are finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively. A comparison with state-of-the-art (SOTA) techniques shows that the proposed framework improves accuracy, and a confidence-interval-based analysis shows that its results are consistent.
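The Regula Falsi termination function named above is the classical false-position root finder. A minimal implementation, independent of the Equilibrium-Jaya wrapper and shown here purely as a sketch, looks like:

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """False-position root finding on a bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    c = a
    for _ in range(max_iter):
        # intersection of the secant through (a, fa), (b, fb) with the x-axis
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:       # root lies in [a, c]
            b, fb = c, fc
        else:                  # root lies in [c, b]
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**2 - 2, 0.0, 2.0)  # approximates sqrt(2)
```

In the paper's selection algorithm, such a bracketing criterion decides when the optimizer has converged and iteration can stop.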
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Jamel Balili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Higher Institute of Applied Science and Technology of Sousse (ISSATS), Cité Taffala (Ibn Khaldoun) 4003 Sousse, University of Sousse, Sousse 4000, Tunisia
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha'il, Ha'il 81451, Saudi Arabia
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Huda Alrashidi
- Faculty of Information Technology and Computing, Arab Open University, Ardiya 92400, Kuwait
- Usman Tariq
- Department of Management, CoBA, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
18
Elkorany AS, Elsharkawy ZF. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci Rep 2023; 13:2663. [PMID: 36792720 PMCID: PMC9932150 DOI: 10.1038/s41598-023-29875-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 02/11/2023] [Indexed: 02/17/2023] Open
Abstract
Breast cancer (BC) is becoming more widespread every day; therefore, its early discovery can save a patient's life. Mammography is frequently used to diagnose BC, and the classification of mammography region-of-interest (ROI) patches (i.e., normal, malignant, or benign) is the most crucial phase in this process, since it helps medical professionals identify BC. In this paper, a hybrid technique that carries out a quick and precise classification appropriate for a BC diagnosis system is proposed and tested. Three different Deep Learning (DL) Convolutional Neural Network (CNN) models, namely Inception-V3, ResNet50, and AlexNet, are used in the current study as feature extractors. To extract useful features from each CNN model, the suggested method uses the Term Variance (TV) feature selection algorithm. The TV-selected features from each CNN model are combined, and a further selection is performed to obtain the most useful features, which are then sent to a multiclass support vector machine (MSVM) classifier. The Mammographic Image Analysis Society (MIAS) image database was used to test the effectiveness of the suggested method: the mammogram's ROI is retrieved and image patches are extracted from it. Based on tests of several TV feature subsets, the 600-feature subset with the highest classification performance was identified. Higher classification accuracy (CA) is attained compared with previously published work: the average CA is 97.81% for 70% training, 98% for 80% training, and reaches its optimal value for 90% training. Finally, an ablation analysis is performed to emphasize the role of the proposed network's key parameters.
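Read plainly, Term Variance selection ranks features by their variance across samples and keeps the top k. The sketch below assumes that plain reading, on synthetic features where three columns are deliberately given high variance:

```python
import numpy as np

def term_variance_select(X, k):
    """Keep the k features with the highest variance across samples
    (a minimal reading of TV selection; plain variance is an assumption)."""
    variances = X.var(axis=0)
    idx = np.sort(np.argsort(variances)[::-1][:k])  # top-k, in column order
    return idx, X[:, idx]

rng = np.random.default_rng(0)
X = rng.normal(scale=0.1, size=(100, 20))
X[:, [3, 7, 11]] += rng.normal(scale=2.0, size=(100, 3))  # high-variance columns
idx, Xk = term_variance_select(X, 3)
```

Applied per CNN feature vector, this step shrinks each extractor's output before the combined features reach the MSVM classifier.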
Affiliation(s)
- Ahmed S. Elkorany
- Department of Electronics and Electrical Comm. Eng., Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Zeinab F. Elsharkawy
- Engineering Department, Nuclear Research Center, Egyptian Atomic Energy Authority, Cairo, Egypt
19
Mendes J, Matela N, Garcia N. Avoiding Tissue Overlap in 2D Images: Single-Slice DBT Classification Using Convolutional Neural Networks. Tomography 2023; 9:398-412. [PMID: 36828384 PMCID: PMC9962912 DOI: 10.3390/tomography9010032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 02/08/2023] [Accepted: 02/13/2023] [Indexed: 02/17/2023] Open
Abstract
Breast cancer was the most diagnosed cancer worldwide in 2020. Screening programs based on mammography aim to achieve early diagnosis, which is of extreme importance when it comes to cancer. Mammography has several flaws, one of the most important being tissue overlap, which can result in both lesion masking and fake-lesion appearance. To overcome this, digital breast tomosynthesis (DBT) takes images (slices) at different angles that are later reconstructed into a 3D image. Bearing in mind that the slices are planar images where tissue overlap does not occur, the goal of this work was to develop a deep learning model that could, based on those slices, classify lesions as benign or malignant. The developed model was based on the work of Muduli et al., with a slight change in the fully connected layers and in the regularization used. In total, 77 DBT volumes (39 benign and 38 malignant) were available. From each volume, nine slices were taken: one where the lesion was most visible, plus four above and four below it. To increase the quantity and variability of the data, common data augmentation techniques (rotation, translation, mirroring) were applied to the original images three times, yielding 2772 images for training. Data augmentation was then applied two more times, producing one set for validation and one set for testing. Our model achieved an accuracy of 93.2% on the testing set, while the values of sensitivity, specificity, precision, F1-score, and Cohen's kappa were 92%, 94%, 94%, 94%, and 0.86, respectively. Given these results, this work suggests that single-slice DBT can compare with state-of-the-art studies and hints that, with more data, better augmentation techniques, and transfer learning, it might surpass the use of mammograms in this type of study.
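The augmentation recipe above (rotation, translation, mirroring, applied three times per slice) can be sketched in NumPy. The 90-degree rotation steps and the ±5-pixel shift range are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def augment(img, rng):
    """One random augmentation pass: rotation, translation, mirroring."""
    out = np.rot90(img, k=rng.integers(0, 4))            # random 90-degree rotation
    out = np.roll(out, shift=rng.integers(-5, 6, size=2), axis=(0, 1))  # translation
    if rng.random() < 0.5:
        out = np.fliplr(out)                              # horizontal mirroring
    return out

rng = np.random.default_rng(0)
slices = [rng.random((64, 64)) for _ in range(9)]        # 9 slices per volume
augmented = [augment(s, rng) for s in slices for _ in range(3)]  # 3 passes each
```

Each original slice thus yields three augmented variants, tripling the training set exactly as the paper's 9 × 3 scheme does per volume.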
Affiliation(s)
- João Mendes
- Faculdade de Ciências, Instituto de Biofísica e Engenharia Biomédica, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Faculdade de Ciências, LASIGE, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Nuno Matela
- Faculdade de Ciências, Instituto de Biofísica e Engenharia Biomédica, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Nuno Garcia
- Faculdade de Ciências, LASIGE, Universidade de Lisboa, 1749-016 Lisboa, Portugal
20
High accuracy hybrid CNN classifiers for breast cancer detection using mammogram and ultrasound datasets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
21
Applying Explainable Machine Learning Models for Detection of Breast Cancer Lymph Node Metastasis in Patients Eligible for Neoadjuvant Treatment. Cancers (Basel) 2023; 15:cancers15030634. [PMID: 36765592 PMCID: PMC9913601 DOI: 10.3390/cancers15030634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 01/16/2023] [Accepted: 01/17/2023] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Due to recent changes in breast cancer treatment strategy, significantly more patients are treated with neoadjuvant systemic therapy (NST). Radiological methods do not precisely determine axillary lymph node status, with up to 30% of patients being misdiagnosed; hence, supplementary methods for lymph node status assessment are needed. This study aimed to apply and evaluate machine learning models on clinicopathological data, with a focus on patients meeting NST criteria, for lymph node metastasis prediction. METHODS From the total breast cancer patient data (n = 8381), 719 patients were identified as eligible for NST. Machine learning models were applied to the NST-criteria group and to the total study population. Model explainability was obtained by calculating Shapley values. RESULTS In the NST-criteria group, random forest achieved the highest performance (AUC: 0.793 [0.713, 0.865]), while in the total study population, XGBoost performed best (AUC: 0.762 [0.726, 0.795]). Shapley values identified tumor size, Ki-67, and patient age as the most important predictors. CONCLUSION Tree-based models achieve good performance in assessing lymph node status. Such models can lead to more accurate disease stage prediction and consequently better treatment selection, especially for NST patients, where radiological and clinical findings are often the only means of lymph node assessment.
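A hedged sketch of the tree-based modelling step, on synthetic stand-ins for clinicopathological predictors. Permutation importance is used here in place of Shapley values, which require the separate shap package; everything else (features, coefficients, sample size) is invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
# columns 0-2 mimic informative predictors (e.g. tumour size, Ki-67, age);
# columns 3-5 are pure noise covariates
X = rng.normal(size=(n, 6))
logit = 1.4 * X[:, 0] + 1.0 * X[:, 1] + 0.7 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # node status

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])
imp = permutation_importance(rf, Xte, yte, random_state=0).importances_mean
```

Ranking `imp` then plays the role the Shapley analysis plays in the study: identifying which predictors drive the model's lymph-node predictions.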
22
Development of an Artificial Intelligence-Based Breast Cancer Detection Model by Combining Mammograms and Medical Health Records. Diagnostics (Basel) 2023; 13:diagnostics13030346. [PMID: 36766450 PMCID: PMC9913958 DOI: 10.3390/diagnostics13030346] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 01/10/2023] [Accepted: 01/13/2023] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI)-based computational models that analyze breast cancer have been developed for decades. The present study investigated the accuracy and efficiency of combining mammography images and clinical records for breast cancer detection using machine learning and deep learning classifiers. METHODS This study was verified using 731 images from 357 women who underwent at least one mammogram and had clinical records for at least six months before mammography. The model was trained on mammograms and clinical variables to discriminate benign and malignant lesions. Multiple pre-trained deep CNN models, including Xception, VGG16, ResNet-v2, ResNet50, and CNN3, were employed to detect cancer in mammograms. Machine learning models were constructed on the clinical dataset using k-nearest neighbor (KNN), support vector machine (SVM), random forest (RF), Artificial Neural Network (ANN), and gradient boosting machine (GBM). RESULTS The combined model achieved an accuracy of 84.5% with a specificity of 78.1%, a sensitivity of 89.7%, and an AUC of 0.88. When trained on mammography image data alone, the model achieved a lower accuracy (72.5% vs. 84.5%). CONCLUSIONS A breast cancer-detection model combining machine learning and deep learning was developed in this study with satisfactory results, and this model has potential clinical applications.
23
Altameem A, Mahanty C, Poonia RC, Saudagar AKJ, Kumar R. Breast Cancer Detection in Mammography Images Using Deep Convolutional Neural Networks and Fuzzy Ensemble Modeling Techniques. Diagnostics (Basel) 2022; 12:1812. [PMID: 36010164 PMCID: PMC9406655 DOI: 10.3390/diagnostics12081812] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 07/10/2022] [Accepted: 07/13/2022] [Indexed: 11/17/2022] Open
Abstract
Breast cancer has evolved into the most lethal illness impacting women all over the globe. It can be detected early, which reduces mortality and increases the chances of a full recovery, and researchers around the world are working on breast cancer screening tools based on medical imaging. Deep learning approaches have piqued the attention of many in the medical imaging field due to their rapid growth. In this research, mammography images were utilized to detect breast cancer. We used four mammography imaging datasets comprising a similar number (1145) of normal, benign, and malignant images, with various deep CNN models (Inception V4, ResNet-164, VGG-11, and DenseNet121) as base classifiers. The proposed technique employs an ensemble approach in which the Gompertz function is used to build fuzzy rankings of the base classification techniques, and the decision scores of the base models are adaptively combined to construct the final predictions. The proposed fuzzy ensemble techniques outperform each individual transfer learning methodology as well as multiple advanced ensemble strategies (Weighted Average, Sugeno Integral) in terms of prediction and accuracy. The suggested Inception V4 ensemble model with the fuzzy-rank-based Gompertz function reaches a 99.32% accuracy rate. We believe that the suggested approach will be of tremendous value to healthcare practitioners in identifying breast cancer patients early on, perhaps leading to an immediate diagnosis.
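One plausible reading of the fuzzy-rank ensemble: map each base model's class scores through a Gompertz-shaped function into penalties and pick the class with the smallest fused penalty. The exact Gompertz parameterisation below is an assumption for illustration, not the paper's published formula:

```python
import numpy as np

def gompertz_rank_fusion(score_stack):
    """Fuse base-model class scores with a Gompertz-shaped fuzzy rank.
    score_stack: (n_models, n_samples, n_classes) softmax-like scores.
    The penalty 1 - exp(-exp(-2*s)) decreases as the score s grows, so a
    lower fused penalty means stronger, more unanimous support for a class."""
    penalty = 1.0 - np.exp(-np.exp(-2.0 * score_stack))
    fused = penalty.sum(axis=0)        # adaptively combine across base models
    return fused.argmin(axis=1)        # class with the smallest fused penalty

scores = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # base model 1: scores for 2 samples, 2 classes
    [[0.7, 0.3], [0.4, 0.6]],   # base model 2
    [[0.6, 0.4], [0.1, 0.9]],   # base model 3
])
pred = gompertz_rank_fusion(scores)
```

Because the penalty is nonlinear in the score, confident base models are weighted more heavily than a plain score average would allow.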
Affiliation(s)
- Ayman Altameem
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, Riyadh 11533, Saudi Arabia
- Chandrakanta Mahanty
- Department of Computer Science and Engineering, GIET University, Odisha 765022, India
- Ramesh Chandra Poonia
- Department of Computer Science, CHRIST (Deemed to be University), Bangalore 560029, India
- Raghvendra Kumar
- Department of Computer Science and Engineering, GIET University, Odisha 765022, India
24
Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503 PMCID: PMC9322973 DOI: 10.3390/cancers14143442]
Abstract
Breast cancer is one of the main causes of death for women worldwide, accounting for 16% of the malignant lesions diagnosed globally. It is therefore of paramount importance to diagnose these lesions at the earliest possible stage in order to maximize the chances of survival. While several works present selected topics in this area, none of them offers a complete panorama, that is, from image generation to image interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence concepts and categorical data to generate modern alternatives that can achieve the accuracy, precision, and reliability expected to mitigate misclassifications.
Affiliation(s)
- Jesus A. Basurto-Hurtado
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Irving A. Cruz-Albarran
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Manuel Toledano-Ayala
- División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico
- Mario Alberto Ibarra-Manzano
- Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
- Luis A. Morales-Hernandez
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Carlos A. Perez-Ramirez
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
25
Kumar P, Kumar A, Srivastava S, Padma Sai Y. A novel bi-modal extended Huber loss function based refined mask RCNN approach for automatic multi instance detection and localization of breast cancer. Proc Inst Mech Eng H 2022; 236:1036-1053. [DOI: 10.1177/09544119221095416]
Abstract
Breast cancer is an extremely aggressive cancer in women. Its abnormalities can be observed in the form of masses, calcifications, and lumps, and early detection is needed to reduce the mortality rate. The present paper proposes a novel bi-modal extended Huber loss function based refined mask regional convolutional neural network for automatic multi-instance detection and localization of breast cancer. To refine the proposed method and increase its efficacy, three modifications are made. First, a pre-processing step is performed for mammogram and ultrasound breast images. Second, the features of the region proposal network are separately mapped for an accurate region of interest. Third, to reduce overfitting and speed up convergence, an extended Huber loss function is used in place of Smooth L1(x) in the boundary loss. To extend the functionality of the Huber loss, the delta parameter is automated with the aid of the median absolute deviation combined with a grid search algorithm, which provides the optimum value of delta instead of a user-supplied value. The proposed method is compared with pre-existing methods in terms of accuracy, true positive rate, true negative rate, precision, F-score, balanced classification rate, Youden's index, Jaccard index, and Dice coefficient on the CBIS-DDSM and an ultrasound database. The experimental results show that the proposed method is well suited for multi-instance detection, localization, and classification of breast cancer. It can serve as a diagnostic medium for clinical purposes, leading to a precise diagnosis of breast cancer abnormalities.
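The loss substitution this abstract describes can be sketched minimally as follows. Only the Huber loss and the MAD-based seeding of delta are shown; the paper's additional grid search over delta and the integration into Mask R-CNN's boundary loss are omitted, and the MAD consistency scale is an assumption:

```python
import numpy as np

def huber_loss(residuals, delta):
    """Huber loss: quadratic for |r| <= delta, linear beyond it."""
    r = np.abs(np.asarray(residuals, dtype=float))
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

def mad_delta(residuals, scale=1.4826):
    """Estimate delta robustly from the median absolute deviation of the
    residuals (scale makes MAD consistent with the standard deviation
    for Gaussian data)."""
    r = np.asarray(residuals, dtype=float)
    return scale * np.median(np.abs(r - np.median(r)))
```

Seeding delta from the data in this way avoids a hand-tuned, user-supplied value, which is the motivation the abstract gives for automating the parameter.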
Affiliation(s)
- Pradeep Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Abhinav Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Subodh Srivastava
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Yarlagadda Padma Sai
- Department of Electronics and Communication Engineering, VNR VJIET, Hyderabad, Telangana, India
26
Oza P, Sharma P, Patel S, Adedoyin F, Bruno A. Image Augmentation Techniques for Mammogram Analysis. J Imaging 2022; 8:141. [PMID: 35621905 PMCID: PMC9147240 DOI: 10.3390/jimaging8050141]
Abstract
Research in the medical imaging field using deep learning approaches has grown steadily. Scientific findings reveal that the performance of supervised deep learning methods depends heavily on training set size, and training sets must be manually annotated by expert radiologists, which is a tiring and time-consuming task. Therefore, most freely accessible biomedical image datasets are small, and building large medical image datasets is challenging due to privacy and legal issues. Consequently, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular methods to mitigate this issue is data augmentation. This technique increases training set size by applying various transformations and has been shown to improve model performance on new data. This article surveys the data augmentation techniques employed on mammogram images, aiming to provide insights into both basic and deep learning-based augmentation techniques.
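A minimal sketch of the basic geometric augmentations such a survey covers, using flips and right-angle rotations that preserve image content. The specific transform set and probabilities here are illustrative choices, not taken from the article:

```python
import numpy as np

def augment(image, rng):
    """Randomly apply a horizontal flip and a 0/90/180/270-degree rotation."""
    out = np.asarray(image)
    if rng.random() < 0.5:
        out = np.fliplr(out)                         # mirror left-right
    return np.rot90(out, k=int(rng.integers(0, 4)))  # random quarter-turn

def augment_batch(images, n_copies=4, seed=0):
    """Expand a small dataset by generating n_copies variants per image."""
    rng = np.random.default_rng(seed)
    return [augment(img, rng) for img in images for _ in range(n_copies)]
```

Each source image yields `n_copies` transformed variants, which is the basic mechanism by which augmentation enlarges a small annotated training set.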
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Festus Adedoyin
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
27
Ai L, Bai W, Li M. TDABNet: Three-directional attention block network for the determination of IDH status in low- and high-grade gliomas from MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103574]
28
Liu H, Cui G, Luo Y, Guo Y, Zhao L, Wang Y, Subasi A, Dogan S, Tuncer T. Artificial Intelligence-Based Breast Cancer Diagnosis Using Ultrasound Images and Grid-Based Deep Feature Generator. Int J Gen Med 2022; 15:2271-2282. [PMID: 35256855 PMCID: PMC8898057 DOI: 10.2147/ijgm.s347491]
Abstract
Purpose: Breast cancer is a prominent cancer type with high mortality, and early detection could improve clinical outcomes. Ultrasonography is a digital imaging technique used to differentiate benign and malignant tumors. Several artificial intelligence techniques have been suggested in the literature for breast cancer detection using breast ultrasonography (BUS), and deep learning methods in particular have achieved high classification performance on biomedical images. Patients and Methods: This work presents a new deep feature generation technique for breast cancer detection using BUS images. Sixteen widely known pre-trained CNN models are used in this framework as feature generators. In the feature generation phase, the input image is divided into rows and columns, and the deep feature generators (pre-trained models) are applied to each row and column; the method is therefore called a grid-based deep feature generator. It calculates the error value of each deep feature generator and selects the best three feature vectors to form the final feature vector. In the feature selection phase, iterative neighborhood component analysis (INCA) chooses 980 features as the optimal number of features. Finally, these features are classified using a deep neural network (DNN). Results: The developed grid-based deep feature generation image classification model reached 97.18% classification accuracy on ultrasonic images across three classes: malignant, benign, and normal. Conclusion: The findings indicate that the proposed grid-based deep feature generator and INCA-based feature selection model successfully classify breast ultrasonic images.
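The row/column division at the heart of the grid-based generator can be sketched as below. The band counts are illustrative assumptions; in the paper each band would then be fed to a pre-trained CNN to produce its feature vector:

```python
import numpy as np

def grid_bands(image, n_rows=4, n_cols=4):
    """Split an image into horizontal and vertical bands; a pre-trained
    CNN feature generator would be applied to each band separately."""
    img = np.asarray(image)
    rows = np.array_split(img, n_rows, axis=0)  # horizontal strips
    cols = np.array_split(img, n_cols, axis=1)  # vertical strips
    return rows, cols
```

Splitting along both axes gives each feature generator complementary local views of the lesion region, which is what the abstract's row/column scheme exploits.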
Affiliation(s)
- Haixia Liu
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Guozhong Cui
- Department of Surgical Oncology, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yi Luo
- Medical Statistics Room, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yajie Guo
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Lianli Zhao
- Department of Internal Medicine Teaching and Research Group, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yueheng Wang
- Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, 050000, People's Republic of China
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland; Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey