1
Manigrasso F, Milazzo R, Russo AS, Lamberti F, Strand F, Pagnani A, Morra L. Mammography classification with multi-view deep learning techniques: Investigating graph and transformer-based architectures. Med Image Anal 2025; 99:103320. [PMID: 39244796] [DOI: 10.1016/j.media.2024.103320]
Abstract
The potential and promise of deep learning systems to provide an independent assessment and relieve radiologists' burden in screening mammography have been recognized in several studies. However, the low cancer prevalence, the need to process high-resolution images, and the need to combine information from multiple views and scales still pose technical challenges. Multi-view architectures that combine information from the four mammographic views to produce an exam-level classification score are a promising approach to the automated processing of screening mammography. However, training such architectures from exam-level labels, without relying on pixel-level supervision, requires very large datasets and may result in suboptimal accuracy. Emerging architectures such as Vision Transformers (ViT) and graph-based architectures can potentially integrate ipsilateral and contralateral breast views better than traditional convolutional neural networks, thanks to their stronger ability to model long-range dependencies. In this paper, we extensively evaluate novel transformer-based and graph-based architectures against state-of-the-art multi-view convolutional neural networks, trained in a weakly supervised setting on a mid-sized dataset, in terms of both performance and interpretability. Extensive experiments on the CSAW dataset suggest that, while transformer-based architectures outperform the other architectures, different inductive biases lead to complementary strengths and weaknesses, as each architecture is sensitive to different signs and mammographic features. Hence, an ensemble of different architectures should be preferred over a winner-takes-all approach to achieve more accurate and robust results. Overall, the findings highlight the potential of a wide range of multi-view architectures for breast cancer classification, even in datasets of relatively modest size, although the detection of small lesions remains challenging without pixel-wise supervision or ad hoc networks.
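For illustration, the exam-level, four-view setup described above can be sketched as a small PyTorch module. This is a minimal stand-in rather than any of the architectures evaluated in the paper; the shared ResNet-18 encoder, the view names, and the head sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FourViewClassifier(nn.Module):
    """Toy exam-level classifier: one shared CNN encoder applied to the four
    mammographic views (L-CC, L-MLO, R-CC, R-MLO), with the view features
    concatenated into a single malignancy logit. Illustrative only."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input
        backbone.fc = nn.Identity()                      # keep the 512-d feature vector
        self.encoder = backbone
        self.head = nn.Sequential(nn.Linear(4 * 512, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, views):                            # views: dict of (B, 1, H, W) tensors
        feats = [self.encoder(views[k]) for k in ("L-CC", "L-MLO", "R-CC", "R-MLO")]
        return self.head(torch.cat(feats, dim=1))        # exam-level logit

if __name__ == "__main__":
    exam = {k: torch.randn(2, 1, 256, 256) for k in ("L-CC", "L-MLO", "R-CC", "R-MLO")}
    print(FourViewClassifier()(exam).shape)              # torch.Size([2, 1])
```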
Affiliation(s)
- Francesco Manigrasso: Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Rosario Milazzo: Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Alessandro Sebastian Russo: Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Fabrizio Lamberti: Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Fredrik Strand: Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden; Department of Breast Radiology, Karolinska University Hospital, Stockholm, Sweden
- Andrea Pagnani: Politecnico di Torino, Dipartimento di Scienza Applicata e Tecnologia, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Lia Morra: Politecnico di Torino, Dipartimento di Automatica e Informatica, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
2
Kumar Saha D, Hossain T, Safran M, Alfarhood S, Mridha MF, Che D. Segmentation for mammography classification utilizing deep convolutional neural network. BMC Med Imaging 2024; 24:334. [PMID: 39696014] [DOI: 10.1186/s12880-024-01510-2]
Abstract
BACKGROUND Mammography for the diagnosis of early breast cancer (BC) relies heavily on the identification of breast masses. However, in the early stages, it might be challenging to ascertain whether a breast mass is benign or malignant. Consequently, many deep learning (DL)-based computer-aided diagnosis (CAD) approaches for BC classification have been developed. METHODS Recently, the transformer model has emerged as a method for overcoming the constraints of convolutional neural networks (CNN). Thus, our primary goal was to determine how well an improved transformer model could distinguish between benign and malignant breast tissues. We drew on the INbreast dataset from the Mendeley data repository, which includes benign and malignant cases. Additionally, the Segment Anything Model (SAM) was used to generate optimized cutoffs for region of interest (ROI) extraction from all mammograms. We implemented an architecture modification at the bottom layer of a pyramid transformer (PTr) to identify BC from mammography images. RESULTS The proposed PTr model, using a transfer learning (TL) approach with a segmentation technique, achieved the best accuracy of 99.96% for binary classification, with an area under the curve (AUC) score of 99.98%. We also compared the performance of the proposed model with that of another transformer model, the vision transformer (ViT), and with the DL models MobileNetV3 and EfficientNetB7. CONCLUSIONS In this study, a modified transformer model is proposed for BC prediction and mammography image classification using segmentation approaches. Data segmentation techniques accurately identify the regions affected by BC. Finally, the proposed transformer model accurately classified benign and malignant breast tissues, which is vital for radiologists to guide future treatment.
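As a rough illustration of the pipeline above (segmentation-driven ROI extraction followed by a transfer-learning classifier), the sketch below assumes a binary mass mask has already been produced by a segmenter such as SAM; the cropping margin and the MobileNetV3 stand-in for the pyramid transformer are assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

def crop_roi(image: np.ndarray, mask: np.ndarray, margin: int = 16) -> np.ndarray:
    """Crop the bounding box of a binary mask (e.g. from SAM) with a small margin."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
    return image[y0:y1, x0:x1]

# Transfer-learning classifier on the cropped ROI (a stand-in for the modified PTr).
model = mobilenet_v3_small(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                              # freeze the pretrained backbone
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)  # benign vs malignant

roi = crop_roi(np.random.rand(512, 512), np.random.rand(512, 512) > 0.99)  # toy image and mask
x = torch.from_numpy(roi).float()[None, None].repeat(1, 3, 1, 1)           # fake 3-channel input
logits = model(nn.functional.interpolate(x, size=(224, 224)))
print(logits.shape)                                      # torch.Size([1, 2])
```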
Affiliation(s)
- Dip Kumar Saha: Department of Computer Science and Engineering, Stamford University Bangladesh, Siddeswari, Dhaka, Bangladesh
- Tuhin Hossain: Department of Computer Science and Engineering, Jahangirnagar University, Savar, Dhaka, Bangladesh
- Mejdl Safran: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
- Sultan Alfarhood: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
- M F Mridha: Department of Computer Science, American International University-Bangladesh, Kuratoli, Dhaka, Bangladesh
- Dunren Che: Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, 78363, Texas, USA
3
Sunba A, AlShammari M, Almuhanna A, Alkhnbashi OS. An Integrated Multimodal-Based CAD System for Breast Cancer Diagnosis. Cancers (Basel) 2024; 16:3740. [PMID: 39594696] [PMCID: PMC11591763] [DOI: 10.3390/cancers16223740]
Abstract
Breast cancer remains one of the leading causes of death among women and has been the focus of many specialists and researchers in the health field. Because of its severity and speed of spread, prevention, early diagnosis, and treatment have been central points of research. Many computer-aided diagnosis (CAD) systems have been proposed to reduce the load on physicians and increase the accuracy of breast tumor diagnosis. To the best of our knowledge, combining patient information, including medical history, breast density, age, and other factors, with mammogram features from both breasts in craniocaudal (CC) and mediolateral oblique (MLO) views has not been previously investigated for breast tumor classification. In this paper, we investigated the effectiveness of using those inputs by comparing two combination approaches. A soft voting approach, built from statistical information-based models (decision tree, random forest, K-nearest neighbor, Gaussian naive Bayes, gradient boosting, and MLP) and an image-based model (CNN), achieved 90% accuracy, while concatenating statistical and image-based features in a deep learning model achieved 93% accuracy. Both approaches produced promising results that could enhance CAD systems. This study also finds that using mammograms of both breasts outperformed using only the affected side, and that integrating mammogram features with statistical information enhanced the accuracy of tumor classification. Our findings are based on a novel dataset that incorporates both patient information and four-view mammogram images and covers three classes: normal, benign, and malignant.
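The soft-voting combination of the statistical models can be sketched with scikit-learn as below; the random tabular features stand in for the patient information, and the image-based CNN branch is omitted, so this only illustrates the voting step, not the full system described in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy tabular data standing in for patient information (age, breast density, history, ...).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)

# Soft voting averages the predicted class probabilities of the statistical models;
# in the paper an image-based CNN also contributes a probability, omitted here.
ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier()),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB()),
        ("gb", GradientBoostingClassifier()),
        ("mlp", MLPClassifier(max_iter=500)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))
```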
Affiliation(s)
- Amal Sunba: Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia; Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Maha AlShammari: Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia; Computational Unit, Department of Environmental Health, Institute for Research and Medical Consultations, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Afnan Almuhanna: Department of Radiology, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Omer S. Alkhnbashi: Center for Applied and Translational Genomics (CATG), Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Healthcare City, Dubai P.O. Box 50505, United Arab Emirates; College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Healthcare City, Dubai P.O. Box 50505, United Arab Emirates
4
Murty PSRC, Anuradha C, Naidu PA, Mandru D, Ashok M, Atheeswaran A, Rajeswaran N, Saravanan V. Integrative hybrid deep learning for enhanced breast cancer diagnosis: leveraging the Wisconsin Breast Cancer Database and the CBIS-DDSM dataset. Sci Rep 2024; 14:26287. [PMID: 39487199] [PMCID: PMC11530441] [DOI: 10.1038/s41598-024-74305-8]
Abstract
The objective of this investigation was to improve the diagnosis of breast cancer by combining two significant datasets: the Wisconsin Breast Cancer Database and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). The Wisconsin Breast Cancer Database provides a detailed examination of the characteristics of cell nuclei, including radius, texture, and concavity, for 569 patients, of whom 212 had malignant tumors. In addition, the CBIS-DDSM dataset, a revised variant of the Digital Database for Screening Mammography (DDSM), offers a standardized collection of 2,620 scanned film mammography studies, including normal, benign, and malignant cases with verified pathology data. To identify complex patterns and characteristic diagnostic traits of breast cancer, this investigation used a hybrid deep learning methodology that combines Convolutional Neural Networks (CNNs) with stochastic gradient optimization. The Wisconsin Breast Cancer Database is used for CNN training, while the CBIS-DDSM dataset is used for fine-tuning to maximize adaptability across a variety of mammography investigations. Data integration, feature extraction, model development, and thorough performance evaluation are the main objectives. The diagnostic effectiveness of the algorithm was evaluated by the area under the Receiver Operating Characteristic curve (AUC-ROC), sensitivity, specificity, and accuracy. The generalizability of the model will be validated by independent validation on additional datasets. This research provides an accurate, comprehensible, and clinically applicable breast cancer detection method that will advance the field. These results could substantially improve early diagnosis, promote further advances in breast cancer research, and ultimately lead to improved patient outcomes.
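A generic two-stage train-then-fine-tune step of the kind described can be sketched as follows; the tiny CNN, the layer-freezing policy, and the checkpoint file name are illustrative assumptions, not the authors' pipeline.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny CNN used only to illustrate pretraining followed by fine-tuning."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
# Stage 1: train on the first dataset (training loop omitted), then save the weights.
torch.save(model.state_dict(), "pretrained.pt")

# Stage 2: reload for fine-tuning on CBIS-DDSM-style mammogram patches, freezing the
# convolutional layers and updating only the classifier with stochastic gradient descent.
model.load_state_dict(torch.load("pretrained.pt"))
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
```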
Affiliation(s)
- Patnala S R Chandra Murty: Department of CSE, Malla Reddy Engineering College (Autonomous), Maisammaguda, Secunderabad, 500100, Telangana, India
- Chinta Anuradha: Department of CSE, Velagapudi Ramakrishna Siddhartha Engineering College (Deemed to be University), Kanuru, Vijayawada, 520007, Andhra Pradesh, India
- P Appala Naidu: Department of CSE, Raghu Engineering College (Autonomous), Visakhapatnam, 531162, Andhra Pradesh, India
- Deenababu Mandru: Department of IT, Malla Reddy Engineering College (Autonomous), Maisammaguda, Secunderabad, 500100, Telangana, India
- Maram Ashok: Department of CSE, Malla Reddy College of Engineering, Maisammaguda, Secunderabad, 500100, Telangana, India
- Athiraja Atheeswaran: Department of CSE (AIML), Malla Reddy College of Engineering, Secunderabad, India
- V Saravanan: Department of Computer Science, Dambi Dollo University, Dambi Dollo, Ethiopia
5
Karthiga R, Narasimhan K, V T, M H, Amirtharajan R. Review of AI & XAI-based breast cancer diagnosis methods using various imaging modalities. Multimedia Tools and Applications 2024. [DOI: 10.1007/s11042-024-20271-2]
6
Alzubaidi L, Al-Dulaimi K, Salhi A, Alammar Z, Fadhel MA, Albahri AS, Alamoodi AH, Albahri OS, Hasan AF, Bai J, Gilliland L, Peng J, Branni M, Shuker T, Cutbush K, Santamaría J, Moreira C, Ouyang C, Duan Y, Manoufali M, Jomaa M, Gupta A, Abbosh A, Gu Y. Comprehensive review of deep learning in orthopaedics: Applications, challenges, trustworthiness, and fusion. Artif Intell Med 2024; 155:102935. [PMID: 39079201] [DOI: 10.1016/j.artmed.2024.102935]
Abstract
Deep learning (DL) in orthopaedics has gained significant attention in recent years. Previous studies have shown that DL can be applied to a wide variety of orthopaedic tasks, including fracture detection, bone tumour diagnosis, implant recognition, and evaluation of osteoarthritis severity. The utilisation of DL is expected to increase, owing to its ability to present accurate diagnoses more efficiently than traditional methods in many scenarios. This reduces the time and cost of diagnosis for patients and orthopaedic surgeons. To our knowledge, no exclusive study has comprehensively reviewed all aspects of DL currently used in orthopaedic practice. This review addresses this knowledge gap using articles from Science Direct, Scopus, IEEE Xplore, and Web of Science between 2017 and 2023. The authors begin with the motivation for using DL in orthopaedics, including its ability to enhance diagnosis and treatment planning. The review then covers various applications of DL in orthopaedics, including fracture detection, detection of supraspinatus tears using MRI, osteoarthritis, prediction of types of arthroplasty implants, bone age assessment, and detection of joint-specific soft tissue disease. We also examine the challenges for implementing DL in orthopaedics, including the scarcity of data to train DL and the lack of interpretability, as well as possible solutions to these common pitfalls. Our work highlights the requirements to achieve trustworthiness in the outcomes generated by DL, including the need for accuracy, explainability, and fairness in the DL models. We pay particular attention to fusion techniques as one of the ways to increase trustworthiness, which have also been used to address the common multimodality in orthopaedics. Finally, we have reviewed the approval requirements set forth by the US Food and Drug Administration to enable the use of DL applications. As such, we aim to have this review function as a guide for researchers to develop a reliable DL application for orthopaedic tasks from scratch for use in the market.
Affiliation(s)
- Laith Alzubaidi: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Khamael Al-Dulaimi: Computer Science Department, College of Science, Al-Nahrain University, Baghdad, Baghdad 10011, Iraq; School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Asma Salhi: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Zaenab Alammar: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Mohammed A Fadhel: Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- A S Albahri: Technical College, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
- A H Alamoodi: Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional, Kajang 43000, Malaysia
- O S Albahri: Australian Technical and Management College, Melbourne, Australia
- Amjad F Hasan: Faculty of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Jinshuai Bai: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Luke Gilliland: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Jing Peng: Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Marco Branni: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Tristan Shuker: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Kenneth Cutbush: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Jose Santamaría: Department of Computer Science, University of Jaén, Jaén 23071, Spain
- Catarina Moreira: Data Science Institute, University of Technology Sydney, Australia
- Chun Ouyang: School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Ye Duan: School of Computing, Clemson University, Clemson, 29631, SC, USA
- Mohamed Manoufali: CSIRO, Kensington, WA 6151, Australia; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Mohammad Jomaa: QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Ashish Gupta: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Amin Abbosh: School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Yuantong Gu: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
7
Tian R, Lu G, Zhao N, Qian W, Ma H, Yang W. Constructing the Optimal Classification Model for Benign and Malignant Breast Tumors Based on Multifeature Analysis from Multimodal Images. Journal of Imaging Informatics in Medicine 2024; 37:1386-1400. [PMID: 38381383] [PMCID: PMC11300407] [DOI: 10.1007/s10278-024-01036-7]
Abstract
The purpose of this study was to fuse conventional radiomic and deep features from digital breast tomosynthesis craniocaudal projection (DBT-CC) and ultrasound (US) images to establish a multimodal benign-malignant classification model and evaluate its clinical value. Data were obtained from a total of 487 patients at three centers, each of whom underwent DBT-CC and US examinations. A total of 322 patients from dataset 1 were used to construct the model, while 165 patients from datasets 2 and 3 formed the prospective testing cohort. Two radiologists with 10-20 years of work experience and three sonographers with 12-20 years of work experience semiautomatically segmented the lesions using ITK-SNAP software while considering the surrounding tissue. For the experiments, we extracted conventional radiomic and deep features from tumors in the DBT-CC and US images using PyRadiomics and Inception-v3. Additionally, we extracted conventional radiomic features from four peritumoral layers around the tumors in the DBT-CC and US images. Features were fused separately from the intratumoral and peritumoral regions. For the models, we tested the SVM, KNN, decision tree, RF, XGBoost, and LightGBM classifiers. Early fusion and late fusion (ensemble and stacking) strategies were employed for feature fusion. Using the SVM classifier, stacking fusion of deep features and three peritumoral radiomic features from tumors in the DBT-CC and US images achieved the optimal performance, with an accuracy and AUC of 0.953 and 0.959 [CI: 0.886-0.996], a sensitivity and specificity of 0.952 [CI: 0.888-0.992] and 0.955 [CI: 0.868-0.985], and a precision of 0.976. The experimental results indicate that the fusion model of deep features and peritumoral radiomic features from tumors in DBT-CC and US images shows promise in differentiating benign and malignant breast tumors.
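The late-fusion (stacking) step over precomputed feature blocks can be illustrated with scikit-learn; here random arrays stand in for the Inception-v3 deep features and the PyRadiomics peritumoral features, and the choice of base learners is an assumption.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(300, 64))        # stand-in for Inception-v3 deep features
radiomic_feats = rng.normal(size=(300, 40))    # stand-in for peritumoral radiomic features
y = rng.integers(0, 2, size=300)               # benign (0) vs malignant (1)

X = np.hstack([deep_feats, radiomic_feats])    # fuse the two feature blocks
base = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)
stack.fit(X, y)
print(stack.score(X, y))
```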
Affiliation(s)
- Ronghui Tian: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Guoxiu Lu: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China; Department of Nuclear Medicine, General Hospital of Northern Theatre Command, No. 83 Wenhua Road, Shenhe District, Shenyang, 110016, Liaoning Province, China
- Nannan Zhao: Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- He Ma: College of Medicine and Biological Information Engineering, Northeastern University, No. 195 Chuangxin Road, Hunnan District, Shenyang, 110819, Liaoning Province, China
- Wei Yang: Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, No. 44 Xiaoheyan Road, Dadong District, Shenyang, 110042, Liaoning Province, China
8
Bilal A, Imran A, Liu X, Liu X, Ahmad Z, Shafiq M, El-Sherbeeny AM, Long H. BC-QNet: A quantum-infused ELM model for breast cancer diagnosis. Comput Biol Med 2024; 175:108483. [PMID: 38704900] [DOI: 10.1016/j.compbiomed.2024.108483]
Abstract
The timely and accurate diagnosis of breast cancer is pivotal for effective treatment, but current automated mammography classification methods have their constraints. In this study, we introduce an innovative hybrid model that marries the power of the Extreme Learning Machine (ELM) with FuNet transfer learning, harnessing the potential of the MIAS dataset. This novel approach leverages an Enhanced Quantum-Genetic Binary Grey Wolf Optimizer (Q-GBGWO) within the ELM framework, elevating its performance. Our contributions are twofold: first, we employ a feature fusion strategy to optimize feature extraction, significantly enhancing breast cancer classification accuracy; second, the Q-GBGWO optimizes the ELM parameters, demonstrating its efficacy within the ELM classifier. This innovation marks a considerable advancement beyond traditional methods. Through comparative evaluations against various optimization techniques, the exceptional performance of our Q-GBGWO-ELM model becomes evident. The classification accuracy of the model is exceptionally high, with rates of 96.54% for the Normal, 97.24% for the Benign, and 98.01% for the Malignant class. The model also demonstrates high sensitivity, with rates of 96.02% for the Normal, 96.54% for the Benign, and 97.75% for the Malignant class, and impressive specificity, with rates of 96.69% for the Normal, 97.38% for the Benign, and 98.16% for the Malignant class. These metrics reflect its ability to classify the three classes accurately. Our approach highlights the innovative integration of image data, deep feature extraction, and optimized ELM classification, marking a transformative step in advancing early breast cancer detection and enhancing patient outcomes.
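The ELM component itself is compact enough to sketch in NumPy: a random hidden layer followed by a closed-form least-squares output layer. The Q-GBGWO optimizer that the paper uses to tune the ELM parameters is not reproduced here, and the hidden-layer size is an assumption.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden weights, closed-form output weights."""
    def __init__(self, n_hidden: int = 200, seed: int = 0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # hidden-layer activations
        Y = np.eye(int(y.max()) + 1)[y]            # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y          # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(150, 30)), rng.integers(0, 3, size=150)  # three classes, as in the paper
print((ELM().fit(X, y).predict(X) == y).mean())
```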
Affiliation(s)
- Anas Bilal: College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China; Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
- Azhar Imran: Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan
- Xiaowen Liu: College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China; Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
- Xiling Liu: College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China; Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
- Zohaib Ahmad: Department of Criminology & Forensic Sciences Technology, Lahore Garrison University, Lahore, Pakistan
- Muhammad Shafiq: School of Information Engineering, Qujing Normal University, Yunnan, China
- Ahmed M El-Sherbeeny: Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh, 11421, Saudi Arabia
- Haixia Long: College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China; Key Laboratory of Data Science and Smart Education, Ministry of Education, Hainan Normal University, Haikou, 571158, China
9
Nissar I, Alam S, Masood S, Kashif M. MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms. Computer Methods and Programs in Biomedicine 2024; 248:108121. [PMID: 38531147] [DOI: 10.1016/j.cmpb.2024.108121]
Abstract
BACKGROUND AND OBJECTIVE Deep learning models have emerged as a significant tool in generating efficient solutions for complex problems, including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, aiding the development of personalized treatment plans, as different subtypes of cancer respond better to different therapies. METHODS In this work, we propose a novel lightweight dual-channel attention-based deep learning model, MOB-CBAM, that utilizes the backbone of the MobileNet-V3 architecture with a Convolutional Block Attention Module to make highly accurate and precise predictions about breast cancer. We used the CMMD mammogram dataset to evaluate the proposed model. Nine distinct data subsets were created from the original dataset to perform coarse- and fine-grained predictions, enabling the model to identify masses, calcifications, benign and malignant tumors, and molecular subtypes of cancer, including Luminal A, Luminal B, HER-2 Positive, and Triple Negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, to enhance the model's generalization ability. RESULTS While identifying benign versus malignant tumors, i.e., coarse-grained classification, the MOB-CBAM model produced exceptional results, with 99% accuracy, precision, recall, and F1-score values of 0.99, and an MCC of 0.98. In terms of fine-grained classification, the MOB-CBAM model proved to be highly efficient in the mass (benign/malignant) and calcification (benign/malignant) classification tasks, with an impressive accuracy rate of 98%. We also cross-validated the efficiency of the proposed MOB-CBAM architecture on two datasets, MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97% was reported for the task of classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98% was achieved for the classification of masses and calcifications as benign or malignant. CONCLUSION This study presents the lightweight MOB-CBAM, a novel deep learning framework, to address breast cancer diagnosis and subtype prediction. The model's incorporation of the CBAM enhances the precision of its predictions. The extensive evaluation on the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
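A Convolutional Block Attention Module of the kind attached to the MobileNet-V3 backbone can be sketched in PyTorch as below; the reduction ratio and spatial kernel size follow common CBAM defaults and are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                   # channel attention (average pooling)
        mx = self.mlp(x.amax(dim=(2, 3)))                    # channel attention (max pooling)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial attention map

print(CBAM(64)(torch.randn(2, 64, 32, 32)).shape)            # torch.Size([2, 64, 32, 32])
```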
Affiliation(s)
- Iqra Nissar: Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Shahzad Alam: Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Sarfaraz Masood: Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Mohammad Kashif: Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
10
Subhashini R, Velswamy R, Sree Rathna Lakshmi NVS, Sivanandam C. An innovative breast cancer detection framework using multiscale dilated DenseNet with attention mechanism. Network (Bristol, England) 2024:1-37. [PMID: 38648017] [DOI: 10.1080/0954898x.2024.2343348]
Abstract
Cancer-related diseases cause deaths in both developed and underdeveloped nations worldwide. Effective network learning is crucial to more reliably identify and categorize breast carcinoma in vast and unbalanced image datasets. The absence of early cancer symptoms makes early identification challenging. Therefore, from the perspectives of diagnosis, prevention, and therapy, cancer remains one of the healthcare concerns that numerous researchers work to address. It is therefore essential to design an innovative breast cancer detection model that overcomes the complications of classical techniques. Initially, breast cancer images are gathered from online sources and then subjected to segmentation. Here, the images are segmented using an Adaptive Trans-Dense-Unet (A-TDUNet), whose parameters are tuned using the developed Modified Sheep Flock Optimization Algorithm (MSFOA). The segmented images are then passed to the breast cancer detection stage, where detection is performed by a Multiscale Dilated DenseNet with Attention Mechanism (MDD-AM). In the result validation, the Negative Predictive Value (NPV) and accuracy rate of the designed approach are 96.719% and 93.494%, respectively. Hence, the implemented breast cancer detection model secured a better efficacy rate than the baseline detection methods under diverse experimental conditions.
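The multiscale dilated convolution idea behind MDD-AM can be illustrated with a small PyTorch block; the channel counts and dilation rates are assumptions, and the attention mechanism and dense connectivity of the full network are omitted.

```python
import torch
import torch.nn as nn

class MultiscaleDilatedBlock(nn.Module):
    """Parallel dilated convolutions capture context at several receptive-field scales."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)  # merge the scales

    def forward(self, x):
        return self.fuse(torch.cat([torch.relu(b(x)) for b in self.branches], dim=1))

print(MultiscaleDilatedBlock(32, 64)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])
```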
Affiliation(s)
- R Subhashini: Department of Information Technology, Sona College of Technology, Salem, Tamil Nadu, India
- Rajasekar Velswamy: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- N V S Sree Rathna Lakshmi: Department of Electronics and Communication Engineering, Agni College of Technology, Thazhambur, Tamil Nadu, India
- Chakaravarthi Sivanandam: Department of Computer Science and Engineering, Panimalar Engineering College, Poonamallee, Chennai, Tamil Nadu, India
11
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895] [PMCID: PMC10873444] [DOI: 10.1007/s00330-023-10181-6]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and arXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setup and not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders to yield improvement in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS • Six major identified barriers were related to data; black-box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education. • Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland; Faculty of Medicine, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel: Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis: Faculty of Medicine, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
12
Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024; 52:41-60. [PMID: 38780105] [DOI: 10.1615/critrevbiomedeng.2024051166]
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. The prevalence of breast masses is notably common in women aged 20 to 60. These breast masses are classified, according to the breast imaging-reporting and data system (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, benign masses, and malignant masses. To aid in the diagnosis of breast disorders, imaging plays a vital role, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, the process of identifying breast diseases through mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems. These systems assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of various studies in which diverse machine learning approaches have been applied to digital mammograms to identify breast masses and classify them into distinct subclasses such as normal, benign, and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for the benefit of future research endeavors in this critical area of medical imaging and breast health.
Affiliation(s)
- N Shankari: NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
- Vidya Kudva: School of Information Sciences, Manipal Academy of Higher Education, Manipal, India - 576104; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte, India - 574110
- Roopa B Hegde: NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte - 574110, Karnataka, India
13
Sadeghi Pour E, Esmaeili M, Romoozi M. Employing Atrous Pyramid Convolutional Deep Learning Approach for Detection to Diagnose Breast Cancer Tumors. Computational Intelligence and Neuroscience 2023; 2023:7201479. [PMID: 38025486] [PMCID: PMC10663704] [DOI: 10.1155/2023/7201479]
Abstract
Breast cancer is among the most common diseases and one of the most common causes of death in the female population worldwide. Early identification of breast cancer improves survival. Therefore, radiologists will be able to make more accurate diagnoses if a computerized system is developed to detect breast cancer. Computer-aided diagnosis techniques have the potential to help medical professionals determine the specific location of breast tumors and manage this disease more rapidly and accurately. The MIAS dataset was used in this study. The aim of this study is to evaluate noise reduction for mammographic images, addressing salt-and-pepper, Gaussian, and Poisson noise, so that precise mass detection can be performed. To this end, the study provides a noise reduction method based on quantum wavelet transform (QWT) filtering together with an image morphology operator for precise mass segmentation in mammographic images, and utilizes an Atrous pyramid convolutional neural network as the deep learning model for classification of mammographic images. The hybrid methodology, dubbed QWT-APCNN, is compared to earlier methods in terms of peak signal-to-noise ratio (PSNR) and mean square error (MSE) for noise reduction and in terms of detection accuracy for mass recognition. Compared to state-of-the-art approaches, the proposed method performed better at noise reduction and segmentation according to different evaluation criteria, with an accuracy rate of 98.57%, 92% sensitivity, 88% specificity, 90% DSS, and an AUC of 88.77.
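The PSNR and MSE criteria used above to compare denoising results can be computed as in this short sketch, which assumes 8-bit images unless the peak value is changed.

```python
import numpy as np

def mse(reference: np.ndarray, denoised: np.ndarray) -> float:
    """Mean squared error between a reference image and a denoised image."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (peak=255 assumes 8-bit images)."""
    err = mse(reference, denoised)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.astype(np.float64) + np.random.default_rng(0).normal(0, 5, clean.shape)  # toy noise
print(round(mse(clean, noisy), 2), round(psnr(clean, noisy), 2))
```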
Affiliation(s)
- Ehsan Sadeghi Pour: Department of Electrical and Computer Engineering, Kashan Branch, Islamic Azad University, Kashan 8715998151, Iran
- Mahdi Esmaeili: Department of Electrical and Computer Engineering, Kashan Branch, Islamic Azad University, Kashan 8715998151, Iran
- Morteza Romoozi: Department of Electrical and Computer Engineering, Kashan Branch, Islamic Azad University, Kashan 8715998151, Iran
14
Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023; 10:867-895. [PMID: 38187901] [PMCID: PMC10764974] [DOI: 10.3934/publichealth.2023057]
Abstract
Breast cancer remains a significant public health issue, being a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize the recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality and data mining are the three major research hotspots in breast cancer management. Moreover, this paper discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
15
Ashwini B, Kaur M, Singh D, Roy S, Amoon M. Efficient Skip Connections-Based Residual Network (ESRNet) for Brain Tumor Classification. Diagnostics (Basel) 2023; 13:3234. [PMID: 37892055] [PMCID: PMC10606037] [DOI: 10.3390/diagnostics13203234]
Abstract
Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations like restricted depth, vanishing gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip connections-based residual network (ESRNet), leveraging the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing gradient problem. Additionally, the ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping. Through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in terms of accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. Therefore, the proposed ESRNet showcases exceptional performance and efficiency in BTC, holding the potential to revolutionize clinical diagnosis and treatment planning.
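The skip-connection mechanism described above, adding the block input back to the convolutional output so that gradients can flow through the identity path, is shown in a minimal PyTorch residual block; this is not ESRNet itself, and the channel count is an assumption.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, so gradients also flow through the identity path."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)      # skip connection: identity mapping added back

x = torch.randn(2, 32, 56, 56)
print(ResidualBlock(32)(x).shape)                # torch.Size([2, 32, 56, 56])
```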
Affiliation(s)
- Ashwini B.: Department of ISE, NMAM Institute of Technology, Nitte (Deemed to be University), Nitte 574110, India
- Manjit Kaur: School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
- Dilbag Singh: Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; Research and Development Cell, Lovely Professional University, Phagwara 144411, India
- Satyabrata Roy: Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur 303007, India
- Mohammed Amoon: Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
16
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608]
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Cancer diagnosis requires very high accuracy and timeliness, medical imaging has inherent particularity and complexity, and deep learning methods are developing rapidly; a comprehensive review of relevant studies is therefore necessary to help readers better understand the current research status and ideas. (2) Methods: Radiological images, including X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
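The overfitting countermeasures listed above (data augmentation, batch normalization, dropout, and weight initialization) can be combined in one small PyTorch/torchvision sketch; the layer sizes and augmentation choices are purely illustrative.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Data augmentation: random flips and rotations enlarge the effective training set.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

# A small classifier combining batch normalization and dropout.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(),
    nn.Dropout(p=0.5),
    nn.Linear(16 * 112 * 112, 2),
)

# Weight initialization: Kaiming (He) initialization for the convolutional layers.
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")

print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```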
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
17
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Analyzing Histological Images Using Hybrid Techniques for Early Detection of Multi-Class Breast Cancer Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:1753. [PMID: 37238243] [DOI: 10.3390/diagnostics13101753]
Abstract
Breast cancer is the second most common type of cancer among women, and it can threaten women's lives if it is not diagnosed early. There are many methods for detecting breast cancer, but they cannot always distinguish between benign and malignant tumors. Therefore, a biopsy taken from the patient's abnormal tissue is an effective way to distinguish between malignant and benign breast cancer tumors. There are many challenges facing pathologists and experts in diagnosing breast cancer, including the addition of medical fluids of various colors, the orientation of the sample, the small number of doctors, and their differing opinions. Thus, artificial intelligence techniques address these challenges and help clinicians resolve their diagnostic differences. In this study, three techniques, each with three systems, were developed to diagnose multi-class and binary-class breast cancer datasets and distinguish between benign and malignant types at 40× and 400× magnification factors. The first technique diagnoses the breast cancer dataset using an artificial neural network (ANN) with selected features from VGG-19 and ResNet-18. The second technique uses an ANN with combined features from VGG-19 and ResNet-18, before and after principal component analysis (PCA). The third technique uses an ANN with hybrid features, where the hybrid features combine VGG-19 with handcrafted features, or ResNet-18 with handcrafted features. The handcrafted features are mixed features extracted using the Fuzzy color histogram (FCH), local binary pattern (LBP), discrete wavelet transform (DWT), and gray-level co-occurrence matrix (GLCM) methods. With the multi-class dataset, the ANN with hybrid features of VGG-19 and handcrafted features reached a precision of 95.86%, an accuracy of 97.3%, a sensitivity of 96.75%, an AUC of 99.37%, and a specificity of 99.81% with images at a magnification factor of 400×. With the binary-class dataset, the ANN with hybrid features of VGG-19 and handcrafted features reached a precision of 99.74%, an accuracy of 99.7%, a sensitivity of 100%, an AUC of 99.85%, and a specificity of 100% with images at a magnification factor of 400×.
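Fusing handcrafted texture descriptors with CNN features, as described above, can be sketched using two of the named descriptors (LBP and GLCM) from scikit-image together with a ResNet-18 feature extractor; the histogram binning, GLCM properties, and fusion layout are assumptions, not the paper's exact configuration.

```python
import numpy as np
import torch
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from torchvision.models import resnet18

def handcrafted_features(gray: np.ndarray) -> np.ndarray:
    """LBP histogram plus a few GLCM statistics for one 8-bit grayscale patch."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([hist, stats])

cnn = resnet18(weights=None)
cnn.fc = torch.nn.Identity()                       # 512-d deep feature vector

patch = (np.random.rand(224, 224) * 255).astype(np.uint8)        # toy histology patch
deep = cnn(torch.from_numpy(patch / 255.0).float()[None, None].repeat(1, 3, 1, 1))
fused = np.concatenate([deep.detach().numpy().ravel(), handcrafted_features(patch)])
print(fused.shape)                                 # fused vector fed to an ANN/MLP classifier
```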
Affiliation(s)
- Mohammed Al-Jabbar: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
18
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Archives of Computational Methods in Engineering 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, including object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multi-lingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages, aided by data augmentation. Recently, advances in deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Innovations in the internal architecture of CNNs and in their representational styles have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially their depth and width, and additionally covers CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan; Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood: Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
19
Alsubai S, Alqahtani A, Sha M. Genetic hyperparameter optimization with Modified Scalable-Neighbourhood Component Analysis for breast cancer prognostication. Neural Netw 2023; 162:240-257. [PMID: 36913821 DOI: 10.1016/j.neunet.2023.02.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 12/30/2022] [Accepted: 02/23/2023] [Indexed: 03/02/2023]
Abstract
Breast cancer is common among women and can be fatal when left untreated. Early detection is vital so that suitable treatment can stop the cancer from spreading further and save lives. Traditional detection is a time-consuming process. With the evolution of data mining (DM), the healthcare industry can benefit from disease prediction, as DM permits physicians to determine which attributes are significant for diagnosis. Although conventional techniques have used DM-based methods to identify breast cancer, they have lacked an adequate prediction rate. Moreover, parametric softmax classifiers with a fixed set of classes have been the usual choice in previous work, particularly when large amounts of labelled data are available during training. However, this becomes an issue in open-set settings, where new classes are encountered with only a few instances from which to learn a generalized parametric classifier. The present study therefore implements a non-parametric strategy that optimizes the embedding of features rather than a parametric classifier. The research uses a deep convolutional neural network (Deep CNN) and Inception V3 to learn visual features that preserve neighbourhood structure in a semantic space, relying on the neighbourhood component analysis (NCA) criterion. To address its bottleneck, the study proposes Modified Scalable-Neighbourhood Component Analysis (MS-NCA), which relies on a non-linear objective function to perform feature fusion by optimizing a distance-learning objective; this allows inner feature products to be computed without an explicit mapping, which increases the scalability of MS-NCA. Finally, Genetic Hyper-parameter Optimization (G-HPO) is proposed: the additional stage in the algorithm extends the chromosome length to carry the hyperparameters of subsequent XGBoost (eXtreme Gradient Boosting), Naïve Bayes (NB), and Random Forest (RF) models with numerous layers, so that optimized hyperparameter values are determined for identifying normal and affected breast cancer cases. The analytical results confirm that this process improves the classification rate.
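The genetic hyperparameter search described above can be illustrated with a generic, mutation-only genetic loop over a Random Forest chromosome. This is a simplified stand-in for the paper's G-HPO (no crossover, a small population, and a Random Forest only), and the scikit-learn breast cancer dataset is used merely as a convenient tabular example.

```python
# Minimal sketch of genetic hyperparameter optimization (not the paper's G-HPO):
# each chromosome encodes Random Forest hyperparameters, fitness is CV accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer      # stand-in tabular dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

# Search space: (n_estimators, max_depth, min_samples_split)
SPACE = [(50, 200), (2, 20), (2, 10)]

def random_chromosome():
    return [int(rng.integers(lo, hi + 1)) for lo, hi in SPACE]

def fitness(ch):
    model = RandomForestClassifier(n_estimators=ch[0], max_depth=ch[1],
                                   min_samples_split=ch[2], random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()

def mutate(ch):
    child = ch.copy()
    gene = int(rng.integers(len(SPACE)))
    lo, hi = SPACE[gene]
    child[gene] = int(np.clip(child[gene] + rng.integers(-10, 11), lo, hi))
    return child

population = [random_chromosome() for _ in range(8)]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                          # truncation selection
    children = [mutate(p) for p in parents]       # one mutated child per parent
    population = parents + children

best = max(population, key=fitness)
print("best (n_estimators, max_depth, min_samples_split):", best)
```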
Affiliation(s)
- Shtwai Alsubai: College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Abdullah Alqahtani: College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Mohemmed Sha: College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
20
An Efficient USE-Net Deep Learning Model for Cancer Detection. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8509433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
Breast cancer (BrCa) is the most common disease in women worldwide. Classifying BrCa images is extremely important for finding BrCa at an earlier stage and for monitoring BrCa during treatment. Computer-aided detection methods have been used to interpret BrCa images and to improve detection during the screening and treatment stages. However, when new BrCa images are generated during treatment, existing models may fail to classify them correctly. The main objective of this research is therefore to classify newly generated BrCa images. The model performs preprocessing, segmentation, feature extraction, and classification. In preprocessing, hybrid median filtering (HMF) is used to eliminate noise in the images, and contrast is enhanced using quadrant dynamic histogram equalization (QDHE). ROI segmentation is then performed using the USE-Net deep learning model. The CaffeNet model is used for feature extraction on the segmented images and, finally, classification is performed using an improved random forest (IRF) with extreme gradient boosting (XGB). The model obtained 97.87% accuracy, 98.45% sensitivity, 95.24% specificity, 98.96% precision, and a 98.70% F1-score for ultrasound images, and 98.31% accuracy, 99.29% sensitivity, 90.20% specificity, 98.82% precision, and a 99.05% F1-score for mammogram images.
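The preprocessing and classification stages of such a pipeline can be sketched as follows. QDHE and the USE-Net/CaffeNet stages are not available off the shelf, so CLAHE and a generic random-forest plus gradient-boosting ensemble are used purely as stand-ins; the segmentation and feature-extraction steps are only indicated in comments.

```python
# Minimal sketch of the front and back ends of such a pipeline (not the paper's code).
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)

def preprocess(gray_image: np.ndarray) -> np.ndarray:
    """gray_image: 2-D uint8 array. Returns a denoised, contrast-enhanced float image."""
    denoised = median_filter(gray_image, size=3)                        # noise suppression
    return exposure.equalize_adapthist(denoised, clip_limit=0.02)       # CLAHE as a QDHE stand-in

# Downstream (assumed, not shown): ROI segmentation with a U-Net-style network and
# deep feature extraction on the ROI, producing a feature matrix and labels.

# Classification tail: soft-voting ensemble of a random forest and gradient boosting,
# a generic stand-in for the paper's IRF + XGBoost combination.
clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                ("gb", GradientBoostingClassifier())],
    voting="soft")
# clf.fit(roi_features, labels)   # features from the segmented ROIs (assumed)
```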
21
Elkorany AS, Elsharkawy ZF. Efficient breast cancer mammograms diagnosis using three deep neural networks and term variance. Sci Rep 2023; 13:2663. [PMID: 36792720 PMCID: PMC9932150 DOI: 10.1038/s41598-023-29875-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 02/11/2023] [Indexed: 02/17/2023] Open
Abstract
The incidence of breast cancer (BC) is increasing every day, so a patient's life can be saved by its early discovery. Mammography is frequently used to diagnose BC, and the classification of mammographic region-of-interest (ROI) patches (i.e., normal, malignant, or benign) is the most crucial phase in this process, since it helps medical professionals identify BC. In this paper, a hybrid technique that carries out a quick and precise classification suitable for a BC diagnosis system is proposed and tested. Three deep learning (DL) convolutional neural network (CNN) models, namely Inception-V3, ResNet50, and AlexNet, are used as feature extractors. To extract useful features from each CNN model, the suggested method uses the term variance (TV) feature selection algorithm. The TV-selected features from each CNN model are combined, and a further selection is performed to obtain the most useful features, which are then sent to a multiclass support vector machine (MSVM) classifier. The Mammographic Image Analysis Society (MIAS) image database was used to test the effectiveness of the suggested method. The ROI of each mammogram is retrieved and divided into image patches. Based on tests of several TV feature subsets, the 600-feature subset with the highest classification performance was identified. Higher classification accuracy (CA) is attained when compared with previously published work: the average CA is 97.81% with 70% of the data used for training, 98% with 80%, and it reaches its optimal value with 90%. Finally, an ablation analysis is performed to emphasize the role of the proposed network's key parameters.
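The selection-then-classification stage can be approximated by ranking deep-feature columns by variance, keeping the top k, and feeding them to a multiclass SVM. The sketch below assumes a precomputed feature matrix X and labels y and is a plain variance ranking, not the paper's exact Term Variance procedure.

```python
# Minimal sketch: keep the top-k highest-variance deep features, classify with a
# multiclass SVM, and evaluate with cross-validation. X (n_samples x n_features)
# and y are assumed to exist.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def top_variance_features(X: np.ndarray, k: int = 600) -> np.ndarray:
    """Indices of the k features with the largest variance across samples."""
    variances = X.var(axis=0)
    return np.argsort(variances)[::-1][:k]

def evaluate(X, y, k=600):
    idx = top_variance_features(X, k)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X[:, idx], y, cv=5).mean()

# Example: sweep candidate subset sizes, mirroring the abstract's search over TV subsets.
# for k in (200, 400, 600, 800):
#     print(k, evaluate(X, y, k))
```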
Affiliation(s)
- Ahmed S. Elkorany: Department of Electronics and Electrical Comm. Eng., Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Zeinab F. Elsharkawy: Engineering Department, Nuclear Research Center, Egyptian Atomic Energy Authority, Cairo, Egypt
22
Astolfi RS, da Silva DS, Guedes IS, Nascimento CS, Damaševičius R, Jagatheesaperumal SK, de Albuquerque VHC, Leite JAD. Computer-Aided Ankle Ligament Injury Diagnosis from Magnetic Resonance Images Using Machine Learning Techniques. SENSORS (BASEL, SWITZERLAND) 2023; 23:1565. [PMID: 36772604 PMCID: PMC9919370 DOI: 10.3390/s23031565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 01/16/2023] [Accepted: 01/28/2023] [Indexed: 06/18/2023]
Abstract
Injuries of the anterior talofibular ligament (ATFL) are the most common type of ankle injury. Finding new ways to analyze these injuries with novel technologies is therefore critical for assisting medical diagnosis and, as a result, reducing the subjectivity of the process. The purpose of this study is to compare specialists' ability to diagnose lateral tibial tuberosity advancement (LTTA) injury with computer-vision analysis of magnetic resonance imaging (MRI). The experiments were carried out on a database obtained from the Vue PACS-Carestream software, which contained 132 images of ATFL-injured and normal (healthy) ankles. Because only a few images were available, image augmentation techniques were used to increase the number of images in the database. Various feature extraction algorithms (GLCM, LBP, and Hu invariant moments) and classifiers, such as the multi-layer perceptron (MLP), support vector machine (SVM), k-nearest neighbors (kNN), and random forest (RF), were then applied. Based on the results of this analysis, for cases that lack clear morphology, the method delivers a hit rate of 85.03%, an increase of 22% over the human expert-based analysis.
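A comparable handcrafted-feature pipeline can be assembled from scikit-image and scikit-learn, as sketched below. Grayscale uint8 slices and their labels are assumed, and the classifier hyperparameters are library defaults rather than the values used in the study.

```python
# Minimal sketch: handcrafted texture/shape features (GLCM, LBP, Hu moments) compared
# across several classical classifiers with cross-validation. Images and labels assumed.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

def describe(gray: np.ndarray) -> np.ndarray:
    """Concatenate GLCM statistics, an LBP histogram, and Hu invariant moments."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hu = moments_hu(moments_normalized(moments_central(gray.astype(float))))
    return np.concatenate([glcm_feats, lbp_hist, hu])

def compare_classifiers(images, labels):
    X = np.stack([describe(img) for img in images])
    models = {
        "MLP": MLPClassifier(max_iter=1000),
        "SVM": SVC(),
        "kNN": KNeighborsClassifier(),
        "RF": RandomForestClassifier(),
    }
    return {name: cross_val_score(m, X, labels, cv=5).mean()
            for name, m in models.items()}
```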
Affiliation(s)
- Rodrigo S. Astolfi: Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Daniel S. da Silva: Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Ingrid S. Guedes: Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Caio S. Nascimento: Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
- Robertas Damaševičius: Department of Software Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Senthil K. Jagatheesaperumal: Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi 626005, TN, India
- José Alberto D. Leite: Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
23
A Series-Based Deep Learning Approach to Lung Nodule Image Classification. Cancers (Basel) 2023; 15:cancers15030843. [PMID: 36765801 PMCID: PMC9913559 DOI: 10.3390/cancers15030843] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Revised: 01/24/2023] [Accepted: 01/28/2023] [Indexed: 02/01/2023] Open
Abstract
Although many studies have shown that deep learning approaches yield better results than traditional methods based on manual features, computer-aided diagnosis (CAD) methods still have several limitations, due to the diversity of imaging modalities and clinical pathologies. This diversity creates difficulties because of the variation within classes and the similarities between classes. In this context, the new approach in our study is a hybrid method that performs classification using both medical image analysis and features derived from radial scanning series. The regions of interest obtained from the images are subjected to a radial scan, with their centers as poles, in order to obtain series. A U-shaped convolutional neural network model is then used for the resulting 4D data classification problem. We therefore present a novel approach to the classification of 4D data obtained from lung nodule images. With radial scanning, the characteristic information of the nodule images is captured and a powerful classification is performed. According to our results, an accuracy of 92.84% was obtained, with much more efficient classification scores compared to recent classifiers.
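The radial scanning step, which turns a 2-D region of interest into an angle-by-radius series, can be sketched with plain NumPy/SciPy. The patch size, number of rays, and sampling depth below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of radial scanning: sample intensity profiles along rays from the ROI
# center to turn a 2-D nodule patch into an (n_angles x n_radii) series array, which a
# CNN could then classify. The patch and its center are assumed inputs.
import numpy as np
from scipy.ndimage import map_coordinates

def radial_scan(patch: np.ndarray, n_angles: int = 72, n_radii: int = 32) -> np.ndarray:
    """Return an (n_angles, n_radii) array of intensities sampled along rays."""
    cy, cx = (np.asarray(patch.shape) - 1) / 2.0          # use the patch center as the pole
    max_r = min(patch.shape) / 2.0 - 1.0
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0.0, max_r, n_radii)
    rows = cy + np.outer(np.sin(angles), radii)           # (n_angles, n_radii) row coords
    cols = cx + np.outer(np.cos(angles), radii)           # (n_angles, n_radii) col coords
    samples = map_coordinates(patch.astype(float), [rows.ravel(), cols.ravel()], order=1)
    return samples.reshape(n_angles, n_radii)

# Example on a synthetic blob:
yy, xx = np.mgrid[:64, :64]
blob = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
print(radial_scan(blob).shape)   # (72, 32)
```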
24
Razali NF, Isa IS, Sulaiman SN, Abdul Karim NK, Osman MK, Che Soh ZH. Enhancement Technique Based on the Breast Density Level for Mammogram for Computer-Aided Diagnosis. Bioengineering (Basel) 2023; 10:153. [PMID: 36829647 PMCID: PMC9952042 DOI: 10.3390/bioengineering10020153] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 01/04/2023] [Accepted: 01/16/2023] [Indexed: 01/26/2023] Open
Abstract
Mass detection in mammograms is limited by the presence of masses that overlap with denser fibroglandular breast regions. In addition, varying breast density levels can decrease the learning system's ability to extract sufficient feature descriptors and may result in lower accuracy. This study therefore proposes a textural image enhancement technique, named Spatial-based Breast Density Enhancement for Mass Detection (SbBDEM), to boost the textural features of the overlapped mass region based on the breast density level. The approach determines the optimal exposure threshold of the images' lower contrast limit and optimizes the parameters by selecting the intensity factor with the best Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) score, separately for the dense and non-dense breast classes, prior to training. Meanwhile, a modified You Only Look Once v3 (YOLOv3) architecture is employed for mass detection, with an extra number of higher-valued anchor boxes specifically assigned to the shallower detection head using the enhanced images. The experimental results show that using SbBDEM before training promotes superior performance, with a 17.24% improvement in mean Average Precision (mAP) over training on non-enhanced images for mass detection, 94.41% accuracy for mass segmentation, and 96% accuracy for benign and malignant mass classification. Enhancing mammogram images based on breast density is thus shown to increase the overall system's performance and can aid an improved clinical diagnosis process.
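The quality-guided parameter selection can be illustrated with a small sweep over a contrast-stretching parameter. Since BRISQUE requires an external model or implementation, a simple entropy score is used below purely as a stand-in quality measure, and the input mammogram array is assumed.

```python
# Minimal sketch of quality-guided enhancement: sweep a contrast-stretching parameter and
# keep the setting with the best no-reference quality score. An entropy-based proxy stands
# in for BRISQUE here (higher entropy treated as better, which is only a heuristic).
import numpy as np
from skimage import exposure
from skimage.measure import shannon_entropy

def enhance(gray: np.ndarray, lower_pct: float) -> np.ndarray:
    """Contrast-stretch between the lower_pct and 99th intensity percentiles."""
    lo, hi = np.percentile(gray, (lower_pct, 99))
    return exposure.rescale_intensity(gray, in_range=(lo, hi))

def best_enhancement(gray: np.ndarray, candidates=(0.5, 1, 2, 5, 10)):
    """Return the enhanced image whose proxy quality score is highest."""
    scored = [(shannon_entropy(enhance(gray, p)), p) for p in candidates]
    _, best_p = max(scored)
    return enhance(gray, best_p), best_p

# best_img, best_lower = best_enhancement(mammogram)   # mammogram: 2-D array (assumed)
```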
Affiliation(s)
- Noor Fadzilah Razali: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Iza Sazanita Isa: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Siti Noraini Sulaiman: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia; Integrative Pharmacogenomics Institute (iPROMISE), Universiti Teknologi MARA Cawangan Selangor, Puncak Alam Campus, Puncak Alam 42300, Selangor, Malaysia
- Noor Khairiah Abdul Karim: Department of Biomedical Imaging, Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia; Breast Cancer Translational Research Programme (BCTRP), Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia
- Muhammad Khusairi Osman: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Zainal Hisham Che Soh: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
25
Classification and diagnostic prediction of breast cancer metastasis on clinical data using machine learning algorithms. Sci Rep 2023; 13:485. [PMID: 36627367 PMCID: PMC9831019 DOI: 10.1038/s41598-023-27548-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 01/04/2023] [Indexed: 01/12/2023] Open
Abstract
Metastatic breast cancer (MBC) is one of the primary causes of cancer-related death in women. Despite several limitations, histopathological information about the malignancy is commonly used for cancer classification. The objective of our study is to develop a non-invasive breast cancer classification system for the diagnosis of cancer metastases. Python modules for text mining, data processing, and machine learning (ML) were developed in the Anaconda/Jupyter notebook environment. The prediction performance of the ML models is assessed using classification cross-validation criteria, including accuracy, AUC, and ROC. Welch's unpaired t-test was used to ascertain the statistical significance of the datasets. A text-mining framework applied to electronic medical records (EMR) made it easier to separate the blood-profile data and identify MBC patients. Monocyte counts showed a noticeable mean difference between MBC patients and healthy individuals. The accuracy of the ML models was markedly improved by removing outliers from the blood-profile data. A decision tree (DT) classifier achieved an accuracy of 83% with an AUC of 0.87. We then deployed the DT classifier using Flask to create a web application for robust diagnosis of MBC patients. Taken together, we conclude that ML models based on blood-profile data may assist physicians in selecting MBC patients for intensive care and thereby enhance overall survival outcomes.
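The tabular part of this workflow, outlier removal followed by a cross-validated decision tree, can be sketched as follows. The blood-profile DataFrame, its feature columns, and the binary label column are assumptions for illustration only.

```python
# Minimal sketch: IQR-based outlier removal on tabular blood-profile data, then a decision
# tree evaluated with cross-validated accuracy and ROC-AUC. The DataFrame `df` with numeric
# blood counts and a binary `label` column is assumed.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def drop_iqr_outliers(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Keep rows whose values in `cols` lie within [Q1 - k*IQR, Q3 + k*IQR]."""
    keep = pd.Series(True, index=df.index)
    for c in cols:
        q1, q3 = df[c].quantile([0.25, 0.75])
        iqr = q3 - q1
        keep &= df[c].between(q1 - k * iqr, q3 + k * iqr)
    return df[keep]

def evaluate(df: pd.DataFrame, feature_cols, label_col="label"):
    clean = drop_iqr_outliers(df, feature_cols)
    X, y = clean[feature_cols].to_numpy(), clean[label_col].to_numpy()
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    return {"accuracy": cross_val_score(tree, X, y, cv=5).mean(),
            "roc_auc": cross_val_score(tree, X, y, cv=5, scoring="roc_auc").mean()}
```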
26
A. Mohamed E, Gaber T, Karam O, Rashed EA. A Novel CNN pooling layer for breast cancer segmentation and classification from thermograms. PLoS One 2022; 17:e0276523. [PMID: 36269756 PMCID: PMC9586394 DOI: 10.1371/journal.pone.0276523] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 10/10/2022] [Indexed: 11/06/2022] Open
Abstract
Breast cancer is the second most frequent cancer worldwide after lung cancer, the fifth leading cause of cancer death overall, and a major cause of cancer death among women. In recent years, convolutional neural networks (CNNs) have been successfully applied to the diagnosis of breast cancer using different imaging modalities. Pooling is a main data-processing step in a CNN that decreases the dimensionality of the feature maps without losing major patterns; however, the effect of the pooling layer has not been studied thoroughly in the literature. In this paper, we propose a novel design for the pooling layer, called the vector pooling block (VPB), for CNNs. The proposed VPB consists of two data pathways that focus on extracting features along the horizontal and vertical orientations. The VPB enables CNNs to collect both global and local features by including long and narrow pooling kernels, in contrast to the traditional pooling layer, which gathers features with a fixed square kernel. Based on the novel VPB, we also propose a new pooling module called AVG-MAX VPB, which collects informative features by combining two pooling techniques, maximum and average pooling. The VPB and AVG-MAX VPB are plugged into backbone CNNs such as U-Net, AlexNet, ResNet18 and GoogleNet to demonstrate their advantages in the segmentation and classification tasks associated with breast cancer diagnosis from thermograms. The proposed pooling layers were evaluated on a benchmark thermogram database (DMR-IR) and compared with a U-Net baseline. The U-Net results were: global accuracy = 96.6%, mean accuracy = 96.5%, mean IoU = 92.07%, and mean BF score = 78.34%. The VPB-based results were: global accuracy = 98.3%, mean accuracy = 97.9%, mean IoU = 95.87%, and mean BF score = 88.68%, while the AVG-MAX VPB-based results were: global accuracy = 99.2%, mean accuracy = 98.97%, mean IoU = 98.03%, and mean BF score = 94.29%. The other network architectures also demonstrated consistent improvements when the VPB and AVG-MAX VPB were used.
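A vector-pooling-style block with horizontal and vertical pathways and combined average/max pooling can be written as a small PyTorch module. The sketch below follows the general idea described in the abstract, not the authors' exact layer design; the kernel length and fusion rule are assumptions.

```python
# Minimal sketch of a vector-pooling-style block (not the authors' exact design): two
# pathways pool with long, narrow kernels along rows and columns, combine average and
# max pooling, and keep the input resolution so the block can be dropped into a backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AvgMaxVectorPool(nn.Module):
    def __init__(self, kernel_length: int = 7):
        super().__init__()
        k, pad = kernel_length, kernel_length // 2
        # Horizontal pathway: 1 x k kernels; vertical pathway: k x 1 kernels.
        self.h_avg = nn.AvgPool2d(kernel_size=(1, k), stride=1, padding=(0, pad))
        self.h_max = nn.MaxPool2d(kernel_size=(1, k), stride=1, padding=(0, pad))
        self.v_avg = nn.AvgPool2d(kernel_size=(k, 1), stride=1, padding=(pad, 0))
        self.v_max = nn.MaxPool2d(kernel_size=(k, 1), stride=1, padding=(pad, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        horizontal = self.h_avg(x) + self.h_max(x)   # long, narrow context along rows
        vertical = self.v_avg(x) + self.v_max(x)     # long, narrow context along columns
        return F.relu(horizontal + vertical)

# Example: plug the block after a convolutional stage of any backbone.
features = torch.randn(2, 32, 56, 56)
print(AvgMaxVectorPool(kernel_length=7)(features).shape)   # torch.Size([2, 32, 56, 56])
```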
Affiliation(s)
- Esraa A. Mohamed: Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Tarek Gaber: Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt; School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Omar Karam: Faculty of Informatics and Computer Science, British University in Egypt (BUE), Cairo, Egypt
- Essam A. Rashed: Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt; Graduate School of Information Science, University of Hyogo, Kobe, Japan
27
Maqsood S, Damaševičius R, Maskeliūnas R. Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM. Medicina (B Aires) 2022; 58:medicina58081090. [PMID: 36013557 PMCID: PMC9413317 DOI: 10.3390/medicina58081090] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/03/2022] [Accepted: 08/06/2022] [Indexed: 02/05/2023] Open
Abstract
Background and Objectives: Clinical diagnosis has become very significant in today's health system. Brain cancer is among the most serious diseases and leading causes of mortality globally, and it is a key research topic in medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, the medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. As a result, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layer deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used together with a multiclass support vector machine (M-SVM) to select the best features. In the final step, the M-SVM is used for brain tumor classification, identifying meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that it outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification outperformed prior methods, achieving improved quantitative results and better visual quality.
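The transfer-learning feature extraction and entropy-guided selection steps can be sketched as below. MobileNetV2 from torchvision stands in for the modified architecture, the entropy ranking is a generic approximation of the entropy-based controlled selection, and the image tensors and labels are assumed.

```python
# Minimal sketch: MobileNetV2 as a transfer-learned feature extractor, followed by an
# entropy-based ranking of feature columns and a multiclass SVM. Illustrative only; the
# image batch `images` (N x 3 x 224 x 224, ImageNet-normalized) and labels `y` are assumed.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

def extract_features(images: torch.Tensor) -> np.ndarray:
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    net.classifier = torch.nn.Identity()        # keep the 1280-d pooled embedding
    net.eval()
    with torch.no_grad():
        return net(images).numpy()

def entropy_ranking(X: np.ndarray, bins: int = 16) -> np.ndarray:
    """Rank feature columns by the Shannon entropy of their value histograms."""
    entropies = []
    for col in X.T:
        counts, _ = np.histogram(col, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    return np.argsort(entropies)[::-1]

# X = extract_features(images)
# keep = entropy_ranking(X)[:500]                  # keep the 500 highest-entropy features
# clf = SVC(kernel="rbf").fit(X[:, keep], y)       # multiclass handled natively by SVC
```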
28
Breast Lesions Screening of Mammographic Images with 2D Spatial and 1D Convolutional Neural Network-Based Classifier. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157516] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Mammography is a first-line imaging examination that employs low-dose X-rays to rapidly screen for breast tumors, cysts, and calcifications. This study proposes a two-dimensional (2D) spatial and one-dimensional (1D) convolutional neural network (CNN) to detect possible breast lesions (tumors) early, so as to reduce patient mortality, and to develop a classifier for regions of interest in mammographic images where breast lesions are likely to occur. The 2D spatial fractional-order convolutional processes are used to strengthen and sharpen the lesions' features, denoise the images, and improve the feature extraction process. An automatic extraction task is then performed with a specific bounding box to sequentially pick out feature patterns from each mammographic image. The multi-round 1D kernel convolutional processes further strengthen and denoise the 1D feature signals and assist in identifying the levels of differentiation between normal and abnormal signals. In the classification layer, a gray relational analysis-based classifier is used to screen the possible lesions into normal (Nor), benign (B), and malignant (M) classes. Developing the classifier in this way can reduce training time, computational complexity, and computational time, and achieve a more accurate classification rate for clinical and medical purposes. Mammographic images were selected from the Mammographic Image Analysis Society image database for experimental tests on breast lesion screening, and K-fold cross-validations were performed. The experimental results showed promising performance in quantifying the classifier's outcome for medical evaluation in terms of recall (%), precision (%), accuracy (%), and F1 score.
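The 2D fractional-order enhancement idea can be illustrated with a common three-tap Grünwald-Letnikov approximation applied along both image axes. The order v, the edge-boosting combination, and the input ROI are assumptions, and this is not the paper's exact operator.

```python
# Minimal sketch of a fractional-order sharpening filter (a common three-tap
# Grünwald-Letnikov approximation, order v in (0, 1)), applied along rows and columns.
# Illustrative only; the input is a 2-D grayscale array.
import numpy as np
from scipy.ndimage import convolve1d

def fractional_enhance(gray: np.ndarray, v: float = 0.5) -> np.ndarray:
    """Apply a 3-tap fractional-difference kernel along both image axes and combine."""
    # Grünwald-Letnikov coefficients: c0 = 1, c1 = -v, c2 = v*(v - 1)/2
    kernel = np.array([v * (v - 1) / 2.0, -v, 1.0])
    img = gray.astype(float)
    d_rows = convolve1d(img, kernel, axis=0, mode="nearest")
    d_cols = convolve1d(img, kernel, axis=1, mode="nearest")
    return img + np.abs(d_rows) + np.abs(d_cols)     # boost edge/texture responses

# enhanced = fractional_enhance(mammogram_roi, v=0.6)   # ROI array assumed
```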
29
Medical Image Classification Using Transfer Learning and Chaos Game Optimization on the Internet of Medical Things. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9112634. [PMID: 35875781 PMCID: PMC9300353 DOI: 10.1155/2022/9112634] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 06/07/2022] [Accepted: 06/21/2022] [Indexed: 12/23/2022]
Abstract
The Internet of Medical Things (IoMT) has dramatically benefited medical professionals, as patients and physicians can access services from all regions. Although the automatic detection and prediction of diseases such as melanoma and leukemia is still being investigated in the IoMT, existing approaches do not achieve a high degree of efficiency. With a new approach that provides better results, patients would gain access to adequate treatments earlier and the death rate would be reduced. This paper therefore introduces an IoMT proposal for medical image classification that may be used anywhere, i.e., a ubiquitous approach. It was designed in two stages: first, a transfer learning (TL)-based method is employed for feature extraction, carried out using MobileNetV3; second, chaos game optimization (CGO) is used for feature selection, with the aim of excluding unnecessary features and improving performance, which is key in the IoMT. The methodology was evaluated using the ISIC-2016, PH2, and Blood-Cell datasets. The experimental results indicate that the proposed approach obtained an accuracy of 88.39% on ISIC-2016, 97.52% on PH2, and 88.79% on the Blood-Cell dataset. Moreover, the approach performed favorably on the metrics employed when compared with other existing methods.
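The wrapper-style feature selection over deep embeddings can be sketched with a simple random-mutation hill climber standing in for chaos game optimization, which has no standard library implementation. The feature matrix X (for example, MobileNetV3 embeddings) and labels y are assumed.

```python
# Minimal sketch of wrapper feature selection over deep features. A random-mutation hill
# climber over binary feature masks stands in for CGO; fitness is CV accuracy of a light
# classifier on the selected subset. X and y are assumed to exist.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def select_features(X, y, n_iters: int = 50, flip_fraction: float = 0.02):
    n_features = X.shape[1]
    mask = rng.random(n_features) < 0.5            # random initial subset
    best_score = fitness(mask, X, y)
    for _ in range(n_iters):
        candidate = mask.copy()
        flips = rng.random(n_features) < flip_fraction
        candidate[flips] = ~candidate[flips]       # flip a small fraction of bits
        score = fitness(candidate, X, y)
        if score >= best_score:                    # accept non-worsening moves
            mask, best_score = candidate, score
    return mask, best_score

# selected_mask, accuracy = select_features(X, y)
```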
30
Medical Internet-of-Things Based Breast Cancer Diagnosis Using Hyperparameter-Optimized Neural Networks. FUTURE INTERNET 2022. [DOI: 10.3390/fi14050153] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
In today’s healthcare setting, the accurate and timely diagnosis of breast cancer is critical for recovery and treatment in the early stages. In recent years, the Internet of Things (IoT) has experienced a transformation that allows the analysis of real-time and historical data using artificial intelligence (AI) and machine learning (ML) approaches. Medical IoT combines medical devices and AI applications with healthcare infrastructure to support medical diagnostics. The current state-of-the-art approach fails to diagnose breast cancer in its initial period, resulting in the death of most women. As a result, medical professionals and researchers are faced with a tremendous problem in early breast cancer detection. We propose a medical IoT-based diagnostic system that competently identifies malignant and benign people in an IoT environment to resolve the difficulty of identifying early-stage breast cancer. The artificial neural network (ANN) and convolutional neural network (CNN) with hyperparameter optimization are used for malignant vs. benign classification, while the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) were utilized as baseline classifiers for comparison. Hyperparameters are important for machine learning algorithms since they directly control the behaviors of training algorithms and have a significant effect on the performance of machine learning models. We employ a particle swarm optimization (PSO) feature selection approach to select more satisfactory features from the breast cancer dataset to enhance the classification performance using MLP and SVM, while grid-based search was used to find the best combination of the hyperparameters of the CNN and ANN models. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used to test the proposed approach. The proposed model got a classification accuracy of 98.5% using CNN, and 99.2% using ANN.