1. Selvakanmani S, Dharani Devi G, Rekha V, Jeyalakshmi J. Privacy-Preserving Breast Cancer Classification: A Federated Transfer Learning Approach. Journal of Imaging Informatics in Medicine 2024; 37:1488-1504. [PMID: 38424280; PMCID: PMC11300768; DOI: 10.1007/s10278-024-01035-8]
Abstract
Breast cancer is a deadly disease causing a considerable number of fatalities among women worldwide. To enhance patient outcomes as well as survival rates, early and accurate detection is crucial. Machine learning techniques, particularly deep learning, have demonstrated impressive success in various image recognition tasks, including breast cancer classification. However, the reliance on large labeled datasets poses challenges in the medical domain due to privacy issues and data silos. This study proposes a novel transfer learning approach integrated into a federated learning framework to overcome the limitations of limited labeled data and data privacy in collaborative healthcare settings. For breast cancer classification, mammography and MRI images were gathered from three different medical centers. Federated learning, an emerging privacy-preserving paradigm, empowers multiple medical institutions to jointly train a global model while maintaining data decentralization. Our proposed methodology capitalizes on the power of a pre-trained ResNet, a deep neural network architecture, as a feature extractor. By fine-tuning the higher layers of ResNet using breast cancer datasets from diverse medical centers, we enable the model to learn specialized features relevant to different domains while leveraging the comprehensive image representations acquired from large-scale datasets like ImageNet. To overcome domain shift challenges caused by variations in data distributions across medical centers, we introduce domain adversarial training. The model learns to minimize the domain discrepancy while maximizing classification accuracy, facilitating the acquisition of domain-invariant features. We conducted extensive experiments on diverse breast cancer datasets obtained from multiple medical centers. Comparative analysis was performed to evaluate the proposed approach against traditional standalone training and federated learning without domain adaptation.
When compared with traditional models, our proposed model showed a classification accuracy of 98.8% and a computational time of 12.22 s. The results showcase promising enhancements in classification accuracy and model generalization, underscoring the potential of our method in improving breast cancer classification performance while upholding data privacy in a federated healthcare environment.
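The federated setup described in this abstract, where several centers jointly train a global model without pooling their images, can be illustrated with a FedAvg-style weighted parameter average (a generic sketch of the paradigm, not the authors' implementation; the center names and dataset sizes below are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client model parameters by a dataset-size-weighted average (FedAvg)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three hypothetical medical centers, each holding one layer's weights locally.
center_a = [np.array([1.0, 2.0])]
center_b = [np.array([3.0, 4.0])]
center_c = [np.array([5.0, 6.0])]
global_weights = fedavg([center_a, center_b, center_c], client_sizes=[100, 100, 200])
print(global_weights[0])  # weighted toward center_c: [3.5 4.5]
```

Only the aggregated parameters travel between server and clients; the raw images never leave each center, which is the privacy property the abstract relies on.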
Affiliation(s)
- Selvakanmani S: Department of Information Technology, R.M.K Engineering College, Chennai, Tamil Nadu, India
- G Dharani Devi: Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Rekha V: Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, Tamil Nadu, India
- J Jeyalakshmi: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidhyapeetham, Chennai, India
2. Joshi RC, Srivastava P, Mishra R, Burget R, Dutta MK. Biomarker profiling and integrating heterogeneous models for enhanced multi-grade breast cancer prognostication. Computer Methods and Programs in Biomedicine 2024; 255:108349. [PMID: 39096573; DOI: 10.1016/j.cmpb.2024.108349]
Abstract
BACKGROUND: Breast cancer remains a leading cause of female mortality worldwide, exacerbated by limited awareness, inadequate screening resources, and treatment options. Accurate and early diagnosis is crucial for improving survival rates and effective treatment.
OBJECTIVES: This study aims to develop an innovative artificial intelligence (AI) based model for predicting breast cancer and its various histopathological grades by integrating multiple biomarkers and subject age, thereby enhancing diagnostic accuracy and prognostication.
METHODS: A novel ensemble-based machine learning (ML) framework has been introduced that integrates three distinct biomarkers, beta-human chorionic gonadotropin (β-hCG), Programmed Cell Death Ligand 1 (PD-L1), and alpha-fetoprotein (AFP), alongside subject age. Hyperparameter optimization was performed using the Particle Swarm Optimization (PSO) algorithm, and minority oversampling techniques were employed to mitigate overfitting. The model's performance was validated through rigorous five-fold cross-validation.
RESULTS: The proposed model demonstrated superior performance, achieving a 97.93% accuracy and a 98.06% F1-score on meticulously labeled test data across diverse age groups. Comparative analysis showed that the model outperforms state-of-the-art approaches, highlighting its robustness and generalizability.
CONCLUSION: By providing a comprehensive analysis of multiple biomarkers and effectively predicting tumor grades, this study offers a significant advancement in breast cancer screening, particularly in regions with limited medical resources. The proposed framework has the potential to reduce breast cancer mortality rates and improve early intervention and personalized treatment strategies.
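Particle Swarm Optimization, used here for hyperparameter tuning, maintains a swarm of candidate solutions whose velocities are pulled toward each particle's personal best and the swarm's global best. A minimal, generic PSO on a toy objective (not the paper's actual search space; inertia and acceleration constants below are common textbook defaults, not values from the paper) might look like:

```python
import random

def pso(f, dim, n_particles=20, iters=60, seed=0):
    """Minimal Particle Swarm Optimization: velocities blend inertia,
    attraction to each particle's personal best, and the global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "hyperparameter" objective with its minimum at (2, -1).
best, val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, dim=2)
print(round(val, 4))
```

In a real tuning run, `f` would train and cross-validate the ensemble for a given hyperparameter vector and return the validation loss.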
Affiliation(s)
- Rakesh Chandra Joshi: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India; Centre for Advanced Studies, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- Pallavi Srivastava: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Rashmi Mishra: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Radim Burget: Department of Telecommunications, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
- Malay Kishore Dutta: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India
3. Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024; 179:108910. [PMID: 39032244; DOI: 10.1016/j.compbiomed.2024.108910]
Abstract
Cancer is becoming one of the deadliest illnesses identified among individuals worldwide. The mortality rate has been increasing rapidly every year, driving progress in the diagnostic technologies used to manage this illness. Manual segmentation and classification over a large set of data modalities can be a challenging task. A crucial requirement is therefore to develop computer-assisted diagnostic systems for initial cancer identification. This article offers a systematic review of deep learning approaches using various image modalities to detect multi-organ cancers from 2012 to 2023. It emphasizes the detection of the five most predominant tumors, i.e., breast, brain, lung, skin, and liver. An extensive review has been carried out by collecting research and conference articles and book chapters from reputed international databases, i.e., Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley, that fulfill the criteria for quality evaluation. This systematic review summarizes the convolutional neural network architectures and datasets used for identifying and classifying the diverse categories of cancer. The study also gives an inclusive account of ensemble deep learning models that have achieved better evaluation results for classifying images into cancer or healthy cases. This paper provides research scientists in the domain of medical imaging with a broad understanding of which deep learning technique performs best on which type of dataset, methods of feature extraction, different challenges, and their anticipated solutions. Lastly, some open challenges and issues affecting health emergencies have been discussed.
Affiliation(s)
- Jaspreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
4. Balamurugan AG, Srinivasan S, Preethi D, Monica P, Mathivanan SK, Shah MA. Robust brain tumor classification by fusion of deep learning and channel-wise attention mode approach. BMC Med Imaging 2024; 24:147. [PMID: 38886661; PMCID: PMC11181652; DOI: 10.1186/s12880-024-01323-3]
Abstract
Diagnosing brain tumors is a complex and time-consuming process that relies heavily on radiologists' expertise and interpretive skills. The advent of deep learning methodologies has revolutionized the field, offering more accurate and efficient assessments. Attention-based models have emerged as promising tools, focusing on salient features within complex medical imaging data. However, the precise impact of different attention mechanisms, such as channel-wise, spatial, or combined attention within the Channel-wise Attention Mode (CWAM), on brain tumor classification remains relatively unexplored. This study aims to address this gap by leveraging the power of ResNet101 coupled with CWAM (ResNet101-CWAM) for brain tumor classification. The results show that ResNet101-CWAM surpassed conventional deep learning classification methods such as ConvNet, achieving exceptional performance metrics of 99.83% accuracy, 99.21% recall, 99.01% precision, 99.27% F1-score, and 99.16% AUC on the same dataset. This enhanced capability holds significant implications for clinical decision-making, as accurate and efficient brain tumor classification is crucial for guiding treatment strategies and improving patient outcomes. Integrating ResNet101-CWAM into existing brain tumor classification software platforms is a crucial step towards enhancing diagnostic accuracy and streamlining clinical workflows for physicians.
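Channel-wise attention of the kind described here typically squeezes each feature channel to a scalar by global average pooling, passes the result through a small bottleneck network, and rescales the channels by the resulting sigmoid weights. The sketch below is squeeze-and-excitation style with random weights; the paper's exact CWAM design may differ:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation style channel attention:
    pool each channel, run a ReLU bottleneck, rescale channels by sigmoid weights."""
    squeezed = feature_map.mean(axis=(1, 2))          # squeeze: (C,) from (C, H, W)
    hidden = np.maximum(0.0, w1 @ squeezed)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates in (0, 1)
    return feature_map * weights[:, None, None]       # excite: per-channel rescale

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))                 # 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8))                      # bottleneck: 8 -> 2
w2 = rng.standard_normal((8, 2))                      # expand back: 2 -> 8
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In a trained network the two weight matrices are learned, so informative channels receive gates near 1 and uninformative ones are suppressed.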
Affiliation(s)
- Balamurugan A G: Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Saravanan Srinivasan: Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Preethi D: Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- Monica P: School of Electrical and Electronics Engineering, VIT Bhopal University, Bhopal-Indore Highway, Kothrikalan, Sehore, Madhya Pradesh, 466114, India
- Mohd Asif Shah: Department of Economics, Kardan University, Parwan-e-Du, Kabul, 1001, Afghanistan; Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
5. Ramamoorthy P, Ramakantha Reddy BR, Askar SS, Abouhawwash M. Histopathology-based breast cancer prediction using deep learning methods for healthcare applications. Front Oncol 2024; 14:1300997. [PMID: 38894870; PMCID: PMC11184215; DOI: 10.3389/fonc.2024.1300997]
Abstract
Breast cancer (BC) is the leading cause of female cancer mortality and a major threat to women's health. Deep learning methods have recently been used extensively in many medical domains, especially in detection and classification applications. Studying histological images for the automatic diagnosis of BC is important for patients and their prognosis. Owing to the complexity and variety of histology images, manual examination can be difficult and susceptible to errors and thus needs the services of experienced pathologists. Therefore, the publicly accessible BreakHis and invasive ductal carcinoma (IDC) datasets are used in this study to analyze histopathological images of BC. First, using super-resolution generative adversarial networks (SRGANs), which create high-resolution images from low-quality images, the gathered images from BreakHis and IDC are pre-processed to provide useful inputs for the prediction stage. The concept of SRGAN combines components of conventional generative adversarial network (GAN) loss functions with efficient sub-pixel nets. The high-quality images are then sent to the data augmentation stage, where new data points are created by making small adjustments to the dataset using rotation, random cropping, mirroring, and color-shifting. Next, patch-based feature extraction using Inception V3 and ResNet-50 (PFE-INC-RES) is employed to extract features from the augmented data. After feature extraction, transductive long short-term memory (TLSTM) is applied to improve classification accuracy by decreasing the number of false positives.
The results of the suggested PFE-INC-RES are evaluated against existing methods on the BreakHis dataset with respect to accuracy (99.84%), specificity (99.71%), sensitivity (99.78%), and F1-score (99.80%), while on the IDC dataset the suggested PFE-INC-RES performed better in terms of F1-score (99.08%), accuracy (99.79%), specificity (98.97%), and sensitivity (99.17%).
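The patch-based feature extraction step starts by tiling each histology image into fixed-size patches that are then fed to the backbone networks. A minimal sliding-window patch extractor (illustrative only; the paper's patch size and stride are not stated here) could be:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a square window over a 2D image and return the stacked patches."""
    h, w = image.shape
    patches = [
        image[r:r + patch, c:c + patch]
        for r in range(0, h - patch + 1, stride)
        for c in range(0, w - patch + 1, stride)
    ]
    return np.stack(patches)

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a histology tile
patches = extract_patches(img, patch=4, stride=4)
print(patches.shape)  # (4, 4, 4): four non-overlapping 4x4 patches
```

Each patch would then be resized to the backbone's input resolution and passed through Inception V3 or ResNet-50 to obtain per-patch feature vectors.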
Affiliation(s)
- Prabhu Ramamoorthy: Department of Electronics and Communication Engineering, Gnanamani College of Technology, Namakkal, India
- S. S. Askar: Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, Saudi Arabia
- Mohamed Abouhawwash: Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
6. Chowa SS, Azam S, Montaha S, Bhuiyan MRI, Jonkman M. Improving the Automated Diagnosis of Breast Cancer with Mesh Reconstruction of Ultrasound Images Incorporating 3D Mesh Features and a Graph Attention Network. Journal of Imaging Informatics in Medicine 2024; 37:1067-1085. [PMID: 38361007; DOI: 10.1007/s10278-024-00983-5]
Abstract
This study proposes a novel approach for classifying breast tumors in ultrasound images as benign or malignant by converting the region of interest (ROI) of a 2D ultrasound image into a 3D representation using the point-e system, allowing for in-depth analysis of underlying characteristics. Instead of relying solely on 2D imaging features, this method extracts 3D mesh features that describe tumor patterns more precisely. Ten informative and medically relevant mesh features are extracted and assessed with two feature selection techniques. Additionally, a feature pattern analysis has been conducted to determine each feature's significance. A feature table with dimensions of 445 × 12 is generated and a graph is constructed, treating the rows as nodes and the relationships among the nodes as edges. The Spearman correlation coefficient is employed to identify edges between strongly connected nodes (with a correlation score greater than or equal to 0.7), resulting in a graph containing 56,054 edges and 445 nodes. A graph attention network (GAT) is proposed for the classification task and the model is optimized with an ablation study, resulting in a highest accuracy of 99.34%. The performance of the proposed model is compared with ten machine learning (ML) models and a one-dimensional convolutional neural network, whose test accuracies range from 73% to 91%. Our novel 3D mesh-based approach, coupled with the GAT, yields promising performance for breast tumor classification, outperforming traditional models, and has the potential to reduce the time and effort of radiologists by providing a reliable diagnostic system.
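The graph-construction step, connecting feature rows whose Spearman correlation is at least 0.7, can be sketched directly: rank each row, compute the rank correlation for every pair, and keep pairs above the threshold. This is a small illustration with made-up feature rows, and it assumes no tied values (ties would require averaged ranks):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of the ranks (no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def build_edges(features, threshold=0.7):
    """Connect node pairs whose feature rows are strongly rank-correlated."""
    n = len(features)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if spearman(features[i], features[j]) >= threshold]

feats = np.array([[1., 2., 3., 4.],
                  [2., 3., 4., 5.],    # same rank order as row 0 -> rho = 1
                  [4., 3., 2., 1.]])   # reversed rank order -> rho = -1
print(build_edges(feats))  # [(0, 1)]
```

Applied to the study's 445 × 12 feature table, the resulting edge list and node features would form the input graph for the GAT.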
Affiliation(s)
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Rahad Islam Bhuiyan: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
7. Heikal A, El-Ghamry A, Elmougy S, Rashad MZ. Fine tuning deep learning models for breast tumor classification. Sci Rep 2024; 14:10753. [PMID: 38730248; PMCID: PMC11087494; DOI: 10.1038/s41598-024-60245-w]
Abstract
This paper proposes an approach to enhance the differentiation between benign and malignant breast tumors (BT) using histopathology images from the BreakHis dataset. The main stages involve preprocessing, which encompasses image resizing and data partitioning (training and testing sets), followed by data augmentation techniques. Both feature extraction and classification are performed by a Custom CNN. The experimental results show that the proposed approach using the Custom CNN model achieves better performance, with an accuracy of 84%, than the same approach using other pretrained models, including MobileNetV3, EfficientNetB0, VGG16, and ResNet50V2, which yield relatively lower accuracies ranging from 74% to 82%; these four models are used as both feature extractors and classifiers. To increase the accuracy and other performance metrics, Grey Wolf Optimization (GWO) and Modified Gorilla Troops Optimization (MGTO) metaheuristic optimizers are applied to each model separately for hyperparameter tuning. In this case, the experimental results show that the Custom CNN model, refined with MGTO optimization, reaches an exceptional accuracy of 93.13% in just 10 iterations, outperforming the other state-of-the-art methods and the four pretrained models on the BreakHis dataset.
Affiliation(s)
- Abeer Heikal: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt; Department of Computer Science, Misr Higher Institute for Commerce and Computers, Mansoura, 35511, Egypt
- Amir El-Ghamry: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Samir Elmougy: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- M Z Rashad: Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
8. Jiménez-Gaona Y, Álvarez MJR, Castillo-Malla D, García-Jaen S, Carrión-Figueroa D, Corral-Domínguez P, Lakshminarayanan V. BraNet: a mobil application for breast image classification based on deep learning algorithms. Med Biol Eng Comput 2024. [PMID: 38693328; DOI: 10.1007/s11517-024-03084-1]
Abstract
Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was first trained for synthetic image generation; these images were subsequently used to pre-train SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for digital mammography (DM) and ultrasound (US) breast imaging classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and inter-reader agreement was assessed using the kappa coefficient. The BraNet app exhibited the highest accuracy in classifying benign and malignant US images (94.7%/93.6%), compared to DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). This contrasts with the radiologists' accuracy (29% for DM and 70% for US for both readers), who achieved higher accuracy in US ROI classification than in DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This suggests that the amount of training data is not the only essential factor: the variety of abnormalities must also be considered, especially in mammography data, where several BI-RADS categories are present (microcalcifications, nodules, masses, asymmetry, and dense breasts) and can affect the accuracy of the model.
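The reader-agreement analysis uses the kappa coefficient, which corrects raw agreement for the agreement expected by chance. A minimal Cohen's kappa for two readers (with hypothetical labels, not the study's data) is:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(r1) == len(r2)
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n                      # observed agreement
    pe = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)   # chance agreement
    return (po - pe) / (1 - pe)

reader1 = ["benign", "benign", "malignant", "benign", "malignant", "benign"]
reader2 = ["benign", "malignant", "malignant", "benign", "benign", "benign"]
print(round(cohens_kappa(reader1, reader2), 3))  # 0.25
```

Kappa near 0 means agreement no better than chance; values around 0.2-0.4 are conventionally read as fair to moderate, matching the study's interpretation.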
Affiliation(s)
- Yuliana Jiménez-Gaona: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador; Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain; Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
- María José Rodríguez Álvarez: Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain
- Darwin Castillo-Malla: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador; Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain; Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
- Santiago García-Jaen: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador
- Patricio Corral-Domínguez: Corporación Médica Monte Sinaí-CIPAM (Centro Integral de Patología Mamaria), Facultad de Ciencias Médicas, Universidad de Cuenca, Cuenca, 010203, Ecuador
- Vasudevan Lakshminarayanan: Department of Systems Design Engineering, Physics, and Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
9. Bui DC, Song B, Kim K, Kwak JT. DAX-Net: A dual-branch dual-task adaptive cross-weight feature fusion network for robust multi-class cancer classification in pathology images. Computer Methods and Programs in Biomedicine 2024; 248:108112. [PMID: 38479146; DOI: 10.1016/j.cmpb.2024.108112]
Abstract
BACKGROUND AND OBJECTIVE: Multi-class cancer classification has been extensively studied in digital and computational pathology due to its importance in clinical decision-making. Numerous computational tools have been proposed for various types of cancer classification, many of them built on convolutional neural networks. Recently, Transformer-style networks have been shown to be effective for cancer classification. Herein, we present a hybrid design that leverages both convolutional neural networks and the Transformer architecture to obtain superior performance in cancer classification.
METHODS: We propose a dual-branch dual-task adaptive cross-weight feature fusion network, called DAX-Net, which exploits heterogeneous feature representations from the convolutional neural network and Transformer network, adaptively combines them to boost their representation power, and conducts cancer classification as both categorical classification and ordinal classification. For an efficient and effective optimization of the proposed model, we introduce two loss functions that are tailored to the two classification tasks.
RESULTS: To evaluate the proposed method, we employed colorectal and prostate cancer datasets, each of which contains both in-domain and out-of-domain test sets. For colorectal cancer, the proposed method obtained an accuracy of 88.4%, a quadratic kappa score of 0.945, and an F1 score of 0.831 for the in-domain test set, and 84.4%, 0.910, and 0.768 for the out-of-domain test set. For prostate cancer, it achieved an accuracy of 71.6%, a kappa score of 0.635, and an F1 score of 0.655 for the in-domain test set; 79.2% accuracy, a 0.721 kappa score, and a 0.686 F1 score for the first out-of-domain test set; and 58.1% accuracy, a 0.564 kappa score, and a 0.493 F1 score for the second out-of-domain test set. Notably, the proposed method outperformed the other competitors by significant margins, in particular with respect to the out-of-domain test sets.
CONCLUSIONS: The experimental results demonstrate that the proposed method is not only accurate but also robust to varying conditions of the test sets in comparison to several related methods. These results suggest that the proposed method can facilitate automated cancer classification in various clinical settings.
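The quadratic kappa score reported above weights disagreements by the squared distance between ordinal grades, so confusing adjacent grades costs less than confusing distant ones. A generic implementation (with toy labels, not the paper's data):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa: agreement metric that penalizes ordinal
    misclassifications by the squared distance between grades."""
    O = np.zeros((n_classes, n_classes))                 # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float) / (n_classes - 1) ** 2
    hist_t, hist_p = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(hist_t, hist_p) / O.sum()               # expected matrix under chance
    return 1.0 - (W * O).sum() / (W * E).sum()

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]                              # one adjacent-grade error
print(round(quadratic_weighted_kappa(y_true, y_pred, n_classes=3), 3))  # 0.857
```

Because the single error here is between adjacent grades (2 vs 1), the score stays high; predicting grade 0 instead would have been penalized four times as heavily.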
Affiliation(s)
- Doanh C Bui: School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea
- Boram Song: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Kyungeun Kim: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Jin Tae Kwak: School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea
10. Subhashini R, Velswamy R, Sree Rathna Lakshmi NVS, Sivanandam C. An innovative breast cancer detection framework using multiscale dilated densenet with attention mechanism. Network (Bristol, England) 2024:1-37. [PMID: 38648017; DOI: 10.1080/0954898x.2024.2343348]
Abstract
Cancer-related deadly diseases affect both developed and underdeveloped nations worldwide. Effective network learning is crucial to more reliably identify and categorize breast carcinoma in vast and unbalanced image datasets. The absence of early cancer symptoms makes early identification challenging. Therefore, from the perspectives of diagnosis, prevention, and therapy, cancer continues to be among the healthcare concerns that numerous researchers work to advance. It is highly essential to design an innovative breast cancer detection model that addresses the complications of classical techniques. Initially, breast cancer images are gathered from online sources and then subjected to segmentation. Here, segmentation is performed using an Adaptive Trans-Dense-Unet (A-TDUNet), whose parameters are tuned using the developed Modified Sheep Flock Optimization Algorithm (MSFOA). The segmented images are then passed to the breast cancer detection stage, where detection is performed by a Multiscale Dilated Densenet with Attention Mechanism (MDD-AM). Throughout the result validation, the Negative Predictive Value (NPV) and accuracy rate of the designed approach are 96.719% and 93.494%, respectively. Hence, the implemented breast cancer detection model secured a better efficacy rate than the baseline detection methods under diverse experimental conditions.
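The "multiscale dilated" part of the architecture refers to convolutions whose kernel taps are spaced apart, enlarging the receptive field without adding parameters; stacking several dilation rates yields the multiple scales. A 1-D sketch of dilated convolution (illustrative only; the paper uses 2-D convolutions inside a DenseNet):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D dilated convolution: kernel taps are spaced `dilation` samples apart,
    widening the receptive field without extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output sample
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
out = dilated_conv1d(x, kernel=[1.0, 1.0], dilation=2)
print(out)  # sums samples two apart: [ 2.  4.  6.  8. 10. 12.]
```

With dilation 1 the same kernel sees adjacent samples; raising the dilation lets deeper layers aggregate context over progressively larger neighborhoods.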
Affiliation(s)
- R Subhashini: Department of Information Technology, Sona College of Technology, Salem, Tamil Nadu, India
- Rajasekar Velswamy: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
- N V S Sree Rathna Lakshmi: Department of Electronics and Communication Engineering, Agni College of Technology, Thazhambur, Tamil Nadu, India
- Chakaravarthi Sivanandam: Department of Computer Science and Engineering, Panimalar Engineering College, Poonamallee, Chennai, Tamil Nadu, India
11. Alhassan AM. An improved breast cancer classification with hybrid chaotic sand cat and Remora Optimization feature selection algorithm. PLoS One 2024; 19:e0300622. [PMID: 38603682; PMCID: PMC11008855; DOI: 10.1371/journal.pone.0300622]
Abstract
Breast cancer is one of the most often diagnosed cancers in women, and identifying breast cancer histological images is an essential challenge in automated pathology analysis. According to research, breast cancer (BrC) accounts for around 12% of all cancer cases globally and around 25% of cancers in women. Consequently, the prediction of BrC depends critically on the quick and precise processing of imaging data. The primary reason deep learning models are used in breast cancer detection is that they can produce findings more quickly and accurately than current machine learning-based techniques. Using the BreakHis dataset, we demonstrate in this work the viability of automatically identifying and classifying BrC. The first stage is pre-processing, which employs an Adaptive Switching Modified Decision Based Unsymmetrical Trimmed Median Filter (ASMDBUTMF) to remove high-density noise. After the image has been pre-processed, it is segmented using the thresholding level set approach. Next, we propose a hybrid chaotic sand cat optimization technique, together with the Remora Optimization Algorithm (ROA), for feature selection. The suggested strategy facilitates the acquisition of precise feature attributes, hence simplifying the detection procedure, and it also aids in resolving problems pertaining to global optimization. Following selection, the best features proceed to the categorization procedure. A deep learning classifier, the Conditional Variational Autoencoder, is used to discriminate between cancerous and benign tumors while categorizing them. Consequently, a classification accuracy of 99.4%, precision of 99.2%, recall of 99.1%, F-score of 99%, specificity of 99.14%, FDR of 0.54, FNR of 0.001, FPR of 0.002, MCC of 0.98, and NPV of 0.99 were obtained using the proposed approach. Furthermore, compared to other research using the BreakHis dataset, our results are more favorable.
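The pre-segmentation thresholding idea can be illustrated with a global Otsu threshold, which picks the intensity cut maximizing between-class variance. This is a simpler stand-in for the paper's thresholding level set approach, shown on a synthetic bimodal image:

```python
import numpy as np

def otsu_threshold(image):
    """Global Otsu threshold: choose the cut that maximizes between-class variance."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = image.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # fraction of pixels below the cut
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * hist[:t]).sum() / hist[:t].sum()
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / hist[t:].sum()
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic image: 500 dark pixels (40) and 500 bright pixels (200).
img = np.concatenate([np.full(500, 40.0), np.full(500, 200.0)])
t = otsu_threshold(img)
print(t)  # a cut between the two intensity populations
```

Pixels above the threshold would form the candidate lesion mask that a level-set method then refines.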
Affiliation(s)
- Afnan M. Alhassan: College of Computing and Information Technology, Shaqra University, Shaqra, Saudi Arabia
12
Safdar Ali Khan M, Husen A, Nisar S, Ahmed H, Shah Muhammad S, Aftab S. Offloading the computational complexity of transfer learning with generic features. PeerJ Comput Sci 2024; 10:e1938. [PMID: 38660182] [PMCID: PMC11041970] [DOI: 10.7717/peerj-cs.1938] [Received: 11/14/2023] [Accepted: 02/19/2024] [Indexed: 04/26/2024]
Abstract
Deep learning approaches are generally complex, requiring extensive computational resources and having high time complexity. Transfer learning is a state-of-the-art approach to reducing the requirements of high computational resources by using pre-trained models without compromising accuracy and performance. In conventional studies, pre-trained models are trained on datasets from different but similar domains with many domain-specific features. The computational requirements of transfer learning are directly dependent on the number of features, which include both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and various performance metrics such as precision, accuracy, recall, F1-score, and computational requirements. It is seen that discarding domain-specific features up to a certain limit provides significant performance improvements and minimizes the computational requirements in terms of training time (reduced by approx. 12%), processor utilization (reduced by approx. 25%), and memory usage (reduced by approx. 22%). The proposed transfer learning strategy increases accuracy (by approx. 7%) and offloads computational complexity expeditiously.
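The abstract describes the core operation only verbally; a minimal sketch of discarding the (assumed) domain-specific tail of a pre-trained feature vector, with all names hypothetical and not the authors' implementation, is:

```python
def keep_generic_features(features: list[float], keep_ratio: float) -> list[float]:
    """Discard the tail of a pre-trained model's feature vector, under the
    illustrative assumption that later entries encode more domain-specific
    information than the earlier, generic ones."""
    k = max(1, int(len(features) * keep_ratio))
    return features[:k]

full = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7, 0.4, 0.8]   # hypothetical pooled features
reduced = keep_generic_features(full, keep_ratio=0.75)
assert len(reduced) == 6     # 25% of the features were discarded
assert reduced == full[:6]
```

A smaller feature vector then shrinks the classifier head trained on top of it, which is where the reported training-time and memory savings would come from.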
Affiliation(s)
- Muhammad Safdar Ali Khan: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Arif Husen: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan; Department of Computer Science, COMSATS Institute of Information Technology, Lahore, Punjab, Pakistan
- Shafaq Nisar: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Hasnain Ahmed: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Syed Shah Muhammad: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
- Shabib Aftab: Department of Computer Science and Information Technology, Virtual University of Pakistan, Lahore, Punjab, Pakistan
13
Ayana G, Lee E, Choe SW. Vision Transformers for Breast Cancer Human Epidermal Growth Factor Receptor 2 Expression Staging without Immunohistochemical Staining. Am J Pathol 2024; 194:402-414. [PMID: 38096984] [DOI: 10.1016/j.ajpath.2023.11.015] [Received: 08/03/2023] [Revised: 10/10/2023] [Accepted: 11/20/2023] [Indexed: 12/31/2023]
Abstract
Accurate staging of human epidermal growth factor receptor 2 (HER2) expression is vital for evaluating breast cancer treatment efficacy. However, it typically involves costly and complex immunohistochemical staining, along with hematoxylin and eosin staining. This work presents customized vision transformers for staging HER2 expression in breast cancer using only hematoxylin and eosin-stained images. The proposed algorithm comprised three modules: a localization module for weakly localizing critical image features using spatial transformers, an attention module for global learning via vision transformers, and a loss module to determine proximity to a HER2 expression level based on input images by calculating ordinal loss. Results, reported with 95% CIs, reveal the proposed approach's success in HER2 expression staging: area under the receiver operating characteristic curve, 0.9202 ± 0.01; precision, 0.922 ± 0.01; sensitivity, 0.876 ± 0.01; and specificity, 0.959 ± 0.02 over fivefold cross-validation. Comparatively, this approach significantly outperformed conventional vision transformer models and state-of-the-art convolutional neural network models (P < 0.001). Furthermore, it surpassed existing methods when evaluated on an independent test data set. This work holds great importance, aiding HER2 expression staging in breast cancer treatment while circumventing the costly and time-consuming immunohistochemical staining procedure, thereby addressing diagnostic disparities in low-resource settings and low-income countries.
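The loss module is described only at a high level; one standard way to score "proximity to a HER2 expression level" is the cumulative (extended-binary) ordinal encoding, sketched below under that assumption with hypothetical names rather than the authors' exact formulation:

```python
import math

def ordinal_targets(level: int, num_levels: int) -> list[float]:
    """Encode ordinal level k (0-based) as K-1 cumulative binary targets:
    target_i = 1 if level > i, for thresholds i = 0..K-2."""
    return [1.0 if level > i else 0.0 for i in range(num_levels - 1)]

def ordinal_loss(sigmoid_probs: list[float], level: int) -> float:
    """Sum of binary cross-entropies over the threshold outputs; predictions
    near the true level incur smaller loss than distant ones."""
    targets = ordinal_targets(level, len(sigmoid_probs) + 1)
    eps = 1e-9
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(targets, sigmoid_probs))

probs = [0.95, 0.90, 0.10]   # model output suggesting level ~2 of levels 0..3
assert ordinal_loss(probs, 2) < ordinal_loss(probs, 0)
```

Unlike plain cross-entropy, this penalizes a prediction more the further it lands from the true stage, which is the property an ordinal staging task needs.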
Affiliation(s)
- Gelan Ayana: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Eonjin Lee: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe: Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
14
Chen W, Tan X, Zhang J, Du G, Fu Q, Jiang H. A robust approach for multi-type classification of brain tumor using deep feature fusion. Front Neurosci 2024; 18:1288274. [PMID: 38440396] [PMCID: PMC10909817] [DOI: 10.3389/fnins.2024.1288274] [Received: 09/04/2023] [Accepted: 02/05/2024] [Indexed: 03/06/2024]
Abstract
Brain tumors can be classified into many different types based on their shape, texture, and location. Accurate diagnosis of the brain tumor type helps doctors develop appropriate treatment plans and save patients' lives, so improving the accuracy of such classification systems is crucial. We propose a deep feature fusion method based on convolutional neural networks to enhance the accuracy and robustness of brain tumor classification while mitigating the risk of over-fitting. Firstly, the extracted features of three pre-trained models, ResNet101, DenseNet121, and EfficientNetB0, are adjusted to ensure that the shape of the extracted features is the same across the three models. Secondly, the three models are fine-tuned to extract features from brain tumor images. Thirdly, pairwise summation of the extracted features is carried out to achieve feature fusion. Finally, classification of brain tumors based on the fused features is performed. The public Figshare (Dataset 1) and Kaggle (Dataset 2) datasets are used to verify the reliability of the proposed method. Experimental results demonstrate that fusing the ResNet101 and DenseNet121 features achieves the best performance, with classification accuracies of 99.18% and 97.24% on the Figshare and Kaggle datasets, respectively.
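The shape-adjustment and pairwise-summation steps can be sketched as follows; `project` is a hypothetical stand-in for the pooling/projection that aligns each backbone's output, not the authors' implementation:

```python
def project(features: list[float], out_dim: int) -> list[float]:
    """Stand-in for the shape-adjustment step: average-pool consecutive
    chunks of a backbone's feature vector down to a common length."""
    chunk = len(features) // out_dim
    return [sum(features[i * chunk:(i + 1) * chunk]) / chunk
            for i in range(out_dim)]

def fuse(feats_a: list[float], feats_b: list[float]) -> list[float]:
    """Pairwise (element-wise) summation of two aligned feature vectors."""
    assert len(feats_a) == len(feats_b), "features must be shape-aligned first"
    return [a + b for a, b in zip(feats_a, feats_b)]

resnet_feats = project([2.0, 4.0, 6.0, 8.0], out_dim=2)    # [3.0, 7.0]
densenet_feats = project([1.0, 9.0, 5.0, 5.0], out_dim=2)  # [5.0, 5.0]
fused = fuse(resnet_feats, densenet_feats)                 # [8.0, 12.0]
```

The fused vector then feeds the final classifier; summation (rather than concatenation) keeps the fused dimensionality fixed regardless of how many backbones contribute.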
Affiliation(s)
- Wenna Chen: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Xinghua Tan: College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Jincan Zhang: College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Ganqin Du: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Qizhi Fu: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongwei Jiang: The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
15
Huang Z, Yang K, Tian H, Wu H, Tang S, Cui C, Shi S, Jiang Y, Chen J, Xu J, Dong F. A validation of an entropy-based artificial intelligence for ultrasound data in breast tumors. BMC Med Inform Decis Mak 2024; 24:1. [PMID: 38166852] [PMCID: PMC10759705] [DOI: 10.1186/s12911-023-02404-z] [Received: 08/21/2023] [Accepted: 12/11/2023] [Indexed: 01/05/2024]
Abstract
BACKGROUND The application of artificial intelligence (AI) in the ultrasound (US) diagnosis of breast cancer (BCa) is increasingly prevalent. However, the impact of US-probe frequencies on the diagnostic efficacy of AI models has not been clearly established. OBJECTIVES To explore the impact of using US videos of variable frequencies on the diagnostic efficacy of AI in breast US screening. METHODS This study utilized linear-array US probes of different frequencies (L14: frequency range 3.0-14.0 MHz, central frequency 9 MHz; L9: frequency range 2.5-9.0 MHz, central frequency 6.5 MHz; L13: frequency range 3.6-13.5 MHz, central frequency 8 MHz; L7: frequency range 3-7 MHz, central frequency 4.0 MHz) to collect breast US videos and applied an entropy-based deep learning approach for evaluation. We analyzed the average two-dimensional image entropy (2-DIE) of these videos and the performance of AI models in processing videos from these different frequencies to assess how probe frequency affects AI diagnostic performance. RESULTS In testing set 1, the average 2-DIE of L9 was higher than that of L14; in testing set 2, that of L13 was higher than that of L7. The diagnostic efficacy of US data used in AI model analysis varied across frequencies (AUC: L9 > L14: 0.849 vs. 0.784; L13 > L7: 0.920 vs. 0.887). CONCLUSION This study indicates that US data acquired with probes of varying frequencies exhibit different average 2-DIE values, and datasets characterized by higher average 2-DIE demonstrate enhanced diagnostic outcomes in AI-driven BCa diagnosis. Unlike other studies, our research emphasizes the importance of US-probe frequency selection for AI model diagnostic performance, rather than focusing solely on the AI algorithms themselves. These insights offer a new perspective for early BCa screening and diagnosis and are significant for future choices of US equipment and optimization of AI algorithms.
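The abstract does not define average 2-DIE; a plain gray-level Shannon entropy, which captures the same intuition (images with richer gray-level content score higher), can be sketched as a hypothetical simplification:

```python
import math
from collections import Counter

def image_entropy(img: list[list[int]]) -> float:
    """Shannon entropy (in bits) of the gray-level histogram of an 8-bit
    image. One plausible reading of 'two-dimensional image entropy'; the
    authors' exact definition (e.g. a joint pixel/neighborhood histogram)
    may differ."""
    pixels = [v for row in img for v in row]
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [[128] * 4 for _ in range(4)]          # constant image: 0 bits
busy = [[0, 64, 128, 255] for _ in range(4)]  # 4 equiprobable levels: 2 bits
assert image_entropy(flat) == 0.0
assert abs(image_entropy(busy) - 2.0) < 1e-9
```

Averaging this value over every frame of a video would give a per-probe "average 2-DIE" in the spirit of the study's comparison.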
Affiliation(s)
- Zhibin Huang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Keen Yang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Hongtian Tian: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Huaiyu Wu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Shuzhen Tang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Chen Cui: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Siyuan Shi: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Yitao Jiang: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Jing Chen: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Jinfeng Xu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
- Fajin Dong: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
16
Karamti H, Alharthi R, Umer M, Shaiba H, Ishaq A, Abuzinadah N, Alsubai S, Ashraf I. Breast cancer detection employing stacked ensemble model with convolutional features. Cancer Biomark 2024; 40:155-170. [PMID: 38160347] [DOI: 10.3233/cbm-230294] [Indexed: 01/03/2024]
Abstract
Breast cancer is a major cause of female deaths, especially in underdeveloped countries. It can be treated if diagnosed early, and the chances of survival are high if it is treated appropriately and in a timely manner. For timely and accurate automated diagnosis, machine learning approaches tend to show better results than traditional methods; however, their accuracy still falls short of the desired level. This study proposes the use of an ensemble model to provide accurate detection of breast cancer. The proposed model uses random forest and support vector classifiers along with automatic feature extraction using an optimized convolutional neural network (CNN). Extensive experiments are performed using the original as well as the CNN-based features to analyze the performance of the deployed models. Experimental results on the Wisconsin dataset reveal that CNN-based features provide better results than the original features. The proposed model achieves an accuracy of 99.99% for breast cancer detection. A performance comparison with existing state-of-the-art models is also carried out, showing the superior performance of the proposed model.
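The abstract does not spell out how the random-forest and SVC outputs are combined in the stacked ensemble; a generic soft-voting combination over class probabilities, shown purely as an illustration with hypothetical inputs, might look like:

```python
def soft_vote(prob_sets: list[list[float]]) -> int:
    """Average class-probability vectors from several base classifiers
    (e.g. a random forest and an SVC trained on CNN features) and return
    the argmax class. A generic stand-in, not the authors' exact ensemble."""
    n_classes = len(prob_sets[0])
    avg = [sum(p[c] for p in prob_sets) / len(prob_sets)
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

rf_probs = [0.30, 0.70]   # hypothetical random-forest output (benign, malignant)
svc_probs = [0.55, 0.45]  # hypothetical SVC output
assert soft_vote([rf_probs, svc_probs]) == 1   # averaged: [0.425, 0.575]
```

Averaging probabilities lets a confident base model outvote an uncertain one, which hard label-voting cannot do.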
Affiliation(s)
- Hanen Karamti: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Raed Alharthi: Department of Computer Science and Engineering, University of Hafr Al-Batin, Hafar, Saudi Arabia
- Muhammad Umer: Department of Computer Science and Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Hadil Shaiba: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abid Ishaq: Department of Computer Science and Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Nihal Abuzinadah: Faculty of Computer Science and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Shtwai Alsubai: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Imran Ashraf: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si, Korea
17
Ciobotaru A, Bota MA, Goța DI, Miclea LC. Multi-Instance Classification of Breast Tumor Ultrasound Images Using Convolutional Neural Networks and Transfer Learning. Bioengineering (Basel) 2023; 10:1419. [PMID: 38136010] [PMCID: PMC10740646] [DOI: 10.3390/bioengineering10121419] [Received: 11/10/2023] [Revised: 12/07/2023] [Accepted: 12/12/2023] [Indexed: 12/24/2023]
Abstract
BACKGROUND Breast cancer is arguably one of the leading causes of death among women around the world. The automation of early detection and classification of breast masses has been a prominent focus for researchers in the past decade. Ultrasound imaging is prevalent in the diagnostic evaluation of breast cancer, with its predictive accuracy dependent on the expertise of the specialist, so there is an urgent need for fast and reliable ultrasound image detection algorithms. METHODS This paper compares the efficiency of six state-of-the-art, fine-tuned deep learning models that classify breast tissue from ultrasound images into three classes (benign, malignant, and normal) using transfer learning. Additionally, the architecture of a custom model is introduced and trained from the ground up on a public dataset containing 780 images, which was further augmented to 3900 and 7800 images, respectively. Moreover, the custom model is further validated on a private dataset containing 163 ultrasound images divided into two classes: benign and malignant. The pre-trained architectures used in this work are ResNet-50, Inception-V3, Inception-ResNet-V2, MobileNet-V2, VGG-16, and DenseNet-121. The performance evaluation metrics used in this study are precision, recall, F1-score, and specificity. RESULTS The experimental results show that the models trained on the augmented dataset with 7800 images obtained the best performance on the test set, with accuracies of 94.95 ± 0.64%, 97.69 ± 0.52%, 97.69 ± 0.13%, 97.77 ± 0.29%, 95.07 ± 0.41%, 98.11 ± 0.10%, and 96.75 ± 0.26% for ResNet-50, MobileNet-V2, Inception-ResNet-V2, VGG-16, Inception-V3, DenseNet-121, and our model, respectively. CONCLUSION Our proposed model obtains competitive results, outperforming some state-of-the-art models in terms of accuracy and training time.
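Augmentation of the kind used to grow 780 images to 3900 and 7800 is typically geometric; a minimal sketch (hypothetical, not the authors' exact pipeline) on images represented as nested lists:

```python
def hflip(img: list[list[int]]) -> list[list[int]]:
    """Mirror the image left-to-right."""
    return [row[::-1] for row in img]

def vflip(img: list[list[int]]) -> list[list[int]]:
    """Mirror the image top-to-bottom."""
    return img[::-1]

def rot90(img: list[list[int]]) -> list[list[int]]:
    """Rotate the image 90° clockwise (reverse rows, then transpose)."""
    return [list(r) for r in zip(*img[::-1])]

def augment(img: list[list[int]]) -> list[list[list[int]]]:
    """Return the original plus three geometric variants, quadrupling the
    effective dataset size per source image."""
    return [img, hflip(img), vflip(img), rot90(img)]

base = [[1, 2], [3, 4]]
assert hflip(base) == [[2, 1], [4, 3]]
assert rot90(base) == [[3, 1], [4, 2]]
assert len(augment(base)) == 4
```

Label-preserving transforms like these are safe for tumor classification because flipping or rotating an ultrasound frame does not change whether the mass is benign or malignant.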
Affiliation(s)
- Alexandru Ciobotaru: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Maria Aurora Bota: Department of Advanced Computing Sciences, Faculty of Sciences and Engineering, Maastricht University, 6229 EN Maastricht, The Netherlands
- Dan Ioan Goța: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Liviu Cristian Miclea: Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
18
Valbuena Rubio S, García-Ordás MT, García-Olalla Olivera O, Alaiz-Moretón H, González-Alonso MI, Benítez-Andrades JA. Survival and grade of the glioma prediction using transfer learning. PeerJ Comput Sci 2023; 9:e1723. [PMID: 38192446] [PMCID: PMC10773899] [DOI: 10.7717/peerj-cs.1723] [Received: 08/03/2023] [Accepted: 11/06/2023] [Indexed: 01/10/2024]
Abstract
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3-6 months without treatment. Detecting it and accurately predicting its survival and grade are therefore crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming at two objectives: survival and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, the prediction of tumor grade achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, surpassing the current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential for enhancing prediction models, particularly in scenarios lacking large datasets. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
Affiliation(s)
- María Teresa García-Ordás: SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
- Héctor Alaiz-Moretón: SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
19
Tak D, Ye Z, Zapaishchykova A, Zha Y, Boyd A, Vajapeyam S, Chopra R, Hayat H, Prabhu S, Liu KX, Elhalawani H, Nabavizadeh A, Familiar A, Resnick A, Mueller S, Aerts HJ, Bandopadhayay P, Ligon K, Haas-Kogan D, Poussaint T, Kann BH. Noninvasive molecular subtyping of pediatric low-grade glioma with self-supervised transfer learning. medRxiv 2023:2023.08.04.23293673. [PMID: 37609311] [PMCID: PMC10441478] [DOI: 10.1101/2023.08.04.23293673] [Indexed: 08/24/2023]
Abstract
Purpose To develop and externally validate a scan-to-prediction deep learning pipeline for noninvasive, MRI-based BRAF mutational status classification for pediatric low-grade glioma (pLGG). Materials and Methods We conducted a retrospective study of two pLGG datasets with linked genomic and diagnostic T2-weighted MRI of patients: BCH (development dataset, n=214 [60 (28%) BRAF fusion, 50 (23%) BRAF V600E, 104 (49%) wild-type]) and the Children's Brain Tumor Network (CBTN) (external validation, n=112 [60 (53%) BRAF fusion, 17 (15%) BRAF V600E, 35 (32%) wild-type]). We developed a deep learning pipeline to classify BRAF mutational status (V600E vs. fusion vs. wild-type) via a two-stage process: 1) 3D tumor segmentation and extraction of axial tumor images, and 2) slice-wise, deep learning-based classification of mutational status. We investigated knowledge-transfer and self-supervised approaches to prevent model overfitting, with a primary endpoint of the area under the receiver operating characteristic curve (AUC). To enhance model interpretability, we developed a novel metric, COMDist, that quantifies the accuracy of model attention around the tumor. Results A combination of transfer learning from a pretrained medical imaging-specific network and self-supervised label cross-training (TransferX), coupled with consensus logic, yielded the highest macro-average AUC (0.82 [95% CI: 0.70-0.90]) and accuracy (77%) on internal validation, with an AUC improvement of +17.7% and a COMDist improvement of +6.4% versus training from scratch. On external validation, the TransferX model yielded an AUC of 0.73 [95% CI: 0.68-0.88] and an accuracy of 75%. Conclusion Transfer learning and self-supervised cross-training improved classification performance and generalizability for noninvasive pLGG mutational status prediction in a limited-data scenario.
Affiliation(s)
- Divyanshu Tak: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Zezhong Ye: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Anna Zapaishchykova: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Yining Zha: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Aidan Boyd: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Sridhar Vajapeyam: Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Rishi Chopra: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Hasaan Hayat: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Sanjay Prabhu: Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Kevin X. Liu: Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Hesham Elhalawani: Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Ali Nabavizadeh: Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ariana Familiar: Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Adam Resnick: Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Sabine Mueller: Department of Neurology, University of California San Francisco, San Francisco, CA, USA; Department of Pediatrics, University of California San Francisco, San Francisco, CA, USA; Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA
- Hugo J.W.L. Aerts: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA; Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Pratiti Bandopadhayay: Department of Pediatric Oncology, Dana-Farber Cancer Institute, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Keith Ligon: Department of Pathology, Dana-Farber Cancer Institute, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Daphne Haas-Kogan: Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Tina Poussaint: Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
- Benjamin H. Kann: Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Dana-Farber Cancer Institute | Brigham and Women’s Hospital | Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
20
Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023; 10:867-895. [PMID: 38187901] [PMCID: PMC10764974] [DOI: 10.3934/publichealth.2023057] [Received: 08/26/2023] [Accepted: 10/10/2023] [Indexed: 01/09/2024]
Abstract
Breast cancer remains a significant public health issue, being a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize the recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality and data mining are the three major research hotspots in breast cancer management. Moreover, this paper discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
21
Walid MAA, Mollick S, Shill PC, Baowaly MK, Islam MR, Ahamad MM, Othman MA, Samad MA. Adapted Deep Ensemble Learning-Based Voting Classifier for Osteosarcoma Cancer Classification. Diagnostics (Basel) 2023; 13:3155. [PMID: 37835898] [PMCID: PMC10572954] [DOI: 10.3390/diagnostics13193155] [Received: 09/06/2023] [Revised: 10/05/2023] [Accepted: 10/06/2023] [Indexed: 10/15/2023]
Abstract
The study utilizes an osteosarcoma hematoxylin and eosin-stained image dataset that is unevenly distributed, which raises concerns about the potential impact on the overall performance and reliability of any analyses or models derived from it. In this study, a deep learning-based convolutional neural network (CNN) and an adapted heterogeneous ensemble-learning-based voting classifier are proposed to classify osteosarcoma. The proposed methods can also resolve this issue and develop unbiased learning models by introducing an evenly distributed training dataset. Data augmentation is employed to boost the generalization abilities. Six different pre-trained CNN models, namely MobileNetV1, MobileNetV2, ResNet50V2, InceptionV2, EfficientNetV2B0, and NasNetMobile, are applied and evaluated in frozen and fine-tuned phases. In addition, a novel CNN model and an adapted heterogeneous ensemble-learning-based voting classifier, developed from the proposed CNN model, the fine-tuned NasNetMobile model, and the fine-tuned EfficientNetV2B0 model, are also introduced to classify osteosarcoma. The proposed CNN model outperforms the other pre-trained models. The Kappa score obtained from the proposed CNN model is 93.09%. Notably, the proposed voting classifier attains the highest Kappa score of 96.50% and outperforms all other models. The findings of this study have practical implications in telemedicine, mobile healthcare systems, and as a supportive tool for medical professionals.
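A hard-voting combination of the three base models' class predictions, as one plausible reading of the adapted voting classifier (hypothetical sketch, not the authors' code), can be written as:

```python
from collections import Counter

def majority_vote(predictions: list[int]) -> int:
    """Hard voting over base-model class predictions; ties are broken
    deterministically in favor of the smallest class id."""
    counts = Counter(predictions)
    best = max(counts.values())
    return min(c for c, v in counts.items() if v == best)

# three hypothetical base models: custom CNN, fine-tuned NasNetMobile,
# fine-tuned EfficientNetV2B0
assert majority_vote([1, 1, 0]) == 1
assert majority_vote([2, 0, 2]) == 2
```

With three heterogeneous voters, a single base model's error is outvoted whenever the other two agree, which is the mechanism behind the ensemble's higher Kappa score.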
Affiliation(s)
- Md. Abul Ala Walid
- Department of Computer Science and Engineering, Khulna University of Engineering and Technology, Khulna 9203, Bangladesh
- Department of Computer Science and Engineering, Northern University of Business and Technology, Khulna 9100, Bangladesh
- Swarnali Mollick
- Department of Computer Science and Engineering, Northern University of Business and Technology, Khulna 9100, Bangladesh
- Pintu Chandra Shill
- Department of Computer Science and Engineering, Khulna University of Engineering and Technology, Khulna 9203, Bangladesh
- Mrinal Kanti Baowaly
- Department of Computer Science and Engineering, Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Gopalganj 8100, Bangladesh
- Md. Rabiul Islam
- Department of Biomedical Engineering, Islamic University, Kushtia 7003, Bangladesh
- Md. Martuza Ahamad
- Department of Computer Science and Engineering, Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Gopalganj 8100, Bangladesh
- Manal A. Othman
- Medical Education Department, College of Medicine, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Md Abdus Samad
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea

22
Xu C, Yi K, Jiang N, Li X, Zhong M, Zhang Y. MDFF-Net: A multi-dimensional feature fusion network for breast histopathology image classification. Comput Biol Med 2023; 165:107385. [PMID: 37633086 DOI: 10.1016/j.compbiomed.2023.107385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2023] [Revised: 07/23/2023] [Accepted: 08/14/2023] [Indexed: 08/28/2023]
Abstract
Breast cancer is a common malignancy, and its early detection and treatment are crucial. Computer-aided diagnosis (CAD) based on deep learning has significantly advanced medical diagnostics in recent years, enhancing accuracy and efficiency. Despite its convenience, this technology also has limitations. When the morphological characteristics of a patient's pathological section are subtle or complex, small lesions or cells deep within the lesion may go unrecognized, making misdiagnosis likely. As a result, MDFF-Net, a CNN-based multi-dimensional feature fusion network, is proposed. The model consists of a one-dimensional feature extraction network, a two-dimensional feature extraction network, and a feature fusion classification network. The backbone of the two-dimensional feature extraction network is stacked from modules that integrate multi-scale channel shuffling networks and channel attention modules. Furthermore, inspired by natural language processing, the model incorporates a one-dimensional feature extraction network to extract detailed information from the image, avoiding misdiagnosis caused by insufficient extraction of information such as cell morphology and degree of differentiation. Finally, the extracted one-dimensional and two-dimensional features are fused in the feature fusion network and used for the final classification. The effectiveness of MDFF-Net and classical classification models was evaluated on the BreakHis and BACH datasets. According to the experimental results, MDFF-Net achieves an accuracy of 98.86% on BreakHis and 86.25% on BACH. Furthermore, to assess the model's effectiveness in other classification tasks, colon cancer and lung cancer datasets were employed in additional experiments, achieving a classification accuracy of 100% in both cases.
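MDFF-Net's final stage fuses features from a one-dimensional and a two-dimensional branch before classification. A toy sketch of one common fusion pattern, per-branch L2 normalization followed by concatenation; the real network fuses learned deep features, so treat every name here as a hypothetical stand-in:

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit L2 norm (guard against zero vectors)."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def fuse_features(feat_1d, feat_2d):
    """Normalize each branch so neither dominates, then concatenate
    into a single vector for the downstream classifier."""
    return l2_normalize(feat_1d) + l2_normalize(feat_2d)
```

Normalizing per branch before concatenation keeps a branch with larger raw magnitudes from swamping the other in the fused representation.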
Affiliation(s)
- Cheng Xu
- School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Ke Yi
- School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Nan Jiang
- School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China
- Xiong Li
- School of Software, East China Jiaotong University, Nanchang, 330013, China
- Meiling Zhong
- School of Materials Science and Engineering, East China Jiaotong University, Nanchang, 330013, China
- Yuejin Zhang
- School of Information Engineering, East China Jiaotong University, Nanchang, 330013, China

23
Thirumalaisamy S, Thangavilou K, Rajadurai H, Saidani O, Alturki N, Mathivanan SK, Jayagopal P, Gochhait S. Breast Cancer Classification Using Synthesized Deep Learning Model with Metaheuristic Optimization Algorithm. Diagnostics (Basel) 2023; 13:2925. [PMID: 37761292 PMCID: PMC10528264 DOI: 10.3390/diagnostics13182925] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/06/2023] [Accepted: 08/12/2023] [Indexed: 09/29/2023] Open
Abstract
Breast cancer is the second leading cause of mortality among women. Early and accurate detection plays a crucial role in lowering its mortality rate. Timely detection and classification of breast cancer enable the most effective treatment. Convolutional neural networks (CNNs) have significantly improved the accuracy of tumor detection and classification in medical imaging compared to traditional methods. This study proposes a comprehensive classification technique for identifying breast cancer, utilizing a synthesized CNN, an enhanced optimization algorithm, and transfer learning. The primary goal is to assist radiologists in rapidly identifying anomalies. To overcome inherent limitations, we modified the Ant Colony Optimization (ACO) technique with opposition-based learning (OBL). The Enhanced Ant Colony Optimization (EACO) methodology was then employed to determine the optimal hyperparameter values for the CNN architecture. Our proposed framework combines the Residual Network-101 (ResNet101) CNN architecture with the EACO algorithm, resulting in a new model dubbed EACO-ResNet101. Experimental analysis was conducted on the MIAS and DDSM (CBIS-DDSM) mammographic datasets. Compared to conventional methods, our proposed model achieved an impressive accuracy of 98.63%, sensitivity of 98.76%, and specificity of 98.89% on the CBIS-DDSM dataset. On the MIAS dataset, the proposed model achieved a classification accuracy of 99.15%, a sensitivity of 97.86%, and a specificity of 98.88%. These results demonstrate the superiority of the proposed EACO-ResNet101 over current methodologies.
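The opposition-based learning (OBL) modification to ACO mentioned above has a simple core: for a candidate x in a bounded search interval [lo, hi], its opposite is lo + hi - x, and the better of the pair under the fitness function is kept. A hypothetical minimal version for a minimization problem (not the paper's full EACO algorithm):

```python
def opposite(solution, lower, upper):
    """Opposition-based learning: reflect each coordinate, x -> lo + hi - x."""
    return [lo + hi - x for x, lo, hi in zip(solution, lower, upper)]

def obl_select(population, lower, upper, fitness):
    """For each candidate, keep the better of itself and its opposite
    (lower fitness is better in this sketch)."""
    kept = []
    for cand in population:
        opp = opposite(cand, lower, upper)
        kept.append(cand if fitness(cand) <= fitness(opp) else opp)
    return kept
```

Evaluating a candidate and its opposite simultaneously doubles the chance of starting near the optimum, which is why OBL is a popular bolt-on for population-based optimizers.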
Affiliation(s)
- Selvakumar Thirumalaisamy
- Department of Artificial Intelligence & Data Science, Dr. Mahalingam College of Engineering and Technology, Pollachi 642003, India
- Kamaleshwar Thangavilou
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Hariharan Rajadurai
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal–Indore Highway Kothrikalan, Sehore 466114, India
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Prabhu Jayagopal
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore 632014, India
- Saikat Gochhait
- Symbiosis Institute of Digital and Telecom Management, Constituent of Symbiosis International Deemed University, Pune 412115, India
- Neuroscience Research Institute, Samara State Medical University, 443001 Samara, Russia

24
Gao Y, Lin J, Zhou Y, Lin R. The application of traditional machine learning and deep learning techniques in mammography: a review. Front Oncol 2023; 13:1213045. [PMID: 37637035 PMCID: PMC10453798 DOI: 10.3389/fonc.2023.1213045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 07/25/2023] [Indexed: 08/29/2023] Open
Abstract
Breast cancer, the most prevalent malignant tumor among women, poses a significant threat to patients' physical and mental well-being. Recent advances in early screening technology have facilitated the early detection of an increasing number of breast cancers, resulting in a substantial improvement in patients' overall survival rates. The primary techniques used for early breast cancer diagnosis include mammography, breast ultrasound, breast MRI, and pathological examination. However, the clinical interpretation and analysis of the images produced by these technologies often involve significant labor costs and rely heavily on the expertise of clinicians, leading to inherent deviations. Consequently, artificial intelligence (AI) has emerged as a valuable technology in breast cancer diagnosis. Artificial intelligence includes machine learning (ML) and deep learning (DL). By simulating human behavior to learn from and process data, ML and DL aid in lesion localization, reduce misdiagnosis rates, and improve accuracy. This narrative review provides a comprehensive overview of the current research status of mammography using traditional ML and DL algorithms. It particularly highlights the latest advancements in DL methods for mammogram image analysis and offers insights into future development directions.
Affiliation(s)
- Ying’e Gao
- School of Nursing, Fujian Medical University, Fuzhou, China
- Jingjing Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Yuzhuo Zhou
- Department of Surgery, Hannover Medical School, Hannover, Germany
- Rongjin Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Department of Nursing, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China

25
Giarnieri E, Scardapane S. Towards Artificial Intelligence Applications in Next Generation Cytopathology. Biomedicines 2023; 11:2225. [PMID: 37626721 PMCID: PMC10452064 DOI: 10.3390/biomedicines11082225] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 08/04/2023] [Accepted: 08/05/2023] [Indexed: 08/27/2023] Open
Abstract
Over the last 20 years we have seen an increase in techniques in the field of computational pathology and machine learning, improving our ability to analyze and interpret imaging. Neural networks, in particular, have been used for more than thirty years, starting with the computer-assisted smear test using early-generation models. Today, advanced machine learning, working on large image data sets, has been shown to perform classification, detection, and segmentation with remarkable accuracy and generalization in several domains. Deep learning algorithms, as a branch of machine learning, are thus attracting attention in digital pathology and cytopathology, providing feasible solutions for accurate and efficient cytological diagnoses, ranging from efficient cell counts to automatic classification of anomalous cells and queries over large clinical databases. The integration of machine learning with related next-generation technologies powered by AI, such as augmented/virtual reality, the metaverse, and computational linguistic models, is a focus of interest in health care digitalization, supporting education, diagnosis, and therapy. In this work we consider how all these innovations can help cytopathology go beyond the microscope and undergo a hyper-digitalized transformation. We also discuss specific challenges to their application in the field, notably the requirement for large-scale cytopathology datasets, the necessity of new protocols for sharing information, and the need for further technological training for pathologists.
Affiliation(s)
- Enrico Giarnieri
- Cytopathology Unit, Department of Clinical and Molecular Medicine, Sant’Andrea Hospital, Sapienza University of Rome, Piazzale Aldo Moro 5, 00189 Rome, Italy
- Simone Scardapane
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00196 Rome, Italy

26
Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023; 27:4110-4119. [PMID: 37220032 DOI: 10.1109/jbhi.2023.3279096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most existing works rely mainly on the extraction of image features and ignore patients' clinical text information, which can fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize high-quality personalized classification models for each edge node. Subsequently, a Naïve Bayes classifier is devised for classifying patient metadata. The image and metadata diagnosis results are then jointly aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, the simulation results illustrate that, compared with existing methods, our proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
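The aggregation step described above weights the image model's and the metadata (Naïve Bayes) classifier's class probabilities. A sketch of such late decision fusion; the 0.7/0.3 weights are purely illustrative assumptions, not values from the paper:

```python
def fuse_decisions(img_probs, meta_probs, w_img=0.7, w_meta=0.3):
    """Weighted late fusion of two classifiers' class probabilities;
    returns (winning class index, renormalized fused probabilities)."""
    fused = [w_img * a + w_meta * b for a, b in zip(img_probs, meta_probs)]
    total = sum(fused)
    fused = [f / total for f in fused]
    return max(range(len(fused)), key=fused.__getitem__), fused
```

Here a confident metadata classifier can overturn a weakly confident image prediction, which is the intended benefit of combining the two modalities.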
27
Fu J, He B, Yang J, Liu J, Ouyang A, Wang Y. CDRNet: Cascaded dense residual network for grayscale and pseudocolor medical image fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 234:107506. [PMID: 37003041 DOI: 10.1016/j.cmpb.2023.107506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 03/18/2023] [Accepted: 03/22/2023] [Indexed: 06/19/2023]
Abstract
OBJECTIVE Multimodal medical fusion images have been widely used in clinical medicine, computer-aided diagnosis and other fields. However, existing multimodal medical image fusion algorithms generally suffer from shortcomings such as complex calculations, blurred details and poor adaptability. To solve this problem, we propose a cascaded dense residual network and use it for grayscale and pseudocolor medical image fusion. METHODS The cascaded dense residual network uses a multiscale dense network and a residual network as its basic architecture, and a multilevel converged network is obtained through cascading. The network contains three stages: the first-level network takes two images of different modalities as input and produces fused Image 1, the second-level network takes fused Image 1 as input and produces fused Image 2, and the third-level network takes fused Image 2 as input and produces fused Image 3. The multimodal medical image is trained through each level of the network, and the output fusion image is enhanced step by step. RESULTS As the number of cascaded networks increases, the fusion image becomes increasingly clear. Across numerous fusion experiments, the fused images of the proposed algorithm have higher edge strength, richer details, and better performance on the objective indicators than the reference algorithms. CONCLUSION Compared with the reference algorithms, the proposed algorithm better preserves the original information, yields higher edge strength and richer details, and improves the four objective indicator metrics SF, AG, MZ and EN.
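Among the objective indicators cited, spatial frequency (SF) measures row- and column-wise gray-level activity in the fused image. One common definition (normalization conventions vary between papers, so this is a sketch under that assumption) is SF = sqrt(RF^2 + CF^2), computed here on a grayscale image stored as a list of rows:

```python
import math

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), where RF/CF are the mean squared
    horizontal/vertical first differences (one common convention)."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w)) / (h * (w - 1))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w)) / ((h - 1) * w)
    return math.sqrt(rf + cf)
```

A flat image scores 0, and more texture raises the score, which is why sharper fused images rank higher on SF.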
Affiliation(s)
- Jun Fu
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Baiqing He
- Nanchang Institute of Technology, Nanchang, Jiangxi, 330044, China
- Jie Yang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Jianpeng Liu
- School of Science, East China Jiaotong University, Nanchang, Jiangxi, 330013, China
- Aijia Ouyang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Ya Wang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China

28
Zhu C, Hu P, Wang X, Zeng X, Shi L. A real-time computer-aided diagnosis method for hydatidiform mole recognition using deep neural network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 234:107510. [PMID: 37003042 DOI: 10.1016/j.cmpb.2023.107510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Revised: 02/20/2023] [Accepted: 03/22/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND AND OBJECTIVE Hydatidiform mole (HM) is one of the most common gestational trophoblastic diseases with malignant potential. Histopathological examination is the primary method for diagnosing HM. However, due to the obscure and confusing pathology features of HM, significant observer variability exists among pathologists, leading to over- and misdiagnosis in clinical practice. Efficient feature extraction can significantly improve the accuracy and speed of the diagnostic process. Deep neural networks (DNNs) have proven feature extraction and segmentation capabilities and are widely used in clinical practice for many other diseases. We constructed a deep learning-based CAD method to recognize HM hydrops lesions under the microscopic view in real time. METHODS To address the challenge of lesion segmentation arising from the difficulty of extracting effective features from HM slide images, we proposed a hydrops lesion recognition module that employs DeepLabv3+ with our novel compound loss function and a stepwise training strategy, achieving strong performance in recognizing hydrops lesions at both the pixel and lesion levels. Meanwhile, a Fourier transform-based image mosaic module and an edge extension module for image sequences were developed to make the recognition model applicable to moving slides in clinical practice; this also addresses the model's poor results at image edges. RESULTS We evaluated our method using widely adopted DNNs on an HM dataset and chose DeepLabv3+ with our compound loss function as the segmentation model. The comparison experiments show that the edge extension module improves model performance by up to 3.4% in pixel-level IoU and 9.0% in lesion-level IoU.
As for the final result, our method is able to achieve a pixel-level IoU of 77.0%, a precision of 86.0%, and a lesion-level recall of 86.2% while having a response time of 82 ms per frame. Experiments show that our method is able to display the full microscopic view with accurately labeled HM hydrops lesions following the movement of slides in real-time. CONCLUSIONS To the best of our knowledge, this is the first method to utilize deep neural networks in HM lesion recognition. This method provides a robust and accurate solution with powerful feature extraction and segmentation capabilities for auxiliary diagnosis of HM.
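The pixel-level IoU figures above follow the standard intersection-over-union definition for binary segmentation masks. A minimal reference implementation on flat 0/1 sequences:

```python
def pixel_iou(pred, target):
    """Intersection-over-union of two binary masks (flat 0/1 sequences).
    An empty union (both masks all zero) is scored as 1.0 by convention."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] share one positive pixel out of three predicted-or-true pixels, so the IoU is 1/3.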
Affiliation(s)
- Chengze Zhu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Pingge Hu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Xingtong Wang
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Xianxu Zeng
- Department of Pathology, The Third Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Li Shi
- Department of Automation, Tsinghua University, Beijing, 100084, China

29
Gül Y, Yaman S, Avcı D, Çilengir AH, Balaban M, Güler H. A Novel Deep Transfer Learning-Based Approach for Automated Pes Planus Diagnosis Using X-ray Image. Diagnostics (Basel) 2023; 13:1662. [PMID: 37175053 PMCID: PMC10178173 DOI: 10.3390/diagnostics13091662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 05/02/2023] [Accepted: 05/06/2023] [Indexed: 05/15/2023] Open
Abstract
Pes planus, colloquially known as flatfoot, is a deformity defined as the collapse, flattening or loss of the medial longitudinal arch of the foot. The first standard radiographic examination for diagnosing pes planus involves lateral and dorsoplantar weight-bearing radiographs. Recently, many artificial intelligence-based computer-aided diagnosis (CAD) systems and models have been developed for the detection of various diseases from radiological images. However, to the best of our knowledge, no model or system has been proposed in the literature for automated pes planus diagnosis using X-ray images. This study presents a novel deep learning-based model for automated pes planus diagnosis using X-ray images, a first in the literature. To perform this study, a new pes planus dataset consisting of weight-bearing X-ray images was collected and labeled by specialist radiologists. In the preprocessing stage, the X-ray images were augmented and then divided into 4 and 16 patches in a pyramidal fashion. Thus, a total of 21 images are obtained for each original image: 20 patches plus the original. These 21 images were then fed to the pre-trained MobileNetV2, and 21,000 features were extracted from the Logits layer. Among the extracted deep features, the most important 1312 were selected using the proposed iterative ReliefF algorithm and then classified with a support vector machine (SVM). The proposed deep learning-based framework achieved 95.14% accuracy using 10-fold cross-validation. The results demonstrate that our transfer learning-based model can be used as an auxiliary tool for diagnosing pes planus in clinical practice.
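The pyramidal patching step above (the original image plus its 2 x 2 and 4 x 4 grid patches, giving 21 views per image) can be sketched as follows; image dimensions are assumed divisible by the grid size, and the function names are illustrative:

```python
def grid_patches(img, n):
    """Split an image (list of rows) into an n x n grid of patches,
    listed row-major."""
    h, w = len(img), len(img[0])
    ph, pw = h // n, w // n
    return [
        [row[c * pw:(c + 1) * pw] for row in img[r * ph:(r + 1) * ph]]
        for r in range(n) for c in range(n)
    ]

def pyramidal_views(img):
    """Original image + 4 patches (2x2 grid) + 16 patches (4x4 grid)."""
    return [img] + grid_patches(img, 2) + grid_patches(img, 4)
```

Feeding each view through the same backbone yields per-view feature vectors (21 x 1000 = 21,000 features in the study's setup) before feature selection.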
Affiliation(s)
- Yeliz Gül
- Department of Radiology, Elazig Fethi Sekin City Hospital, 23280 Elazig, Turkey
- Süleyman Yaman
- Biomedical Department, Vocational School of Technical Sciences, Firat University, 23119 Elazig, Turkey
- Derya Avcı
- Department of Software Engineering, Technology Faculty, Firat University, 23119 Elazig, Turkey
- Atilla Hikmet Çilengir
- Department of Radiology, Faculty of Medicine, Izmir Democracy University, 35140 Izmir, Turkey
- Mehtap Balaban
- Department of Radiology, Faculty of Medicine, Ankara Yildirim Beyazit University, 06010 Ankara, Turkey
- Hasan Güler
- Electrical-Electronics Engineering Department, Engineering Faculty, Firat University, 23119 Elazig, Turkey

30
Lo CM, Lai KL. Deep learning-based assessment of knee septic arthritis using transformer features in sonographic modalities. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 237:107575. [PMID: 37148635 DOI: 10.1016/j.cmpb.2023.107575] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 02/21/2023] [Accepted: 05/02/2023] [Indexed: 05/08/2023]
Abstract
PURPOSE Septic arthritis is an infectious disease. Conventionally, the diagnosis of septic arthritis can only be based on the identification of causal pathogens taken from synovial fluid, synovium or blood samples. However, the cultures require several days for the isolation of pathogens. A rapid assessment performed through computer-aided diagnosis (CAD) would enable timely treatment. METHODS A total of 214 non-septic arthritis and 64 septic arthritis images generated by gray-scale (GS) and Power Doppler (PD) ultrasound modalities were collected for the experiment. A deep learning-based vision transformer (ViT) with pre-trained parameters was used for image feature extraction. The extracted features were then combined in machine learning classifiers with ten-fold cross-validation in order to evaluate their ability to classify septic arthritis. RESULTS Using a support vector machine, GS and PD features achieved accuracy rates of 86% and 91%, with areas under the receiver operating characteristic curve (AUCs) of 0.90 and 0.92, respectively. The best accuracy (92%) and best AUC (0.92) were obtained by combining both feature sets. CONCLUSIONS This is the first CAD system based on a deep learning approach for the diagnosis of septic arthritis on knee ultrasound images. Using a pre-trained ViT, both accuracy and computation costs improved over those of convolutional neural networks. Additionally, automatically combining GS and PD yields higher accuracy, better assisting the physician's observations and providing a timely evaluation of septic arthritis.
Affiliation(s)
- Chung-Ming Lo
- Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei, Taiwan
- Kuo-Lung Lai
- Division of Allergy, Immunology and Rheumatology, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan
- Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan

31
Rehman SU, Khan MA, Masood A, Almujally NA, Baili J, Alhaisoni M, Tariq U, Zhang YD. BRMI-Net: Deep Learning Features and Flower Pollination-Controlled Regula Falsi-Based Feature Selection Framework for Breast Cancer Recognition in Mammography Images. Diagnostics (Basel) 2023; 13:1618. [PMID: 37175009 PMCID: PMC10178634 DOI: 10.3390/diagnostics13091618] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 04/16/2023] [Accepted: 04/26/2023] [Indexed: 05/15/2023] Open
Abstract
The early detection of breast cancer using mammogram images is critical for lowering women's mortality rates and allowing for proper treatment. Deep learning techniques are commonly used for feature extraction and have demonstrated significant performance in the literature. However, these features do not perform well in several cases due to redundant and irrelevant information. We created a new framework for diagnosing breast cancer using entropy-controlled deep learning and flower pollination optimization on mammogram images. In the proposed framework, a filter fusion-based method for contrast enhancement is developed. The pre-trained ResNet-50 model is then improved and trained using transfer learning on both the original and enhanced datasets. Deep features are extracted and combined into a single vector in the following phase using a serial technique known as serial mid-value features. The top features are then classified using neural networks and machine learning classifiers. To accomplish this, a flower pollination optimization technique with entropy control has been developed. The experiments used three publicly available datasets: CBIS-DDSM, INbreast, and MIAS. On these datasets, the proposed framework achieved 93.8%, 99.5%, and 99.8% accuracy, respectively. The gains in accuracy and reductions in computational time relative to current methods are discussed.
Affiliation(s)
- Shams Ur Rehman
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Jamel Baili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha'il, Ha'il 81451, Saudi Arabia
- Usman Tariq
- Management Information System Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK

32
Dikici E, Nguyen XV, Takacs N, Prevedello LM. Prediction of model generalizability for unseen data: Methodology and case study in brain metastases detection in T1-Weighted contrast-enhanced 3D MRI. Comput Biol Med 2023; 159:106901. [PMID: 37068317 DOI: 10.1016/j.compbiomed.2023.106901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/08/2023] [Accepted: 04/09/2023] [Indexed: 04/19/2023]
Abstract
BACKGROUND AND PURPOSE A medical AI system's generalizability describes the continuity of its performance acquired from varying geographic, historical, and methodologic settings. Previous literature on this topic has mostly focused on "how" to achieve high generalizability (e.g., via larger datasets, transfer learning, data augmentation, model regularization schemes), with limited success. Instead, we aim to understand "when" the generalizability is achieved: Our study presents a medical AI system that could estimate its generalizability status for unseen data on-the-fly. MATERIALS AND METHODS We introduce a latent space mapping (LSM) approach utilizing Fréchet distance loss to force the underlying training data distribution into a multivariate normal distribution. During the deployment, a given test data's LSM distribution is processed to detect its deviation from the forced distribution; hence, the AI system could predict its generalizability status for any previously unseen data set. If low model generalizability is detected, then the user is informed by a warning message integrated into a sample deployment workflow. While the approach is applicable for most classification deep neural networks (DNNs), we demonstrate its application to a brain metastases (BM) detector for T1-weighted contrast-enhanced (T1c) 3D MRI. The BM detection model was trained using 175 T1c studies acquired internally (from the authors' institution) and tested using (1) 42 internally acquired exams and (2) 72 externally acquired exams from the publicly distributed Brain Mets dataset provided by the Stanford University School of Medicine. Generalizability scores, false positive (FP) rates, and sensitivities of the BM detector were computed for the test datasets. 
RESULTS AND CONCLUSION The model predicted its generalizability to be low for 31% of the testing data (i.e., two of the internally and 33 of the externally acquired exams), where it produced (1) ∼13.5 false positives (FPs) at 76.1% BM detection sensitivity for the low and (2) ∼10.5 FPs at 89.2% BM detection sensitivity for the high generalizability groups respectively. These results suggest that the proposed formulation enables a model to predict its generalizability for unseen data.
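The Fréchet distance loss used in the latent space mapping above compares feature distributions; for Gaussians with diagonal covariance, the squared Fréchet (2-Wasserstein) distance has a closed form, ||mu1 - mu2||^2 + sum_i (sqrt(v1_i) - sqrt(v2_i))^2. A sketch of that formula only; the paper's loss operates on learned latent distributions, which are not reproduced here:

```python
import math

def frechet_gaussian_diag(mu1, var1, mu2, var2):
    """Squared Frechet distance between two diagonal-covariance Gaussians:
    ||mu1 - mu2||^2 + sum_i (sqrt(var1_i) - sqrt(var2_i))^2."""
    d2 = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    d2 += sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(var1, var2))
    return d2
```

Large values signal that a test batch's latent statistics have drifted from the training distribution, which is the cue the system uses to warn of low generalizability.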
Affiliation(s)
- Engin Dikici
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA.
- Xuan V Nguyen
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Noah Takacs
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
- Luciano M Prevedello
- The Ohio State University, College of Medicine, Department of Radiology, Columbus, OH, 43210, USA
33
Jabeen K, Khan MA, Balili J, Alhaisoni M, Almujally NA, Alrashidi H, Tariq U, Cha JH. BC2NetRF: Breast Cancer Classification from Mammogram Images Using Enhanced Deep Learning Features and Equilibrium-Jaya Controlled Regula Falsi-Based Features Selection. Diagnostics (Basel) 2023; 13:diagnostics13071238. [PMID: 37046456 PMCID: PMC10093018 DOI: 10.3390/diagnostics13071238] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 03/13/2023] [Accepted: 03/23/2023] [Indexed: 03/29/2023] Open
Abstract
Breast cancer is one of the most frequent cancers in women; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died from the disease. Early diagnosis can help to reduce the mortality rate. However, manual diagnosis from mammogram images is not an easy process and requires an expert. Several AI-based techniques have been suggested in the literature, but they still face several challenges, such as similarities between cancer and non-cancer regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then employed for dataset augmentation, a step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-b0 was fine-tuned by adding a few new layers. The fine-tuned model was trained separately on original and enhanced images using deep transfer learning with static hyperparameter initialization. Deep features were then extracted from the average pooling layer and fused using a new serial-based approach. The fused features were optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features were finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, on which the achieved average accuracies are 95.4% and 99.7%, respectively.
A comparison with state-of-the-art (SOTA) technology shows that the proposed framework improves on prior accuracy. Moreover, a confidence-interval-based analysis shows the framework's results to be consistent.
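Two components named in this pipeline have compact textbook forms: serial-based fusion is per-sample concatenation of feature vectors, and Regula Falsi is the classical false-position root-finder used here as a termination function. A hedged sketch with placeholder arrays (not actual EfficientNet-b0 features):

```python
import numpy as np

def serial_fuse(feats_a, feats_b):
    # serial-based fusion: concatenate the two deep feature vectors per sample
    return np.concatenate([feats_a, feats_b], axis=1)

def regula_falsi(f, a, b, tol=1e-8, max_iter=200):
    # false-position method: keep a sign-changing bracket [a, b] around the root
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant through the bracket ends
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

How Regula Falsi is wired into the Equilibrium-Jaya selection loop is specific to the paper; the sketch only shows the two primitives themselves.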
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Jamel Balili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Higher Institute of Applied Science and Technology of Sousse (ISSATS), Cité Taffala (Ibn Khaldoun) 4003 Sousse, University of Sousse, Sousse 4000, Tunisia
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 81451, Saudi Arabia
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Huda Alrashidi
- Faculty of Information Technology and Computing, Arab Open University, Ardiya 92400, Kuwait
- Usman Tariq
- Department of Management, CoBA, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
34
Shamshiri MA, Krzyżak A, Kowal M, Korbicz J. Compatible-domain Transfer Learning for Breast Cancer Classification with Limited Annotated Data. Comput Biol Med 2023; 154:106575. [PMID: 36758326 DOI: 10.1016/j.compbiomed.2023.106575] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 12/18/2022] [Accepted: 01/22/2023] [Indexed: 01/26/2023]
Abstract
Microscopic analysis of breast cancer images is the primary task in diagnosing cancer malignancy. Recent attempts to automate this task have employed deep learning models, whose success depends on large volumes of data, while acquiring annotated data in biomedical domains is time-consuming and may not always be feasible. A typical strategy to address this is transfer learning with models pre-trained on a large natural-image database (e.g., ImageNet) instead of training a model from scratch. This approach, however, has not been effective in several previous studies because of fundamental differences between natural and medical images. In this study, we propose for the first time the idea of using a compatible dataset of histopathological images to classify breast cancer cytological biopsy specimens. Despite intrinsic differences between histopathological and cytological images, we demonstrate that the features learned by deep networks during pre-training are compatible with those obtained during fine-tuning. To investigate this assertion thoroughly, we explore three different training strategies and two different approaches to fine-tuning deep learning models. Compared with previous state-of-the-art research on the same dataset, the proposed method improves classification accuracy by 6% to 17% over studies based on traditional machine learning techniques and by roughly 7% over those that used deep learning methods, ultimately achieving 98.73% validation accuracy and 94.55% test accuracy. Exploring different training scenarios also revealed that using a compatible dataset raises classification accuracy by 3.0% compared with the typical approach of using ImageNet.
Experimental results show that our approach, despite using a very small number of training images, achieves performance comparable to that of experienced pathologists and has the potential to be applied in clinical settings.
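The shared mechanic behind the fine-tuning strategies compared here is reusing a pre-trained feature extractor and training only a new classification head. A toy numpy sketch of that mechanic, in which a fixed random projection stands in for the pre-trained network (purely illustrative, not the authors' architecture or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": stands in for a network pre-trained on a compatible
# dataset; its weights are never updated below.
W_frozen = rng.normal(size=(4, 32))

def extract(x):
    return np.maximum(x @ W_frozen, 0.0)  # fixed ReLU feature map

def train_head(X, y, lr=0.1, epochs=500):
    # Logistic-regression head trained on top of the frozen features;
    # only this weight vector is learned (the "fine-tune the head" strategy).
    F = extract(X)
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))
        w -= lr * F.T @ (p - y) / len(y)   # gradient of the log loss
    return w

def predict(X, w):
    return (extract(X) @ w > 0.0).astype(float)
```

Fine-tuning the upper backbone layers as well, as the paper's other strategies do, would amount to also updating `W_frozen` with a smaller learning rate.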
Affiliation(s)
- Mohammad Amin Shamshiri
- Department of Computer Science and Software Engineering, Concordia University, Montreal, H3G 1M8, Canada.
- Adam Krzyżak
- Department of Computer Science and Software Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Marek Kowal
- Institute of Control and Computation Engineering, University of Zielona Góra, Zielona Góra, Poland
- Józef Korbicz
- Institute of Control and Computation Engineering, University of Zielona Góra, Zielona Góra, Poland
35
González-Patiño D, Villuendas-Rey Y, Saldaña-Pérez M, Argüelles-Cruz AJ. A Novel Bioinspired Algorithm for Mixed and Incomplete Breast Cancer Data Classification. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:3240. [PMID: 36833936 PMCID: PMC9965500 DOI: 10.3390/ijerph20043240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/23/2023] [Accepted: 02/08/2023] [Indexed: 06/18/2023]
Abstract
The pre-diagnosis of cancer has been approached from various perspectives, so it is imperative to continue improving classification algorithms to achieve early diagnosis of the disease and improve patient survival. In the medical field, data are often lost or missing for various reasons, and many datasets mix numerical and categorical values; very few algorithms can classify datasets with such characteristics. This study therefore proposes a modification of an existing algorithm for cancer classification. The AISAC-MMD (Mixed and Missing Data) algorithm is based on the AISAC and was modified to work with datasets containing missing and mixed values. It performed significantly better than bio-inspired and classical classification algorithms: statistical analysis established that AISAC-MMD significantly outperformed the Nearest Neighbor, C4.5, Naïve Bayes, ALVOT, Naïve Associative Classifier, AIRS1, Immunos1, and CLONALG algorithms in breast cancer classification.
Affiliation(s)
- David González-Patiño
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Ciudad de México 07738, Mexico
- Yenny Villuendas-Rey
- Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo, Ciudad de México 07700, Mexico
- Magdalena Saldaña-Pérez
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Ciudad de México 07738, Mexico
36
Alharbi F, Vakanski A. Machine Learning Methods for Cancer Classification Using Gene Expression Data: A Review. Bioengineering (Basel) 2023; 10:bioengineering10020173. [PMID: 36829667 PMCID: PMC9952758 DOI: 10.3390/bioengineering10020173] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 01/31/2023] Open
Abstract
Cancer is a term that denotes a group of diseases caused by the abnormal growth of cells that can spread in different parts of the body. According to the World Health Organization (WHO), cancer is the second major cause of death after cardiovascular diseases. Gene expression can play a fundamental role in the early detection of cancer, as it is indicative of the biochemical processes in tissue and cells, as well as the genetic characteristics of an organism. Deoxyribonucleic acid (DNA) microarrays and ribonucleic acid (RNA)-sequencing methods for gene expression data allow quantifying the expression levels of genes and produce valuable data for computational analysis. This study reviews recent progress in gene expression analysis for cancer classification using machine learning methods. Both conventional and deep learning-based approaches are reviewed, with an emphasis on the application of deep learning models due to their comparative advantages for identifying gene patterns that are distinctive for various types of cancers. Relevant works that employ the most commonly used deep neural network architectures are covered, including multi-layer perceptrons, as well as convolutional, recurrent, graph, and transformer networks. This survey also presents an overview of the data collection methods for gene expression analysis and lists important datasets that are commonly used for supervised machine learning for this task. Furthermore, we review pertinent techniques for feature engineering and data preprocessing that are typically used to handle the high dimensionality of gene expression data, caused by a large number of genes present in data samples. The paper concludes with a discussion of future research directions for machine learning-based gene expression analysis for cancer classification.
37
Vimala BB, Srinivasan S, Mathivanan SK, Muthukumaran V, Babu JC, Herencsar N, Vilcekova L. Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23031167. [PMID: 36772207 PMCID: PMC9920830 DOI: 10.3390/s23031167] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 01/11/2023] [Accepted: 01/16/2023] [Indexed: 05/28/2023]
Abstract
Rapid improvements in ultrasound imaging technology have made it much more useful for screening and diagnosing breast problems. Local speckle noise in ultrasound breast images may impair image quality and hinder observation and diagnosis, so it is crucial to remove such localized noise. In this article, we use a hybrid deep learning technique to remove local speckle noise from breast ultrasound images. The contrast of the ultrasound breast images was first improved using logarithmic and exponential transforms, and guided-filter algorithms were then used to enhance the details of the glandular structures. To complete the pre-processing and enhance image clarity, spatial high-pass filtering was applied to suppress over-sharpening. Finally, to remove local speckle noise without sacrificing image edges, edge-sensitive terms were added to a Logical-Pool Recurrent Neural Network (LPRNN). The mean square error and false recognition rate both fell below 1.1% by the hundredth training iteration, showing that the LPRNN had been properly trained. Denoised ultrasound images had signal-to-noise ratios (SNRs) greater than 65 dB, peak SNRs greater than 70 dB, and edge preservation index values above the experimental threshold of 0.48, with short processing times. The method removes local speckle noise quickly, preserves edge information, and brings image features into sharp focus.
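Two of the pre-processing steps in this pipeline, the logarithmic contrast transform and spatial high-pass filtering, are standard point and neighborhood operations. A sketch under assumed parameters (the kernel and the normalization are illustrative choices, not the authors' exact settings):

```python
import numpy as np

def log_contrast(img):
    # Logarithmic transform: expands contrast in dark regions and
    # compresses bright ones; output is normalized to [0, 1].
    img = np.asarray(img, float)
    return np.log1p(img) / np.log1p(img.max())

def spatial_highpass(img):
    # 3x3 high-pass (sharpening) kernel applied to the image interior;
    # border pixels are left unchanged for simplicity.
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)
    img = np.asarray(img, float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out
```

The guided-filter and LPRNN stages of the paper are substantially more involved and are not sketched here.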
Affiliation(s)
- Baiju Babu Vimala
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
- Norbert Herencsar
- Department of Telecommunications, Faculty of Electrical and Communication Engineering, Brno University of Technology, Technicka 12, 616 00 Brno, Czech Republic
- Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia
38
Ogundokun RO, Misra S, Akinrotimi AO, Ogul H. MobileNet-SVM: A Lightweight Deep Transfer Learning Model to Diagnose BCH Scans for IoMT-Based Imaging Sensors. SENSORS (BASEL, SWITZERLAND) 2023; 23:656. [PMID: 36679455 PMCID: PMC9863875 DOI: 10.3390/s23020656] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 12/02/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
Many individuals worldwide die as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, by the early identification of serious illnesses such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the public efficient medical services and contribute significantly to patients' recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders can be identified quickly and appropriate treatment given to the patient. This can be achieved with imaging equipment capable of automatically analyzing acquired images. However, most deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for deployment on IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited to BCH scan examination with a good level of accuracy. We present a lightweight DTL-based model, "MobileNet-SVM", a hybridization of MobileNet and a Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images. When tested against the real BreakHis v1 400× dataset, the proposed technique achieved a training accuracy of 100% and, on the test set, an accuracy of 91% with an F1-score of 91.35. Considering how complicated BCH scans are, these findings are encouraging. In addition to its high precision, the MobileNet-SVM model is well suited to IoMT imaging equipment: according to the simulation findings, it requires little computation time.
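The hybridization here replaces the CNN's dense classification layers with an SVM trained on extracted embeddings. A minimal numpy sketch of the SVM half, using Pegasos stochastic subgradient training on placeholder feature vectors (the MobileNet embedding step is assumed, not shown):

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=60, seed=0):
    """Train a linear SVM with the Pegasos stochastic subgradient method.
    X: (n, d) feature vectors (in the paper's pipeline, MobileNet embeddings),
    y: labels in {-1, +1}. Returns the learned weight vector."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # decreasing step size
            margin = y[i] * (X[i] @ w)
            w *= (1.0 - eta * lam)          # shrink (regularization step)
            if margin < 1.0:                # hinge-loss margin violated
                w += eta * y[i] * X[i]
    return w
```

A no-bias linear SVM is the simplest variant; the paper does not specify its SVM kernel or solver, so this is only one plausible realization.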
Affiliation(s)
- Roseline Oluwaseun Ogundokun
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
- Department of Computer Science, Landmark University, Omu Aran 251103, Kwara, Nigeria
- Sanjay Misra
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
- Hasan Ogul
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
39
Thalakottor LA, Shirwaikar RD, Pothamsetti PT, Mathews LM. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit Rev Biomed Eng 2023; 51:41-62. [PMID: 37581350 DOI: 10.1615/critrevbiomedeng.2023047793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Cancer, a leading cause of mortality, is distinguished by the multi-stage conversion of healthy cells into cancer cells. Early discovery of the disease can significantly enhance the possibility of survival. Histology is a procedure in which the tissue of interest is surgically removed from a patient and cut into thin slices. A pathologist then mounts these slices on glass slides, stains them with specialized dyes such as hematoxylin and eosin (H&E), and inspects the slides under a microscope. Unfortunately, manual analysis of histopathology images during breast cancer biopsy is time-consuming. The literature suggests that automated techniques based on deep learning algorithms can increase the speed and accuracy of detecting abnormalities in the histopathological specimens obtained from breast cancer patients. This paper highlights recent work on such algorithms and provides a comparative study of various deep learning methods. For the present study, the breast cancer histopathological database (BreakHis) is used. The images are processed to enhance their inherent features, classified, and the accuracy of each algorithm evaluated. Three convolutional neural network (CNN) models, visual geometry group (VGG19), densely connected convolutional networks (DenseNet201), and residual neural network (ResNet50V2), were employed in analyzing the images. Of these, DenseNet201 performed best, attaining an accuracy of 91.3%. The paper also reviews classification techniques based on machine learning methods, including CNN-based models, some of which may replace manual breast cancer diagnosis and detection.
Affiliation(s)
- Louie Antony Thalakottor
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Rudresh Deepak Shirwaikar
- Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa 403507, India
- Pavan Teja Pothamsetti
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
- Lincy Meera Mathews
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
40
Efficient Breast Cancer Classification Network with Dual Squeeze and Excitation in Histopathological Images. Diagnostics (Basel) 2022; 13:diagnostics13010103. [PMID: 36611396 PMCID: PMC9818943 DOI: 10.3390/diagnostics13010103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/13/2022] [Accepted: 12/20/2022] [Indexed: 12/31/2022] Open
Abstract
Medical image analysis methods for mammograms, ultrasound, and magnetic resonance imaging (MRI) cannot provide the underlying cellular-level features needed to understand the cancer microenvironment, which makes them unsuitable for studying breast cancer subtype classification. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole-slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze-and-excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) and eight subtype classes from histopathology images. A pre-trained EfficientNetV2 network is used as the backbone, with a modified DSE block that combines spatial and channel-wise squeeze-and-excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2, and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall, and F1-score at multiple magnification levels.
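A squeeze-and-excitation block recalibrates a feature map with learned gates; a dual variant combines a channel gate (from globally pooled statistics) and a spatial gate (from a 1×1 convolution across channels). A toy numpy sketch with arbitrary weights, in which the shapes and the additive combination are illustrative assumptions rather than the paper's exact DSE block:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_squeeze_excite(x, w1, w2, w_sp):
    """x: feature map of shape (C, H, W).
    w1: (C//r, C), w2: (C, C//r) -- channel excitation MLP with reduction r.
    w_sp: (C,) -- 1x1-conv weights producing a spatial gate."""
    # Channel squeeze-and-excitation: pool -> FC -> ReLU -> FC -> sigmoid.
    z = x.mean(axis=(1, 2))                      # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # per-channel gate in (0, 1)
    channel_se = x * s[:, None, None]
    # Spatial squeeze-and-excitation: collapse channels to an (H, W) gate.
    g = sigmoid(np.tensordot(w_sp, x, axes=1))
    spatial_se = x * g[None, :, :]
    return channel_se + spatial_se               # dual recalibration
```

Each output element is the input scaled by the sum of its two gates, so important channels and spatial locations are emphasized while others are attenuated.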