1
Chutia U, Tewari AS, Singh JP, Raj VK. Classification of Lung Diseases Using an Attention-Based Modified DenseNet Model. Journal of Imaging Informatics in Medicine 2024; 37:1625-1641. [PMID: 38467955] [DOI: 10.1007/s10278-024-01005-0]
Abstract
Lung diseases represent a significant global health threat, impacting both well-being and mortality rates. Diagnostic procedures such as Computed Tomography (CT) scans and X-ray imaging play a pivotal role in identifying these conditions. X-rays, due to their easy accessibility and affordability, serve as a convenient and cost-effective option for diagnosing lung diseases. Our proposed method utilizes the Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhancement technique on X-ray images and highlights the key feature maps related to lung diseases using DenseNet201. We augment the existing DenseNet201 model with a hybrid pooling and channel attention mechanism. The experimental results demonstrate the superiority of our model over well-known pre-trained models such as VGG16, VGG19, InceptionV3, Xception, ResNet50, ResNet152, ResNet50V2, ResNet152V2, MobileNetV2, DenseNet121, DenseNet169, and DenseNet201. Our model achieves accuracy, precision, recall, and F1-scores of 95.34%, 97%, 96%, and 96%, respectively. We also provide visual insight into our model's decision-making process using Gradient-weighted Class Activation Mapping (Grad-CAM) on normal, pneumothorax, and atelectasis cases. The resulting heatmaps may help radiologists improve their diagnostic abilities and labelling processes.
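As an illustration of the CLAHE enhancement step this abstract describes, the sketch below implements the contrast-limited (clipped) histogram equalization that CLAHE applies per tile. The full algorithm additionally tiles the image and bilinearly interpolates the per-tile mappings; the clip limit and the synthetic low-contrast patch here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit=0.02, nbins=256):
    """Contrast-limited histogram equalization for a single 8-bit tile.

    Full CLAHE also splits the image into tiles and bilinearly
    interpolates the per-tile mappings; this shows only the
    histogram clipping + equalization step.
    """
    hist, _ = np.histogram(tile, bins=nbins, range=(0, nbins))
    hist = hist.astype(float) / tile.size
    # Clip the histogram and redistribute the excess uniformly, which
    # limits contrast amplification in near-homogeneous regions.
    excess = np.maximum(hist - clip_limit, 0.0)
    hist = np.minimum(hist, clip_limit) + excess.sum() / nbins
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]                                  # normalize to [0, 1]
    lut = np.round((nbins - 1) * cdf).astype(np.uint8)
    return lut[tile]

rng = np.random.default_rng(0)
low_contrast = rng.integers(90, 130, size=(64, 64), dtype=np.uint8)  # dull patch
enhanced = clipped_hist_equalize(low_contrast)
```

The enhanced patch spans a much wider intensity range than the input, which is the effect CLAHE exploits to make lung structures more visible to the network.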
Affiliation(s)
- Upasana Chutia
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Anand Shanker Tewari
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Jyoti Prakash Singh
- Department of Computer Science and Engineering, National Institute of Technology Patna, Patna, 800005, Bihar, India
- Vikash Kumar Raj
- National Institute of Technology Patna, Patna, 800005, Bihar, India
2
Cheng Z, Wang S, Gao Y, Zhu Z, Yan C. Invariant Content Representation for Generalizable Medical Image Segmentation. Journal of Imaging Informatics in Medicine 2024. [PMID: 38758420] [DOI: 10.1007/s10278-024-01088-9]
Abstract
Domain generalization (DG) for medical image segmentation, driven by privacy preservation, prefers learning from a single source domain and expects good robustness on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the distribution of samples and learn invariant content from them. However, most of these methods perform global augmentation, leading to limited diversity in the augmented samples. In addition, the style of the augmented images is more scattered than that of the source domain, which may cause the model to overfit the source-domain style. To address these issues, we propose an invariant content representation network (ICRN) to enhance the learning of invariant content and suppress the learning of variable styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) to expand the distribution of samples by augmenting foreground and background styles separately. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design style adversarial learning (SAL), based on domain-specific batch normalization (DSBN), to suppress the learning of preferences for source-domain styles. Experimental results show that, compared to state-of-the-art DG methods, our proposed method improves the overall Dice coefficient (Dice) by 8.74% and 11.33% and reduces the overall average surface distance (ASD) by 15.88 mm and 3.87 mm on two publicly available cross-domain datasets, Fundus and Prostate. The code is available at https://github.com/ZMC-IIIM/ICRN-DG.
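The gamma correction-based local style augmentation (LSA) described above can be sketched as follows: foreground and background styles are perturbed separately via gamma correction on a segmentation mask. The mask, gamma values, and how gammas are sampled per augmentation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_gamma_augment(image, fg_mask, gamma_fg, gamma_bg):
    """Augment foreground and background styles separately with gamma
    correction, in the spirit of the paper's LSA module. The sampling
    scheme for the gamma values is an assumption here."""
    img = image.astype(float) / 255.0
    # gamma < 1 brightens a region, gamma > 1 darkens it
    out = np.where(fg_mask, img ** gamma_fg, img ** gamma_bg)
    return np.round(out * 255.0).astype(np.uint8)

image = np.full((4, 4), 128, dtype=np.uint8)   # toy mid-gray image
fg_mask = np.zeros((4, 4), dtype=bool)
fg_mask[:, :2] = True                          # left half = "foreground" organ
aug = local_gamma_augment(image, fg_mask, gamma_fg=0.5, gamma_bg=2.0)
```

Applying independent gammas to the two regions yields augmented samples whose foreground and background styles vary separately, which is the extra diversity LSA is after compared with a single global transform.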
Affiliation(s)
- Zhiming Cheng
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
- Shuai Wang
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, 310018, China
- Suzhou Research Institute of Shandong University, Suzhou, 215123, China
- Yuhan Gao
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China
- Zunjie Zhu
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
- Chenggang Yan
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
3
Montaha S, Azam S, Bhuiyan MRI, Chowa SS, Mukta MSH, Jonkman M. Malignancy pattern analysis of breast ultrasound images using clinical features and a graph convolutional network. Digit Health 2024; 10:20552076241251660. [PMID: 38817843] [PMCID: PMC11138200] [DOI: 10.1177/20552076241251660]
Abstract
Objective Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty and uniqueness of the study lie in considering medical features based on the diagnoses of radiologists. Methods Using the clinical markers, a graph is generated where each feature is represented by a node and the connection between features by an edge, derived through Pearson's correlation method. A graph convolutional network (GCN) model is proposed to classify breast tumors as benign or malignant using the graph data. Several statistical tests are performed to assess the importance of the proposed features, and the performance of the GCN model is improved by experimenting with different layer configurations and hyper-parameter settings. Results The proposed model achieves a test accuracy of 98.73%. Its performance is compared with a graph attention network, a one-dimensional convolutional neural network, five transfer learning models, ten machine learning models, and three ensemble learning models. The model is further assessed on three supplementary breast cancer ultrasound image datasets, with accuracies of 91.03%, 94.37%, and 89.62% for Dataset A, Dataset B, and Dataset C (Datasets A and B combined), respectively. Overfitting is assessed through k-fold cross-validation. Conclusion Several variants are utilized to present a rigorous and fair evaluation of the work, especially the importance of extracting clinically relevant features. A GCN model operating on graph data can be a promising solution for an automated feature-based breast image classification system.
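The feature-graph construction in the Methods above might be sketched as below: nodes are clinical features, and an edge connects two features whose Pearson correlation exceeds a threshold. The toy data and the 0.5 edge threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Toy stand-in for the clinical-marker table: rows = tumor cases,
# columns = features (shape, orientation, margin, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 6))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)   # two strongly correlated features

corr = np.corrcoef(X, rowvar=False)              # 6 x 6 Pearson matrix
adj = (np.abs(corr) > 0.5).astype(int)           # feature-graph adjacency
np.fill_diagonal(adj, 0)                         # drop self-loops
```

The resulting symmetric adjacency matrix (plus per-node feature values) is the kind of graph object a GCN layer consumes; here the two deliberately correlated features end up linked by an edge.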
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, Canada
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
4
Chowa SS, Azam S, Montaha S, Payel IJ, Bhuiyan MRI, Hasan MZ, Jonkman M. Graph neural network-based breast cancer diagnosis using ultrasound images with optimized graph construction integrating the medically significant features. J Cancer Res Clin Oncol 2023; 149:18039-18064. [PMID: 37982829] [PMCID: PMC10725367] [DOI: 10.1007/s00432-023-05464-w]
Abstract
PURPOSE An automated computerized approach can aid radiologists in the early diagnosis of breast cancer. In this study, a novel method is proposed for classifying breast tumors as benign or malignant from ultrasound images through a Graph Neural Network (GNN) model utilizing clinically significant features. METHOD Ten informative features are extracted from the region of interest (ROI), based on radiologists' diagnosis markers. The significance of the features is evaluated using density plots and the t-test statistical analysis method. A feature table is generated where each row represents an individual image, considered as a node, and the edges between nodes are weighted by the Spearman correlation coefficient. A graph dataset is generated and fed into the GNN model, which is configured through an ablation study and Bayesian optimization. The optimized model is then evaluated with different correlation thresholds to obtain the highest performance with a shallow graph, and the performance consistency is validated with k-fold cross-validation. The impact of utilizing ROIs and handcrafted features for breast tumor classification is evaluated by comparing the model's performance with Histogram of Oriented Gradients (HOG) descriptor features computed from the entire ultrasound image. Lastly, a clustering-based analysis is performed to generate a new filtered graph that considers weak and strong relationships between nodes based on their similarities. RESULTS The results indicate that with a threshold value of 0.95, the GNN model achieves the highest test accuracy of 99.48%, precision and recall of 100%, and an F1 score of 99.28%, while reducing the number of edges by 85.5%. The GNN model's performance is 86.91% with no threshold on the graph generated from HOG descriptor features. Different threshold values for the Spearman correlation score are experimented with and the performance is compared. No significant differences are observed between the original graph and the filtered graph. CONCLUSION The proposed approach may aid radiologists in effectively diagnosing breast cancer and learning its tumor patterns.
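The Spearman-correlation graph and the effect of the edge threshold can be sketched as follows. The toy features are random stand-ins; only the thresholding behaviour (a higher threshold yields a sparser, shallower graph, with 0.95 reported as best in the paper) is illustrated.

```python
import numpy as np

def spearman_matrix(X):
    """Spearman correlation between rows = Pearson correlation of each
    row's feature ranks (ties are ignored in this toy sketch)."""
    ranks = np.argsort(np.argsort(X, axis=1), axis=1).astype(float)
    return np.corrcoef(ranks)                    # rows (images) as variables

rng = np.random.default_rng(1)
feats = rng.normal(size=(20, 10))                # 20 images x 10 handcrafted features
rho = spearman_matrix(feats)

# Raising the correlation threshold prunes edges between image nodes.
edge_counts = [int(np.count_nonzero(np.triu(rho > t, k=1)))
               for t in (0.0, 0.5, 0.95)]
```

Counting the surviving upper-triangle entries at each threshold shows the edge count shrinking monotonically, mirroring the 85.5% edge reduction the abstract reports at threshold 0.95.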
Affiliation(s)
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Israt Jahan Payel
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Md Rahad Islam Bhuiyan
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
5
Chen H, Ma M, Liu G, Wang Y, Jin Z, Liu C. Breast Tumor Classification in Ultrasound Images by Fusion of Deep Convolutional Neural Network and Shallow LBP Feature. J Digit Imaging 2023; 36:932-946. [PMID: 36720840] [PMCID: PMC10287618] [DOI: 10.1007/s10278-022-00711-x]
Abstract
Breast cancer is one of the most dangerous and common cancers in women, which makes it a major research topic in medical science. To assist physicians in pre-screening for breast cancer and reduce unnecessary biopsies, breast ultrasound and computer-aided diagnosis (CAD) have been used to distinguish between benign and malignant tumors. In this study, we propose a CAD system for tumor diagnosis using a multi-channel fusion method and a feature extraction structure based on multi-feature fusion of breast ultrasound (BUS) images. In the pre-processing stage, the multi-channel fusion method performs color conversion of the BUS image so that it contains richer information. In the feature extraction stage, the pre-trained ResNet50 network is selected as the basic network, three levels of features are combined by adaptive spatial feature fusion (ASFF), and finally the shallow local binary pattern (LBP) texture features are fused. A support vector machine (SVM) is used for comparative analysis. A retrospective analysis was carried out on 1615 breast tumor images (572 benign and 1043 malignant) confirmed by pathological examination. After data processing and augmentation, on an independent test set of 874 breast ultrasound images (457 benign and 417 malignant), the accuracy, precision, recall, specificity, F1 score, and AUC of our method were 96.91%, 98.75%, 94.72%, 98.91%, 0.97, and 0.991, respectively. The results show that integrating shallow LBP texture features with multi-level deep features can more effectively improve the comprehensive performance of breast tumor diagnosis and has strong clinical application value. Compared with past methods, our proposed method is expected to enable the automatic diagnosis of breast tumors and provide an auxiliary tool for radiologists to accurately diagnose breast diseases.
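A minimal version of the shallow LBP texture descriptor mentioned above could look like the following. This is a generic 3x3 LBP with a 256-bin code histogram, not necessarily the authors' exact variant.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: threshold a pixel's 8 neighbours
    against the centre and pack the bits into one byte."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= centre).astype(np.uint8) << bit)
    return codes

patch = np.full((8, 8), 100, dtype=np.uint8)      # toy uniform ultrasound patch
codes = lbp_codes(patch)
hist = np.bincount(codes.ravel(), minlength=256)  # 256-bin texture feature
```

The code histogram is the shallow texture feature; in the paper's pipeline such a descriptor is fused with the multi-level deep ResNet50 features before classification.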
Affiliation(s)
- Hua Chen
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Minglun Ma
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Gang Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Ying Wang
- The Second Hospital of Hebei Medical University, Shijiazhuang, 050000, China
- Zhihao Jin
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Chong Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
6
Adeoye J, Akinshipo A, Koohi-Moghadam M, Thomson P, Su YX. Construction of machine learning-based models for cancer outcomes in low and lower-middle income countries: A scoping review. Front Oncol 2022; 12:976168. [DOI: 10.3389/fonc.2022.976168]
Abstract
BACKGROUND The impact and utility of machine learning (ML)-based prediction tools for cancer outcomes, including assistive diagnosis, risk stratification, and adjunctive decision-making, have been largely described and realized in high-income and upper-middle-income countries. However, statistical projections estimate higher cancer incidence and mortality risks in low- and lower-middle-income countries (LLMICs). Therefore, this review aimed to evaluate the utilization, model construction methods, and degree of implementation of ML-based models for cancer outcomes in LLMICs. METHODS PubMed/Medline, Scopus, and Web of Science were searched, and articles describing the use of ML-based models for cancer among local populations in LLMICs between 2002 and 2022 were included. A total of 140 articles from 22,516 citations met the eligibility criteria. RESULTS ML-based models from LLMICs were more often based on traditional ML algorithms than on deep or hybrid deep learning. Model construction was skewed toward particular LLMICs such as India, Iran, Pakistan, and Egypt, with a paucity of applications in sub-Saharan Africa. Models for breast, head and neck, and brain cancer outcomes were explored most frequently. Many models were deemed suboptimal according to the Prediction model Risk of Bias Assessment Tool (PROBAST) owing to sample-size constraints and technical flaws in ML modeling, even though their performance accuracy ranged from 0.65 to 1.00. While development and internal validation were described for all models included (n=137), only 4.4% (6/137) had been validated in independent cohorts and 0.7% (1/137) had been assessed for clinical impact and efficacy. CONCLUSION Overall, the application of ML for modeling cancer outcomes in LLMICs is increasing; however, model development is largely unsatisfactory. We recommend model retraining using larger sample sizes, intensified external validation practices, and increased impact assessment studies using randomized controlled trial designs. SYSTEMATIC REVIEW REGISTRATION https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=308345, identifier CRD42022308345.
7
Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J Imaging 2022; 8:231. [PMID: 36135397] [PMCID: PMC9503015] [DOI: 10.3390/jimaging8090231]
Abstract
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying a whole image according to the presence or absence of MCs is a difficult task because of the small size of MCs relative to all the information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) for the automatic classification of a complete DBT image by the presence or absence of MCs, without any prior identification of regions. Four popular deep CNNs are trained and compared with a new architecture proposed by us. A public database of realistic simulated data was used, and the whole DBT image was taken as input. DBT data were considered with and without preprocessing, to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC, 91.17%, was obtained with a newly implemented network, CNN-a, which was also the fastest, making it a very interesting model for other studies. These outcomes are similar to those reported for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
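The AUC metric used for evaluation above can be computed without tracing the ROC curve, via its rank-statistic (Mann-Whitney) formulation. This is a generic sketch with invented scores, not the study's evaluation code, and it assumes no tied scores.

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic
    (equivalent to the trapezoidal ROC AUC when no scores are tied)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    pos = np.asarray(y_true) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Fraction of (positive, negative) pairs ranked in the right order.
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([0, 0, 1, 0, 1, 1])              # 1 = MC-present case (toy)
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])
```

With these toy labels and scores the statistic equals the proportion of correctly ordered positive/negative pairs (6 of 9), illustrating why AUC is threshold-free.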
8
Aswiga RV, Shanthi AP. A Multilevel Transfer Learning Technique and LSTM Framework for Generating Medical Captions for Limited CT and DBT Images. J Digit Imaging 2022; 35:564-580. [PMID: 35217942] [PMCID: PMC9156604] [DOI: 10.1007/s10278-021-00567-7]
Abstract
Medical image captioning has recently been attracting the attention of the medical community, and generating captions for images involving multiple organs is an even more challenging task; any attempt toward such medical image captioning is therefore the need of the hour. In recent years, rapid developments in deep learning approaches have made them an effective option for the analysis of medical images and automatic report generation. But analyzing medical images that are scarce and limited is hard, even with machine learning approaches; the concept of transfer learning can be employed in such applications that suffer from insufficient training data. This paper presents an approach to develop a medical image captioning model based on a deep recurrent architecture that combines a Multi Level Transfer Learning (MLTL) framework with a Long Short-Term Memory (LSTM) model. A basic MLTL framework with three models is designed to detect and classify very limited datasets, using knowledge acquired from easily available ones. The first model, for the source domain, uses the abundantly available non-medical images and learns generalized features. The acquired knowledge is then transferred to the second model, for the intermediate and auxiliary domain, which is related to the target domain. This information is then used for the final target domain, which consists of medical datasets that are very limited in nature. The knowledge learned from a non-medical source domain is thus transferred to improve learning in a target domain that deals with medical images. Then, a novel LSTM model of the kind used for sequence generation and machine translation is proposed to generate captions for a given medical image from the MLTL framework. To further improve the captioning of the target sentence, an enhanced multi-input Convolutional Neural Network (CNN) model with feature extraction techniques is proposed; it extracts the most important features of an image, which help in generating a more precise and detailed caption of the medical image. Experimental results show that the proposed model performs well, with an accuracy of 96.90% and a BLEU score of 76.9%, even with very limited datasets, when compared to the work reported in the literature.
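The BLEU score reported above can be sketched as follows. This is a generic sentence-level BLEU with uniform n-gram weights and no smoothing, not the authors' exact evaluation script, and the example captions are invented.

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    log_precision = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        if overlap == 0:          # no smoothing: any empty overlap zeroes BLEU
            return 0.0
        log_precision += math.log(overlap / sum(cand.values())) / max_n
    # Brevity penalty discourages very short candidate captions.
    bp = math.exp(min(0.0, 1.0 - len(reference) / len(candidate)))
    return bp * math.exp(log_precision)

ref = "no acute cardiopulmonary abnormality seen".split()  # invented caption
hyp = "no acute cardiopulmonary abnormality seen".split()
```

A generated caption identical to the reference scores 1.0, while one sharing no n-grams scores 0.0, bracketing the 0.769 the paper reports.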
Affiliation(s)
- R. V. Aswiga
- Department of Computer Science & Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai, 601103, Tamil Nadu, India
- A. P. Shanthi
- Department of Computer Science & Engineering, College of Engineering, Guindy (CEG), Anna University, Chennai, 600025, Tamil Nadu, India
9
Sun L, Wen J, Wang J, Zhang Z, Zhao Y, Zhang G, Xu Y. Breast mass classification based on supervised contrastive learning and multi-view consistency penalty on mammography. IET Biometrics 2022. [DOI: 10.1049/bme2.12076]
Affiliation(s)
- Lilei Sun
- College of Computer Science and Technology, Guizhou University, Guiyang, China
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Jie Wen
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
- Junqian Wang
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
- Zheng Zhang
- Harbin Institute of Technology, Shenzhen, China
- Yong Zhao
- College of Computer Science and Technology, Guizhou University, Guiyang, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School of Peking University, Shenzhen, China
- Guiying Zhang
- Qingyuan People's Hospital, Guangzhou Medical University, Qingyuan, China
- Yong Xu
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, China
10
Chowdhury D, Das A, Dey A, Sarkar S, Dwivedi AD, Rao Mukkamala R, Murmu L. ABCanDroid: A Cloud Integrated Android App for Noninvasive Early Breast Cancer Detection Using Transfer Learning. Sensors (Basel) 2022; 22:832. [PMID: 35161576] [PMCID: PMC8838592] [DOI: 10.3390/s22030832]
Abstract
Many patients affected by breast cancer die every year because of improper diagnosis and treatment. In recent years, applications of deep learning algorithms in the field of breast cancer detection have proved to be quite efficient, but such techniques still leave considerable scope for improvement, and transfer learning can make them more efficient and yield impressive results. In the proposed approach, a Convolutional Neural Network (CNN) is complemented with transfer learning to increase the efficiency and accuracy of early breast cancer detection for better diagnosis. The idea is to use a pre-trained model, which already has weights assigned, rather than building the complete model from scratch. This paper focuses on a ResNet101-based transfer learning model pre-trained on the ImageNet dataset. The proposed framework achieves an accuracy of 99.58%. Extensive experiments and hyperparameter tuning were performed to obtain the best possible classification results. The framework aims to be an efficient tool for doctors and society as a whole, helping users in the early detection of breast cancer.
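The transfer-learning idea of reusing pre-trained weights can be reduced to this sketch: a frozen backbone supplies fixed features and only a small head is trained on them. The random features, sizes, and logistic-regression head are illustrative assumptions; the paper's pipeline is a full ResNet101 model, not this toy.

```python
import numpy as np

# Random vectors stand in for the frozen pre-trained backbone's outputs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))               # "backbone features"
w_true = rng.normal(size=64)
labels = (feats @ w_true > 0).astype(float)      # toy benign/malignant labels

w = np.zeros(64)                                  # the only trained weights
for _ in range(500):                              # logistic-regression head
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - labels) / len(labels)

pred = 1.0 / (1.0 + np.exp(-(feats @ w))) > 0.5
train_acc = float(np.mean(pred == labels))
```

Because only the head's 64 weights are learned, training is cheap even with few labeled images, which is the practical appeal of transfer learning for medical data.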
Affiliation(s)
- Deepraj Chowdhury
- Department of Electronics and Communication, International Institute of Information Technology, Naya Raipur 493661, India
- Anik Das
- Department of Computer Science, RCC Institute of Information Technology, Kolkata 700015, India
- Ajoy Dey
- Department of Electronics and Telecommunication, Jadavpur University, Kolkata 700032, India
- Shreya Sarkar
- Department of Electronics and Communication, B.P. Poddar Institute of Management and Technology, Kolkata 700052, India
- Ashutosh Dhar Dwivedi
- Centre for Business Data Analytics, Department of Digitalization, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Raghava Rao Mukkamala
- Centre for Business Data Analytics, Department of Digitalization, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Lakhindar Murmu
- Department of Electronics and Communication, International Institute of Information Technology, Naya Raipur 493661, India
11
A Novel Unsupervised Computational Method for Ventricular and Supraventricular Origin Beats Classification. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11156711]
Abstract
Arrhythmias are among the most common events tracked by physicians, and the need for continuous monitoring of such events in the ECG has opened the opportunity for automatic detection. Intra-patient and inter-patient paradigms are the two approaches currently followed by the scientific community. The intra-patient approach seems to resolve the problem with a high classification percentage but requires a physician to label key samples. The inter-patient approach makes use of historic data from different patients to build a general classifier, but the inherent variability of the ECG signal among patients leads to lower classification percentages than the intra-patient approach. In this work, we propose a new unsupervised algorithm that adapts to every patient, using the heart rate and morphological features of the ECG beats to classify beats as of supraventricular or ventricular origin. In terms of F-score, our results are 0.88, 0.89, and 0.93 for ventricular-origin beats on three popular ECG databases, and around 0.99 for supraventricular origin on the same databases, comparable to supervised approaches presented in other works. Despite the improvements still needed, this paper presents a new path for using ECG data to classify heartbeats without the assistance of a physician.
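The heart-rate feature underlying this kind of beat classification can be sketched from R-peak positions: a premature (possibly ventricular) beat has an RR interval clearly shorter than the local average. The sample indices, 360 Hz sampling rate, and 0.85 prematurity ratio below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# R-peak sample indices for a toy rhythm with one premature beat.
r_peaks = np.array([0, 360, 720, 1080, 1300, 1800, 2160])
rr = np.diff(r_peaks) / 360.0                              # RR intervals in seconds
local_mean = np.convolve(rr, np.ones(3) / 3, mode="same")  # 3-beat running average
premature = rr < 0.85 * local_mean                         # prematurity flag
```

Only the shortened 0.61 s interval is flagged, showing how an RR-based feature isolates candidate ventricular beats before morphological features refine the decision.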