1
Tekin H, Kaya Y. A new approach for heart disease detection using Motif transform-based CWT's time-frequency images with DenseNet deep transfer learning methods. Biomed Tech (Berl) 2024; 69:407-417. [PMID: 38425179] [DOI: 10.1515/bmt-2023-0580]
Abstract
OBJECTIVES Electrocardiogram (ECG) signals are extensively utilized in the identification and assessment of diverse cardiac conditions, including congestive heart failure (CHF) and cardiac arrhythmias (ARR), which pose potential hazards to human health. To facilitate disease diagnosis and assessment, advanced computer-aided systems are being developed to analyze ECG signals. METHODS This study proposes an ECG pattern recognition algorithm that uses the Continuous Wavelet Transform (CWT) within a novel signal preprocessing model. The Motif Transformation (MT) method was devised to mitigate drawbacks and limitations inherent in the CWT, such as boundary effects, limited localization in time and frequency, and overfitting. This transformation forms diverse patterns (motifs) within the signals; the motifs are constructed by comparing the amplitudes of individual sample values in the ECG signal and noting which are larger and which are smaller. In the next stage, the one-dimensional signals obtained from the MT were subjected to the CWT to produce scalogram images. In the final stage, the scalogram images were classified using DenseNet deep transfer learning techniques. RESULTS AND CONCLUSIONS The combined MT + CWT + DenseNet approach yielded an impressive success rate of 99.31%.
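The abstract describes building motifs by comparing the relative amplitudes of neighbouring ECG samples. A minimal sketch of one plausible reading of that idea (ordinal-pattern encoding over a sliding window; the window length and code mapping are assumptions, not the authors' exact transform):

```python
from itertools import permutations

def motif_transform(signal, window=3):
    """Map each sliding window to a motif code based on the relative
    ordering (larger/smaller) of its sample amplitudes."""
    # Enumerate all possible orderings of `window` samples as motif IDs.
    codes = {p: i for i, p in enumerate(permutations(range(window)))}
    motifs = []
    for start in range(len(signal) - window + 1):
        chunk = signal[start:start + window]
        # Rank pattern: indices of the window's samples, smallest first.
        order = tuple(sorted(range(window), key=lambda k: chunk[k]))
        motifs.append(codes[order])
    return motifs

# A rising-then-falling segment yields distinct motif codes.
print(motif_transform([0.1, 0.5, 0.3, 0.9, 0.2]))  # → [1, 2, 4]
```

The resulting one-dimensional motif sequence would then be fed to a CWT (e.g., `pywt.cwt`) to produce the scalogram images the paper classifies.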
Affiliation(s)
- Hazret Tekin
- Electrical Department, Sirnak University, Sirnak, Türkiye
- Yılmaz Kaya
- Computer Engineering, Batman University, Batman, Türkiye
2
Hase H, Mine Y, Okazaki S, Yoshimi Y, Ito S, Peng TY, Sano M, Koizumi Y, Kakimoto N, Tanimoto K, Murayama T. Sex estimation from maxillofacial radiographs using a deep learning approach. Dent Mater J 2024; 43:394-399. [PMID: 38599831] [DOI: 10.4012/dmj.2023-253]
Abstract
The purpose of this study was to construct deep learning models for more efficient and reliable sex estimation. Two deep learning models, VGG16 and DenseNet-121, were used in this retrospective study. In total, 600 lateral cephalograms were analyzed. A saliency map was generated by gradient-weighted class activation mapping for each output. The two deep learning models achieved high values on every performance metric: accuracy, sensitivity (recall), precision, F1 score, and area under the receiver operating characteristic curve. Both models showed substantial differences between the positions indicated in saliency maps for male and female images. The positions in the saliency maps also differed between VGG16 and DenseNet-121, regardless of sex. This analysis suggests that sex estimation from lateral cephalograms can be achieved with high accuracy using deep learning.
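Gradient-weighted class activation mapping (Grad-CAM), used above to generate the saliency maps, weights each convolutional feature map by its globally pooled gradient and applies a ReLU. A framework-free numpy sketch of that weighting step (the feature maps and gradients below are illustrative stand-ins, not values from the paper's models):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (K, H, W).
    Returns an (H, W) saliency map: ReLU of the sum of feature maps
    weighted by their global-average-pooled gradients."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    if cam.max() > 0:                                  # normalise to [0, 1] for display
        cam = cam / cam.max()
    return cam

maps = np.zeros((2, 4, 4)); maps[0, 1, 1] = 1.0; maps[1, 2, 2] = 1.0
grads = np.array([np.full((4, 4), 0.5), np.full((4, 4), -0.5)])
print(grad_cam(maps, grads))  # only the positively weighted map survives
```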
Affiliation(s)
- Hiroki Hase
- Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Yuichi Mine
- Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Project Research Center for Integrating Digital Dentistry, Hiroshima University
- Shota Okazaki
- Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Project Research Center for Integrating Digital Dentistry, Hiroshima University
- Yuki Yoshimi
- Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Shota Ito
- Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Tzu-Yu Peng
- School of Dentistry, College of Oral Medicine, Taipei Medical University
- Mizuho Sano
- Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Yuma Koizumi
- Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Naoya Kakimoto
- School of Dentistry, College of Oral Medicine, Taipei Medical University
- Kotaro Tanimoto
- Department of Oral and Maxillofacial Radiology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Takeshi Murayama
- Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Project Research Center for Integrating Digital Dentistry, Hiroshima University
3
AlDahoul N, Karim HA, Momo MA, Escobar FIF, Magallanes VA, Tan MJT. Parasitic egg recognition using convolution and attention network. Sci Rep 2023; 13:14475. [PMID: 37660120] [PMCID: PMC10475085] [DOI: 10.1038/s41598-023-41711-3]
Abstract
Intestinal parasitic infections (IPIs) caused by protozoan and helminth parasites are among the most common infections in humans in low- and middle-income countries. IPIs affect not only the health status of a country but also its economic sector. Over the last decade, pattern recognition and image processing techniques have been developed to automatically identify parasitic eggs in microscopic images. Existing identification techniques still suffer from diagnostic errors and low sensitivity, so a more accurate and faster solution is required to recognize parasitic eggs and classify them into several categories. The novel Chula-ParasiteEgg dataset of 11,000 microscopic images, proposed at ICIP 2022, was used to train various methods, including convolutional neural network (CNN)-based models and convolution-and-attention (CoAtNet)-based models. The experiments show high recognition performance for the proposed CoAtNet, fine-tuned on microscopic images of parasitic eggs: it produced an average accuracy of 93% and an average F1 score of 93%. These findings open the door to integrating the proposed solution into automated parasitological diagnosis.
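The reported average accuracy and average F1 of 93% are per-class scores averaged over the egg categories. A small sketch of macro-F1 computed from true and predicted labels (pure Python; the labels below are illustrative, not drawn from the dataset):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Two hypothetical egg classes; one of three predictions is wrong.
print(macro_f1(["asc", "asc", "tri"], ["asc", "tri", "tri"]))
```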
Affiliation(s)
- Nouar AlDahoul
- Computer Science, New York University, Abu Dhabi, United Arab Emirates
- Faculty of Engineering, Multimedia University, Cyberjaya, Malaysia
- Mhd Adel Momo
- Fleet Management Systems and Technologies, Istanbul, Turkey
4
Lee J, Lee S, Lee WJ, Moon NJ, Lee JK. Neural network application for assessing thyroid-associated orbitopathy activity using orbital computed tomography. Sci Rep 2023; 13:13018. [PMID: 37563272] [PMCID: PMC10415276] [DOI: 10.1038/s41598-023-40331-1]
Abstract
This study proposes a neural network (NN)-based method to evaluate thyroid-associated orbitopathy (TAO) patient activity using orbital computed tomography (CT). Orbital CT scans were obtained from 144 active and 288 inactive TAO patients. The scans were preprocessed by selecting eleven slices from the axial, coronal, and sagittal planes and segmenting the region of interest. We devised an NN that uses information extracted from 13 pipelines to assess these slices, together with patient age and sex, for TAO activity evaluation. In distinguishing active from inactive TAO patients, the proposed NN achieved an area under the receiver operating characteristic curve (AUROC) of 0.871, a sensitivity of 0.786, and a specificity of 0.779. The comparison models CSPDenseNet and ConvNeXt were significantly inferior to the proposed model, with AUROC values of 0.819 (p = 0.029) and 0.774 (p = 0.04), respectively. Ablation studies based on the Sequential Forward Selection algorithm identified the information vital for optimal performance and showed that the NNs performed best with three to five active pipelines. With further validation, this study establishes a promising tool for diagnosing TAO activity.
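The ablation above relies on Sequential Forward Selection: greedily add the pipeline whose inclusion most improves the validation score, and stop when nothing improves. A generic sketch under an assumed scoring function (the pipeline names and utilities are illustrative, not the paper's evaluator):

```python
def sfs(candidates, score, max_features=None):
    """Greedy forward selection: repeatedly add the candidate that
    maximises score(selected + [c]); stop when no candidate improves."""
    selected, best = [], float("-inf")
    limit = max_features or len(candidates)
    while len(selected) < limit:
        gains = [(score(selected + [c]), c) for c in candidates if c not in selected]
        if not gains:
            break
        top_score, top = max(gains)
        if top_score <= best:  # no improvement: stop early
            break
        selected.append(top)
        best = top_score
    return selected, best

# Toy score: each pipeline has a fixed utility, with a penalty
# for using more than three pipelines (diminishing returns).
utility = {"axial": 0.5, "coronal": 0.3, "sagittal": 0.2, "age": 0.05}
score = lambda feats: sum(utility[f] for f in feats) - 0.1 * max(0, len(feats) - 3)
print(sfs(list(utility), score))  # selects three pipelines, then stops
```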
Affiliation(s)
- Jaesung Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- AI/ML Research Innovation Center, Chung-Ang University, Seoul, Korea
- Sanghyuck Lee
- Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- Won Jun Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea
- Nam Ju Moon
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea
- Jeong Kyu Lee
- Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, 102 Heukseok-Ro, Dongjak-Gu, Seoul, 06973, Korea
5
Cuevas-Rodriguez EO, Galvan-Tejada CE, Maeda-Gutiérrez V, Moreno-Chávez G, Galván-Tejada JI, Gamboa-Rosales H, Luna-García H, Moreno-Baez A, Celaya-Padilla JM. Comparative study of convolutional neural network architectures for gastrointestinal lesions classification. PeerJ 2023; 11:e14806. [PMID: 36945355] [PMCID: PMC10024900] [DOI: 10.7717/peerj.14806]
Abstract
The gastrointestinal (GI) tract can be affected by different diseases or lesions, such as esophagitis, ulcers, hemorrhoids, and polyps, some of which (e.g., polyps) can be precursors of cancer. Endoscopy is the standard procedure for detecting these lesions. Its main drawback is that the diagnosis depends on the expertise of the doctor, which means some important findings may be missed. In recent years, this problem has been addressed with deep learning (DL) techniques. Endoscopic studies use digital images, and the most widely used DL technique for image processing is the convolutional neural network (CNN), owing to its high accuracy in modeling complex phenomena. Different CNNs are characterized by their architecture; in this article, four architectures are compared: AlexNet, DenseNet-201, Inception-v3, and ResNet-101. To determine which architecture best classifies GI tract lesions, a set of metrics was used: accuracy, precision, sensitivity, specificity, F1-score, and area under the curve (AUC). The architectures were trained and tested on the HyperKvasir dataset, from which a total of 6,792 images corresponding to 10 findings were used. A transfer learning approach and a data augmentation technique were applied. The best-performing architecture was DenseNet-201, with 97.11% accuracy, 96.3% sensitivity, 99.67% specificity, and 95% AUC.
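The data augmentation mentioned above expands a training set with label-preserving image variants. A minimal sketch on an image represented as a 2-D list (pure Python; real pipelines apply library transforms to tensors, and the specific transforms used in the paper are not stated here):

```python
def augment(image):
    """Yield simple label-preserving variants of a 2-D image:
    horizontal flip, vertical flip, and a 90-degree clockwise rotation."""
    hflip = [row[::-1] for row in image]                # mirror left-right
    vflip = image[::-1]                                 # mirror top-bottom
    rot90 = [list(row) for row in zip(*image[::-1])]    # rotate clockwise
    return [hflip, vflip, rot90]

img = [[1, 2],
       [3, 4]]
for variant in augment(img):
    print(variant)
```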
6
Self-promotion and online shaming during COVID-19: A toxic combination. International Journal of Information Management Data Insights 2022; 2. [PMCID: PMC9444892] [DOI: 10.1016/j.jjimei.2022.100117]
Abstract
A public shaming frenzy spread through social media (SM) following the instigation of lockdown policies to counter the spread of COVID-19. On SM, individuals shun the idea of self-promotion and shame others who do not follow the COVID-19 guidelines. When it comes to the offence of not taking a pandemic seriously, perhaps the ultimate penalty is online shaming. The study applies black swan theory through a human-computer interaction lens and examines the toxic combination of online shaming and self-promotion on SM to discern whether pointing the finger of blame is a productive way of changing rule-breaking behaviour. A quantitative methodology is applied to survey data acquired from 375 respondents. The findings reveal that the adverse effect of online shaming results in self-destructive behaviour. The change in behaviour of individuals shamed online is greater for females than for males, and greater for adults than for middle-aged and older respondents.
7
Altuntas F, Altuntas S, Dereli T. Social network analysis of tourism data: A case study of quarantine decisions in COVID-19 pandemic. International Journal of Information Management Data Insights 2022. [PMCID: PMC9364723] [DOI: 10.1016/j.jjimei.2022.100108]
Abstract
Tourism is one of the sectors most affected by the COVID-19 pandemic worldwide. Quarantine decisions are among the leading measures taken in practice to reduce the pandemic's possible negative consequences, yet there is limited work in the literature on how to make the right quarantine decisions during a pandemic. The aim of this study is therefore to propose the use of social network analysis (SNA) based on tourism data to support quarantine decisions in the COVID-19 pandemic. A case study on quarantine decisions is conducted using data obtained from the Turkish Statistical Institute to show how to perform the SNA, with the household domestic tourism survey as input data. Among the 12 regions of Türkiye, Istanbul is the most critical region for reducing the possible negative effects of the COVID-19 pandemic on the tourism sector.
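In an SNA of this kind, a region's criticality can be read off from simple centrality measures on the inter-regional travel network. A sketch using weighted degree centrality on a toy graph (the region names and trip counts are illustrative, not the Turkish Statistical Institute data):

```python
def weighted_degree(edges):
    """edges: list of (region_a, region_b, trips). Returns total trip
    volume touching each region, highest (most critical) first."""
    totals = {}
    for a, b, w in edges:
        totals[a] = totals.get(a, 0) + w
        totals[b] = totals.get(b, 0) + w
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

trips = [("Istanbul", "Ankara", 120), ("Istanbul", "Izmir", 90),
         ("Ankara", "Izmir", 40)]
print(weighted_degree(trips))  # Istanbul ranks first
```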
8
Ascencio-Cabral A, Reyes-Aldasoro CC. Comparison of Convolutional Neural Networks and Transformers for the Classification of Images of COVID-19, Pneumonia and Healthy Individuals as Observed with Computed Tomography. J Imaging 2022; 8:237. [PMID: 36135403] [PMCID: PMC9500990] [DOI: 10.3390/jimaging8090237]
Abstract
In this work, the performance of five deep learning architectures in classifying COVID-19 in a multi-class set-up is evaluated. The classifiers were built on pretrained ResNet-50, ResNet-50r (with kernel size 5×5 in the first convolutional layer), DenseNet-121, MobileNet-v3, and the state-of-the-art CaiT-24-XXS-224 (CaiT) transformer. The cross entropy and weighted cross entropy were minimised with Adam and AdamW. In total, 20 experiments were conducted with 10 repetitions, and the following metrics were obtained and bootstrapped: accuracy (Acc), balanced accuracy (BA), F1 and F2 from the general Fβ macro score, Matthews correlation coefficient (MCC), sensitivity (Sens), and specificity (Spec). The performance of the classifiers was compared using the Friedman-Nemenyi test. The results show that the less complex architectures ResNet-50, ResNet-50r, and DenseNet-121 achieved better generalization, with Matthews correlation coefficient rankings of 1.53, 1.71, and 3.05, respectively, while MobileNet-v3 and CaiT obtained rankings of 3.72 and 5.0.
9
Sreenivasu SVN, Gomathi S, Kumar MJ, Prathap L, Madduri A, Almutairi KMA, Alonazi WB, Kali D, Jayadhas SA. Dense Convolutional Neural Network for Detection of Cancer from CT Images. Biomed Res Int 2022; 2022:1293548. [PMID: 35769667] [PMCID: PMC9236787] [DOI: 10.1155/2022/1293548]
Abstract
In this paper, we develop a dense convolutional neural network model with a rigorous training and testing procedure for cancer detection. The model is designed so that it is trained on the features necessary for optimal modelling of cancer detection, and the method involves preprocessing computerized tomography (CT) images for optimal classification at the testing stage. A 10-fold cross-validation is conducted to test the reliability of the model for cancer detection, and the experimental validation is implemented in Python. The results show that the model offers robust detection of cancer instances on large image datasets: the simulations indicate that the proposed method achieves 94% accuracy and reduces detection errors relative to several existing methods.
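The 10-fold cross-validation above splits the CT images into ten folds, training on nine and testing on the held-out one, so every image is tested exactly once. A sketch of the index generation (pure Python; the dataset size is illustrative):

```python
def k_fold_indices(n, k=10):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin assignment
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield sorted(train), sorted(test)

splits = list(k_fold_indices(20, k=10))
print(len(splits))   # 10 folds
print(splits[0][1])  # first held-out test fold
```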
Affiliation(s)
- S. V. N. Sreenivasu
- Department of Computer Science and Engineering, Narasaraopeta Engineering College, Narasaraopeta, Andhra Pradesh 522601, India
- S. Gomathi
- Department of Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu 602109, India
- M. Jogendra Kumar
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
- Lavanya Prathap
- Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- Abhishek Madduri
- Department of Engineering Management, Duke University, North Carolina 27708, USA
- Khalid M. A. Almutairi
- Department of Community Health Sciences, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, Riyadh 11433, Saudi Arabia
- Wadi B. Alonazi
- Health Administration Department, College of Business Administration, King Saud University, P.O. Box 71115, Riyadh 11587, Saudi Arabia
- D. Kali
- Department of Mechanical Engineering, Ryerson University, Canada
10
An Artificial Intelligence-Enabled ECG Algorithm for the Prediction and Localization of Angiography-Proven Coronary Artery Disease. Biomedicines 2022; 10:394. [PMID: 35203603] [PMCID: PMC8962407] [DOI: 10.3390/biomedicines10020394]
Abstract
(1) Background: The role of using artificial intelligence (AI) with electrocardiograms (ECGs) for the diagnosis of significant coronary artery disease (CAD) is unknown. We tested the hypothesis that using AI to read ECGs could identify significant CAD and determine which vessel was obstructed. (2) Methods: We collected ECG data from a multi-center retrospective cohort of patients with significant CAD documented by invasive coronary angiography, together with control patients, in Taiwan from 1 January 2018 to 31 December 2020. (3) Results: We trained convolutional neural network (CNN) models to identify patients with significant CAD (>70% stenosis), using 12,954 ECGs from 2303 patients with CAD and 2090 ECGs from 1053 patients without CAD. The macro-average area under the ROC curve (AUC) for detecting CAD was 0.869 for the image-input CNN model. For detecting obstruction of individual coronary arteries, the AUC was 0.885 for the left anterior descending artery, 0.776 for the right coronary artery, 0.816 for the left circumflex artery, and 1.0 for no coronary artery obstruction. The macro-average AUC increased to 0.973 when the ECG had features of myocardial ischemia. (4) Conclusions: We show for the first time that an AI-enhanced CNN model reading the standard 12-lead ECG can serve as a powerful screening tool to identify significant CAD and localize the coronary obstruction. It could easily be implemented in health check-ups for asymptomatic patients and used to identify high-risk patients for future coronary events.
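The AUC values reported above can be read as the probability that a randomly chosen positive (obstructed-vessel) case scores higher than a randomly chosen negative one. A pure-Python sketch of that pairwise (Mann-Whitney) definition (the scores below are illustrative, not model outputs):

```python
def auc(pos_scores, neg_scores):
    """Pairwise AUC: fraction of positive/negative score pairs ranked
    correctly, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# 5 of 6 positive/negative pairs are ranked correctly.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))
```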