1. Sergio AR, Schimit PHT. Optimizing Contact Network Topological Parameters of Urban Populations Using the Genetic Algorithm. Entropy (Basel) 2024; 26:661. [PMID: 39202131] [PMCID: PMC11353388] [DOI: 10.3390/e26080661]
Abstract
This paper explores the application of complex network models and genetic algorithms in epidemiological modeling. By considering the small-world and Barabási-Albert network models, we aim to replicate the dynamics of disease spread in urban environments. This study emphasizes the importance of accurately mapping individual contacts and social networks to forecast disease progression. Using a genetic algorithm, we estimate the input parameters for network construction, thereby simulating disease transmission within these networks. Our results demonstrate the networks' resemblance to real social interactions, highlighting their potential in predicting disease spread. This study underscores the significance of complex network models and genetic algorithms in understanding and managing public health crises.
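As a rough, self-contained illustration of one of the network models this entry relies on (not the authors' implementation; the function name and parameters are ours), a Barabási-Albert graph can be grown by preferential attachment:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a Barabasi-Albert graph: start from a small clique and
    attach each new node to m existing nodes, chosen with probability
    proportional to their current degree (preferential attachment)."""
    rng = random.Random(seed)
    # start with a complete graph on m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'targets' holds each node once per unit of degree, so uniform
    # sampling from it realizes degree-proportional attachment
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = barabasi_albert(200, 2, seed=1)
```

The resulting degree distribution is heavy-tailed, which is what makes such graphs useful as stand-ins for real contact networks with highly connected hubs.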
2. Heredia Cacha I, Sáinz-Pardo Díaz J, Castrillo M, López García Á. Forecasting COVID-19 spreading through an ensemble of classical and machine learning models: Spain's case study. Sci Rep 2023; 13:6750. [PMID: 37185927] [PMCID: PMC10127188] [DOI: 10.1038/s41598-023-33795-8]
Abstract
In this work, the applicability of an ensemble of population and machine learning models to predict the evolution of the COVID-19 pandemic in Spain is evaluated, relying solely on public datasets. First, using only incidence data, we trained machine learning models and fitted classical ODE-based population models, which are especially suited to capturing long-term trends. As a novel approach, we then combined these two families of models into an ensemble to obtain a more robust and accurate prediction. We subsequently tried to improve the machine learning models by adding more input features: vaccination, human mobility, and weather conditions. However, these improvements did not carry over to the overall ensemble, as the different model families also had different prediction patterns. Additionally, the machine learning models degraded when new COVID-19 variants appeared after training. Finally, we used Shapley Additive Explanation values to discern the relative importance of the different input features for the machine learning models' predictions. The conclusion of this work is that an ensemble of machine learning models and population models can be a promising alternative to SEIR-like compartmental models, especially because the former do not need data from recovered patients, which are hard to collect and generally unavailable.
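A minimal sketch of the ensemble idea described above, assuming a simple forward-Euler SIR model as the population component and a stand-in for the machine learning forecast (all names and numbers here are illustrative, not from the paper):

```python
def sir_forecast(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Forward-Euler integration of the classic SIR equations,
    returning the infected count at the end of each day."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    out = []
    steps_per_day = round(1 / dt)
    for _ in range(days):
        for _ in range(steps_per_day):
            new_inf = beta * s * i / n * dt
            new_rec = gamma * i * dt
            s -= new_inf
            i += new_inf - new_rec
            r += new_rec
        out.append(i)
    return out

def ensemble(pop_pred, ml_pred, w=0.5):
    """Pointwise weighted average of the two model families."""
    return [w * a + (1 - w) * b for a, b in zip(pop_pred, ml_pred)]

pop = sir_forecast(47_000_000, 10_000, 0, beta=0.3, gamma=0.1, days=7)
ml = [p * 1.1 for p in pop]  # stand-in for a trained ML forecast
combined = ensemble(pop, ml)
```

In the paper, the ML member is a trained model; here it is faked so the averaging step stands alone.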
Affiliation(s)
- Ignacio Heredia Cacha: Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n., 39005, Santander, Spain
- Judith Sáinz-Pardo Díaz: Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n., 39005, Santander, Spain
- María Castrillo: Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n., 39005, Santander, Spain
- Álvaro López García: Instituto de Física de Cantabria (IFCA), CSIC-UC, Avda. los Castros s/n., 39005, Santander, Spain
3. Lyu H, Imtiaz A, Zhao Y, Luo J. Human behavior in the time of COVID-19: Learning from big data. Front Big Data 2023; 6:1099182. [PMID: 37091459] [PMCID: PMC10118015] [DOI: 10.3389/fdata.2023.1099182]
Abstract
Since the World Health Organization (WHO) characterized COVID-19 as a pandemic in March 2020, there have been over 600 million confirmed cases of COVID-19 and more than six million deaths as of October 2022. The relationship between the COVID-19 pandemic and human behavior is complicated. On one hand, human behavior is found to shape the spread of the disease. On the other hand, the pandemic has impacted and even changed human behavior in almost every aspect. To provide a holistic understanding of the complex interplay between human behavior and the COVID-19 pandemic, researchers have been employing big data techniques such as natural language processing, computer vision, audio signal processing, frequent pattern mining, and machine learning. In this study, we present an overview of the existing studies on using big data techniques to study human behavior in the time of the COVID-19 pandemic. In particular, we categorize these studies into three groups-using big data to measure, model, and leverage human behavior, respectively. The related tasks, data, and methods are summarized accordingly. To provide more insights into how to fight the COVID-19 pandemic and future global catastrophes, we further discuss challenges and potential opportunities.
Affiliation(s)
- Jiebo Luo: Department of Computer Science, University of Rochester, Rochester, NY, United States
4. Renaissance of Creative Accounting Due to the Pandemic: New Patterns Explored by Correspondence Analysis. Stats 2023. [DOI: 10.3390/stats6010025]
Abstract
The COVID-19 outbreak rapidly affected global economies and the parties involved. There was a need to ensure the sustainability of corporate finance and avoid bankruptcy. The reactions of individuals were not routine, but covered a wide range of approaches to surviving the crisis. Creative accounting was one of the approaches adopted. This study is primarily concerned with the behavior of businesses in the Visegrad Four countries between 2019 and 2021. The pandemic era was the driving force behind a renaissance of manipulation. Thus, the purpose of the article is to explore how the behavior of enterprises changed during the ongoing pandemic. The Beneish model was applied to reveal creative manipulation in the analyzed samples. Its M-score was calculated for 6113 Slovak, 153 Czech, 585 Polish, and 155 Hungarian enterprises. Increasing numbers of manipulating enterprises were confirmed in the V4 region. A dependency between the size of the enterprise and the occurrence of creative accounting was also proven. However, the structure of manipulators has been changing. Correspondence analysis specifically showed behavioral changes over time. Correspondence maps demonstrate which enterprises already used creative accounting before the pandemic in 2019. It was then noted that enterprises were influenced to modify their patterns in 2020 and 2021. The coronavirus pandemic had a significant effect on the use of creative accounting, not only for individual units but for businesses of all sizes. In addition, the methodology may be applied to the investigation of individual sectors post-COVID.
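The Beneish model named above scores eight year-over-year financial ratios; a transcription of the commonly cited eight-variable M-score formula (coefficients as usually reported, worth verifying against Beneish's original paper; the function itself is our sketch, not the study's code):

```python
def beneish_m_score(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Eight-variable Beneish M-score; values above about -1.78
    are commonly read as signaling likely earnings manipulation."""
    return (-4.84
            + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi
            + 0.892 * sgi + 0.115 * depi - 0.172 * sgai
            + 4.679 * tata - 0.327 * lvgi)

# A firm whose index ratios are all 1 (no year-over-year change) and
# whose total accruals (TATA) are zero scores below the threshold:
m = beneish_m_score(1, 1, 1, 1, 1, 1, 0, 1)
flagged = m > -1.78
```

The inputs are index ratios (e.g. DSRI is days' sales in receivables this year over last year), so a value of 1 means "unchanged"; large accruals (TATA) dominate the score through the 4.679 weight.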
5. Utility of an Automated Artificial Intelligence Echocardiography Software in Risk Stratification of Hospitalized COVID-19 Patients. Life (Basel) 2022; 12:1413. [PMID: 36143448] [PMCID: PMC9501328] [DOI: 10.3390/life12091413]
Abstract
Cardiovascular risk factors, biomarkers, and diseases are associated with poor prognosis in COVID-19 infections. Significant progress in artificial intelligence (AI) applied to cardiac imaging has recently been made. We assessed the utility of the AI analytic software EchoGo in COVID-19 inpatients. Fifty consecutive COVID-19+ inpatients (age 66 ± 13 years, 22 women) who had echocardiography between 4/17/2020 and 8/5/2020 were analyzed with EchoGo software, with output correlated against standard echocardiography measurements. After adjustment for the APACHE-4 score, associations with clinical outcomes were assessed. Mean EchoGo outputs were left ventricular end-diastolic volume (LVEDV) 121 ± 42 mL, end-systolic volume (LVESV) 53 ± 30 mL, ejection fraction (LVEF) 58 ± 11%, and global longitudinal strain (GLS) −16.1 ± 5.1%. Pearson correlation coefficients (p-value) with standard measurements were 0.810 (<0.001), 0.873 (<0.001), 0.528 (<0.001), and 0.690 (<0.001), respectively. The primary endpoint occurred in 26 (52%) patients. Adjusting for the APACHE-4 score, EchoGo LVEF and LVGLS were associated with the primary endpoint, with odds ratios (95% confidence intervals) of 0.92 (0.85–0.99) and 1.22 (1.03–1.45) per 1% increase, respectively. Automated AI software is a new clinical tool that may assist with patient care. EchoGo LVEF and LVGLS were associated with adverse outcomes in hospitalized COVID-19 patients and can play a role in their risk stratification.
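The effect sizes above are odds ratios per 1% change in an echocardiographic measure; a small sketch (our own, using the abstract's reported numbers) of how a logistic-regression coefficient maps to such an odds ratio:

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio for a `delta`-unit increase in a predictor whose
    logistic-regression coefficient is `beta`."""
    return math.exp(beta * delta)

# A reported OR of 0.92 per 1% LVEF increase implies beta = ln(0.92).
beta_lvef = math.log(0.92)
# Each extra 5% of LVEF then multiplies the odds of the endpoint by
# 0.92 raised to the 5th power, roughly a one-third reduction:
or_5pct = odds_ratio(beta_lvef, delta=5.0)
```

Because the model is multiplicative in the odds, effects for larger changes compound by exponentiation rather than addition.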
6. Performance Analysis for COVID-19 Diagnosis Using Custom and State-of-the-Art Deep Learning Models. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136364]
Abstract
The modern scientific world continuously endeavors to battle and devise solutions for newly arising pandemics. One such pandemic, which has turned the world's accustomed routine upside down, is COVID-19: it has devastated the world economy and claimed millions of lives globally. Governments and scientists have been on the front line, striving towards the diagnosis of the virus and the engineering of a vaccine against it. COVID-19 can be diagnosed using artificial intelligence more accurately than with traditional methods using chest X-rays. This research evaluates the performance of deep learning models for COVID-19 diagnosis using chest X-ray images from a dataset containing, to the best of the authors' knowledge, the largest number of COVID-19 images used in the literature. The size of the utilized dataset is about 4.25 times that of the largest COVID-19 chest X-ray image dataset used in the explored literature. Further, a CNN model, named the Custom-Model in this study, was developed for evaluation against, and comparison to, state-of-the-art deep learning models. The intention was not to develop a new high-performing deep learning model, but rather to evaluate the performance of deep learning models on a larger COVID-19 chest X-ray image dataset. Moreover, Xception- and MobileNetV2-based models were also used for evaluation purposes. The evaluation criteria were accuracy, precision, recall, F1 score, ROC curves, AUC, confusion matrix, and macro and weighted averages. Among the deployed models, Xception was the top performer in terms of precision and accuracy, while the MobileNetV2-based model could detect slightly more COVID-19 cases than Xception and showed slightly fewer false negatives, while giving far more false positives than the other models. The custom CNN model also exceeds the MobileNetV2 model in terms of precision. The best accuracy, precision, recall, and F1 score among these three models were 94.2%, 99%, 95%, and 97%, respectively, as shown by the Xception model. Finally, it was found that the overall accuracy in the current evaluation was curtailed by approximately 2% compared with the average accuracy of previous work on multi-class classification, while a very high precision value was observed, which is of high scientific value.
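The evaluation criteria listed above all derive from the binary confusion matrix; a minimal reference implementation (ours, with toy counts):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix
    counts: true/false positives and true/false negatives."""
    precision = tp / (tp + fp)            # of predicted positives, how many were right
    recall = tp / (tp + fn)               # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Toy example: 3 true positives, 1 false positive, 1 false negative,
# 5 true negatives.
p, r, f1, acc = classification_metrics(tp=3, fp=1, fn=1, tn=5)
```

The trade-off the abstract describes (more detections but more false positives for MobileNetV2) is exactly a higher recall at the cost of lower precision.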
7. Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model. J Healthc Eng 2022; 2022:6566982. [PMID: 35422980] [PMCID: PMC9002904] [DOI: 10.1155/2022/6566982]
Abstract
The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide had tested positive for COVID-19, and 5.48 million people had died due to COVID-19, as of 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach to identifying ground glass opacity (GGO), the region of interest, in CT images of patients with COVID-19, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that the GGO often appears indistinguishable from a healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipeline. This gives the model a major capacity to segment the GGO in the case of COVID-19; the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained using accuracy, recall, precision, dice-coefficient, F1 score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which are better than those obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors to diagnose COVID-19 quickly and efficiently.
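The dice coefficient and IoU reported above compare a predicted mask against the ground truth; a minimal sketch (ours, on toy flattened binary masks) of both metrics:

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU (Jaccard index) for two flat binary
    masks of equal length (1 = region of interest, 0 = background)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum)
    iou = inter / union
    return dice, iou

# Toy masks: one pixel overlaps, one pixel is unique to each mask.
dice, iou = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Dice = 2·IoU/(1+IoU), so the two scores always rank segmentations identically; they just weight partial overlap differently.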
8. Awan MJ, Mohd Rahim MS, Salim N, Rehman A, Nobanee H. Machine Learning-Based Performance Comparison to Diagnose Anterior Cruciate Ligament Tears. J Healthc Eng 2022; 2022:2550120. [PMID: 35444781] [PMCID: PMC9015864] [DOI: 10.1155/2022/2550120]
Abstract
In recent times, knee joint pains have become severe enough to make daily tasks difficult. Knee osteoarthritis is a type of arthritis and a leading cause of disability worldwide. The middle of the knee contains a vital portion, the anterior cruciate ligament (ACL). It is necessary to diagnose ruptured ACL tears early to avoid surgery. The study aimed to perform a comparative analysis of machine learning models to identify the condition of three ACL tear classes. In contrast to previous studies, this study also considers imbalanced data distributions, as machine learning techniques struggle to deal with this problem. The paper applied and analyzed four machine learning classification models, namely, random forest (RF), categorical boosting (CatBoost), light gradient boosting machines (LGBM), and the extremely randomized trees classifier (ETC), on a balanced, structured ACL dataset. After oversampling and hyperparameter adjustment, the four models achieved average accuracies of 95.72%, 94.98%, 94.98%, and 98.26%, respectively. After oversampling, the collection contains 2070 observations and eight features across the three ACL diagnosis classes. The area under the curve value was approximately 0.998. Experiments were performed using twelve machine learning algorithms with imbalanced and balanced datasets; however, the accuracy on the imbalanced dataset remained under 76% for all twelve models. With oversampling, the proposed model may contribute to the investigation of ACL tears, and of other knee ligaments, on magnetic resonance imaging efficiently and automatically, without involving radiologists.
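The oversampling step mentioned above can be as simple as duplicating minority-class rows until the classes match; a sketch of the plainest variant (random oversampling; the paper's exact resampling method is not specified here, so this is illustrative only):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        # top up each class with random duplicates of its own rows
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        Xb.extend(rows)
        yb.extend([label] * target)
    return Xb, yb

X = [[i] for i in range(10)]
y = [0] * 7 + [1] * 3          # imbalanced: 7 vs 3
Xb, yb = random_oversample(X, y)
```

More sophisticated schemes such as SMOTE synthesize new minority samples instead of duplicating, but the balancing goal is the same.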
Affiliation(s)
- Mazhar Javed Awan: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia (UTM), Johor 81310, Malaysia; Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Mohd Shafry Mohd Rahim: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia (UTM), Johor 81310, Malaysia
- Naomie Salim: School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia (UTM), Johor 81310, Malaysia
- Amjad Rehman: Artificial Intelligence and Data Analytics Laboratory, College of Computer and Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Saudi Arabia
- Haitham Nobanee: College of Business, Abu Dhabi University, P.O. Box 59911, Abu Dhabi, UAE; Oxford Centre for Islamic Studies, University of Oxford, Oxford OX1 2J, UK; School of Histories, Languages and Cultures, The University of Liverpool, Liverpool L69 3BX, UK
9. Threat Analysis and Distributed Denial of Service (DDoS) Attack Recognition in the Internet of Things (IoT). Electronics 2022. [DOI: 10.3390/electronics11030494]
Abstract
The Internet of Things (IoT) plays a crucial role in various sectors, such as automobiles, logistics tracking, and the medical field, because it consists of distributed nodes, servers, and software for effective communication. Although this IoT paradigm has suffered from intrusion threats and attacks that cause security and privacy issues, existing intrusion detection techniques fail to remain reliable against these attacks. Therefore, IoT intrusion threats are analyzed using a sparse convolutional network to counter the threats and attacks. The network is trained using sets of intrusion data, characteristics, and suspicious activities, which helps identify and track attacks, mainly Distributed Denial of Service (DDoS) attacks. Along with this, the network is optimized using evolutionary techniques that identify and detect regular traffic, errors, and intrusion attempts under different conditions. The sparse network forms complex hypotheses evaluated using neurons, and the obtained event-stream outputs are propagated to further hidden-layer processes. This process minimizes intrusion involvement in IoT data transmission. Effective utilization of training patterns in the network successfully classifies normal and threat patterns. The effectiveness of the system is then evaluated through experimental results and discussion. Network intrusion detection systems are superior to other types of traditional network defense in providing network security. The research applied an IGA-BP network to combat the growing challenge of Internet security in the big data era, using an autoencoder network model and an improved genetic algorithm to detect intrusions. The system was built in MATLAB; it ensures a 98.98% detection rate and 99.29% accuracy with minimal processing complexity, and the performance ratio is 90.26%. A meta-heuristic optimizer could be used in future work to increase the system's ability to forecast attacks.
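The evolutionary optimization mentioned above can be sketched with a tiny real-valued genetic algorithm (ours, with a made-up objective; the paper's improved GA and its fitness function are certainly more elaborate):

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, gens=60, seed=3):
    """Tiny real-valued GA: tournament selection, blend crossover,
    and Gaussian mutation, clamped to the search bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # binary tournament: the fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            child = 0.5 * (p + q) + rng.gauss(0, 0.1 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return min(pop, key=fitness)

# Toy stand-in objective: tune a detection threshold whose made-up
# error curve is minimized at 0.6.
best = genetic_minimize(lambda t: (t - 0.6) ** 2, bounds=(0.0, 1.0))
```

In an intrusion-detection setting, the fitness would instead score a network configuration (weights, thresholds, hyperparameters) by its detection rate on held-out traffic.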
10. Harris Hawks Sparse Auto-Encoder Networks for Automatic Speech Recognition System. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031091]
Abstract
Automatic speech recognition (ASR) is an effective technique that can convert human speech into text format or computer actions. ASR systems are widely used in smart appliances, smart homes, and biometric systems. Signal processing and machine learning techniques are incorporated to recognize speech. However, traditional systems perform poorly in noisy environments. In addition, accents and local differences negatively affect an ASR system's performance when analyzing speech signals. To overcome these issues, a precise speech recognition system was developed to improve system performance. This paper uses speech information from the jim-schwoebel voice datasets, processed by Mel-frequency cepstral coefficients (MFCCs). The MFCC algorithm extracts the valuable features that are used to recognize speech. Here, a sparse auto-encoder (SAE) neural network is used for classification, and a hidden Markov model (HMM) is used to make the speech recognition decision. The network performance is optimized by applying the Harris Hawks optimization (HHO) algorithm to fine-tune the network parameters. The fine-tuned network can effectively recognize speech in a noisy environment.
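The HMM decision step mentioned above amounts to Viterbi decoding; a minimal sketch with a made-up two-state silence/speech model (states, probabilities, and observations are illustrative, not the paper's):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence,
    by dynamic programming over (probability, predecessor) pairs."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[-2][p][0] * trans_p[p][s])
            V[-1][s] = (V[-2][best_prev][0] * trans_p[best_prev][s] * emit_p[s][o],
                        best_prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for layer in reversed(V[1:]):
        path.append(layer[path[-1]][1])
    return list(reversed(path))

states = ("sil", "speech")
start = {"sil": 0.8, "speech": 0.2}
trans = {"sil": {"sil": 0.7, "speech": 0.3},
         "speech": {"sil": 0.2, "speech": 0.8}}
emit = {"sil": {"lo": 0.9, "hi": 0.1},
        "speech": {"lo": 0.2, "hi": 0.8}}
decoded = viterbi(["lo", "lo", "hi", "hi"], states, start, trans, emit)
```

In a real ASR pipeline the observations would be quantized MFCC frames (or classifier posteriors from the SAE) rather than the "lo"/"hi" symbols used here.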