1
Rajendran EG, Mohd Hairi F, Krishna Supramaniam R, T Mohd TAM. Precision public health, the key for future outbreak management: A scoping review. Digit Health 2024; 10:20552076241256877. PMID: 39139190; PMCID: PMC11320687; DOI: 10.1177/20552076241256877.
Abstract
Background Precision Public Health (PPH) is a newly emerging field in public health medicine. The application of various types of data allows PPH to deliver more tailored interventions to a specific population within a specific timeframe. However, the application of PPH poses several challenges and limitations that need to be addressed. Objective We aim to provide evidence of the various uses of PPH in outbreak management, the types of data that could be used in PPH applications, and the limitations and barriers to the PPH approach. Methods and analysis Articles were searched in PubMed, Web of Science, and Science Direct. Articles were selected according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews guidelines. The evidence assessment was presented in narrative rather than quantitative form. Results A total of 27 articles were included in the scoping review. Most of the articles (74.1%) focused on PPH applications for disease surveillance and signal detection. The data types most often used in the studies were surveillance data (51.9%), environmental data (44.4%), and Internet query data. Most of the articles identified data quality and availability (81.5%) as the main barrier to PPH applications, followed by data integration and interoperability (29.6%). Conclusions PPH applications in outbreak management utilize a wide range of data sources and analytical techniques to enhance disease surveillance, investigation, modeling, and prediction. By leveraging these tools and approaches, PPH contributes to more effective and efficient outbreak management, ultimately reducing the burden of infectious diseases on populations. The limitations and challenges of applying PPH approaches in outbreak management emphasize the need to strengthen surveillance systems, promote data sharing and collaboration among relevant stakeholders, and standardize data collection methods while upholding privacy and ethical principles.
Affiliation(s)
- Ellappa Ghanthan Rajendran
- Department of Social and Preventive Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Farizah Mohd Hairi
- Department of Social and Preventive Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Rama Krishna Supramaniam
- Department of Social and Preventive Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
2
Okeibunor JC, Jaca A, Iwu-Jaja CJ, Idemili-Aronu N, Ba H, Zantsi ZP, Ndlambe AM, Mavundza E, Muneene D, Wiysonge CS, Makubalo L. The use of artificial intelligence for delivery of essential health services across WHO regions: a scoping review. Front Public Health 2023; 11:1102185. PMID: 37469694; PMCID: PMC10352788; DOI: 10.3389/fpubh.2023.1102185.
Abstract
Background Artificial intelligence (AI) is a broad branch of computer science aimed at constructing machines capable of simulating and performing tasks usually done by human beings. The aim of this scoping review is to map existing evidence on the use of AI in the delivery of medical care. Methods We searched PubMed and Scopus in March 2022, screened identified records for eligibility, assessed full texts of potentially eligible publications, and extracted data from included studies in duplicate, resolving differences through discussion, arbitration, and consensus. We then conducted a narrative synthesis of the extracted data. Results Several AI methods have been used to detect, diagnose, classify, manage, treat, and monitor the prognosis of various health issues. These AI models have been applied to various health conditions, including communicable diseases, non-communicable diseases, and mental health. Conclusions Presently available evidence shows that AI models, predominantly deep learning and machine learning, can significantly advance medical care delivery in the detection, diagnosis, management, and prognosis monitoring of different illnesses.
Affiliation(s)
- Anelisa Jaca
- Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa
- Ngozi Idemili-Aronu
- Department of Sociology/Anthropology, University of Nigeria, Nsukka, Nigeria
- Housseynou Ba
- World Health Organization Regional Office for Africa, Brazzaville, Republic of Congo
- Zukiswa Pamela Zantsi
- Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa
- Asiphe Mavis Ndlambe
- Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa
- Edison Mavundza
- World Health Organization Regional Office for Africa, Brazzaville, Republic of Congo
- Charles Shey Wiysonge
- Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa
- HIV and Other Infectious Diseases Research Unit, South African Medical Research Council, Durban, South Africa
- Lindiwe Makubalo
- World Health Organization Regional Office for Africa, Brazzaville, Republic of Congo
3
Tenali N, Babu GRM. HQDCNet: Hybrid Quantum Dilated Convolution Neural Network for detecting COVID-19 in the context of Big Data Analytics. Multimedia Tools and Applications 2023; 83:1-27. PMID: 37362720; PMCID: PMC10176300; DOI: 10.1007/s11042-023-15515-6.
Abstract
Medical care services are changing to address problems as big data frameworks develop and big data analytics becomes widespread. COVID-19 has recently been one of the leading causes of death. Diagnostic tools based on input chest X-ray images have since been enhanced for diagnosing the illness, and breakthroughs in big data technology offer a promising option for curbing this contagious disease. To increase a model's confidence, a large number of training samples must be integrated, but handling such data can be difficult. With the development of big data technology, this research presents a unique method to identify and classify COVID-19 cases. To manage the incoming big data, a massive volume of chest X-ray images is gathered and analyzed using a distributed computing server built on the Hadoop framework. A fuzzy-empowered weighted k-means algorithm is then employed to group similar regions in the input X-ray images, which in turn segments the dominant portions of an image. A hybrid quantum dilated convolution neural network is proposed to classify the various kinds of COVID-19 cases, and a Black Widow-based Moth Flame optimization is applied to improve the performance of the classifier. The performance analysis of COVID-19 detection uses the COVID-19 radiography dataset. The proposed HQDCNet approach achieves an accuracy of 99.01%. The experimental results are evaluated in Python using performance metrics such as accuracy, precision, recall, F-measure, and loss.
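The "fuzzy empowered weighted k-means" grouping step can be illustrated with a standard fuzzy c-means update, in which every pixel feature vector receives a graded, weighted membership in each cluster. The sketch below is a minimal numpy version of that general technique, not the authors' exact algorithm; the function name and the explicit-initialization interface are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, centers_init, m=2.0, iters=50):
    """Soft clustering: each sample gets a normalized membership in every cluster."""
    centers = centers_init.astype(float).copy()
    for _ in range(iters):
        # Distances from every sample to every center, shape (n_samples, n_clusters)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: nearer centers get higher (normalized) weight
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Center update: fuzzily weighted mean of all samples
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, u
```

In an image-segmentation setting, X would hold per-pixel intensities or small feature vectors, and the membership matrix u drives the grouping of similar regions.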
Affiliation(s)
- Nagamani Tenali
- Department of CSE, Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Guntur, Nagarjuna Nagar, India
- Gatram Rama Mohan Babu
- Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Guntur, Chowdavaram, India
4
Tenali N, Babu GRM. A Systematic Literature Review and Future Perspectives for Handling Big Data Analytics in COVID-19 Diagnosis. New Generation Computing 2023; 41:243-280. PMID: 37229177; PMCID: PMC10019802; DOI: 10.1007/s00354-023-00211-8.
Abstract
In today's digital world, information is growing along with the expansion of Internet usage worldwide. As a consequence, bulk data is generated constantly, known as "Big Data". Big Data analytics, one of the fastest-evolving technologies of the twenty-first century, is a promising field for extracting knowledge from very large datasets while enhancing benefits and lowering costs. Owing to the enormous success of big data analytics, the healthcare sector is increasingly adopting these approaches to diagnose diseases. With the recent boom in medical big data and the development of computational methods, researchers and practitioners can now mine and visualize medical big data on a larger scale. The integration of big data analytics in healthcare makes precise medical data analysis feasible, with early sickness detection, health status monitoring, patient treatment, and community services now achievable. Against this backdrop, this comprehensive review considers the deadly disease COVID-19 with the intention of offering remedies utilizing big data analytics. Big data applications are vital to managing pandemic conditions, such as predicting outbreaks of COVID-19 and identifying cases and patterns of spread. Research on leveraging big data analytics to forecast COVID-19 is still ongoing, but precise and early identification of the disease remains lacking due to the volume of medical records spread across dissimilar medical imaging modalities. Meanwhile, digital imaging has become essential to COVID-19 diagnosis, but the main challenge is the storage of massive volumes of data. Taking these limitations into account, this systematic literature review (SLR) presents a comprehensive analysis to provide a deeper understanding of big data in the field of COVID-19.
Affiliation(s)
- Nagamani Tenali
- Department of CSE, Dr. Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Nagarjuna Nagar, Guntur, India
- Gatram Rama Mohan Babu
- Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Chowdavaram, Guntur, India
5
Ahamed MKU, Islam MM, Uddin MA, Akhter A, Acharjee UK, Paul BK, Moni MA. DTLCx: An Improved ResNet Architecture to Classify Normal and Conventional Pneumonia Cases from COVID-19 Instances with Grad-CAM-Based Superimposed Visualization Utilizing Chest X-ray Images. Diagnostics (Basel) 2023; 13:551. PMID: 36766662; PMCID: PMC9914155; DOI: 10.3390/diagnostics13030551.
Abstract
COVID-19 is a severe, contagious respiratory disease that has now spread all over the world. It has terribly impacted public health, daily life, and the global economy. Although some developed countries have advanced well in detecting and containing the coronavirus, most developing countries have difficulty detecting COVID-19 cases among the mass population. In many countries, there is a scarcity of COVID-19 testing kits and other resources due to the increasing rate of infections. This deficit of testing resources and the increasing number of daily cases encouraged us to develop a deep learning model to aid clinicians and radiologists and to provide timely assistance to patients. In this article, an efficient deep learning-based model to detect COVID-19 cases from a chest X-ray image dataset is proposed and investigated. The proposed model is developed on the ResNet50V2 architecture, whose base is concatenated with six extra layers to make the model more robust and efficient. Finally, Grad-CAM-based discriminative localization is used to readily interpret the detections on the radiological images. Two datasets with the class labels normal, confirmed COVID-19, bacterial pneumonia, and viral pneumonia were gathered from publicly available sources. Our proposed model obtained an overall accuracy of 99.51% for the four-class case (COVID-19/normal/bacterial pneumonia/viral pneumonia) on Dataset-2, 96.52% for the three-class case (normal/COVID-19/bacterial pneumonia), and 99.13% for the two-class case (COVID-19/normal) on Dataset-1. This level of accuracy might motivate radiologists to rapidly detect and diagnose COVID-19 cases.
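The core idea behind ResNet50V2, on which the model above is built, is the residual connection: each block learns a correction F(x) that is added back to its input. The following is a minimal numpy sketch of a fully-connected residual block to make that idea concrete; the two-layer form and weight names are illustrative, not the paper's exact layers.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, b1, W2, b2):
    """y = relu(F(x) + x): the skip path lets information bypass F entirely."""
    h = relu(x @ W1 + b1)   # first transform of the residual branch
    f = h @ W2 + b2         # second transform, same width as x
    return relu(f + x)      # identity shortcut added before the final activation
```

With all residual weights at zero, the block reduces to relu(x), which is part of why stacks of many such blocks remain trainable.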
Affiliation(s)
- Md. Khabir Uddin Ahamed
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Md Manowarul Islam
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Correspondence:
- Md. Ashraf Uddin
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- School of Information Technology, Geelong, Deakin University, Geelong, VIC 3216, Australia
- Arnisha Akhter
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Uzzal Kumar Acharjee
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Bikash Kumar Paul
- Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail 1902, Bangladesh
- Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh
- Mohammad Ali Moni
- Artificial Intelligence & Data Science, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
6
Islam R, Tarique M. Chest X-Ray Images to Differentiate COVID-19 from Pneumonia with Artificial Intelligence Techniques. Int J Biomed Imaging 2022; 2022:5318447. PMID: 36588667; PMCID: PMC9800093; DOI: 10.1155/2022/5318447.
Abstract
This paper presents an automated and noninvasive technique to discriminate COVID-19 patients from pneumonia patients using chest X-ray images and artificial intelligence. The reverse transcription-polymerase chain reaction (RT-PCR) test is commonly administered to detect COVID-19. However, the RT-PCR test requires person-to-person contact to administer, takes a variable amount of time to produce results, and is expensive. Moreover, the test remains unreachable for a significant portion of the global population. Chest X-ray images can play an important role here, as X-ray machines are commonly available at any healthcare facility. However, the chest X-ray images of COVID-19 and viral pneumonia patients are very similar and often lead to subjective misdiagnosis. This investigation employed two algorithms to solve this problem objectively. One algorithm extracts lower-dimensional encoded features from the X-ray images and applies them to machine learning algorithms for final classification. The other algorithm relies on an inbuilt feature extractor network to extract features from the X-ray images and classifies them with the pretrained deep neural network VGG16. The simulation results show that the two proposed algorithms can discriminate COVID-19 patients from pneumonia patients with best accuracies of 100% and 98.1%, employing VGG16 and the machine learning algorithm, respectively. The performance of these two algorithms has also been collated with that of other existing state-of-the-art methods.
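The first algorithm's pipeline, compressing each X-ray into a lower-dimensional encoding and then handing the encoding to a conventional classifier, can be sketched with a PCA-style linear encoder and a nearest-centroid rule. This is an illustrative stand-in under stated assumptions; the paper's actual encoder and machine learning classifiers may differ.

```python
import numpy as np

def fit_encoder(X, k):
    """Learn a k-dimensional linear encoding via SVD (PCA without scaling)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def encode(X, mu, components):
    """Project samples onto the learned low-dimensional directions."""
    return (X - mu) @ components.T

def nearest_centroid_predict(z, centroids, labels):
    """Assign each encoded sample the label of the closest class centroid."""
    d = np.linalg.norm(z[:, None, :] - centroids[None, :, :], axis=2)
    return [labels[i] for i in d.argmin(axis=1)]
```

In practice X would hold flattened or pre-processed X-ray pixel features, and the centroids would be computed from encoded training images of each class.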
Affiliation(s)
- Rumana Islam
- Department of ECE, University of Windsor, ON, Canada N9B 3P4
- Mohammed Tarique
- Department of ECE, University of Science and Technology of Fujairah, UAE
7
Banerjee S, Dong M, Shi W. Spatial-Temporal Synchronous Graph Transformer network (STSGT) for COVID-19 forecasting. Smart Health 2022; 26:100348. PMID: 36277841; PMCID: PMC9577246; DOI: 10.1016/j.smhl.2022.100348.
Abstract
COVID-19 has become a matter of serious concern over the last few years. It has adversely affected numerous people around the globe and has led to the loss of billions of dollars of business capital. In this paper, we propose a novel Spatial-Temporal Synchronous Graph Transformer network (STSGT) to capture the complex spatial and temporal dependencies of COVID-19 time series data and forecast the future status of an evolving pandemic. The layers of STSGT combine the graph convolution network (GCN) with the self-attention mechanism of transformers on a synchronous spatial-temporal graph to capture the dynamically changing pattern of the COVID time series. The spatial-temporal synchronous graph simultaneously captures the spatial and temporal dependencies between the vertices of the graph at a given time-step and subsequent time-steps, which helps capture the heterogeneity in the time series and improve forecasting accuracy. Our extensive experiments on two publicly available real-world COVID-19 time series datasets demonstrate that STSGT significantly outperforms state-of-the-art algorithms designed for spatial-temporal forecasting tasks. Specifically, on average over a 12-day horizon, we observe a potential improvement of 12.19% and 3.42% in Mean Absolute Error (MAE) over the next best algorithm while forecasting the daily infected and death cases, respectively, for the 50 states of the US and Washington, D.C. Additionally, STSGT also outperformed the others when forecasting daily infected cases at the county level, e.g., for all the counties in the State of Michigan. The code and models are publicly available at https://github.com/soumbane/STSGT.
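The transformer component of STSGT rests on scaled dot-product self-attention, in which every vertex (a location at a time-step) re-weights information from all other vertices. Below is a minimal numpy sketch of that mechanism alone, single-head and unmasked; the full model combines it with graph convolutions, so this is illustrative rather than the paper's layer.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Each row of X attends to every row, weighted by query-key similarity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scaled dot products
    return softmax(scores) @ V               # convex mixture of value rows
```

Each output row is a convex combination of the value rows, so identical inputs attend uniformly and are returned unchanged when Wv is the identity.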
Affiliation(s)
- Soumyanil Banerjee
- Department of Computer Science, Wayne State University, 5057 Woodward Ave, Detroit, MI 48202, USA
- Ming Dong
- Department of Computer Science, Wayne State University, 5057 Woodward Ave, Detroit, MI 48202, USA
- Weisong Shi
- Department of Computer Science, Wayne State University, 5057 Woodward Ave, Detroit, MI 48202, USA
8
Utility of an Automated Artificial Intelligence Echocardiography Software in Risk Stratification of Hospitalized COVID-19 Patients. Life (Basel) 2022; 12:1413. PMID: 36143448; PMCID: PMC9501328; DOI: 10.3390/life12091413.
Abstract
Cardiovascular risk factors, biomarkers, and diseases are associated with poor prognosis in COVID-19 infections, and significant progress has recently been made in artificial intelligence (AI) applied to cardiac imaging. We assessed the utility of the AI analytic software EchoGo in COVID-19 inpatients. Fifty consecutive COVID-19-positive inpatients (age 66 ± 13 years, 22 women) who had echocardiography between 4/17/2020 and 8/5/2020 were analyzed with EchoGo software, with output correlated against standard echocardiography measurements. After adjustment for the APACHE-4 score, associations with clinical outcomes were assessed. Mean EchoGo outputs were left ventricular end-diastolic volume (LVEDV) 121 ± 42 mL, end-systolic volume (LVESV) 53 ± 30 mL, ejection fraction (LVEF) 58 ± 11%, and global longitudinal strain (GLS) −16.1 ± 5.1%. Pearson correlation coefficients (p-value) with standard measurements were 0.810 (<0.001), 0.873 (<0.001), 0.528 (<0.001), and 0.690 (<0.001), respectively. The primary endpoint occurred in 26 (52%) patients. Adjusting for the APACHE-4 score, EchoGo LVEF and LV GLS were associated with the primary endpoint, with odds ratios (95% confidence intervals) of 0.92 (0.85−0.99) and 1.22 (1.03−1.45) per 1% increase, respectively. Automated AI software is a new clinical tool that may assist with patient care. EchoGo LVEF and LV GLS were associated with adverse outcomes in hospitalized COVID-19 patients and can play a role in their risk stratification.
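The reported odds ratios come from logistic regression: for a coefficient β on a predictor, the odds ratio per one-unit (here, 1%) increase is e^β. A quick sketch of that relationship; the numbers in the note below mirror the reported ORs only as an arithmetic check, not a re-analysis.

```python
import math

def odds_ratio_per_unit(beta):
    """Logistic-regression coefficient -> multiplicative change in odds per unit."""
    return math.exp(beta)

def beta_from_or(or_value):
    """Inverse: the log-odds slope implied by a reported odds ratio."""
    return math.log(or_value)
```

An OR of 0.92 per 1% LVEF increase corresponds to β = ln(0.92) ≈ −0.083, i.e., higher ejection fraction lowers the odds of the adverse endpoint, while the GLS OR of 1.22 implies a positive slope.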
9
Real-Time Facemask Detection for Preventing COVID-19 Spread Using Transfer Learning Based Deep Neural Network. Electronics 2022. DOI: 10.3390/electronics11142250.
Abstract
The COVID-19 pandemic disrupted people's livelihoods and hindered global trade and transportation. During the pandemic, the World Health Organization mandated that masks be worn to protect against this deadly virus, and protecting one's face with a mask has become the standard. Many public service providers will encourage clients to wear masks properly for the foreseeable future. On the other hand, manually monitoring individuals in a given location is exhausting. This paper offers a deep learning-based solution for identifying masks worn over faces in public places to minimize coronavirus community transmission. The main contribution of the proposed work is the development of a real-time system for determining whether the person on a webcam is wearing a mask. The ensemble method makes it easier to achieve high accuracy and makes considerable strides toward enhancing detection speed. In addition, the implementation of transfer learning on pretrained models and stringent testing on an objective dataset led to the development of a highly dependable and inexpensive solution. The findings validate the application's potential for use in real-world settings, contributing to the reduction in pandemic transmission. Compared to existing methodologies, the proposed method delivers improved accuracy, specificity, precision, recall, and F-measure performance on three-class outputs. An appropriate balance is kept between the number of parameters required and the time needed to run the various models.
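The ensemble step, combining several fine-tuned detectors into one decision, can be as simple as a majority vote over the per-model labels. A minimal sketch follows; the paper's actual combination rule and label set are not specified here, so the names are illustrative.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by the most ensemble members."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, if two models report "mask" and one reports "no_mask" for the same webcam frame, the ensemble decision is "mask".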
10
Performance Analysis for COVID-19 Diagnosis Using Custom and State-of-the-Art Deep Learning Models. Applied Sciences 2022. DOI: 10.3390/app12136364.
Abstract
The modern scientific world continuously endeavors to battle and devise solutions for newly arising pandemics. One such pandemic, which has turned the world's accustomed routine upside down, is COVID-19: it has devastated the world economy and destroyed around 45 million lives globally. Governments and scientists have been on the front line, striving towards the diagnosis of, and engineering of a vaccination for, the said virus. COVID-19 can be diagnosed using artificial intelligence more accurately than by traditional methods using chest X-rays. This research evaluates the performance of deep learning models for COVID-19 diagnosis using chest X-ray images from a dataset containing the largest number of COVID-19 images ever used in the literature, to the best of the authors' knowledge. The utilized dataset is about 4.25 times the size of the largest COVID-19 chest X-ray image dataset used in the explored literature. Further, a CNN model, named the Custom-Model in this study, was developed for evaluation against, and comparison to, state-of-the-art deep learning models. The intention was not to develop a new high-performing deep learning model, but rather to evaluate the performance of deep learning models on a larger COVID-19 chest X-ray image dataset. Moreover, Xception- and MobileNetV2-based models were also used for evaluation. The evaluation criteria were accuracy, precision, recall, F1 score, ROC curves, AUC, confusion matrix, and macro and weighted averages. Among the deployed models, Xception was the top performer in terms of precision and accuracy, while the MobileNetV2-based model could detect slightly more COVID-19 cases than Xception and showed slightly fewer false negatives, while giving far more false positives than the other models. The custom CNN model also exceeds the MobileNetV2 model in terms of precision. The best accuracy, precision, recall, and F1 score among these three models were 94.2%, 99%, 95%, and 97%, respectively, achieved by the Xception model. Finally, the overall accuracy in the current evaluation was approximately 2% lower than the average accuracy of previous work on multi-class classification, while a very high precision value was observed, which is of high scientific value.
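The headline figures (94.2% accuracy, 99% precision, 95% recall, 97% F1) are all derived from the confusion matrix; a small helper makes the definitions concrete. This is a generic sketch of the standard formulas, not tied to the paper's data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard single-class metrics computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)               # of predicted positives, how many were right
    recall = tp / (tp + fn)                  # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1
```

Macro and weighted averages, also used in the paper, simply average these per-class numbers with equal or support-proportional weights.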
11
Awan MJ, Rahim MSM, Salim N, Rehman A, Garcia-Zapirain B. Automated Knee MR Images Segmentation of Anterior Cruciate Ligament Tears. Sensors 2022; 22:1552. PMID: 35214451; PMCID: PMC8876207; DOI: 10.3390/s22041552.
Abstract
The anterior cruciate ligament (ACL) is one of the main stabilizers of the knee, and ACL injury increases the risk of osteoarthritis. ACL rupture is common in the young athletic population. Accurate segmentation at an early stage can improve the analysis and classification of ACL tears. This study automatically segmented ACL tears from magnetic resonance imaging through deep learning. A knee mask was generated on the original Magnetic Resonance (MR) images to apply a semantic segmentation technique with the convolutional neural network architecture U-Net. The proposed segmentation method achieved accuracy, intersection over union (IoU), dice similarity coefficient (DSC), precision, recall, and F1-score of 98.4%, 99.0%, 99.4%, 99.6%, 99.6%, and 99.6% on 11,451 training images, and 97.7%, 93.8%, 96.8%, 96.5%, 97.3%, and 96.9%, respectively, on 3,817 validation images. The dice loss on the training and test datasets remained at 0.005 and 0.031, respectively. The experimental results show that ACL segmentation on JPEG MRI images with U-Nets achieves accuracy that outperforms human segmentation. The strategy has promising potential applications in medical image analytics for the segmentation of knee ACL tears in MR images.
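The DSC and IoU scores reported above compare a predicted binary mask against the ground-truth mask. A compact numpy version of both overlap measures, using the generic definitions rather than the authors' code:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Overlap of two binary masks: Dice = 2|A∩B|/(|A|+|B|), IoU = |A∩B|/|A∪B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    return dice, inter / union
```

The dice loss tracked during training (0.005 and 0.031 above) is simply 1 − Dice.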
Affiliation(s)
- Mazhar Javed Awan
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Correspondence: (M.J.A.); (B.G.-Z.)
- Mohd Shafry Mohd Rahim
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Naomie Salim
- Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Laboratory, College of Computer and Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Saudi Arabia
12
Threat Analysis and Distributed Denial of Service (DDoS) Attack Recognition in the Internet of Things (IoT). Electronics 2022. DOI: 10.3390/electronics11030494.
Abstract
The Internet of Things (IoT) plays a crucial role in various sectors, such as automobiles, logistics tracking, and the medical field, because it consists of distributed nodes, servers, and software for effective communication. Although this IoT paradigm has suffered from intrusion threats and attacks that cause security and privacy issues, existing intrusion detection techniques fail to maintain reliability against these attacks. Therefore, IoT intrusion threats are analyzed here using a sparse convolutional network to contest the threats and attacks. The network is trained using sets of intrusion data, characteristics, and suspicious activities, which helps identify and track the attacks, mainly Distributed Denial of Service (DDoS) attacks. Along with this, the network is optimized using evolutionary techniques that identify and detect normal, erroneous, and intrusion attempts under different conditions. The sparse network forms complex hypotheses evaluated using neurons, and the obtained event-stream outputs are propagated to further hidden-layer processes. This process minimizes intrusion involvement in IoT data transmission. Effective utilization of the training patterns in the network successfully classifies normal and threat patterns. The effectiveness of the system is then evaluated through experimental results and discussion. Network intrusion detection systems are superior to other types of traditional network defense in providing network security. The research applied an IGA-BP network to combat the growing challenge of Internet security in the big data era, using an autoencoder network model and an improved genetic algorithm to detect intrusions. The system was built in MATLAB and achieves a 98.98% detection rate and 99.29% accuracy with minimal processing complexity, and the performance ratio is 90.26%. A meta-heuristic optimizer could be used in future work to increase the system's ability to forecast attacks.
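Autoencoder-based intrusion detection like the IGA-BP system above works on one principle: a model fitted to normal traffic reconstructs normal samples well and attack samples poorly, so reconstruction error becomes the alarm signal. A linear (SVD-based) stand-in for the trained autoencoder makes the idea concrete; the feature layout and any threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_normal_profile(X_normal, k):
    """Rank-k linear 'autoencoder' of normal traffic: mean + top-k directions."""
    mu = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, mu, components):
    """How badly the normal-traffic model reconstructs a new sample."""
    z = (x - mu) @ components.T   # encode into the normal subspace
    x_hat = mu + z @ components   # decode back
    return float(np.linalg.norm(x - x_hat))
```

Traffic whose error exceeds a threshold calibrated on held-out normal data would be flagged as a possible DDoS attempt.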
13
Harris Hawks Sparse Auto-Encoder Networks for Automatic Speech Recognition System. Applied Sciences 2022. DOI: 10.3390/app12031091.
Abstract
Automatic speech recognition (ASR) is an effective technique that can convert human speech into text or computer actions. ASR systems are widely used in smart appliances, smart homes, and biometric systems. Signal processing and machine learning techniques are incorporated to recognize speech. However, traditional systems perform poorly in noisy environments, and accents and local differences further degrade an ASR system's performance when analyzing speech signals. A precise speech recognition system was developed to overcome these issues. This paper uses speech information from the jim-schwoebel voice datasets processed by Mel-frequency cepstral coefficients (MFCCs). The MFCC algorithm extracts the valuable features that are used to recognize speech. Here, a sparse auto-encoder (SAE) neural network is used for classification, and a hidden Markov model (HMM) is used to decode the recognized speech. The network performance is optimized by applying the Harris Hawks optimization (HHO) algorithm to fine-tune the network parameters. The fine-tuned network can effectively recognize speech in a noisy environment.
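The HMM decoding stage is classically done with the Viterbi algorithm, which finds the most probable hidden-state sequence for a stream of acoustic observations. A pure-Python log-space sketch of that general algorithm follows; the toy states and probabilities in the test are illustrative, not taken from the paper.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path via dynamic programming in log space."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor: maximize previous score + transition + emission
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

In an ASR pipeline the observations would be (quantized) MFCC frames and the hidden states phoneme-like units, with the classifier supplying the emission scores.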
|
14
|
Kufel J, Bargieł K, Koźlik M, Czogalik Ł, Dudek P, Jaworski A, Cebula M, Gruszczyńska K. Application of artificial intelligence in diagnosing COVID-19 disease symptoms on chest X-rays: A systematic review. Int J Med Sci 2022; 19:1743-1752. [PMID: 36313227 PMCID: PMC9608047 DOI: 10.7150/ijms.76515] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 09/07/2022] [Indexed: 11/06/2022] Open
Abstract
This systematic review focuses on using artificial intelligence (AI) to detect COVID-19 infection from X-ray images. Methodology: In January 2022, the authors searched PubMed, Embase and Scopus using specific medical subject headings terms and filters. All articles were independently reviewed by two reviewers, and any conflicts arising from a misunderstanding were resolved by a third independent researcher. After assessing abstracts and article relevance, removing duplicates, and applying the inclusion and exclusion criteria, six studies qualified for this review. Results: The findings of the individual studies differed due to the authors' various approaches. Sensitivity was 72.59%-100%, specificity was 79%-99.9%, precision was 74.74%-98.7%, accuracy was 76.18%-99.81%, and the area under the curve was 95.24%-97.7%. Conclusion: AI computational models used to assess chest X-rays for diagnosing COVID-19 should achieve sufficiently high sensitivity and specificity, and their results and performance should be repeatable to make them dependable for clinicians. Moreover, these additional diagnostic tools should be more affordable and faster than currently available procedures. The performance and calculations of AI-based systems should take clinical data into account.
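The sensitivity, specificity, precision, and accuracy figures the reviewed studies report all derive from the same confusion-matrix counts; a minimal sketch (the counts below are invented, not taken from any reviewed study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall: fraction of true cases detected
    specificity = tn / (tn + fp)          # fraction of non-cases correctly cleared
    precision = tp / (tp + fp)            # fraction of positive calls that are right
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical chest X-ray classifier results on 200 images.
sens, spec, prec, acc = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
print(sens, spec, acc)  # → 0.9 0.95 0.925
```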
Affiliation(s)
- Jakub Kufel
- Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Katarzyna Bargieł
- Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-752 Katowice, Poland
- Maciej Koźlik
- Division of Cardiology and Structural Heart Disease, Medical University of Silesia, 40-635 Katowice, Poland
- Łukasz Czogalik
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Piotr Dudek
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Aleksander Jaworski
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Maciej Cebula
- Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-754 Katowice, Poland
- Katarzyna Gruszczyńska
- Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-754 Katowice, Poland
|
15
|
Abstract
The COVID-19 pandemic has frightened people worldwide, and "coronavirus" has become one of the most commonly used phrases in recent years. There is therefore a need for a systematic literature review (SLR) of Big Data applications in the COVID-19 pandemic crisis. The objective is to highlight recent technological advancements. Many studies address the COVID-19 pandemic crisis; our study categorizes the many applications used to manage and control the pandemic. Very few SLRs examine COVID-19 from a Big Data perspective. Our SLR drew on five databases: ScienceDirect, IEEE Xplore, Springer, ACM, and MDPI. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations, 893 studies published from 2019 through September 2021 were identified before screening. After screening, with COVID-19 data statistics and Big Data analysis used as the search string, 60 studies met the inclusion criteria. Our findings address Big Data applications in COVID-19 healthcare, including risk diagnosis, estimation and prevention, decision making, and drug-related problems. We believe this review will motivate the research community to perform expandable and transparent research against the COVID-19 pandemic crisis.
|
16
|
Blockchain-Based IoT Devices in Supply Chain Management: A Systematic Literature Review. SUSTAINABILITY 2021. [DOI: 10.3390/su132413646] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
Through recent progress, modern supply chains have evolved into complex networks. Supply chain management systems face a variety of challenges, including lack of visibility between the upstream party (provider) and the downstream party (client); lack of flexibility in the face of sudden variations in demand and in controlling operating costs; lack of trust among safety stakeholders; and ineffective management of supply chain risks. Blockchain (BC) is used in the supply chain to cope with the growing demand for items. The Internet of Things (IoT) is a highly promising technology that can help companies observe, track, and monitor products, activities, and processes within their respective value chain networks. Research institutions and scientific groups are continually working to apply IoT devices to supply chain management. This paper presents a systematic review of blockchain-based IoT technologies and their current usage. We discuss the smart devices used in such systems and which devices are most appropriate in the supply chain. The paper also considers future research themes in blockchain-based IoT supply chain management systems. The systematic literature review was compiled by surveying research articles published in highly reputable venues between 2016 and 2021. Lastly, current issues and challenges are presented to provide researchers with promising future directions in IoT supply chain management systems.
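The tamper evidence that makes blockchain attractive for supply chain tracking comes from hash-linking each record to its predecessor. A minimal sketch follows; the event strings and field names are illustrative only, not any production ledger format:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, event):
    # Each supply-chain event links to the previous block's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "event": event, "prev_hash": prev})
    return chain

def verify(chain):
    # Tampering with any block breaks every later hash link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "manufactured at plant A")
add_block(chain, "scanned by IoT gateway at warehouse B")
add_block(chain, "delivered to client C")
print(verify(chain))          # True: intact chain
chain[1]["event"] = "forged"  # simulate tampering
print(verify(chain))          # False: hash links no longer match
```

In the reviewed systems, IoT devices would be the parties appending events, with consensus and signatures layered on top of this basic structure.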
|
17
|
Awan MJ, Rahim MSM, Salim N, Rehman A, Nobanee H, Shabir H. Improved Deep Convolutional Neural Network to Classify Osteoarthritis from Anterior Cruciate Ligament Tear Using Magnetic Resonance Imaging. J Pers Med 2021; 11:jpm11111163. [PMID: 34834515 PMCID: PMC8617867 DOI: 10.3390/jpm11111163] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/01/2021] [Accepted: 11/03/2021] [Indexed: 12/14/2022] Open
Abstract
An anterior cruciate ligament (ACL) tear is a partial or complete rupture of the ACL in the knee, occurring especially in athletes. There is a need to classify an ACL tear before it fully ruptures in order to avoid osteoarthritis. This research aims to identify ACL tears automatically and efficiently with a deep learning approach. A dataset of 917 knee magnetic resonance images (MRI) was gathered from Clinical Hospital Centre Rijeka, Croatia, comprising three classes: non-injured, partial tear, and fully ruptured knee MRI. The study compares and evaluates two variants of convolutional neural networks (CNN): a standard five-layer CNN model and a customized eleven-layer CNN model. Eight different hyper-parameters were adjusted and tested on both variants. Our customized CNN model showed good results after a 25% random split using RMSprop and a learning rate of 0.001. Measured by accuracy, precision, sensitivity, specificity, and F1-score, the standard CNN with the Adam optimizer and a learning rate of 0.001 achieved 96.3%, 95%, 96%, 96.9%, and 95.6%, respectively; the customized CNN model with an RMSprop optimizer and the same learning rate achieved 98.6%, 98%, 98%, 98.5%, and 98%, respectively. We also report results for the receiver operating characteristic curve and area under the curve (ROC AUC): the customized CNN model with the Adam optimizer and a learning rate of 0.001 achieved 0.99 over the three classes, the highest among all models. The model showed good results overall, and in the future it can be extended with other CNN architectures to detect and segment other knee structures such as the meniscus and cartilage.
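The ROC AUC reported in this abstract has a direct probabilistic reading: it is the chance that a randomly chosen positive example outscores a randomly chosen negative one. A minimal sketch with invented scores (not the study's model outputs):

```python
def roc_auc(scores_pos, scores_neg):
    # Probability that a randomly chosen positive case scores higher than a
    # randomly chosen negative case (ties count half) -- equivalent to the
    # area under the ROC curve.
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores: fully-ruptured vs non-injured MRI examples.
print(roc_auc([0.9, 0.8, 0.7], [0.4, 0.6, 0.75]))
```

For the three-class setting in the study, this one-vs-rest computation would be repeated per class and averaged.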
Affiliation(s)
- Mazhar Javed Awan
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia; (M.S.M.R.); (N.S.)
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan;
- Correspondence: (M.J.A.); (H.N.)
- Mohd Shafry Mohd Rahim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia; (M.S.M.R.); (N.S.)
- Naomie Salim
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Skudai 81310, Malaysia; (M.S.M.R.); (N.S.)
- Amjad Rehman
- Artificial Intelligence and Data Analytics Research Laboratory, CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia;
- Haitham Nobanee
- College of Business, Abu Dhabi University, P.O. Box 59911, Abu Dhabi 59911, United Arab Emirates
- Oxford Centre for Islamic Studies, University of Oxford, Oxford OX1 2J, UK
- School of Histories, Languages and Cultures, The University of Liverpool, Liverpool L69 3BX, UK
- Correspondence: (M.J.A.); (H.N.)
- Hassan Shabir
- Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan;
|
18
|
Image-Based Malware Classification Using VGG19 Network and Spatial Convolutional Attention. ELECTRONICS 2021. [DOI: 10.3390/electronics10192444] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
In recent years, the amount of malware spreading through the internet and infecting computers and other communication devices has increased tremendously. To date, countless techniques and methodologies have been proposed to detect and neutralize these malicious agents. However, as new and automated malware generation techniques emerge, a lot of malware continues to be produced that can bypass some state-of-the-art detection methods. There is therefore a need to classify and detect these adversarial agents, which can compromise the security of people, organizations, and countless other forms of digital assets. In this paper, we propose a spatial attention and convolutional neural network (SACNN), based on a deep learning framework, for image-based classification of 25 well-known malware families with and without class balancing. Performance was evaluated on the Malimg benchmark dataset using precision, recall, specificity, accuracy, and F1 score, on which our proposed model with class balancing reached 97.42%, 97.95%, 97.33%, 97.11%, and 97.32%, respectively. Experiments on SACNN with class balancing that included the benign class also produced scores above 97%. The results indicate that our proposed model can be used for image-based malware detection with high performance, despite being simpler than other available solutions.
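The usual preprocessing behind image-based malware classification on Malimg-style data is to reinterpret a binary's raw bytes as grayscale pixels. A minimal sketch (the row width and input bytes below are arbitrary; real pipelines pick the width from the file size):

```python
def bytes_to_image(data, width=16):
    # Treat each byte as an 8-bit grayscale pixel and wrap the byte stream
    # into rows of fixed width, zero-padding the last row.
    padded = data + bytes(-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

# Toy "binary": 40 sequential byte values standing in for a malware sample.
image = bytes_to_image(bytes(range(40)), width=16)
print(len(image), len(image[0]))  # → 3 16
```

The resulting 2-D array is what gets fed to a CNN such as the VGG19-based SACNN model, letting texture patterns in the byte layout serve as family signatures.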
|
19
|
Abstract
Suicide bomb attacks are a high-priority concern for every country in the world today. They are a massively destructive form of the criminal activity known as terrorism, in which a person detonates a bomb attached to himself or herself, usually in a public place, taking many lives. Terrorist activity in different regions of the world varies according to geopolitical situations and significant regional factors. No significant prior work has utilized the Pakistani suicide attack dataset, and no data mining-based solutions have been offered for suicide attacks. This paper aims to contribute to the counterterrorism initiative against suicide bomb attacks by extracting hidden patterns from suicide bombing attack data. To analyze the psychology of suicide bombers, find correlations among suicide attacks, and predict the next possible venue for terrorist activity, visualization analysis is performed and the data mining techniques of classification, clustering, and association rule mining are applied. For classification, the Naïve Bayes, ID3 and J48 algorithms are applied to distinctively selected attributes; the three algorithms achieve high accuracies of 73.2%, 73.8% and 75.4%, respectively. We adapt the K-means algorithm to perform clustering and consequently identify the risk of blast intensity at a particular location. Frequent patterns are also obtained through the Apriori algorithm for association rule mining, to extract the factors involved in suicide attacks.
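The Apriori frequent-pattern step this abstract mentions can be sketched compactly. The transactions below are invented attack-attribute sets for illustration, not records from the actual dataset:

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Frequent itemsets, level by level: candidates for size k are unions of
    # frequent (k-1)-sets (a relaxed join step); support counting prunes the rest.
    items = {i for t in transactions for i in t}
    frequent = {}
    k, candidates = 1, [frozenset([i]) for i in sorted(items)]
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        level = {c: n for c, n in counts.items()
                 if n / len(transactions) >= min_support}
        frequent.update(level)
        k += 1
        candidates = list({a | b for a, b in combinations(level, 2)
                           if len(a | b) == k})
    return frequent

# Invented attribute sets, one per hypothetical incident record.
transactions = [
    {"urban", "market", "daytime"},
    {"urban", "market", "night"},
    {"urban", "checkpoint", "daytime"},
    {"rural", "market", "daytime"},
]
freq = apriori(transactions, min_support=0.5)
print(sorted(tuple(sorted(s)) for s in freq))
```

Association rules are then read off the frequent sets, e.g. how often "market" incidents coincide with "daytime", which is the kind of factor extraction the paper describes.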
|
20
|
Abstract
Distributed Denial of Service (DDoS) attacks have become rampant and appear in various shapes and patterns, so they are not easy to detect and counter with previous solutions. Classification algorithms have been used in many studies aiming to detect and mitigate DDoS attacks, which are performed easily by exploiting network weaknesses and by generating floods of service requests against software. DDoS attacks are difficult to detect and mitigate in real time, but doing so holds significant value because these attacks can cause major damage. This paper addresses real-time prediction of application-layer DDoS attacks with different machine learning models. We applied two machine learning approaches, Random Forest (RF) and Multi-Layer Perceptron (MLP), through the Scikit ML library and the big data framework Spark ML library for the detection of Denial of Service (DoS) attacks. In addition to detecting DoS attacks, we optimized model performance by minimizing prediction time compared with other existing approaches using the big data framework (Spark ML). We achieved a mean model accuracy of 99.5% both with and without the big data approach. However, in training and testing time, the big data approach outperforms the non-big data approach because Spark performs its computations in memory in a distributed manner. The minimum average training and testing times, 14.08 and 0.04 minutes respectively, were achieved with the big data tool (Apache Spark), while the maximum, 34.11 and 0.46 minutes respectively, occurred with the non-big data approach. With the big data approach, an attack can be detected in real time within a few milliseconds.
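As a contrast to the learned RF/MLP classifiers the paper uses, the simplest view of application-layer DoS detection is a per-source request-rate check over a sliding window. The class name, thresholds, and traffic below are all invented for illustration:

```python
from collections import defaultdict, deque

class RateDetector:
    """Flag a source when its request rate in a sliding time window
    exceeds a limit -- a minimal stand-in for learned classifiers."""

    def __init__(self, window_seconds=1.0, max_requests=100):
        self.window = window_seconds
        self.limit = max_requests
        self.history = defaultdict(deque)

    def observe(self, source, timestamp):
        q = self.history[source]
        q.append(timestamp)
        # Drop events that fell out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit  # True -> likely DoS traffic

det = RateDetector(window_seconds=1.0, max_requests=5)
# Hypothetical burst: 10 requests from one source within 0.09 seconds.
flags = [det.observe("10.0.0.1", t / 100) for t in range(10)]
print(flags[-1])  # → True
```

Real detectors like the paper's replace the single threshold with features (packet sizes, header fields, inter-arrival times) scored by a trained model, which is what makes polymorphic attack patterns catchable.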
|