1. Developments in empowering and supporting women's role in scientific research in the United Arab Emirates. J Adv Nurs 2024; 80:4-7. PMID: 37309056. DOI: 10.1111/jan.15731.
2. Simulation of electricity consumption data using multiple artificial intelligence models and cross validation techniques. Data Brief 2023; 51:109718. PMID: 38020440. PMCID: PMC10661651. DOI: 10.1016/j.dib.2023.109718.

Abstract
Worldwide, electricity production exceeds consumption, which wastes financial and energy resources. Machine learning models can be used to predict future consumption and avoid these significant losses. This paper presents data on monthly electricity consumption at the community level from May 2017 to December 2019 in Dubai, United Arab Emirates. The data were acquired from Dubai Pulse, an online repository containing consumption data from the Dubai Electricity and Water Authority, which provides utility services to the Emirate. Additional parameters, such as population and number of buildings, were obtained from the Dubai Statistics Center, and temperature readings were obtained from Dubai International Airport. Derived features, such as expatriate ratio, number of customers, and building occupancy, were computed from the available data to build a dataset suitable for accurate prediction. Several linear regression variants, support vector machines, decision tree models, ensemble models, and neural networks were implemented to forecast electricity consumption. The models were trained on two versions of the same dataset: one sorted with respect to time (the temporally ordered dataset) and one divided at random (the randomly split dataset). The models' dependence on the amount of data was assessed by varying the size of the test set. In addition, two cross-validation (CV) procedures, the rolling CV method and the moving CV method, were applied to assess the reliability of the models. All analyses were evaluated using several performance metrics: root mean squared error, coefficient of determination (R2), 10-fold CV score, mean absolute error, median absolute error, and computational time. Furthermore, the data could be used to analyze the effect of COVID-19 prevention measures in Dubai on electricity usage, as well as to evaluate consumption patterns at the consumer level.
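The rolling and moving cross-validation schemes named in the abstract can be sketched as split-index generators. This is an illustrative sketch in plain Python, not the paper's code; the function names and parameters are our own:

```python
# Rolling CV: expanding window, each fold trains on all data up to the
# split point. Moving CV: a fixed-size window slides forward, dropping
# older observations. Both respect temporal order, unlike a random split.

def rolling_cv_splits(n_samples, initial_train, horizon=1):
    """Expanding-window splits: (train_indices, test_indices) pairs."""
    splits = []
    start = initial_train
    while start + horizon <= n_samples:
        train_idx = list(range(0, start))
        test_idx = list(range(start, start + horizon))
        splits.append((train_idx, test_idx))
        start += horizon
    return splits

def moving_cv_splits(n_samples, window, horizon=1):
    """Fixed-window splits: the training window has constant size."""
    splits = []
    start = window
    while start + horizon <= n_samples:
        train_idx = list(range(start - window, start))
        test_idx = list(range(start, start + horizon))
        splits.append((train_idx, test_idx))
        start += horizon
    return splits
```

With 10 monthly observations and an initial training window of 6, the rolling scheme yields four folds whose training sets grow from 6 to 9 points, while the moving scheme yields four folds of exactly 6 points each.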
3. Software defect prediction using learning to rank approach. Sci Rep 2023; 13:18885. PMID: 37919406. PMCID: PMC10622444. DOI: 10.1038/s41598-023-45915-5.

Abstract
Software defect prediction (SDP) plays a significant role in detecting the software modules most likely to be defective and in optimizing the allocation of testing resources. In practice, project managers must not only identify defective modules but also rank them in a specific order to optimize resource allocation and minimize testing costs, especially for projects with limited budgets. This task can be accomplished with the Learning to Rank (LTR) approach, a machine learning methodology that combines prediction and learning. Although LTR is most commonly used in information retrieval, it has also proven effective for other problems, such as SDP, where it is mainly used to predict and rank the most likely buggy modules by their bug count or bug density. This paper conducts a comprehensive comparative study of eight selected LTR models using two target variables: bug count and bug density. It also studies the effect of imbalance learning and feature selection on the employed LTR models. The models are evaluated empirically using the Fault Percentile Average (FPA). Our results show that using bug count as the ranking criterion produces higher scores and more stable results across multiple experimental settings. Imbalance learning has a positive impact for bug density, but a negative impact for bug count. Feature selection shows no significant improvement for bug density and no impact for bug count. We therefore conclude that combining feature selection and imbalance learning with LTR does not yield significantly better results.
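The Fault Percentile Average used for evaluation above can be computed as follows; this is a minimal sketch based on the metric's standard definition in the defect-prediction literature, not the paper's implementation, and the variable names are our own:

```python
# FPA rewards rankings that place fault-heavy modules first: it averages,
# over every top-m cut-off, the fraction of all faults captured by the
# top m modules of the predicted ranking.

def fault_percentile_average(ranked_faults):
    """ranked_faults: actual fault counts of modules, listed in predicted
    rank order (most defect-prone first). Returns a value in [0, 1]."""
    k = len(ranked_faults)
    total = sum(ranked_faults)
    if total == 0:
        return 0.0
    cumulative, acc = 0, 0.0
    for faults in ranked_faults:
        cumulative += faults
        acc += cumulative / total  # fraction of faults in the top-m modules
    return acc / k
```

For four modules with actual fault counts 5, 3, 2, 0, a perfect ranking scores (0.5 + 0.8 + 1.0 + 1.0) / 4 = 0.825, while the reversed ranking scores only 0.425.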
4. Wearable Devices and Explainable Unsupervised Learning for COVID-19 Detection and Monitoring. Diagnostics (Basel) 2023; 13:3071. PMID: 37835814. PMCID: PMC10572947. DOI: 10.3390/diagnostics13193071.

Abstract
Despite declining COVID-19 cases, global healthcare systems still face significant challenges from ongoing infections, especially among fully vaccinated individuals, including adolescents and young adults (AYA). Cost-effective alternatives based on technologies such as Artificial Intelligence (AI) and wearable devices have emerged for disease screening, diagnosis, and monitoring. However, many AI solutions in this context rely heavily on supervised learning, which poses challenges such as the reliability of human labeling and time-consuming data annotation. In this study, we propose an unsupervised framework that leverages smartwatch data to detect and monitor COVID-19 infection. We use longitudinal data, including heart rate (HR), heart rate variability (HRV), and physical activity measured via step count, collected through continuous monitoring of volunteers. Our goal is to offer effective and affordable solutions for COVID-19 detection and monitoring. The framework builds interpretable clusters of normal and abnormal measurements, facilitating the detection of disease progression, and we enhance result interpretation by leveraging the Davinci GPT-3 language model to gain deeper insight into the underlying data patterns and relationships. Our results demonstrate the effectiveness of unsupervised learning, achieving a Silhouette score of 0.55. Furthermore, validation using supervised learning techniques yields high accuracy (0.884 ± 0.005), precision (0.80 ± 0.112), and recall (0.817 ± 0.037). These promising findings indicate the potential of unsupervised techniques for identifying inflammatory markers, contributing to the development of efficient and reliable COVID-19 detection and monitoring methods. The study shows the capabilities of AI and wearables in the pursuit of low-cost, accessible solutions for health challenges related to inflammatory diseases, opening new avenues for scalable and widely applicable health monitoring.
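The Silhouette score of 0.55 reported above measures how well separated the normal and abnormal clusters are. The study presumably used a library implementation; this pure-Python sketch shows what the metric computes, under the assumption that every cluster has at least two members:

```python
# Silhouette score: for each point, a = mean distance to its own cluster,
# b = mean distance to the nearest other cluster, s = (b - a) / max(a, b).
# The score is the mean of s over all points; values near 1 mean tight,
# well-separated clusters.

def silhouette_score(points, labels):
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same)  # mean intra-cluster distance
        b = min(                   # mean distance to the nearest other cluster
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == c)
            / labels.count(c)
            for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

For two tight, well-separated one-dimensional clusters such as {0, 1} and {10, 11}, the score is close to 0.9.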
5. The Potential of Blockchain Technology in Dental Healthcare: A Literature Review. Sensors (Basel) 2023; 23:3277. PMID: 36991986. PMCID: PMC10052552. DOI: 10.3390/s23063277.

Abstract
Blockchain technology has the potential to bring enhanced privacy, increased security, and interoperable data records to the healthcare industry. It is being implemented in dental care systems to store and share medical information, improve insurance claims, and provide innovative dental data ledgers. Because healthcare is a large and ever-growing sector, blockchain technology would offer it many benefits, and researchers advocate using blockchain and smart contracts to improve dental care delivery. In this research, we concentrate on blockchain-based dental care systems: we examine the current research literature, pinpoint issues with existing dental care systems, and consider how blockchain technology may be used to address them. Finally, we discuss the limitations of the proposed blockchain-based dental care systems, which may be regarded as open issues.
6. Breast cancer detection using artificial intelligence techniques: A systematic literature review. Artif Intell Med 2022; 127:102276. DOI: 10.1016/j.artmed.2022.102276.
7. Wearable Devices, Smartphones, and Interpretable Artificial Intelligence in Combating COVID-19. Sensors (Basel) 2021; 21:8424. PMID: 34960517. PMCID: PMC8709136. DOI: 10.3390/s21248424.

Abstract
Physiological measures, such as heart rate variability (HRV) and beats per minute (BPM), can be powerful health indicators of respiratory infections, and both can be acquired through widely available wrist-worn biometric wearables and smartphones. Successive abnormal changes in these indicators could be an early sign of respiratory infections such as COVID-19. Wearables and smartphones can therefore play a significant role in combating COVID-19 through early detection, supported by contextual data and artificial intelligence (AI) techniques. In this paper, we investigate the role of heart measurements (HRV and BPM) collected from wearables and smartphones in revealing early onsets of the inflammatory response to COVID-19. The AI framework consists of two blocks: an interpretable prediction model that classifies HRV measurements as normal or affected by inflammation, and a recurrent neural network (RNN) that analyzes users' daily status (textual logs in a mobile application). The two classification decisions are combined into a final decision of either "potentially COVID-19 infected" or "no evident signs of infection". We used a publicly available dataset comprising 186 patients with more than 3200 HRV readings and numerous user textual logs. A first evaluation of the approach showed an accuracy of 83.34 ± 1.68%, with precision, recall, and F1-score of 0.91, 0.88, and 0.89, respectively, in predicting infection two days before symptom onset, supported by model interpretation using Local Interpretable Model-agnostic Explanations (LIME).
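The paper interprets its model with LIME; as a much simpler stand-in, the finite-difference sketch below conveys the same idea of local, per-feature attribution around a single instance. The risk-score function and its feature weights are invented purely for illustration:

```python
import math

def sensitivity_attribution(score_fn, x, eps=1e-4):
    """Approximate each feature's local influence on the model score
    by a one-sided finite difference around the instance x."""
    base = score_fn(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        attributions.append((score_fn(perturbed) - base) / eps)
    return attributions

# Hypothetical risk score: a logistic model in which an HRV drop is
# weighted four times more heavily than a resting-BPM rise.
def risk_score(features):
    hrv_drop, bpm_rise = features
    return 1.0 / (1.0 + math.exp(-(2.0 * hrv_drop + 0.5 * bpm_rise)))
```

Applied at a baseline instance, the attribution for the HRV-drop feature comes out four times larger than for the BPM feature, mirroring the weights, which is the kind of local explanation LIME produces (via a fitted linear surrogate rather than finite differences).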
8. Blockchain-Based Authentication in Internet of Vehicles: A Survey. Sensors (Basel) 2021; 21:7927. PMID: 34883933. PMCID: PMC8659854. DOI: 10.3390/s21237927.

Abstract
The Internet of Vehicles (IoV) has emerged as an advancement over traditional Vehicular Ad-hoc Networks (VANETs), aiming at a more efficient intelligent transportation system capable of providing intelligent services and supporting applications for drivers and passengers on the road. For IoV and VANET environments to offer such road services, huge amounts of data are generated and exchanged among the communicating entities wirelessly over open channels, which can attract adversaries and expose the network to several types of security attacks. This survey targets the authentication part of the security system while highlighting the efficiency of blockchains in IoV and VANET environments. First, we provide detailed background on IoV and blockchain, followed by the security requirements, challenges, and possible attacks in vehicular networks. We then review recent blockchain-based authentication schemes in IoV and VANETs in detail, with a comparative study in terms of the techniques used, network models, evaluation tools, and attacks counteracted. Lastly, we discuss future challenges for IoV security that must be addressed in upcoming research.
9. Analytical study on the impact of technology in higher education during the age of COVID-19: Systematic literature review. Educ Inf Technol 2021; 26:6719-6746. PMID: 33814958. PMCID: PMC8008019. DOI: 10.1007/s10639-021-10507-1.

Abstract
The advent of COVID-19 brought the need for social distancing measures, including far-reaching lockdowns in many countries. The lockdowns wreaked havoc on many aspects of daily life, but education was hit particularly hard by this unprecedented situation. The closure of educational institutions brought many changes, including a transition to more technology-based education. This systematic literature review explores the pandemic-driven transition from traditional education, with face-to-face interaction in physical classrooms, to online distance education. It examines how this transition has affected academia and students, and looks at its potential long-term consequences. It also presents suggestions made by the included studies that may help alleviate the negative impact of lockdowns on education and promote a smoother transition to online learning. Supplementary information: The online version contains supplementary material available at 10.1007/s10639-021-10507-1.
10. Artificial intelligence applications in solid waste management: A systematic research review. Waste Manag 2020; 109:231-246. PMID: 32428727. DOI: 10.1016/j.wasman.2020.04.057.

Abstract
Waste management processes typically involve numerous technical, climatic, environmental, demographic, socio-economic, and legislative parameters. Such complex nonlinear processes are challenging to model, predict, and optimize using conventional methods. Recently, artificial intelligence (AI) techniques have gained momentum as alternative computational approaches to solid waste management (SWM) problems, since AI is effective at tackling ill-defined problems, learning from experience, and handling uncertain and incomplete data. Although significant research has been carried out in this domain, very few reviews have assessed the potential of AI for the diverse SWM problems. This systematic literature review compiled 85 research studies, published between 2004 and 2019, analyzing the application of AI in various SWM fields, including forecasting of waste characteristics, waste bin level detection, process parameter prediction, vehicle routing, and SWM planning. The review provides a comprehensive analysis of the AI models and techniques applied in SWM, their application domains and reported performance, and the software platforms used to implement them. The challenges and insights of applying AI techniques in SWM are also discussed.
11. Application of Quality in Use Model to Evaluate the User Experience of Online Banking Software. J Cases Inf Technol 2020. DOI: 10.4018/jcit.2020040103.

Abstract
Open source software (OSS) has become very important with the rapid expansion of the software industry. To determine whether software quality achieves the intended purposes, OSS components need to be assessed just as they are in closed source (conventional) software. Several quality-in-use models have been introduced to evaluate software quality in various fields. The banking sector is one of the most critical sectors because it deals with highly sensitive data, so it requires an accurate and effective assessment of software quality. In this article, two pieces of banking software are compared: one open source and one closed source. A new quality-in-use model, inspired by ISO/IEC 25010, is used to ensure concise results in the comparison. The results show the great potential of OSS, especially in the banking field.
12. Application of Quality in Use Model to Assess the User Experience of Open Source Digital Forensics Tools. Int J Electron Secur Digit Forensics 2020. DOI: 10.1504/ijesdf.2020.10025165.
14. Application of quality in use model to assess the user experience of open source digital forensics tools. Int J Electron Secur Digit Forensics 2020. DOI: 10.1504/ijesdf.2020.103870.
15. Enhancing computing studies in high schools: A systematic literature review & UAE case study. Heliyon 2019; 5:e01235. PMID: 30815605. PMCID: PMC6378336. DOI: 10.1016/j.heliyon.2019.e01235.

Abstract
Open source software (OSS) is increasingly being integrated into educational institutions, and many countries require the use of OSS in government departments. However, little focus has been placed on integrating it into the educational sector in a strategic and productive manner. This paper examines the existing literature on the use of OSS and the enhancements it can provide for computer science studies in high schools in general, and in the UAE more specifically. It also details a survey of 400 high school teachers conducted after teaching them about multiple types of OSS that might enhance their teaching experience. After examining more than 69 research papers and taking the survey findings into account, we drafted a roadmap that any educational institution, especially high schools, can use to strategically integrate OSS into the educational system.
16. Towards early software reliability prediction for computer forensic tools (case study). SpringerPlus 2016; 5:827. PMID: 27386276. PMCID: PMC4917517. DOI: 10.1186/s40064-016-2539-0.

Abstract
Versatility, flexibility, and robustness are essential requirements for software forensic tools, and researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means of analyzing and anticipating the behavior of an advanced component-based system; it is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Essentially, every part of the computer forensic tool is linked to a discrete-time Markov chain; once this is done, a probabilistic analysis by Markov chains can assess the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the reliability level expected by the end user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
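The architecture-based idea described above can be sketched in the style of Cheung's user-oriented Markov reliability model: each component succeeds with some probability, control transfers between components follow a transition matrix, and system reliability is the probability of traversing the chain with every visited component behaving correctly. This is a hedged illustration of that general technique, not the paper's formulation (which adds COSMIC-FFP sizing); all names are our own:

```python
# System reliability via an absorbing Markov chain. With Q[i][j] =
# R_i * P[i][j], the expected-visits matrix is S = (I - Q)^(-1),
# approximated here by the Neumann series I + Q + Q^2 + ..., and the
# system reliability is S[0][n-1] * R_{n-1} (start at component 0,
# absorb after the terminal component n-1 succeeds).

def system_reliability(transition, reliabilities, terms=200):
    """transition[i][j]: probability control moves from component i to j.
    reliabilities[i]: probability component i completes correctly."""
    n = len(reliabilities)
    q = [[reliabilities[i] * transition[i][j] for j in range(n)]
         for i in range(n)]
    s = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in s]  # current power of Q (starts at I)
    for _ in range(terms):
        term = [[sum(term[i][k] * q[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                s[i][j] += term[i][j]
    return s[0][n - 1] * reliabilities[n - 1]
```

For a simple three-component pipeline (control always flows 0 → 1 → 2) with component reliabilities 0.9, 0.95, and 0.99, the system reliability reduces to the product 0.9 × 0.95 × 0.99; with branching or loops in the transition matrix, the same computation weighs each path by its probability.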