1
Salau AO, Beyene MM. Software defined networking based network traffic classification using machine learning techniques. Sci Rep 2024; 14:20060. PMID: 39209938; PMCID: PMC11362285; DOI: 10.1038/s41598-024-70983-6.
Abstract
The classification of network traffic has become increasingly crucial due to the rapid growth in the number of internet users. Conventional approaches, such as identifying traffic based on port numbers and payload inspection, are becoming ineffective due to the dynamic and encrypted nature of modern network traffic. A number of researchers have implemented Software Defined Networking (SDN) based traffic classification using Machine Learning (ML) and Deep Learning (DL) models. However, these studies had various limitations, such as poor detection of encrypted traffic, reliance on payload inspection, poor detection accuracy, and challenges with testing models in both offline and real-time traffic modes. ML models are now adopted together with SDN to enhance classification performance. In this paper, both supervised (Logistic Regression, Decision Tree, Random Forest, AdaBoost, and Support Vector Machine) and unsupervised (K-means clustering) ML models were used to classify Domain Name System (DNS), Telnet, Ping, and Voice traffic flows simulated using the Distributed Internet Traffic Generator (D-ITG) tool, which makes it possible to manage and classify traffic types by application. The study describes the dataset used, model selection, and implementation techniques (pre-processing, feature extraction, ML algorithms, and model evaluation metrics). The proposed SDN model was implemented in Mininet, which was used to design the network architecture and generate network traffic, while an Anaconda Python environment was used for traffic classification with the various ML techniques. Among the models tested, the Decision Tree achieved the highest accuracy of 99.81%, outperforming the other supervised and unsupervised learning algorithms.
These results indicate that integrating ML with SDN provides an efficient method for identifying and accurately classifying both offline and real-time network traffic, with improved quality of service (QoS), detection of encrypted packets, deep packet inspection, and traffic management.
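The pipeline this abstract describes (extract per-flow features, then fit a supervised classifier such as a Decision Tree) can be sketched as below. This is a minimal illustration under invented assumptions, not the paper's implementation: the two flow features (mean packet size, mean inter-arrival time) and the synthetic traffic profiles are placeholders for D-ITG-generated flows.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def make_flows(n_per_class=200, seed=0):
    """Generate toy per-flow features (mean packet size in bytes,
    mean inter-arrival time in ms) for four traffic classes.
    The profile values are invented for illustration."""
    rng = np.random.default_rng(seed)
    profiles = {"dns": (90, 50), "telnet": (60, 5),
                "ping": (64, 1000), "voice": (160, 20)}
    X, y = [], []
    for label, (size, iat) in profiles.items():
        X.append(np.column_stack([rng.normal(size, 5, n_per_class),
                                  rng.normal(iat, 0.05 * iat, n_per_class)]))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

def train_classifier():
    """Fit a Decision Tree on the toy flows; return model and test accuracy."""
    X, y = make_flows()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    return clf, accuracy_score(y_te, clf.predict(X_te))
```

On such cleanly separated synthetic profiles the tree scores near 100%; the paper's 99.81% figure refers to its own D-ITG dataset, not this toy.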
Affiliation(s)
- Ayodeji Olalekan Salau
- Department of Electrical/Electronics and Computer Engineering, Afe Babalola University, Ado-Ekiti, Nigeria.
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India.
- Melesew Mossie Beyene
- Department of Computer Science, Institute of Technology, Debre Markos University, Debre Markos, Ethiopia.
2
Yeslam HE, Freifrau von Maltzahn N, Nassar HM. Revolutionizing CAD/CAM-based restorative dental processes and materials with artificial intelligence: a concise narrative review. PeerJ 2024; 12:e17793. PMID: 39040936; PMCID: PMC11262301; DOI: 10.7717/peerj.17793.
Abstract
Artificial intelligence (AI) is increasingly prevalent in biomedical and industrial development, capturing the interest of dental professionals and patients. Its potential to improve the accuracy and speed of dental procedures is set to revolutionize dental care. The use of AI in computer-aided design/computer-aided manufacturing (CAD/CAM) within the restorative dental and material science fields offers numerous benefits, providing a new dimension to these practices. This study aims to provide a concise overview of the implementation of AI-powered technologies in CAD/CAM restorative dental procedures and materials. To ensure a thorough investigation of the subject, a comprehensive literature search covering 2000 to 2023 was conducted using the keywords "Artificial Intelligence", "Machine Learning", "Neural Networks", "Virtual Reality", "Digital Dentistry", "CAD/CAM", and "Restorative Dentistry". Artificial intelligence has proven highly beneficial in various dental CAD/CAM applications. It helps automate the fabrication of dental restorations while incorporating esthetic factors, occlusal schemes, and previous practitioners' CAD choices. AI can also predict the debonding risk of CAD/CAM restorations and the effects of composition on the mechanical properties of their materials. Continuous enhancements are being made to overcome its limitations and open new possibilities for future developments in this field.
Affiliation(s)
- Hanin E. Yeslam
- Department of Restorative Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Hani M. Nassar
- Department of Restorative Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
3
Muller BP, Olds BE, Wong LJ, Michaels AJ. Transferring Learned Behaviors between Similar and Different Radios. Sensors (Basel) 2024; 24:3574. PMID: 38894364; PMCID: PMC11175177; DOI: 10.3390/s24113574.
Abstract
Transfer learning (TL) techniques have proven useful in a wide variety of applications traditionally dominated by machine learning (ML), such as natural language processing, computer vision, and computer-aided design. Recent extrapolations of TL to the radio frequency (RF) domain are being used to increase the potential applicability of RF machine learning (RFML) algorithms, seeking to improve the portability of models for spectrum situational awareness and transmission source identification. Unlike most computer vision and natural language processing applications of TL, applications within the RF modality must contend with inherent hardware distortions and channel condition variations. This paper evaluates the feasibility and performance trade-offs of transferring learned behaviors from functional RFML classification algorithms, specifically those designed for automatic modulation classification (AMC) and specific emitter identification (SEI), between homogeneous radios of similar construction and quality and heterogeneous radios of different construction and quality. Results derived from both synthetic data and over-the-air experimental collection show promising performance benefits from applying TL to the RFML tasks of SEI and AMC.
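The transfer idea can be sketched with invented data rather than the paper's AMC/SEI models: a classifier trained on a "source radio" is fine-tuned with a small labelled sample from a "target radio" whose hardware distortion is modelled, crudely and purely for illustration, as a feature shift.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def make_iq_features(n, shift, seed):
    """Toy two-class 'modulation' features; `shift` crudely mimics a
    hardware-dependent distortion that differs between radios."""
    rng = np.random.default_rng(seed)
    X0 = rng.normal(0.0 + shift, 1.0, (n, 4))
    X1 = rng.normal(2.0 + shift, 1.0, (n, 4))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def transfer_demo():
    Xs, ys = make_iq_features(500, shift=0.0, seed=1)   # source radio
    Xt, yt = make_iq_features(500, shift=1.0, seed=2)   # target radio
    X_ft, X_ev, y_ft, y_ev = train_test_split(
        Xt, yt, train_size=60, random_state=0, stratify=yt)
    clf = SGDClassifier(random_state=0).fit(Xs, ys)
    base = accuracy_score(y_ev, clf.predict(X_ev))      # no transfer
    for _ in range(20):                                 # fine-tune, don't retrain
        clf.partial_fit(X_ft, y_ft)
    tuned = accuracy_score(y_ev, clf.predict(X_ev))
    return base, tuned
```

The design point mirrors the paper's motivation: fine-tuning reuses the source-trained weights and needs far fewer labelled target examples than training from scratch.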
Affiliation(s)
- Braeden P. Muller
- Virginia Tech National Security Institute, Blacksburg, VA 24060, USA; (B.P.M.); (B.E.O.)
- Brennan E. Olds
- Virginia Tech National Security Institute, Blacksburg, VA 24060, USA; (B.P.M.); (B.E.O.)
- Alan J. Michaels
- Virginia Tech National Security Institute, Blacksburg, VA 24060, USA; (B.P.M.); (B.E.O.)
4
Qamar F, Dobler G. Atmospheric correction of vegetation reflectance with simulation-trained deep learning for ground-based hyperspectral remote sensing. Plant Methods 2023; 19:74. PMID: 37516859; PMCID: PMC10385980; DOI: 10.1186/s13007-023-01046-6.
Abstract
BACKGROUND Vegetation spectral reflectance obtained with hyperspectral imaging (HSI) offers a non-invasive means for the non-destructive study of plant physiological status. The light intensity at visible and near-infrared wavelengths (VNIR, 0.4-1.0 µm) captured by the sensor is composed of a mixture of spectral components that include the vegetation reflectance, atmospheric attenuation, top-of-atmosphere solar irradiance, and sensor artifacts. Common methods for extracting spectral reflectance from at-sensor spectral radiance trade off explicit knowledge of atmospheric conditions and concentrations, computational efficiency, and prediction accuracy, and are generally geared towards nadir-pointing platforms. A method is therefore needed for the accurate extraction of vegetation reflectance from spectral radiance captured by ground-based remote sensors with a side-facing orientation towards the target and no knowledge of the atmospheric parameters. RESULTS We propose a framework for obtaining vegetation spectral reflectance from at-sensor spectral radiance that relies on a time-dependent encoder-decoder convolutional neural network trained and tested on simulated spectra generated by radiative transfer modeling. Simulated at-sensor spectral radiance is produced by combining 1440 unique simulated solar angles and atmospheric absorption profiles with 1000 different vegetation spectral reflectance curves spanning various health indicator values, together with sensor artifacts. An ensemble of 10 models, each trained and tested on a separate 10% of the dataset, predicts the vegetation spectral reflectance with a testing r² of 98.1% (±0.4).
The method performs consistently well, with accuracies >90% for spectra with as few as 40 VNIR channels at a full width at half maximum (FWHM) of 40 nm or more, and remains viable with accuracies >80% down to 10 channels with 60 nm FWHM. When applied to spectral radiance data obtained from a real sensor, the predicted spectral reflectance curves showed general agreement and consistency with those corrected by the Compound Ratio method. CONCLUSIONS We propose a method that allows accurate estimation of vegetation spectral reflectance from ground-based HSI platforms with sufficient spectral resolution. It extracts the vegetation spectral reflectance at high accuracy without knowledge of the exact atmospheric composition and conditions at the time of capture, and without sensor-measured spectral radiance paired with ground-truth spectral reflectance profiles.
Affiliation(s)
- Farid Qamar
- Department of Civil and Environmental Engineering, University of Delaware, Newark, DE, 19716, USA.
- Data Science Institute, University of Delaware, Newark, DE, 19716, USA.
- Biden School of Public Policy and Administration, University of Delaware, Newark, DE, 19716, USA.
- Gregory Dobler
- Data Science Institute, University of Delaware, Newark, DE, 19716, USA.
- Biden School of Public Policy and Administration, University of Delaware, Newark, DE, 19716, USA.
- Department of Physics and Astronomy, University of Delaware, Newark, DE, 19716, USA.
- Center for Urban Science and Progress, New York University, New York, NY, 10003, USA.
5
Al-Alawi L, Al Shaqsi J, Tarhini A, Al-Busaidi AS. Using machine learning to predict factors affecting academic performance: the case of college students on academic probation. Education and Information Technologies 2023:1-26. PMID: 37361752; PMCID: PMC9999331; DOI: 10.1007/s10639-023-11700-0.
Abstract
This study employs supervised machine learning algorithms to examine factors that negatively impacted academic performance among college students on probation (underperforming students). We used the Knowledge Discovery in Databases (KDD) methodology on a sample of N = 6514 college students spanning 11 years (2009 to 2019) provided by a major public university in Oman. We used the Information Gain (InfoGain) algorithm to select the most effective features, and compared accuracy across robust ensemble methods, including LogitBoost, Vote, and Bagging. The algorithms were evaluated using metrics such as accuracy, precision, recall, F-measure, and the ROC curve, and then validated using 10-fold cross-validation. The study revealed that the main factors affecting student academic achievement include study duration at the university and previous performance in secondary school; based on the experimental results, these features were consistently ranked as the top factors that negatively impacted academic performance. The study also indicated that gender, estimated graduation year, cohort, and academic specialization significantly contributed to whether a student was on probation. Domain experts and other students were involved in verifying some of the results. The theoretical and practical implications of this study are discussed.
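The feature-ranking and ensemble-evaluation steps can be sketched as follows. The records, the two informative features, and the label rule are invented stand-ins for the Oman dataset, and scikit-learn's mutual information estimator stands in for InfoGain.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def rank_and_evaluate(seed=0):
    rng = np.random.default_rng(seed)
    n = 600
    # Invented stand-ins for the study's features
    duration = rng.integers(1, 12, n).astype(float)   # years enrolled
    hs_score = rng.uniform(50, 100, n)                # secondary-school score
    noise = rng.normal(size=n)                        # irrelevant feature
    # Probation label driven only by the two informative features
    y = ((duration > 6) & (hs_score < 70)).astype(int)
    X = np.column_stack([duration, hs_score, noise])
    gains = mutual_info_classif(X, y, random_state=seed)  # InfoGain-style ranking
    model = BaggingClassifier(DecisionTreeClassifier(random_state=seed),
                              n_estimators=25, random_state=seed)
    acc = cross_val_score(model, X, y, cv=10).mean()      # 10-fold CV
    return gains, acc
```

The informative features receive the highest estimated gain and the irrelevant one falls to the bottom, mirroring how the study ranked study duration and secondary-school performance at the top.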
Affiliation(s)
- Lamees Al-Alawi
- Department of Information Systems, College of Economics and Political Science, Sultan Qaboos University, P.O. Box 20, PC 123 Muscat, Oman
- Jamil Al Shaqsi
- Department of Information Systems, College of Economics and Political Science, Sultan Qaboos University, P.O. Box 20, PC 123 Muscat, Oman
- Ali Tarhini
- Department of Information Systems, College of Economics and Political Science, Sultan Qaboos University, P.O. Box 20, PC 123 Muscat, Oman
- Adil S. Al-Busaidi
- Department of Information Systems, College of Economics and Political Science, Sultan Qaboos University, P.O. Box 20, PC 123 Muscat, Oman
- Innovation and Technology Transfer Center, Sultan Qaboos University, Muscat, Oman; Department of Business Communication, Sultan Qaboos University, P.O. Box 20, PC 123 Muscat, Oman
6
Zheng Z, Jiang S, Feng R, Ge L, Gu C. Survey of Reinforcement-Learning-Based MAC Protocols for Wireless Ad Hoc Networks with a MAC Reference Model. Entropy (Basel) 2023; 25:101. PMID: 36673242; PMCID: PMC9858361; DOI: 10.3390/e25010101.
Abstract
In this paper, we survey the literature on reinforcement learning (RL)-based medium access control (MAC) protocols. As wireless ad hoc networks (WANETs) grow in scale, traditional MAC solutions are becoming obsolete. Dynamic topology, resource allocation, interference management, limited bandwidth, and energy constraints are crucial problems that must be resolved when designing modern WANET architectures. For future MAC protocols to overcome the current limitations of frequently changing WANETs, more intelligence needs to be deployed to maintain efficient communications. After introducing some classic RL schemes, we investigate the existing state-of-the-art MAC protocols and related solutions for WANETs according to the MAC reference model, and discuss how each proposed protocol works and the challenging issues surrounding the related MAC model components. Finally, we discuss future research directions on how RL can be used to enable high-performance MAC protocols.
7
Cortes-Aguilar TA, Cantoral-Ceballos JA, Tovar-Arriaga A. Link Quality Estimation for Wireless ANDON Towers Based on Deep Learning Models. Sensors (Basel) 2022; 22:6383. PMID: 36080840; PMCID: PMC9460744; DOI: 10.3390/s22176383.
Abstract
Data reliability is of paramount importance for decision-making processes in the industry, and quality links for wireless sensor networks play a vital role in achieving it. Process and machine monitoring can be carried out through ANDON towers with wireless transmission and machine learning algorithms that perform link quality estimation (LQE), saving time and reducing expenses through early failure detection and problem prevention. Indeed, alarm signals used in conjunction with LQE classification models represent a novel paradigm for ANDON towers, allowing low-cost remote sensing within industrial environments. In this research, we propose a deep learning model suitable for implementation in small workshops with limited computational resources. As part of our work, we collected a novel dataset from a realistic experimental scenario with actual industrial machinery, similar to that commonly found in industrial applications. We then carried out extensive data analyses using a variety of machine learning models, each with a methodical hyper-parameter search, achieving results from common features such as payload, distance, power, and bit error rate not previously reported in the state of the art. We achieved an accuracy of 99.3% on the test dataset with very little use of computational resources.
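As a rough stand-in for such an LQE classifier (the paper's deep model and measured dataset are not reproduced here), the sketch below trains a small neural network on synthetic link records built from the four features named in the abstract; the BER model and the good/bad threshold are invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

def lqe_demo(seed=0):
    rng = np.random.default_rng(seed)
    n = 800
    payload = rng.integers(16, 128, n).astype(float)  # bytes per frame
    distance = rng.uniform(1.0, 50.0, n)              # metres
    power = rng.uniform(-20.0, 4.0, n)                # dBm
    # Invented BER model: worse with distance, better with transmit power
    ber = np.clip(0.002 * distance - 0.001 * (power + 20.0)
                  + rng.normal(0.0, 0.005, n), 0.0, None)
    y = (ber < 0.04).astype(int)                      # 1 = good link
    X = np.column_stack([payload, distance, power, ber])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed, stratify=y)
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=1000, random_state=seed))
    model.fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))
```

Feature scaling before the network matters here, since payload, distance, power, and BER live on very different numeric ranges.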
8
Maximizing Channel Capacity of 3D MIMO System via Antenna Downtilt Angle Adaptation Using a Q-learning Algorithm. Electronics 2022. DOI: 10.3390/electronics11081189.
Abstract
3D MIMO introduces a vertical dimension, the antenna downtilt angle, which allows the direction of signal transmission to be steered more accurately and thereby improves system capacity. In this paper, we verify the effect of the antenna downtilt angle on channel capacity through simulations of four fixed angles (90, 96, 99, and 102 degrees), with the distance between the mobile station (MS) and base station (BS) set to 250 m and antenna heights at the BS and MS of 25 m and 1.5 m, respectively. The simulation results show that a downtilt angle of 96 degrees yields a larger channel capacity than the others. In addition, we propose an adaptive optimization method that applies the Q-learning algorithm to the antenna downtilt angle to maximize system capacity. The method's performance is investigated with three discount rates (0.9, 0.5, and 0.1) and four propagation distances on 20 × 1 and 60 × 4 MIMO configurations. We demonstrate that with a discount rate of 0.9 there is only a 1% difference between the adaptively optimized downtilt angle and the ideal optimal one, and the resulting channel capacity reaches more than 99.72% of the ideal optimum.
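The adaptive angle selection can be illustrated with tabular Q-learning over the four candidate angles. This is a single-state simplification under invented assumptions: the capacity reward below is a made-up curve peaking at 96 degrees to echo the fixed-angle simulations, not the paper's channel model.

```python
import random

ANGLES = [90, 96, 99, 102]  # candidate downtilt angles (degrees)

def capacity(angle):
    """Invented stand-in reward: capacity peaks at 96 degrees."""
    return 10.0 - 0.05 * (angle - 96) ** 2

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=42):
    """Epsilon-greedy tabular Q-learning over a single-state action set."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ANGLES}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ANGLES)          # explore
        else:
            a = max(q, key=q.get)           # exploit current best angle
        reward = capacity(a)
        # Q(a) <- Q(a) + alpha * (reward + gamma * max_a' Q(a') - Q(a))
        q[a] += alpha * (reward + gamma * max(q.values()) - q[a])
    return max(q, key=q.get)                # learned best downtilt angle
```

With enough exploration the learned greedy choice converges to the angle with the highest reward, which is the behavior the paper reports for a discount rate of 0.9.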
9
Gupta C, Johri I, Srinivasan K, Hu YC, Qaisar SM, Huang KY. A Systematic Review on Machine Learning and Deep Learning Models for Electronic Information Security in Mobile Networks. Sensors (Basel) 2022; 22:2017. PMID: 35271163; PMCID: PMC8915055; DOI: 10.3390/s22052017.
Abstract
Today’s advancements in wireless communication technologies have resulted in a tremendous volume of data being generated. Most of our information is part of a widespread network that connects various devices across the globe. The capabilities of electronic devices are also increasing day by day, leading to more generation and sharing of information. Similarly, as mobile network topologies become more diverse and complicated, the incidence of security breaches has increased. This has hampered the uptake of smart mobile apps and services, a problem accentuated by the large variety of platforms that provide data, storage, computation, and application services to end-users. In such scenarios, it becomes necessary to protect data and check its use and misuse. According to the research, to deal with such a complicated network, an artificial intelligence-based security model should assure the secrecy, integrity, and authenticity of the system, its equipment, and the protocols that control the network, independent of its generation. The open difficulties that mobile networks still face, such as unauthorised network scanning and fraudulent links, are thoroughly examined. Numerous ML and DL techniques that can be utilised to create a secure environment, as well as various cyber security threats, are discussed. We address the necessity of developing new approaches to provide high security of electronic data in mobile networks, because the possibilities for increasing mobile network security are inexhaustible.
Affiliation(s)
- Chaitanya Gupta
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India; (C.G.); (K.S.)
- Ishita Johri
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India;
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India; (C.G.); (K.S.)
- Yuh-Chung Hu
- Department of Mechanical and Electromechanical Engineering, National ILan University, Yilan 26047, Taiwan;
- Saeed Mian Qaisar
- Electrical and Computer Engineering Department, Effat University, Jeddah 22332, Saudi Arabia;
- Kuo-Yi Huang
- Department of Bio-Industrial Mechatronic Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Correspondence:
10
Random Access Using Deep Reinforcement Learning in Dense Mobile Networks. Sensors (Basel) 2021; 21:3210. PMID: 34063132; PMCID: PMC8124859; DOI: 10.3390/s21093210.
Abstract
5G and Beyond-5G mobile networks use several high-frequency spectrum bands, such as the millimeter-wave (mmWave) bands, to alleviate the problem of bandwidth scarcity. However, high-frequency bands do not cover large distances. The coverage problem is addressed by using a heterogeneous network comprising numerous small cells and macrocells, defined by transmission and reception points (TRxPs). In such a network, random access is a challenging function in which users attempt to select an efficient TRxP within a given time. Ideally, an efficient TRxP is less congested, minimizing users' random access delays. However, owing to the nature of random access, it is not feasible to deploy a centralized controller that estimates the congestion level of each cell and delivers this information back to users during random access. To solve this problem, we formulate an optimization problem and employ a reinforcement-learning-based scheme that estimates the congestion of TRxPs in service and selects the optimal access point. Mathematically, this approach approximates and minimizes a random access delay function. Through simulation, we demonstrate that our proposed deep-learning-based algorithm improves random access performance: the average access delay is improved by 58.89% relative to the original 3GPP algorithm, and the probability of successful access also improves.
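The core idea, learning to prefer less-congested TRxPs from delay feedback alone, can be illustrated with an epsilon-greedy bandit, a drastic simplification of the paper's deep RL scheme; the per-TRxP loads and the delay model below are invented.

```python
import random

def access_delay(load, rng):
    """Invented model: expected random access delay grows with cell load."""
    return load + rng.uniform(0.0, 0.5)

def select_trxp(loads, rounds=3000, eps=0.1, seed=7):
    """Epsilon-greedy estimation of the TRxP with the lowest mean delay."""
    rng = random.Random(seed)
    n = len(loads)
    est = [0.0] * n    # running mean delay estimate per TRxP
    cnt = [0] * n
    for _ in range(rounds):
        if rng.random() < eps or 0 in cnt:
            i = rng.randrange(n)                      # explore
        else:
            i = min(range(n), key=lambda k: est[k])   # exploit
        d = access_delay(loads[i], rng)
        cnt[i] += 1
        est[i] += (d - est[i]) / cnt[i]               # incremental mean
    return min(range(n), key=lambda k: est[k])
```

For example, `select_trxp([0.9, 0.2, 0.6])` identifies index 1, the least-loaded cell, from delay samples alone, without any centralized congestion report.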
11
Vehicular Communications Utility in Road Safety Applications: A Step toward Self-Aware Intelligent Traffic Systems. Symmetry (Basel) 2021. DOI: 10.3390/sym13030438.
Abstract
Wireless technologies hold significant potential for the safety and efficiency of road transport and communications systems. The challenges and requirements imposed by end users and competent institutions demonstrate the need for viable solutions. A common protocol enabling both vehicle-to-vehicle and vehicle-to-road communications is ideal for avoiding collisions and road accidents, all within a vehicular ad hoc network (VANET). Ways of transmitting warning messages simultaneously over vehicle-to-vehicle and vehicle-to-infrastructure links by various multi-hop routings are set out. Approaches to improving communication reliability through low latency are addressed with a multi-channel (MC) technique based on two non-overlapping channels for vehicle-to-vehicle (V2V) and vehicle-to-road (V2R) or road-to-vehicle (R2V) communications. The contributions of this paper offer an opportunity to use common, adaptable communication protocols depending on the context of the situation, coding techniques, and scenarios, with analyses of transfer rates and message reception according to the type of protocol used. Communications between the road infrastructure and users through a relative communication protocol are highlighted and simulated in this manuscript. The results obtained in the proposed and simulated scenarios demonstrate that the approach is complementary and that the common node of the V2V/V2R (R2V) communication protocols substantially improves the message transmission process under low-latency conditions, making it well suited to the development of road safety systems.