1
Plant disease recognition using residual convolutional enlightened Swin transformer networks. Sci Rep 2024; 14:8660. [PMID: 38622177] [PMCID: PMC11018742] [DOI: 10.1038/s41598-024-56393-8] [Received: 09/05/2023] [Accepted: 03/06/2024] [Indexed: 04/17/2024] Open Access
Abstract
Agriculture plays a pivotal role in a nation's economic development, but agricultural growth is hampered by many factors, one of which is plant disease. Early-stage prediction of these diseases plays a crucial role in global health and can be a game changer for farmers' lives. Recently, the adoption of modern technologies such as the Internet of Things (IoT) and deep learning has opened the way to intelligent machines that can predict plant diseases before they become deep-rooted in farmlands. However, precise prediction of plant diseases is a complex job due to the presence of noise, changes in intensity, the close resemblance between healthy and diseased plants, and the varying dimensions of plant leaves. Tackling this problem requires highly accurate, intelligently tuned deep learning algorithms. In this research article, a novel ensemble of Swin transformers and residual convolutional networks is proposed. Swin transformers (ST) are hierarchical structures with linearly scalable computational complexity that offer performance and flexibility at various scales. To extract the best deep key-point features, the Swin transformers and residual networks are combined, followed by feed-forward networks for better prediction. Extensive experimentation is conducted using the PlantVillage Kaggle dataset, and performance metrics including accuracy, precision, recall, specificity, and F1-score are evaluated and analysed. Existing architectures, including FCN-8s, CED-Net, SegNet, DeepLabv3, DenseNets, and CentralNets, are used to demonstrate the superiority of the suggested model. The experimental results show that, in terms of accuracy, precision, recall, and F1-score, the introduced model performs better than the other state-of-the-art hybrid learning models.
2
Adaptive habitat biogeography-based optimizer for optimizing deep CNN hyperparameters in image classification. Heliyon 2024; 10:e28147. [PMID: 38689992] [PMCID: PMC11059399] [DOI: 10.1016/j.heliyon.2024.e28147] [Received: 07/30/2023] [Revised: 03/12/2024] [Accepted: 03/12/2024] [Indexed: 05/02/2024] Open Access
Abstract
Deep Convolutional Neural Networks (DCNNs) have shown remarkable success in image classification tasks, but optimizing their hyperparameters can be challenging due to their complex structure. This paper develops the Adaptive Habitat Biogeography-Based Optimizer (AHBBO) for tuning the hyperparameters of DCNNs in image classification tasks. In complicated optimization problems, the standard biogeography-based optimizer (BBO) suffers from premature convergence and insufficient exploration. To address these problems, an adaptive habitat is introduced that permits variable habitat sizes and regulated mutation. This modification increases exploration and population diversity, yielding better optimization performance and a greater chance of finding high-quality solutions across a wide range of problem domains. AHBBO is tested on 53 benchmark optimization functions and demonstrates its effectiveness in improving initial stochastic solutions and converging faster to the optimum. Furthermore, DCNN-AHBBO is compared to 23 well-known image classifiers on nine challenging image classification problems and shows superior performance, reducing the error rate by up to 5.14%. The proposed algorithm outperforms 13 benchmark classifiers in 87 out of 95 evaluations, providing a high-performance and reliable solution for optimizing DCNNs in image classification tasks. This research contributes to the field of deep learning by proposing a new optimization algorithm that can improve the efficiency of deep neural networks in image classification.
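The abstract does not spell out AHBBO's update equations, but the migration operator of the standard BBO it builds on is simple to sketch. The following minimal numpy illustration (all names and the sphere test function are our own, not from the paper) ranks habitats by fitness and lets poor habitats probabilistically import features from good ones:

```python
import numpy as np

def bbo_migration_step(population, fitness, rng):
    """One migration step of standard biogeography-based optimization (BBO).

    Habitats are ranked by fitness (lower is better); good habitats get high
    emigration rates and low immigration rates, and vice versa.
    """
    n, d = population.shape
    order = np.argsort(fitness)                  # best habitat first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = 1.0 - rank / (n - 1)                    # emigration rate: best -> 1
    lam = 1.0 - mu                               # immigration rate: best -> 0
    new_pop = population.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:            # habitat i accepts a feature
                # roulette-wheel selection of an emigrating habitat by mu
                k = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = population[k, j]
    return new_pop

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(6, 3))
fit = (pop ** 2).sum(axis=1)                     # sphere test function
out = bbo_migration_step(pop, fit, rng)
print(out.shape)                                 # (6, 3)
```

Note that the best habitat has an immigration rate of zero, so it is never altered by migration; AHBBO's adaptive habitat sizes and regulated mutation would be layered on top of this step.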
3
FUZ-SMO: A fuzzy slime mould optimizer for mitigating false alarm rates in the classification of underwater datasets using deep convolutional neural networks. Heliyon 2024; 10:e28681. [PMID: 38586386] [PMCID: PMC10998124] [DOI: 10.1016/j.heliyon.2024.e28681] [Received: 10/31/2023] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 04/09/2024] Open Access
Abstract
Sonar sound datasets are of significant importance in the domains of underwater surveillance and marine research, as they enable experts to discern intricate patterns within the depths of the water. Nevertheless, classifying sonar sound datasets continues to pose significant challenges. In this study, we present a novel approach aimed at enhancing the precision and efficacy of sonar sound dataset classification. Deep long short-term memory (DLSTM) and convolutional neural network (CNN) models are integrated to capitalize on their respective advantages, together with distinctive feature engineering techniques, to achieve the most favorable outcomes. Although DLSTM networks have demonstrated effectiveness in sequence classification tasks, attaining their optimal performance necessitates careful adjustment of hyperparameters. Traditional methods such as grid and random search are effective but frequently suffer from computational inefficiency. This study investigates the unexplored capabilities of the fuzzy slime mould optimizer (FUZ-SMO) for LSTM hyperparameter tuning, with the objective of addressing the existing research gap in this area. Drawing inspiration from the adaptive behavior of slime moulds, FUZ-SMO takes a novel approach to optimization. The amalgamated model, which combines CNN, LSTM, fuzzy logic, and SMO, exhibits a notable improvement in classification accuracy, outperforming conventional LSTM architectures by a margin of 2.142%. The model not only reaches convergence milestones faster but also displays significant resilience against overfitting tendencies.
4
Mesenchymal stem cells for cartilage regeneration: Insights into molecular mechanism and therapeutic strategies. Tissue Cell 2024; 88:102380. [PMID: 38615643] [DOI: 10.1016/j.tice.2024.102380] [Received: 01/11/2024] [Revised: 03/15/2024] [Accepted: 04/09/2024] [Indexed: 04/16/2024]
Abstract
The use of mesenchymal stem cells (MSCs) in cartilage regeneration has gained significant attention in regenerative medicine. This paper reviews the molecular mechanisms underlying MSC-based cartilage regeneration and explores various therapeutic strategies to enhance the efficacy of MSCs in this context. MSCs exhibit multipotent capabilities and can differentiate into various cell lineages under specific microenvironmental cues. Chondrogenic differentiation, a complex process involving signaling pathways, transcription factors, and growth factors, plays a pivotal role in the successful regeneration of cartilage tissue. The chondrogenic differentiation of MSCs is tightly regulated by growth factors and signaling pathways such as TGF-β, BMP, Wnt/β-catenin, RhoA/ROCK, NOTCH, and IHH (Indian hedgehog). Understanding the intricate balance between these pathways is crucial for directing lineage-specific differentiation and preventing undesirable chondrocyte hypertrophy. Additionally, paracrine effects of MSCs, mediated by the secretion of bioactive factors, contribute significantly to immunomodulation, recruitment of endogenous stem cells, and maintenance of chondrocyte phenotype. Pre-treatment strategies utilized to potentiate MSCs, such as hypoxic conditions, low-intensity ultrasound, kartogenin treatment, and gene editing, are also discussed for their potential to enhance MSC survival, differentiation, and paracrine effects. In conclusion, this paper provides a comprehensive overview of the molecular mechanisms involved in MSC-based cartilage regeneration and outlines promising therapeutic strategies. The insights presented contribute to the ongoing efforts in optimizing MSC-based therapies for effective cartilage repair.
5
Multi-objective liver cancer algorithm: A novel algorithm for solving engineering design problems. Heliyon 2024; 10:e26665. [PMID: 38486727] [PMCID: PMC10937593] [DOI: 10.1016/j.heliyon.2024.e26665] [Received: 10/17/2023] [Revised: 02/14/2024] [Accepted: 02/16/2024] [Indexed: 03/17/2024] Open Access
Abstract
This research introduces the Multi-Objective Liver Cancer Algorithm (MOLCA), a novel approach inspired by the growth and proliferation patterns of liver tumors. MOLCA emulates the evolutionary tendencies of liver tumors, leveraging their expansion dynamics as a model for solving multi-objective optimization problems in engineering design. The algorithm uniquely combines genetic operators with the Random Opposition-Based Learning (ROBL) strategy, optimizing both local and global search capabilities. Further enhancement is achieved through the integration of elitist non-dominated sorting (NDS), information feedback mechanism (IFM) and Crowding Distance (CD) selection method, which collectively aim to efficiently identify the Pareto optimal front. The performance of MOLCA is rigorously assessed using a comprehensive set of standard multi-objective test benchmarks, including ZDT, DTLZ and various Constraint (CONSTR, TNK, SRN, BNH, OSY and KITA) and real-world engineering design problems like Brushless DC wheel motor, Safety isolating transformer, Helical spring, Two-bar truss and Welded beam. Its efficacy is benchmarked against prominent algorithms such as the non-dominated sorting grey wolf optimizer (NSGWO), multiobjective multi-verse optimization (MOMVO), non-dominated sorting genetic algorithm (NSGA-II), decomposition-based multiobjective evolutionary algorithm (MOEA/D) and multiobjective marine predator algorithm (MOMPA). Quantitative analysis is conducted using GD, IGD, SP, SD, HV and RT metrics to represent convergence and distribution, while qualitative aspects are presented through graphical representations of the Pareto fronts. The MOLCA source code is available at: https://github.com/kanak02/MOLCA.
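MOLCA's elitist non-dominated sorting step is a standard multi-objective component (popularized by NSGA-II) and can be sketched independently of the paper. This small Python function (our own illustration, not the authors' code) peels a set of objective vectors into successive Pareto fronts under minimization:

```python
def non_dominated_sort(objs):
    """Sort solutions (tuples of objective values, minimization) into Pareto fronts.

    Front 0 holds solutions dominated by nobody; front 1 holds those dominated
    only by front 0, and so on (the elitist NDS scheme used in NSGA-II).
    """
    n = len(objs)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    dominated_by = [set() for _ in range(n)]     # indices that dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[j], objs[i]):
                dominated_by[i].add(j)
    fronts, assigned = [], set()
    while len(assigned) < n:
        front = [i for i in range(n)
                 if i not in assigned and dominated_by[i] <= assigned]
        fronts.append(front)
        assigned.update(front)
    return fronts

# Two objectives to minimize; point (1, 1) dominates everything else.
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (1, 1)]
print(non_dominated_sort(pts))   # [[4], [0, 1, 2], [3]]
```

MOLCA combines this front-ranking with crowding distance selection and an information feedback mechanism to pick which solutions survive each generation.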
6
Augmented weighted K-means grey wolf optimizer: An enhanced metaheuristic algorithm for data clustering problems. Sci Rep 2024; 14:5434. [PMID: 38443569] [PMCID: PMC10914809] [DOI: 10.1038/s41598-024-55619-z] [Received: 12/07/2023] [Accepted: 02/26/2024] [Indexed: 03/07/2024] Open Access
Abstract
This study presents the K-means clustering-based grey wolf optimizer, a new algorithm intended to improve the optimization capabilities of the conventional grey wolf optimizer for the problem of data clustering: the process of grouping similar items within a dataset into non-overlapping groups. The grey wolf optimizer was modelled on grey wolf hunting behaviour; however, it frequently lacks the exploration and exploitation capabilities essential for efficient data clustering. This work focuses on enhancing the grey wolf optimizer with a new weight factor and K-means algorithm concepts in order to increase diversity and avoid premature convergence. Using a partitional-clustering-inspired fitness function, the K-means clustering-based grey wolf optimizer was extensively evaluated on ten numerical functions and multiple real-world datasets with varying levels of complexity and dimensionality. The methodology incorporates the K-means algorithm to refine initial solutions and adds a weight factor to increase the diversity of solutions during the optimization phase. The results show that the K-means clustering-based grey wolf optimizer performs much better than the standard grey wolf optimizer in discovering optimal clustering solutions, indicating a higher capacity for effective exploration and exploitation of the solution space. The study found that the K-means clustering-based grey wolf optimizer produces high-quality cluster centres in fewer iterations, demonstrating its efficacy and efficiency on various datasets. Finally, the study demonstrates the robustness and dependability of the K-means clustering-based grey wolf optimizer in resolving data clustering issues, a significant advancement over conventional techniques. In addition to addressing the shortcomings of the original algorithm, the incorporation of K-means and the novel weight factor into the grey wolf optimizer establishes a new standard for further study in metaheuristic clustering algorithms; the K-means clustering-based grey wolf optimizer performs around 34% better than the original grey wolf optimizer on both numerical test problems and data clustering problems.
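The paper's K-means seeding and weight factor are not specified in the abstract, but the grey wolf optimizer core they enhance follows a well-known update rule. As a rough sketch (our own names and test function; plain GWO without the paper's modifications), each wolf moves toward positions encircling the three best wolves, with the control parameter `a` decaying from 2 to 0:

```python
import numpy as np

def gwo_step(wolves, fitness, a, rng):
    """One position update of the standard grey wolf optimizer (GWO).

    Every wolf moves toward an average of positions that encircle the three
    current leaders (alpha, beta, delta); `a` decays from 2 to 0 across
    iterations, shifting the swarm from exploration to exploitation.
    """
    leaders = wolves[np.argsort(fitness)[:3]]        # alpha, beta, delta
    new_wolves = np.empty_like(wolves)
    for i, w in enumerate(wolves):
        moves = []
        for leader in leaders:
            r1, r2 = rng.random(w.shape), rng.random(w.shape)
            A, C = 2 * a * r1 - a, 2 * r2
            moves.append(leader - A * np.abs(C * leader - w))
        new_wolves[i] = np.mean(moves, axis=0)
    return new_wolves

rng = np.random.default_rng(1)
wolves = rng.uniform(-5.0, 5.0, size=(10, 2))
sphere = lambda x: (x ** 2).sum(axis=1)
for t in range(50):
    wolves = gwo_step(wolves, sphere(wolves), a=2.0 * (1 - t / 50), rng=rng)
print(float(sphere(wolves).min()))
```

In the paper's variant, the initial `wolves` would instead be refined with K-means centroids and the update weighted to preserve diversity.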
7
Optimizing brushless direct current motor design: An application of the multi-objective generalized normal distribution optimization. Heliyon 2024; 10:e26369. [PMID: 38404848] [PMCID: PMC10884493] [DOI: 10.1016/j.heliyon.2024.e26369] [Received: 08/08/2023] [Revised: 02/12/2024] [Accepted: 02/12/2024] [Indexed: 02/27/2024] Open Access
Abstract
In this study, we tackle the challenge of optimizing the design of a Brushless Direct Current (BLDC) motor. Utilizing an established analytical model, we introduced the Multi-Objective Generalized Normal Distribution Optimization (MOGNDO) method, a biomimetic approach based on Pareto optimality, dominance, and external archiving. We initially tested MOGNDO on standard multi-objective benchmark functions, where it showed strong performance. When applied to the BLDC motor design with the objectives of either maximizing operational efficiency or minimizing motor mass, the MOGNDO algorithm consistently outperformed other techniques like Ant Lion Optimizer (ALO), Ion Motion Optimization (IMO), and Sine Cosine Algorithm (SCA). Specifically, MOGNDO yielded the most optimal values across efficiency and mass metrics, providing practical solutions for real-world BLDC motor design. The MOGNDO source code is available at: https://github.com/kanak02/MOGNDO.
8
Revolutionizing crop disease detection with computational deep learning: a comprehensive review. Environ Monit Assess 2024; 196:302. [PMID: 38401024] [PMCID: PMC10894121] [DOI: 10.1007/s10661-024-12454-z] [Received: 10/14/2023] [Accepted: 02/12/2024] [Indexed: 02/26/2024]
Abstract
Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. As a conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms. In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
9
Multi-objective exponential distribution optimizer (MOEDO): a novel math-inspired multi-objective algorithm for global optimization and real-world engineering design problems. Sci Rep 2024; 14:1816. [PMID: 38245654] [PMCID: PMC10799915] [DOI: 10.1038/s41598-024-52083-7] [Received: 09/29/2023] [Accepted: 01/13/2024] [Indexed: 01/22/2024] Open Access
Abstract
The exponential distribution optimizer (EDO) represents a heuristic approach, capitalizing on exponential distribution theory to identify global solutions for complex optimization challenges. This study extends the EDO's applicability by introducing its multi-objective version, the multi-objective EDO (MOEDO), enhanced with elite non-dominated sorting and crowding distance mechanisms. An information feedback mechanism (IFM) is integrated into MOEDO, aiming to balance exploration and exploitation, thus improving convergence and mitigating the stagnation in local optima, a notable limitation in traditional approaches. Our research demonstrates MOEDO's superiority over renowned algorithms such as MOMPA, NSGA-II, MOAOA, MOEA/D and MOGNDO. This is evident in 72.58% of test scenarios, utilizing performance metrics like GD, IGD, HV, SP, SD and RT across benchmark test collections (DTLZ, ZDT and various constraint problems) and five real-world engineering design challenges. The Wilcoxon Rank Sum Test (WRST) further confirms MOEDO as a competitive multi-objective optimization algorithm, particularly in scenarios where existing methods struggle with balancing diversity and convergence efficiency. MOEDO's robust performance, even in complex real-world applications, underscores its potential as an innovative solution in the optimization domain. The MOEDO source code is available at: https://github.com/kanak02/MOEDO.
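The crowding distance mechanism MOEDO borrows from NSGA-II can be illustrated compactly. In this sketch (our own, minimization assumed, not the authors' code), boundary solutions on each objective receive infinite distance and interior solutions accumulate the normalized gap between their two neighbours:

```python
def crowding_distance(objs):
    """NSGA-II-style crowding distance for one front (minimization objectives).

    Larger distance means a less crowded region of objective space, so the
    solution is preferred when truncating a front.
    """
    n, m = len(objs), len(objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep the extremes
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (objs[order[pos + 1]][k] - objs[order[pos - 1]][k]) / (hi - lo)
    return dist

front = [(0.0, 1.0), (0.25, 0.6), (0.5, 0.5), (1.0, 0.0)]
print(crowding_distance(front))   # [inf, 1.0, 1.35, inf]
```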
10
A CNN-based model to count the leaves of rosette plants (LC-Net). Sci Rep 2024; 14:1496. [PMID: 38233479] [PMCID: PMC10794187] [DOI: 10.1038/s41598-024-51983-y] [Received: 08/23/2023] [Accepted: 01/11/2024] [Indexed: 01/19/2024] Open Access
Abstract
Plant image analysis is a significant tool for plant phenotyping. Image analysis has been used to assess plant traits, forecast plant growth, and offer geographical information about images. Leaf segmentation and counting are major components of plant phenotyping and can be used to measure plant growth. Therefore, this paper develops a convolutional neural network-based leaf counting model called LC-Net. The original plant image and the segmented leaf parts are fed as input, because the segmented leaf parts provide additional information to the proposed LC-Net. The well-known SegNet model has been utilised to obtain the segmented leaf parts because it outperforms four other popular Convolutional Neural Network (CNN) models, namely DeepLab V3+, Fast FCN with Pyramid Scene Parsing (PSP), U-Net, and RefineNet. The proposed LC-Net is compared to other recent CNN-based leaf counting models on the combined Computer Vision Problems in Plant Phenotyping (CVPPP) and KOMATSUNA datasets. The subjective and numerical evaluations of the experimental results demonstrate the superiority of LC-Net over the other tested models.
11
Auto-detection of the coronavirus disease by using deep convolutional neural networks and X-ray photographs. Sci Rep 2024; 14:534. [PMID: 38177156] [PMCID: PMC10766625] [DOI: 10.1038/s41598-023-47038-3] [Received: 05/02/2023] [Accepted: 11/08/2023] [Indexed: 01/06/2024] Open Access
Abstract
The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower cost of equipment. COVID-19 presents certain radiological biomarkers that can be observed through chest X-rays, making it necessary for radiologists to manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need to develop an automated system for evaluating chest X-rays. Deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% and aims to accurately classify COVID-19, normal, and pneumonia samples.
12
An Optimized Model Based on Deep Learning and Gated Recurrent Unit for COVID-19 Death Prediction. Biomimetics (Basel) 2023; 8:552. [PMID: 37999193] [PMCID: PMC10669113] [DOI: 10.3390/biomimetics8070552] [Received: 06/29/2023] [Revised: 11/05/2023] [Accepted: 11/14/2023] [Indexed: 11/25/2023] Open Access
Abstract
The COVID-19 epidemic poses a worldwide threat that transcends provincial, philosophical, spiritual, social, and educational borders. By using a connected network, a healthcare system with Internet of Things (IoT) functionality can effectively monitor COVID-19 cases. IoT helps a COVID-19 patient recognize symptoms and receive better therapy more quickly. Artificial intelligence (AI) is a critical component in measuring, evaluating, and diagnosing the risk of infection; it can be used to anticipate cases and to forecast the numbers of new incidences, recovered instances, and fatalities. In the context of COVID-19, IoT technologies are employed in patient monitoring and diagnosis processes to reduce COVID-19 exposure to others. This work uses an Indian dataset to create an enhanced convolutional neural network with a gated recurrent unit (CNN-GRU) model for COVID-19 death prediction via IoT. The data were also subjected to normalization and imputation. The 4692 cases and eight features in the dataset were utilized in this research. The performance of the CNN-GRU model for COVID-19 death prediction was assessed using five evaluation metrics: median absolute error (MedAE), mean absolute error (MAE), root mean squared error (RMSE), mean square error (MSE), and coefficient of determination (R2). ANOVA and Wilcoxon signed-rank tests were used to determine the statistical significance of the presented model. The experimental findings showed that the CNN-GRU model outperformed other models in COVID-19 death prediction.
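The five evaluation metrics listed above are standard and easy to reproduce. A small numpy helper (illustrative only; the paper's exact implementation is not given) computes all of them from true and predicted values:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MedAE, MAE, MSE, RMSE and R2, the five metrics the study reports."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {
        "MedAE": float(np.median(np.abs(err))),
        "MAE": float(np.mean(np.abs(err))),
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        # R2 = 1 - SS_res / SS_tot, here written with the variance of y_true
        "R2": float(1.0 - mse / np.var(y_true)),
    }

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 39.0])
print(regression_metrics(y_true, y_pred))
```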
13
Optimizing HCV Disease Prediction in Egypt: The hyOPTGB Framework. Diagnostics (Basel) 2023; 13:3439. [PMID: 37998575] [PMCID: PMC10670002] [DOI: 10.3390/diagnostics13223439] [Received: 09/22/2023] [Revised: 11/04/2023] [Accepted: 11/08/2023] [Indexed: 11/25/2023] Open Access
Abstract
The paper focuses on the hepatitis C virus (HCV) infection in Egypt, which has one of the highest rates of HCV in the world. The high prevalence is linked to several factors, including the use of injection drugs, poor sterilization practices in medical facilities, and low public awareness. This paper introduces the hyOPTGB model, which employs an optimized gradient boosting (GB) classifier to predict HCV disease in Egypt. The model's accuracy is enhanced by optimizing hyperparameters with the OPTUNA framework. Min-Max normalization is used as a preprocessing step to scale the dataset values, and the forward selection (FS) wrapper method is used to identify essential features. The dataset used in the study contains 1385 instances and 29 features and is available at the UCI machine learning repository. The authors compare the performance of five machine learning models, including decision tree (DT), support vector machine (SVM), dummy classifier (DC), ridge classifier (RC), and bagging classifier (BC), with the hyOPTGB model. The system's efficacy is assessed using various metrics, including accuracy, recall, precision, and F1-score. The hyOPTGB model outperformed the other machine learning models, achieving a 95.3% accuracy rate. The authors also compared the hyOPTGB model against models proposed by other authors who used the same dataset.
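The Min-Max preprocessing step mentioned above is straightforward to sketch. In this illustrative numpy version (not the authors' code), each feature column is linearly mapped onto a target range, with constant columns guarded against division by zero:

```python
import numpy as np

def min_max_scale(X, feature_range=(0.0, 1.0)):
    """Column-wise Min-Max normalization.

    Maps each feature linearly so its minimum hits the low end of the range
    and its maximum hits the high end; constant columns map to the low end.
    """
    lo, hi = feature_range
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)       # guard constant columns
    return lo + (X - mn) / span * (hi - lo)

X = np.array([[1.0, 50.0],
              [2.0, 100.0],
              [3.0, 75.0]])
print(min_max_scale(X))
```

Scaling like this keeps features with large raw ranges (e.g. lab values) from dominating distance- or gradient-based learners.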
14
Modified prairie dog optimization algorithm for global optimization and constrained engineering problems. Math Biosci Eng 2023; 20:19086-19132. [PMID: 38052592] [DOI: 10.3934/mbe.2023844] [Indexed: 12/07/2023]
Abstract
The prairie dog optimization (PDO) algorithm is a metaheuristic optimization algorithm that simulates the daily behavior of prairie dogs. Prairie dog groups have a unique mode of information exchange: they divide into several small groups to search for food based on special signals and build caves around the food sources, and when encountering natural enemies they emit different sound signals to warn their companions of the danger. Based on this unique information exchange mode, we propose a randomized audio signal factor to simulate the specific sounds prairie dogs make when encountering different foods or natural enemies. This strategy reflects the prairie dog habitat and improves the algorithm's ability to locate optima. In the initial stage of the algorithm, chaotic tent mapping is added to initialize the prairie dog population and increase population diversity, and a lens opposition-based learning strategy is used to enhance the algorithm's global exploration ability. To verify the optimization performance of the modified prairie dog optimization algorithm, we applied it to 23 benchmark test functions, the IEEE CEC2014 test functions, and six engineering design problems. The experimental results illustrate that the modified prairie dog optimization algorithm has good optimization performance.
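The chaotic tent mapping used for population initialization can be sketched as follows. This illustration uses the common tent-map form x ← x/a for x < a, else (1−x)/(1−a); the paper's exact parameterization is not given in the abstract, so the constants here (seed value, a = 0.7, bounds) are assumptions:

```python
import numpy as np

def tent_map_init(n_agents, dim, lb, ub, x0=0.37, a=0.7):
    """Chaotic tent-map population initialization.

    Iterating the tent map produces a sequence that wanders over (0, 1) more
    evenly than many pseudo-random draws; the values are then scaled into
    the search bounds [lb, ub].
    """
    seq = np.empty(n_agents * dim)
    x = x0
    for i in range(seq.size):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        seq[i] = x
    return lb + seq.reshape(n_agents, dim) * (ub - lb)

pop = tent_map_init(n_agents=20, dim=5, lb=-10.0, ub=10.0)
print(pop.shape)   # (20, 5)
```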
15
An efficient churn prediction model using gradient boosting machine and metaheuristic optimization. Sci Rep 2023; 13:14441. [PMID: 37660198] [PMCID: PMC10475067] [DOI: 10.1038/s41598-023-41093-6] [Received: 10/29/2022] [Accepted: 08/22/2023] [Indexed: 09/04/2023] Open Access
Abstract
Customer churn remains a critical challenge in telecommunications, necessitating effective churn prediction (CP) methodologies. This paper introduces the Enhanced Gradient Boosting Model (EGBM), which uses a Support Vector Machine with a Radial Basis Function kernel (SVMRBF) as a base learner and exponential loss function to enhance the learning process of the GBM. The novel base learner significantly improves the initial classification performance of the traditional GBM and achieves enhanced performance in CP-EGBM after multiple boosting stages by utilizing state-of-the-art decision tree learners. Further, a modified version of Particle Swarm Optimization (PSO) using the consumption operator of the Artificial Ecosystem Optimization (AEO) method to prevent premature convergence of the PSO in the local optima is developed to tune the hyper-parameters of the CP-EGBM effectively. Seven open-source CP datasets are used to evaluate the performance of the developed CP-EGBM model using several quantitative evaluation metrics. The results showed that the CP-EGBM is significantly better than GBM and SVM models. Results are statistically validated using the Friedman ranking test. The proposed CP-EGBM is also compared with recently reported models in the literature. Comparative analysis with state-of-the-art models showcases CP-EGBM's promising improvements, making it a robust and effective solution for churn prediction in the telecommunications industry.
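The AEO consumption operator grafted onto PSO is not detailed in the abstract, but the PSO baseline being modified follows the classic velocity/position update. A minimal numpy sketch of that baseline (illustrative names only; the hybrid operator itself is omitted):

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of standard particle swarm optimization.

    vel <- w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos); the paper
    replaces part of this scheme with AEO's consumption operator to escape
    local optima, which is not reproduced here.
    """
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, size=(8, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()                               # personal bests (initial)
fit = (pos ** 2).sum(axis=1)
gbest = pos[fit.argmin()]                        # global best so far
pos, vel = pso_step(pos, vel, pbest, gbest, rng=rng)
print(pos.shape)                                 # (8, 2)
```

In the CP-EGBM pipeline, the positions would encode hyperparameter vectors and the fitness would be a cross-validated churn-prediction score.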
16
Hybrid Slime Mold and Arithmetic Optimization Algorithm with Random Center Learning and Restart Mutation. Biomimetics (Basel) 2023; 8:396. [PMID: 37754147] [PMCID: PMC10526150] [DOI: 10.3390/biomimetics8050396] [Received: 07/31/2023] [Revised: 08/21/2023] [Accepted: 08/23/2023] [Indexed: 09/28/2023] Open Access
Abstract
The slime mold algorithm (SMA) and the arithmetic optimization algorithm (AOA) are two novel metaheuristic optimization algorithms. The slime mold algorithm has strong global search ability, but its oscillation effect in the later iteration stage is weak, making it difficult to find the optimal position on complex functions. The arithmetic optimization algorithm uses multiplication and division operators for position updates, which provide strong randomness and good convergence ability. To combine these strengths, this paper integrates the two algorithms and adds a random central solution strategy, a mutation strategy, and a restart strategy, yielding a hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA). The improved algorithm retains the position update formula of the slime mold algorithm in the global exploration phase and replaces the convergence stage of the slime mold algorithm with the multiplication and division operators in the local exploitation phase. At the same time, the random center learning strategy is adopted to improve global search efficiency and the diversity of the algorithm population. In addition, the restart and mutation strategies are used to improve the convergence accuracy of the algorithm and enhance its late-stage optimization ability. In comparison experiments, different kinds of test functions are used to test the specific performance of the improved algorithm. We determine the final performance of the algorithm by analyzing experimental data and convergence curves, using the Wilcoxon rank sum test and the Friedman test. The experimental results show that the improved algorithm, which combines the slime mold algorithm and the arithmetic optimization algorithm, is effective. Finally, the performance of the improved algorithm on practical engineering problems was evaluated.
|
17
|
Fractional-order chaotic oscillator-based Aquila optimization algorithm for maximization of the chaotic with Lorentz oscillator. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08945-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Accepted: 08/01/2023] [Indexed: 09/02/2023]
|
18
|
Boosted Harris Hawks gravitational force algorithm for global optimization and industrial engineering problems. JOURNAL OF INTELLIGENT MANUFACTURING 2023; 34:2693-2728. [DOI: 10.1007/s10845-022-01921-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 01/31/2022] [Indexed: 09/02/2023]
|
19
|
A Machine Learning Approach to Classify Biomedical Acoustic Features for Baby Cries. J Voice 2023:S0892-1997(23)00188-1. [PMID: 37479635 DOI: 10.1016/j.jvoice.2023.06.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/15/2023] [Accepted: 06/15/2023] [Indexed: 07/23/2023]
Abstract
Communication is imperative for living beings to exchange information, but newborns can communicate with the world only by crying; it is the sole medium through which caregivers can learn the needs of their children. Responding to baby cries promptly, so that the child is relieved as early as possible, is a challenge, especially for new parents. The literature reports that newborn babies use the Dunstan Baby Language to communicate. According to this language, five words express a baby's needs: "Neh" (hungry), "Eh" (burp is needed), "Owh/Oah" (fatigue), "Eair/Eargghh" (cramps), and "Heh" (feeling hot or wet, physical discomfort). This research aims to develop a model for recognizing baby cries and distinguishing between different kinds of cries, focusing more broadly on whether the infant is in pain due to hunger or discomfort. The study proposes a comparative approach using four classification models: random forest, support vector machine, logistic regression, and decision tree. These algorithms learn from spectral features extracted from the infant cry: chroma_stft, spectral_centroid, bandwidth, spectral_rolloff, mel-frequency cepstral coefficients, linear predictive coding, res, and zero_crossing_rate. The support vector machine outperforms the other classifiers in correctly classifying infant cries.
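Of the spectral features listed, the zero-crossing rate is simple enough to sketch directly; this is a generic illustration of the feature, not the paper's extraction code:

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)
```

A rapidly alternating frame such as `[1.0, -1.0, 1.0, -1.0]` yields the maximum rate of 1.0, while a monotone frame yields 0.0; voiced and unvoiced cry segments differ markedly on this measure.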
|
20
|
Diagnosis of Monkeypox Disease Using Transfer Learning and Binary Advanced Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:313. [PMID: 37504202 PMCID: PMC10807651 DOI: 10.3390/biomimetics8030313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 07/03/2023] [Accepted: 07/12/2023] [Indexed: 07/29/2023] Open
Abstract
The virus that causes monkeypox has been observed in Africa for several years, where it has been linked to the development of skin lesions. Following the COVID-19 pandemic, the deadly repercussions of virus infections have caused public panic and anxiety, and rapid detection approaches are crucial. This study's overarching goal is to use metaheuristic optimization to boost the performance of feature selection and classification methods for identifying skin lesions as indicators of monkeypox in the event of a pandemic. Deep learning and transfer learning approaches are used to extract the necessary features, with the GoogLeNet network serving as the deep learning framework for feature extraction. A binary implementation of the dipper throated optimization (DTO) algorithm performs feature selection, and a decision tree classifier labels the selected set of features. The decision tree classifier is then optimized using the continuous version of the DTO algorithm to improve classification accuracy. The proposed approach and competing methods are compared using the following metrics: accuracy, sensitivity, specificity, p-Value, N-Value, and F1-score. Through feature selection and the decision tree classifier, the proposed approach achieves an F1-score of 0.92, sensitivity of 0.95, specificity of 0.61, p-Value of 0.89, and N-Value of 0.79. The overall accuracy after optimizing the parameters of the decision tree classifier is 94.35%. Furthermore, analysis of variance (ANOVA) and the Wilcoxon signed rank test are applied to the results to investigate the statistical distinction between the proposed methodology and the alternatives. This comparison verified the uniqueness and importance of the proposed approach to monkeypox case detection.
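Binary metaheuristic feature selection typically scores a 0/1 mask by a weighted sum of classification error and subset size; the following sketch illustrates that standard fitness form (the weight `alpha` and function name are illustrative assumptions, not the paper's implementation):

```python
def subset_fitness(mask, error_rate, alpha=0.99):
    """Score a binary feature mask (minimization): weighted classification
    error plus a penalty on the fraction of selected features."""
    n_selected = sum(mask)
    if n_selected == 0:
        return float("inf")  # selecting no features is invalid
    return alpha * error_rate + (1 - alpha) * n_selected / len(mask)
```

With equal error rates, a smaller subset scores better, which is what drives the binary optimizer toward compact feature sets.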
|
21
|
Multi-Agent Variational Approach for Robotics: A Bio-Inspired Perspective. Biomimetics (Basel) 2023; 8:294. [PMID: 37504182 PMCID: PMC10807404 DOI: 10.3390/biomimetics8030294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Revised: 06/21/2023] [Accepted: 06/26/2023] [Indexed: 07/29/2023] Open
Abstract
This study proposes an adaptable, bio-inspired optimization algorithm for multi-agent space exploration. The recommended approach combines a parameterized Aquila Optimizer, a bio-inspired technique, with deterministic multi-agent exploration, integrating stochastic factors into the Aquila Optimizer to enhance the algorithm's efficiency. The architecture, called the Multi-Agent Exploration-Parameterized Aquila Optimizer (MAE-PAO), starts by using deterministic MAE to assess the cost and utility values of the cells surrounding the agents; a parameterized Aquila Optimizer is then used to further increase the exploration pace. The effectiveness of the proposed MAE-PAO methodology is verified through extended simulations under various environmental conditions. The algorithm's viability is further evaluated by comparing the results with those of the contemporary CME-Aquila Optimizer (CME-AO) and the Whale Optimizer. The comparison considers various performance parameters, such as the percentage of the map explored, the number of unsuccessful runs, and the time needed to explore the map, on numerous maps simulating different scenarios, and a detailed statistical analysis checks the efficacy of the algorithm. We conclude that the proposed algorithm's average rate of exploration, and likewise its exploration time, does not deviate much from those of contemporary algorithms, and that the proposed MAE-PAO algorithm provides significant advantages in terms of enhanced map exploration with lower execution times and nearly no failed runs.
|
22
|
Improving clinical documentation: automatic inference of ICD-10 codes from patient notes using BERT model. THE JOURNAL OF SUPERCOMPUTING 2023; 79:12766-12790. [DOI: 10.1007/s11227-023-05160-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/04/2023] [Indexed: 09/01/2023]
|
23
|
Classification of Breast Cancer Using Transfer Learning and Advanced Al-Biruni Earth Radius Optimization. Biomimetics (Basel) 2023; 8:270. [PMID: 37504158 PMCID: PMC10377265 DOI: 10.3390/biomimetics8030270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/21/2023] [Accepted: 06/24/2023] [Indexed: 07/29/2023] Open
Abstract
Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022 and 43,250 female deaths attributed to this malignancy. The high death rate associated with this type of cancer can be reduced with early detection; nonetheless, a skilled professional is always necessary to diagnose the malignancy manually from mammography images. Many researchers have proposed approaches based on artificial intelligence, but these still face several obstacles, such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequately trained models. In this paper, we developed a novel computationally automated biological mechanism for categorizing breast cancer in which classification is boosted using a new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) optimization algorithm. The stages of the proposed framework include data augmentation, feature extraction using AlexNet based on transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved the accuracy compared with recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, with an average classification accuracy of 97.95%. To establish the statistical significance of the proposed methodology and its difference from current methods, additional tests such as analysis of variance (ANOVA) and the Wilcoxon test are conducted, in addition to evaluating various statistical analysis metrics. The results of these tests emphasize the effectiveness and statistical distinctiveness of the proposed methodology compared to current methods.
|
24
|
An improved multi-strategy beluga whale optimization for global optimization problems. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:13267-13317. [PMID: 37501488 DOI: 10.3934/mbe.2023592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
This paper presents an improved beluga whale optimization (IBWO) algorithm, mainly used to solve global optimization and engineering problems. The improvement addresses the imbalance between exploration and exploitation and the insufficient convergence accuracy and speed of beluga whale optimization (BWO). In IBWO, we use a new group action strategy (GAS), which replaces the exploration phase in BWO and was inspired by the group hunting behavior of beluga whales in nature. The GAS keeps individual beluga whales together, allowing them to hide collectively from the threat posed by their natural enemy, the tiger shark, and enables the exchange of location information between individual beluga whales to enhance the balance between local and global search. On this basis, a dynamic pinhole imaging strategy (DPIS) and a quadratic interpolation strategy (QIS) are added to improve the global optimization ability and search rate of IBWO while maintaining diversity. In a comparison experiment, the performance of IBWO was tested using CEC2017 and CEC2020 benchmark functions of different dimensions; performance was analyzed through experimental data, convergence curves, and box plots, and the results were tested using the Wilcoxon rank sum test. The results show that IBWO has good optimization performance and robustness. Finally, the applicability of IBWO to practical engineering problems is verified on five engineering problems.
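The quadratic interpolation strategy (QIS) mentioned above generally fits a parabola through three sampled points and jumps to its vertex; a minimal sketch of that classical formula (an illustration, not the authors' exact IBWO variant):

```python
def quadratic_min(x1, f1, x2, f2, x3, f3):
    """Vertex (minimizer) of the parabola through (x1,f1), (x2,f2), (x3,f3)."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den
```

For f(x) = (x - 2)^2 sampled at x = 0, 1, 3, the formula recovers the minimizer x = 2 exactly, since the objective is itself quadratic.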
|
25
|
Improved Dipper-Throated Optimization for Forecasting Metamaterial Design Bandwidth for Engineering Applications. Biomimetics (Basel) 2023; 8:241. [PMID: 37366836 DOI: 10.3390/biomimetics8020241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Revised: 05/23/2023] [Accepted: 06/02/2023] [Indexed: 06/28/2023] Open
Abstract
Metamaterials have unique physical properties. They are made of several elements structured in repeating patterns at wavelengths smaller than those of the phenomena they affect. A metamaterial's exact structure, geometry, size, orientation, and arrangement allow it to manipulate electromagnetic waves by blocking, absorbing, amplifying, or bending them, achieving benefits not possible with ordinary materials. Metamaterials are used in microwave invisibility cloaks, invisible submarines, revolutionary electronics, microwave components, filters, and antennas with a negative refractive index. This paper proposes an improved dipper throated-based ant colony optimization (DTACO) algorithm for forecasting the bandwidth of a metamaterial antenna. The first test scenario covers the feature selection capabilities of the proposed binary DTACO algorithm on the evaluated dataset, and the second scenario illustrates the algorithm's regression skills. The state-of-the-art DTO, ACO, particle swarm optimization (PSO), grey wolf optimizer (GWO), and whale optimization (WOA) algorithms were explored and compared to the DTACO algorithm, and the proposed optimal ensemble DTACO-based model was contrasted with the basic multilayer perceptron (MLP) regressor, the support vector regression (SVR) model, and the random forest (RF) regressor. To assess the consistency of the developed DTACO-based model, the statistical analysis used Wilcoxon's rank-sum and ANOVA tests.
|
26
|
Swarm Intelligence to Face IoT Challenges. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:4254194. [PMID: 37284052 PMCID: PMC10241578 DOI: 10.1155/2023/4254194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/30/2023] [Accepted: 03/26/2023] [Indexed: 06/08/2023]
Abstract
The Internet of Things (IoT) paradigm denotes billions of physical entities connected to the Internet that allow the collection and sharing of large amounts of data. Thanks to advances in hardware, software, and wireless network availability, almost anything may become a component of the IoT, and devices gain a level of digital intelligence that enables them to transmit real-time data without human support. However, the IoT also comes with its own set of unique challenges. Heavy network traffic is generated in the IoT environment when transmitting data; reducing network traffic by determining the shortest route from source to destination decreases overall system response time and energy consumption costs, which translates into the need for efficient routing algorithms. Many IoT devices are powered by batteries with limited lifetime, so power-aware techniques are highly desirable to ensure remote, continuous, distributed, and decentralized control and self-organization of these devices. Another requirement is managing huge amounts of dynamically changing data. This paper reviews a set of swarm intelligence (SI) algorithms applied to the main challenges introduced by the IoT. SI algorithms determine good paths by modeling the foraging and hunting behavior of agent communities such as insect swarms, and they suit IoT needs because of their flexibility, resilience, degree of dissemination, and extensibility.
|
27
|
Solar power forecasting beneath diverse weather conditions using GD and LM-artificial neural networks. Sci Rep 2023; 13:8517. [PMID: 37231039 DOI: 10.1038/s41598-023-35457-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 05/18/2023] [Indexed: 05/27/2023] Open
Abstract
Large-scale solar energy production still faces great obstruction due to the unpredictability of solar power. The intermittent, chaotic, and random quality of the solar energy supply has to be handled by comprehensive solar forecasting technologies. Beyond long-term forecasting, it is even more essential to produce short-term forecasts minutes or even seconds ahead, because factors such as sudden cloud movement, instantaneous deviation of ambient temperature, increased relative humidity, uncertainty in wind velocities, haziness, and rain cause undesired up and down ramping rates, thereby affecting solar power generation to a great extent. This paper presents an extended solar forecasting algorithm based on artificial neural networks. A three-layered system is suggested, consisting of an input layer, a hidden layer, and an output layer, feed-forward in conjunction with backpropagation. A prior 5-minute output forecast is fed to the input layer to reduce the error and obtain a more precise forecast. Weather remains the most vital input for this type of ANN modeling: forecasting errors can grow considerably due to variations in solar irradiation and temperature on any forecasting day, affecting the solar power supply accordingly. Prior approximation of solar radiation carries some uncertainty depending on climatic conditions such as temperature, shading conditions, soiling effects, and relative humidity; all these environmental factors introduce uncertainty into the prediction of the output parameter. In such a case, approximating the PV output directly can be more suitable than predicting solar radiation. This paper applies Gradient Descent (GD) and Levenberg-Marquardt Artificial Neural Network (LM-ANN) techniques to data obtained and recorded in milliseconds from a 100 W solar panel.
The essential purpose of this paper is to establish the most suitable time perspective for the output forecast of small solar power utilities. It has been observed that a 5 ms to 12 h perspective gives the best short- to medium-term prediction for April. A case study has been conducted in the Peer Panjal region. The data collected over four months with various parameters have been applied randomly as input data using the GD and LM types of artificial neural network and compared to actual solar energy data. The proposed ANN-based algorithm has been used for reliable short-term forecasting. The model output is reported in terms of root mean square error and mean absolute percentage error, and the results exhibit improved agreement between the forecasted and real models. Forecasting solar energy and load variations assists in fulfilling cost-effectiveness goals.
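As a toy illustration of the gradient descent (GD) training loop underlying the GD-ANN variant, here is a minimal least-squares fit by batch gradient descent (a sketch with assumed toy data, far simpler than the paper's three-layer network):

```python
def gradient_descent_fit(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b
```

On data generated from y = 2x + 1 the loop converges to w ≈ 2 and b ≈ 1; a full ANN applies the same gradient step to every layer's weights via backpropagation, while Levenberg-Marquardt replaces the fixed-step update with a damped Gauss-Newton step.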
|
28
|
Multi-objective chaos game optimization. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08432-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
The Chaos Game Optimization (CGO) has only recently gained popularity, but its effective search capabilities have great potential for addressing single-objective optimization problems. Despite its advantages, the method can only tackle problems formulated with one objective. The multi-objective CGO proposed in this study (MOCGO) handles problems with several objectives. In MOCGO, Pareto-optimal solutions are stored in a fixed-size external archive, and the leader selection functionality needed to carry out multi-objective optimization has been included in CGO. The technique is also applied to eight real-world engineering design challenges with multiple objectives. The MOCGO algorithm uses several mathematical models from chaos theory and fractals inherited from CGO. Its performance is evaluated using seventeen case studies, such as CEC-09, ZDT, and DTLZ, and six well-known multi-objective algorithms are compared with MOCGO using four different performance metrics. The results demonstrate that the suggested method is better than existing ones, with Pareto-optimal solutions showing excellent convergence and coverage.
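The external archive in MOCGO relies on the standard Pareto dominance test; a minimal sketch of dominance and archive insertion for minimization (the archive-size cap and leader selection are omitted here):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate unless dominated; evict members it dominates."""
    if any(dominates(s, candidate) for s in archive):
        return archive
    return [s for s in archive if not dominates(candidate, s)] + [candidate]
```

For example, inserting (0, 0) into the archive [(1, 2), (2, 1)] evicts both members, while inserting (2, 1) next to (1, 2) keeps both, since neither dominates the other.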
|
29
|
Efficient Initialization Methods for Population-Based Metaheuristic Algorithms: A Comparative Study. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING 2023; 30:1727-1787. [DOI: 10.1007/s11831-022-09850-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 10/26/2022] [Indexed: 09/02/2023]
|
30
|
Modified reptile search algorithm with multi-hunting coordination strategy for global optimization problems. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:10090-10134. [PMID: 37322925 DOI: 10.3934/mbe.2023443] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
The reptile search algorithm (RSA) is a bionic algorithm proposed by Abualigah et al. in 2020. RSA simulates the whole process of crocodiles encircling and catching prey: the encircling stage includes high walking and belly walking, and the hunting stage includes hunting coordination and cooperation. However, in the middle and later stages of the iteration, most search agents move towards the optimal solution, and if that solution falls into a local optimum, the population stagnates, so RSA cannot converge when solving complex problems. To enable RSA to solve more problems, this paper proposes a multi-hunting coordination strategy that combines Lagrange interpolation with the student stage of the teaching-learning-based optimization (TLBO) algorithm. The multi-hunting cooperation strategy makes multiple search agents coordinate with each other and, compared with the hunting cooperation strategy in the original RSA, greatly improves RSA's global search capability. Moreover, considering RSA's weak ability to escape local optima in the middle and later stages, this paper adds lens opposition-based learning (LOBL) and a restart strategy. Based on the above strategies, a modified reptile search algorithm with a multi-hunting coordination strategy (MRSA) is proposed. To verify the strategies' effectiveness, 23 benchmark functions and the CEC2020 functions were used to test MRSA's performance, and MRSA's solutions to six engineering problems demonstrate its engineering applicability. The experiments show that MRSA performs better on both the test functions and the engineering problems.
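Lagrange interpolation, one ingredient of the multi-hunting coordination strategy, can be sketched as the classical polynomial evaluation (illustrative code, not the MRSA update rule itself):

```python
def lagrange_eval(points, x):
    """Evaluate at x the Lagrange polynomial through the given (xi, yi) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total
```

Through the points (0, 0), (1, 1), (2, 4) the interpolant is x^2, so evaluating at x = 3 returns 9; in a metaheuristic, such an interpolant through sampled agents can suggest a promising new position.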
|
31
|
Autokeras Approach: A Robust Automated Deep Learning Network for Diagnosis Disease Cases in Medical Images. J Imaging 2023; 9:jimaging9030064. [PMID: 36976115 PMCID: PMC10053523 DOI: 10.3390/jimaging9030064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/16/2023] [Accepted: 03/01/2023] [Indexed: 03/29/2023] Open
Abstract
Automated deep learning is promising in artificial intelligence (AI), yet few applications of automated deep learning networks have been made in clinical medical fields. Therefore, we studied the application of an open-source automated deep learning framework, Autokeras, for detecting blood smear images infected with malaria parasites. Autokeras can identify the optimal neural network for the classification task; the robustness of the adopted model thus stems from not requiring any prior deep learning knowledge, whereas traditional deep neural network methods still require extensive manual construction to identify the best convolutional neural network (CNN). The dataset used in this study consisted of 27,558 blood smear images. A comparative process demonstrated the superiority of our proposed approach over other traditional neural networks, with the proposed model achieving high efficiency and an impressive accuracy of 95.6% when compared with previous competitive models.
|
32
|
Mathematical modelling and analysis of COVID-19 and tuberculosis transmission dynamics. INFORMATICS IN MEDICINE UNLOCKED 2023; 38:101235. [PMID: 37033412 PMCID: PMC10065048 DOI: 10.1016/j.imu.2023.101235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2023] [Revised: 03/28/2023] [Accepted: 03/28/2023] [Indexed: 04/03/2023] Open
Abstract
In this paper, a mathematical model for assessing the impact of COVID-19 on tuberculosis disease is proposed and analysed. There is evidence that patients with tuberculosis (TB) have a higher chance of developing SARS-CoV-2 infection. The mathematical model is analysed qualitatively and quantitatively using the theory of stability analysis. The dynamic system shows an endemic equilibrium point which is stable when R0 < 1 and unstable when R0 > 1. The global stability of the endemic point is analysed by constructing a Lyapunov function, and the dynamics also exhibit bifurcation behaviour. Optimal control theory is used to find an optimal solution to the problem in the mathematical model, and a sensitivity analysis clarifies which parameters affect the reproduction number the most. Numerical simulation is carried out to assess the effect of various biological parameters on the dynamics of both the tuberculosis and COVID-19 classes. Our simulation results show that the COVID-19 and TB infections can be mitigated by controlling the transmission rate γ.
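The reproduction-number threshold behaviour described above can be illustrated with a toy SIR-type Euler simulation (a deliberately simplified stand-in for the paper's coupled COVID-19/TB model; the parameter names are ours):

```python
def final_infected(beta, recovery, s0=0.99, i0=0.01, steps=50, dt=0.1):
    """Forward-Euler integration of a plain SIR model; returns the infected
    fraction after steps*dt time units. Here R0 = beta / recovery."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i          # new infections per unit time
        s -= dt * new_inf
        i += dt * (new_inf - recovery * i)
    return i
```

With beta = 0.1 and recovery = 0.2 (R0 = 0.5) the infected fraction decays, while beta = 0.4 and recovery = 0.1 (R0 = 4) produces initial growth, mirroring the stability threshold at R0 = 1.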
|
33
|
AOEHO: A New Hybrid Data Replication Method in Fog Computing for IoT Application. SENSORS (BASEL, SWITZERLAND) 2023; 23:2189. [PMID: 36850784 PMCID: PMC9963718 DOI: 10.3390/s23042189] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 12/24/2022] [Accepted: 12/27/2022] [Indexed: 06/18/2023]
Abstract
Recently, the concept of the Internet of Things and its services has emerged alongside cloud computing, a modern technology for dealing with big data to perform specified operations. The cloud addresses the problem of selecting and placing data replicas across nodes in fog computing. Previous studies focused on original swarm intelligence and mathematical models; thus, we propose a novel hybrid method based on two modern metaheuristic algorithms. This paper combines the Aquila Optimizer (AO) algorithm with elephant herding optimization (EHO) to solve dynamic data replication problems in the fog computing environment. In the proposed method, we present a set of objectives that determine data transmission paths, choose the least-cost path, reduce network bottlenecks and bandwidth usage, balance the load, and speed up data transfer rates between nodes in cloud computing. The hybrid method, AOEHO, finds the optimal and least expensive path, determines the best replication via cloud computing, and selects optimal nodes at which to place data replicas near users. Moreover, we developed a multi-objective optimization based on the proposed AOEHO to decrease bandwidth and enhance load balancing and cloud throughput. The proposed method is evaluated on data replication using seven criteria: data replication access, distance, costs, availability, SBER, popularity, and the Floyd algorithm. The experimental results show the superiority of the proposed AOEHO strategy over other algorithms in terms of bandwidth, distance, load balancing, data transmission, and least-cost path.
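The Floyd algorithm listed among the criteria is the classical all-pairs shortest-path method; a minimal Floyd-Warshall sketch over a weighted adjacency matrix (illustrative, not the AOEHO pipeline):

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; dist[i][j] is the edge weight (INF if absent)."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):                      # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

For a three-node graph with edges 0→1 (3), 1→2 (4), and 0→2 (10), the relaxed distance from 0 to 2 becomes 7 via node 1, which is the kind of least-cost path the replication objectives reward.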
|
34
|
Impact of the COVID-19 pandemic on imaging case volumes in King Abdullah University Hospitals (KAUH). Front Med (Lausanne) 2023; 10:1103083. [PMID: 36844230 PMCID: PMC9947495 DOI: 10.3389/fmed.2023.1103083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Accepted: 01/23/2023] [Indexed: 02/11/2023] Open
Abstract
Objective: COVID-19 increased the burden on the delivery of services because the measures taken by governments forced hospitals to cancel most elective procedures and led to the shutting down of outpatient clinics. This study aimed to evaluate the impact of the COVID-19 pandemic on the volume of radiology exams by patient service location and imaging modality in the north of Jordan. Methods: The imaging case volumes performed at King Abdullah University Hospital (KAUH), Jordan, from 1 January 2020 to 8 May 2020 were retrospectively collected and compared to those from 1 January 2019 to 28 May 2019 to determine the impact of the pandemic on the volume of radiological examinations. The 2020 study period was chosen to cover the peak of COVID-19 cases and to record the effects on imaging case volumes. Results: A total of 46,194 imaging cases were performed at our tertiary center in 2020 compared to 65,441 in 2019, a decrease of 29.4% relative to the same period in 2019. Case volumes decreased for all imaging modalities: nuclear imaging showed the highest decline (41.0%), followed by ultrasound (33.2%), while interventional radiology was the least affected, with about a 22.9% decline. Conclusion: The number of imaging cases decreased significantly during the COVID-19 pandemic and its associated lockdown, with the outpatient service location most affected. Effective strategies must be adopted to avoid such effects on the healthcare system in future pandemics.
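The reported overall decline follows directly from the stated totals:

```python
# Totals reported in the abstract
cases_2019 = 65_441
cases_2020 = 46_194

decline_pct = (cases_2019 - cases_2020) / cases_2019 * 100
print(f"{decline_pct:.1f}%")  # 29.4%
```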
|
35
|
Improved Reptile Search Algorithm by Salp Swarm Algorithm for Medical Image Segmentation. JOURNAL OF BIONIC ENGINEERING 2023; 20:1-25. [PMID: 36777369 PMCID: PMC9902839 DOI: 10.1007/s42235-023-00332-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 12/24/2022] [Accepted: 01/04/2023] [Indexed: 06/18/2023]
Abstract
This study proposes a novel nature-inspired meta-heuristic optimizer for image segmentation, based on the Reptile Search Algorithm combined with the Salp Swarm Algorithm and using gray-scale multi-level thresholding, called RSA-SSA. The proposed method introduces a better search space in which to find the optimal solution at each iteration, avoiding repeated searching of the same area while determining the optimal multi-level thresholds. The solutions obtained by the proposed method are represented using the image histogram, and RSA-SSA employs Otsu's between-class variance function to obtain the best threshold values at each level. The performance of the proposed method is validated using the fitness function, the structural similarity index, the peak signal-to-noise ratio, and the Friedman ranking test. Several benchmark COVID-19 images validate the performance of the proposed RSA-SSA, and the results show that it outperforms other metaheuristic optimization algorithms published in the literature.
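Otsu's criterion used by RSA-SSA selects thresholds that maximize the between-class variance; a single-level sketch for a small gray-level range (the multi-level, metaheuristic-driven search of the paper is not reproduced here):

```python
def otsu_threshold(values, levels=8):
    """Single-level Otsu: pick t maximizing w0*w1*(mu0 - mu1)^2,
    where class 0 holds values <= t and class 1 holds values > t."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    n = len(values)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(hist[: t + 1]) / n
        w1 = 1 - w0
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        mu0 = sum(v * hist[v] for v in range(t + 1)) / (w0 * n)
        mu1 = sum(v * hist[v] for v in range(t + 1, levels)) / (w1 * n)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

On a bimodal sample such as [0, 0, 1, 1, 6, 6, 7, 7], any threshold between the two modes maximizes the criterion, separating the dark and bright pixel groups; multi-level thresholding optimizes several such cut points jointly, which is where the RSA-SSA search comes in.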
|
36
|
Modified arithmetic optimization algorithm for drones measurements and tracks assignment problem. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08242-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
37
|
Velocity pausing particle swarm optimization: a novel variant for global optimization. Neural Comput Appl 2023. [DOI: 10.1007/s00521-022-08179-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Particle swarm optimization (PSO) is one of the most well-regarded metaheuristics, with remarkable performance when solving diverse optimization problems. However, PSO faces two main problems that degrade its performance: slow convergence and local optima entrapment. In addition, the performance of this algorithm substantially degrades on high-dimensional problems. In the classical PSO, particles can move in each iteration with either slower or faster speed. This work proposes a novel idea called velocity pausing, where particles in the proposed velocity pausing PSO (VPPSO) variant are supported by a third movement option that allows them to move with the same velocity as they did in the previous iteration. As a result, VPPSO has a higher potential to balance exploration and exploitation. To avoid premature convergence, VPPSO modifies the first term of the PSO velocity equation. In addition, the population of VPPSO is divided into two swarms to maintain diversity. The performance of VPPSO is validated on forty-three benchmark functions and four real-world engineering problems. According to the Wilcoxon rank-sum and Friedman tests, VPPSO significantly outperforms seven prominent algorithms on most of the tested functions in both low- and high-dimensional cases. Due to its superior performance in solving complex high-dimensional problems, VPPSO can be applied to diverse real-world optimization problems. Moreover, the velocity pausing concept can easily be integrated with new or existing metaheuristic algorithms to enhance their performance. The MATLAB code of VPPSO is available at: https://uk.mathworks.com/matlabcentral/fileexchange/119633-vppso.
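The core velocity-pausing idea can be sketched in a few lines: with some probability a particle skips the velocity update and reuses its previous velocity unchanged. This is an illustrative simplification, not the authors' MATLAB implementation; the published VPPSO additionally modifies the first velocity term and splits the population into two swarms.

```python
import random

def vppso(f, dim, n=30, iters=200, alpha=0.3, bounds=(-10.0, 10.0)):
    """Minimal velocity-pausing PSO sketch: with probability `alpha` a particle
    keeps its previous velocity ("pausing") instead of applying the classical
    update, giving a third movement option between slower and faster moves."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P, Pf = [x[:] for x in X], [f(x) for x in X]          # personal bests
    gi = min(range(n), key=lambda i: Pf[i])
    gbest, gf = P[gi][:], Pf[gi]                           # global best
    for _ in range(iters):
        for i in range(n):
            if random.random() >= alpha:                   # classical velocity update
                for d in range(dim):
                    V[i][d] = (0.72 * V[i][d]
                               + 1.49 * random.random() * (P[i][d] - X[i][d])
                               + 1.49 * random.random() * (gbest[d] - X[i][d]))
            # else: velocity pausing -- V[i] is reused exactly as-is
            X[i] = [min(hi, max(lo, x + v)) for x, v in zip(X[i], V[i])]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < gf:
                    gbest, gf = X[i][:], fx
    return gbest, gf

random.seed(0)
best, val = vppso(lambda x: sum(v * v for v in x), dim=5)  # 5-D sphere function
print(val)
```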
|
38
|
Editorial: Artificial intelligence for mental disorder prevention and diagnosis: Technologies and challenges. Front Psychiatry 2023; 14:1161158. [PMID: 37065881 PMCID: PMC10098303 DOI: 10.3389/fpsyt.2023.1161158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Accepted: 03/15/2023] [Indexed: 04/18/2023] Open
|
39
|
Predicting the Severity of COVID-19 from Lung CT Images Using Novel Deep Learning. J Med Biol Eng 2023; 43:135-146. [PMID: 37077696 PMCID: PMC10010231 DOI: 10.1007/s40846-023-00783-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 02/16/2023] [Indexed: 04/21/2023]
Abstract
Purpose Coronavirus disease 2019 (COVID-19) had major social, medical, and economic impacts globally. This study aims to develop a deep-learning model that can predict the severity of COVID-19 in patients based on CT images of their lungs. Methods COVID-19 causes lung infections, and qRT-PCR is an essential tool for detecting virus infection. However, qRT-PCR is inadequate for determining the severity of the disease and the extent to which it affects the lungs. In this paper, we aim to determine the severity level of COVID-19 by studying lung CT scans of people diagnosed with the virus. Results We used images from King Abdullah University Hospital in Jordan, collecting a dataset of 2205 CT images from 875 cases. A radiologist classified the images into four levels of severity: normal, mild, moderate, and severe. We used various deep-learning algorithms to predict the severity of lung disease. The results show that the best-performing model is ResNet101, with an accuracy of 99.5% and a loss of 0.03%. Conclusion The proposed model assisted in diagnosing and treating COVID-19 patients and helped improve patient outcomes.
|
40
|
Simulation and analysis performance of ad-hoc routing protocols under DDoS attack and proposed solution. INTERNATIONAL JOURNAL OF DATA AND NETWORK SCIENCE 2023. [DOI: 10.5267/j.ijdns.2023.2.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023] Open
Abstract
Ad hoc networks, known as infrastructure-less networks, are composed of mobile nodes that connect without a centralized controlling system. These networks have a wide range of potential applications, including emergency response, events, military operations, wireless access, and intelligent transportation, and can take various forms, such as wireless sensor networks, wireless mesh networks, and mobile ad hoc networks. Because nodes in these networks can move at any time, routing protocols must adapt to the constantly changing network topology. However, these networks are also susceptible to various security threats, including DDoS attacks. This paper analyzes the performance and impact of security attacks on reactive and proactive routing protocols under CBR connection patterns with different pause times. The analysis covers metrics such as throughput, packet loss, end-to-end delay, and load. Simulations were performed in the OPNET Modeler, and performance under DDoS attacks was analyzed for voice and video traffic conditions. Furthermore, the paper explores the use of Honeypot intelligent agents as a solution to increase security by creating a dummy node to fool DDoS attackers. The results show that the OLSR protocol is the most affected by DDoS attacks in terms of quality-of-service metrics such as packet loss, throughput, end-to-end delay, and load. The number of responses to the honeypot solution differs for each protocol.
|
41
|
A novel security analysis for a new NTRU variant with additional private key. INTERNATIONAL JOURNAL OF DATA AND NETWORK SCIENCE 2023. [DOI: 10.5267/j.ijdns.2023.2.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023] Open
Abstract
This paper proposes a new variant of NTRU with a slightly different key formulation. The significance of this new variant is that it requires an additional private key, yielding a tighter scheme. Because of these changes, modified key generation, encryption, and decryption algorithms have been developed accordingly. The new variant is analyzed and tested against several well-known attacks, namely the alternate private key attack, brute-force attack, meet-in-the-middle attack, multiple transmission attack, and lattice attack. Security properties related to these attacks have been established and explored to ensure the new variant is secure against them. Several examples are provided to illustrate the ideas.
|
42
|
Optimal parameters extracting of fuel cell based on Gorilla Troops Optimizer. FUEL 2023; 332:126162. [DOI: 10.1016/j.fuel.2022.126162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|
43
|
Enhanced feature selection technique using slime mould algorithm: a case study on chemical data. Neural Comput Appl 2023; 35:3307-3324. [PMID: 36245794 PMCID: PMC9547998 DOI: 10.1007/s00521-022-07852-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Accepted: 09/16/2022] [Indexed: 01/31/2023]
Abstract
Feature selection (FS) techniques are among the most important preprocessing steps and have a significant influence on the performance of data analysis and decision making. FS techniques aim to achieve several objectives at the same time (such as reducing classification error and minimizing the number of features) to increase the classification rate. FS based on metaheuristics (MH) is considered one of the most promising approaches for improving the classification process. This paper presents a modified Slime Mould Algorithm that uses Marine Predators Algorithm (MPA) operators as a local search strategy, named SMAMPA, which increases the convergence rate of the developed method and avoids attraction to local optima. The efficiency of SMAMPA is evaluated on twenty datasets, and its results are compared with state-of-the-art FS methods. In addition, the applicability of SMAMPA to real-world problems is evaluated by using it within a quantitative structure-activity relationship (QSAR) model. The obtained results show the high ability of the developed SMAMPA method to reduce the dimension of the tested datasets while increasing the prediction rate. In addition, it outperforms other FS techniques in terms of performance metrics.
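Metaheuristic FS methods of this kind typically encode a candidate solution as a binary mask over the features and score it with a fitness that trades classification error against the number of selected features. A minimal sketch of such a fitness, with a toy error function standing in for a real classifier (the `toy_error` model and the 0.99 weight are illustrative assumptions, not values from the paper):

```python
import random

def fs_fitness(mask, error_fn, alpha=0.99):
    """Typical wrapper feature-selection fitness (to be minimized): a weighted
    sum of classification error and the fraction of selected features."""
    n_sel = sum(mask)
    if n_sel == 0:
        return 1.0                       # selecting nothing is worst-case
    return alpha * error_fn(mask) + (1 - alpha) * n_sel / len(mask)

def toy_error(mask):
    """Hypothetical error model: only features 0 and 3 carry signal;
    every extra (noise) feature adds a small penalty."""
    informative = {0, 3}
    hits = sum(1 for i in informative if mask[i])
    noise = sum(1 for i, m in enumerate(mask) if m and i not in informative)
    return max(0.0, 0.5 - 0.25 * hits + 0.02 * noise)

# Stand-in for the metaheuristic search loop: sample random masks, keep the best.
random.seed(0)
dim = 8
best = min(([random.randint(0, 1) for _ in range(dim)] for _ in range(200)),
           key=lambda m: fs_fitness(m, toy_error))
print(best, fs_fitness(best, toy_error))
```

A real SMA/MPA hybrid would replace the random sampling with guided position updates, but it optimizes exactly this kind of two-term objective.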
|
44
|
Gradient-Based Optimizer (GBO): A Review, Theory, Variants, and Applications. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2022; 30:2431-2449. [PMID: 36597494 PMCID: PMC9801167 DOI: 10.1007/s11831-022-09872-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 12/14/2022] [Indexed: 06/17/2023]
Abstract
This paper presents a comprehensive survey of a new population-based algorithm, the gradient-based optimizer (GBO), and analyzes its major features. GBO is considered one of the most effective optimization algorithms and has been applied successfully across different problems and domains. This review organizes the related work on GBO into GBO variants and GBO applications, and evaluates the efficiency of GBO compared with other metaheuristic algorithms. Finally, the conclusions summarize the existing work on GBO, discuss its disadvantages, and propose future work. This review will be helpful for researchers and practitioners of GBO from a wide range of domains, including optimization, engineering, medicine, data mining, and clustering, as well as research on health, the environment, and public safety, and will aid interested readers by pointing to potential future research.
|
45
|
A Deep Batch Normalized Convolution Approach for Improving COVID-19 Detection from Chest X-ray Images. Pathogens 2022; 12:pathogens12010017. [PMID: 36678365 PMCID: PMC9860560 DOI: 10.3390/pathogens12010017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 12/12/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Pre-trained machine learning models have recently been widely used to detect COVID-19 automatically from X-ray images. Although these models can selectively retrain their layers for the desired task, the output remains biased due to the massive number of pre-trained weights and parameters. This paper proposes a novel batch-normalized convolutional neural network (BNCNN) model to identify COVID-19 cases from chest X-ray images in binary and multi-class frameworks, with the dual aim of extracting salient features that improve model performance over pre-trained image analysis networks while reducing computational complexity. The BNCNN model has three phases: data pre-processing to normalize and resize X-ray images, feature extraction to generate feature maps, and classification to predict labels based on the feature maps. Feature extraction uses four repetitions of a block comprising a convolution layer to learn suitable kernel weights for the feature maps, a batch normalization layer to address the internal covariate shift of feature maps, and a max-pooling layer to find the highest-level patterns by increasing the convolution span. The classifier section uses two repetitions of a block comprising a dense layer to learn complex feature maps, a batch normalization layer to standardize internal feature maps, and a dropout layer to avoid overfitting while aiding model generalization. Comparative analysis shows that, when applied to an open-access dataset, the proposed BNCNN model performs better than four comparative pre-trained models for three-way and two-way class datasets. Moreover, the BNCNN requires fewer parameters than the pre-trained models, suggesting better deployment suitability on low-resource devices.
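The batch normalization step the architecture leans on is simple to state: each feature is standardized over the mini-batch, then rescaled and shifted by learnable parameters. A NumPy sketch of the forward pass (training-mode statistics only; a full layer would also track running means for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch-normalization forward pass over a mini-batch (axis 0):
    standardize each feature, then apply the learnable scale/shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # shifted, scaled features
out = batch_norm(batch)
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
```

After normalization every feature column has (near-)zero mean and unit variance, which is what stabilizes the distribution of feature maps between the convolution blocks described above.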
|
46
|
Improved Dwarf Mongoose Optimization for Constrained Engineering Design Problems. JOURNAL OF BIONIC ENGINEERING 2022; 20:1263-1295. [PMID: 36530517 PMCID: PMC9745293 DOI: 10.1007/s42235-022-00316-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 11/26/2022] [Accepted: 11/29/2022] [Indexed: 06/17/2023]
Abstract
This paper proposes a modified version of the Dwarf Mongoose Optimization Algorithm (IDMO) for constrained engineering design problems. This optimization technique modifies the base algorithm (DMO) in three simple but effective ways. First, the alpha selection in IDMO differs from the DMO, where evaluating the probability value of each fitness is just a computational overhead and contributes nothing to the quality of the alpha or other group members: the fittest dwarf mongoose is selected as the alpha directly, and a new operator ω is introduced, which controls the alpha movement, thereby enhancing the exploration and exploitation abilities of the IDMO. Second, the scout group movements are modified by randomization to introduce diversity into the search process and explore unvisited areas. Finally, the babysitter exchange criterion is modified such that, once the criterion is met, the exchanged babysitters interact with the dwarf mongooses exchanging them to gain information about food sources and sleeping mounds, which could result in better-fitted mongooses instead of initializing them afresh as done in DMO; the counter is then reset to zero. The proposed IDMO was used to solve the classical and CEC 2020 benchmark functions and 12 continuous/discrete engineering optimization problems. The performance of the IDMO, using different performance metrics and statistical analysis, is compared with the DMO and eight other existing algorithms. In most cases, the results show that the solutions achieved by the IDMO are better than those obtained by the existing algorithms.
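The first modification is easy to see in code: DMO-style alpha selection runs a roulette wheel over fitness-derived probabilities, whereas IDMO simply takes the fittest individual. A sketch of both (the roulette form shown is the maximization-style probability `f_i / Σf`; details of the published operators may differ):

```python
def select_alpha_dmo(population, fitness, rand):
    """DMO-style alpha selection sketch: roulette wheel over the
    probabilities p_i = f_i / sum(f) -- extra work per iteration."""
    total = sum(fitness)
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f / total
        if rand <= acc:
            return ind
    return population[-1]

def select_alpha_idmo(population, fitness):
    """IDMO alpha selection sketch: take the fittest mongoose directly
    (minimization here, so the lowest fitness value), dropping the
    probability computation entirely."""
    return min(zip(population, fitness), key=lambda pf: pf[1])[0]

pop = ["m1", "m2", "m3", "m4"]
fit = [4.0, 1.5, 3.0, 2.5]
print(select_alpha_idmo(pop, fit))  # prints "m2"
```

Dropping the roulette wheel removes a per-iteration probability pass and guarantees the alpha is always the current best, which is the paper's stated rationale.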
|
47
|
Chaotic honey badger algorithm for single and double photovoltaic cell/module. FRONTIERS IN ENERGY RESEARCH 2022; 10. [DOI: 10.3389/fenrg.2022.1011887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
The accuracy of PV cell/module/array characteristics is mainly influenced by their circuit elements, based on established circuit models, under varied radiation and temperature operating conditions. As a result, this study provides a modified Honey Badger Algorithm (HBA) to identify reliable parameters of diode models for various PV cells and modules. The approach relies on modifying the settings of the 2D chaotic Henon map to improve HBA's search ability. A series of experiments is conducted using the RTC France cell and SLP080 solar module datasets, for both the single- and double-diode models, to validate the performance of the presented technique, which is also compared to other state-of-the-art methods. Furthermore, a variety of statistical and non-parametric tests are used. The findings reveal that the suggested method outperforms competing strategies in terms of accuracy, consistency, and convergence rate. Moreover, the primary outcomes clarify the superiority of the proposed modified optimizer in determining accurate parameters that provide a close match between the estimated and measured datasets.
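The Henon map at the heart of this modification is a two-line recurrence; chaotic metaheuristics typically substitute its (suitably normalized) iterates for uniform random numbers to diversify the search. A sketch with the classical parameters a = 1.4, b = 0.3:

```python
def henon_sequence(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Generate n points of the 2-D Henon map
       x_{k+1} = 1 - a * x_k^2 + y_k,   y_{k+1} = b * x_k.
    In a chaotic metaheuristic, normalized x-values replace uniform
    random draws to improve the diversity of the search."""
    pts, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x   # b * x uses the previous x
        pts.append((x, y))
    return pts

seq = henon_sequence(1000)
xs = [p[0] for p in seq]
print(min(xs), max(xs))  # the classical attractor keeps x roughly within (-1.3, 1.3)
```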
|
48
|
Diversity-Based Evolutionary Population Dynamics: A New Operator for Grey Wolf Optimizer. Processes (Basel) 2022; 10:2615. [DOI: 10.3390/pr10122615] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/02/2023] Open
Abstract
Evolutionary Population Dynamics (EPD) refers to eliminating poor individuals in nature, the opposite of survival of the fittest. Although this method can improve the median of the whole population in meta-heuristic algorithms, it suffers from poor exploration capability on high-dimensional problems. This paper proposes a novel EPD operator to improve the search process. Whereas the primary EPD mainly improves the fitness of the worst individuals in the population, and hence we name it the Fitness-Based EPD (FB-EPD), our proposed EPD mainly improves the diversity of the best individuals, and hence we name it the Diversity-Based EPD (DB-EPD). The proposed method is applied to the Grey Wolf Optimizer (GWO) and named DB-GWO-EPD. In this algorithm, the three most diversified individuals are first identified at each iteration, and then half of the best-fitted individuals are eliminated and repositioned around these diversified agents with equal probability. This process can free merged best individuals located in a densely populated region and transfer them to diversified and thus less densely populated regions of the search space, making the search agents explore the whole search space. The proposed DB-GWO-EPD is tested on 13 high-dimensional and shifted classical benchmark functions, the 29 test problems of the CEC2017 test suite, and four constrained engineering problems. The results obtained on the classical test problems are compared to GWO, FB-GWO-EPD, and four other popular and newly proposed optimization algorithms: the Aquila Optimizer (AO), Flow Direction Algorithm (FDA), Arithmetic Optimization Algorithm (AOA), and Gradient-based Optimizer (GBO). The experiments demonstrate the significant superiority of the proposed algorithm on a majority of the test functions, recommending the application of the proposed EPD operator to other meta-heuristics to improve their performance.
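One DB-EPD step, as described above, can be sketched as follows. The choices of "farthest from the population centroid" as the diversity measure and Gaussian jitter around the chosen anchor are illustrative assumptions; the paper defines its own operator details.

```python
import math
import random

def db_epd(pop, fitness, rng=random):
    """One diversity-based EPD step (sketch): pick the three individuals
    farthest from the population centroid as the 'most diversified', then
    reposition half of the best-fitted individuals near one of them,
    chosen with equal probability, with small random jitter."""
    dim = len(pop[0])
    centroid = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    diversified = sorted(pop, key=lambda x: math.dist(x, centroid), reverse=True)[:3]
    order = sorted(range(len(pop)), key=lambda i: fitness[i])   # best first (minimization)
    for i in order[: len(pop) // 2]:
        anchor = rng.choice(diversified)
        pop[i] = [a + rng.gauss(0, 0.1) for a in anchor]        # reposition near anchor
    return pop

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
fit = [sum(v * v for v in x) for x in pop]
new_pop = db_epd([x[:] for x in pop], fit)
print(len(new_pop))
```

Note the inversion relative to FB-EPD: it is the *best* half that gets scattered toward the diversified agents, while the worst individuals are left in place.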
|
49
|
Awad A, D.Abdellatif A, Alburaikan A, Khalifa H, Elaziz MA, Abualigah L, M.Abdelmouty A. A Novel Hybrid Arithmetic Optimization Algorithm and Salp Swarm Algorithm for Data Placement in Cloud Computing. [DOI: 10.21203/rs.3.rs-2266856/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
In recent years, the Internet of Things (IoT) has led to the spread of cloud computing devices in the commercial, industrial, and agricultural sectors, and the use of cloud computing services is increasing exponentially across IoT-based applications. Fog computing has helped address issues in cloud computing environments: it reduces load balancing, processing, bandwidth, and storage costs by replicating data files from the cloud to the networks closest to sensors in different geographic locations, whereas traditional cloud computing increases the response and processing times of data replication. Replication strategies are needed to meet users' requirements across different geographic locations while effectively harnessing fog computing capabilities to optimally select and place replicas of IoT service data on cloud resources. In this strategy, the data replication problem is formulated as a multi-objective optimization problem that considers the heterogeneity of resources, the least-cost path, distance, and application replication requirements. First, a new hybrid metaheuristic method combining the Arithmetic Optimization Algorithm (AOA) and the Salp Swarm Algorithm (SSA), called AOASSA, is proposed to handle the selection and placement of data replicas in fog computing. Second, the Floyd algorithm is used to determine the least-cost path, distance, and data transmission between different geographic locations. To validate the AOASSA strategy, a set of experiments was carried out, and its performance was tested and compared with other swarm intelligence methods. The experimental results show the superiority of AOASSA over its competitors in terms of performance measures such as least-cost path, distance, and bandwidth.
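The Floyd (Floyd-Warshall) algorithm used for the least-cost-path step computes all-pairs shortest paths by relaxing every pair through each intermediate node in turn. A self-contained sketch on a small directed graph (the adjacency matrix is an illustrative example, not data from the paper):

```python
def floyd_warshall(w):
    """All-pairs least-cost paths (Floyd-Warshall). `w` is an adjacency
    matrix of edge costs, with float('inf') marking missing edges."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):                      # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
dist = floyd_warshall(graph)
print(dist[0][3])  # prints 6: path 0 -> 1 -> 2 -> 3 costs 3 + 2 + 1
```

In the replication setting, the nodes would be geographic locations and the edge costs transmission costs, with `dist` feeding the least-cost-path term of the multi-objective fitness.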
|
50
|
Evaluating the Applications of Dendritic Neuron Model with Metaheuristic Optimization Algorithms for Crude-Oil-Production Forecasting. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1674. [PMID: 36421530 PMCID: PMC9689334 DOI: 10.3390/e24111674] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/14/2022] [Accepted: 11/15/2022] [Indexed: 06/16/2023]
Abstract
The forecasting and prediction of crude oil production are necessary to enable governments to compile their economic plans. Artificial neural networks (ANNs) have been widely used in forecasting and prediction applications, including in the oil industry. The dendritic neural regression (DNR) model is an ANN that has shown promising performance in time-series prediction. The DNR can deal with the nonlinear characteristics of historical data in time-series forecasting applications, but it faces certain limitations in training and configuring its parameters. To this end, we utilized the power of metaheuristic (MH) optimization algorithms to boost the training process and optimize its parameters. A comprehensive evaluation is presented in this study with six MH optimization algorithms used for this purpose: the whale optimization algorithm (WOA), particle swarm optimization (PSO), genetic algorithm (GA), sine-cosine algorithm (SCA), differential evolution (DE), and harmony search (HS). We used historical records of crude oil production from seven real-world oilfields (the Tahe oilfields in China), provided by a local partner. Extensive evaluation experiments were carried out using several performance measures to study the validity of the DNR with MH optimization methods in time-series applications. The findings of this study confirm the applicability of MH methods with the DNR: they improved the performance of the original DNR, and the PSO and WOA achieved the best performance compared with the other methods.
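A dendritic neuron forward pass, in its commonly described form (synaptic sigmoids, per-branch products, a summed membrane, and a soma sigmoid), is compact enough to sketch. This is an assumed generic structure, not the paper's exact DNR; all weights, thresholds, and gains below are illustrative, and a metaheuristic such as PSO or WOA would tune `W` and `Q` in place of gradient descent:

```python
import math

def dnm_forward(x, W, Q, k_syn=5.0, k_soma=5.0, theta=0.5):
    """Forward pass of a generic dendritic neuron model (assumed structure):
    synaptic sigmoids -> per-branch products (dendritic layer) ->
    summed membrane potential -> soma sigmoid output."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    branches = []
    for w_row, q_row in zip(W, Q):
        y = 1.0
        for xi, w, q in zip(x, w_row, q_row):
            y *= sig(k_syn * (w * xi - q))   # synaptic layer
        branches.append(y)                   # dendritic (product) layer
    v = sum(branches)                        # membrane layer
    return sig(k_soma * (v - theta))         # soma layer

# Two inputs, two dendritic branches; W and Q are hypothetical parameters.
out = dnm_forward(x=[0.2, 0.8],
                  W=[[1.0, 1.0], [0.5, -0.5]],
                  Q=[[0.1, 0.1], [0.0, 0.3]])
print(out)
```

The multiplicative branches are what give the model its nonlinear feature interactions, and the flat parameter arrays `W` and `Q` are exactly the search space the MH optimizers explore.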
|