1
Mangalampalli S, Karri GR, Ratnamani MV, Mohanty SN, Jabr BA, Ali YA, Ali S, Abdullaeva BS. Efficient deep reinforcement learning based task scheduler in multi cloud environment. Sci Rep 2024; 14:21850. PMID: 39300104; DOI: 10.1038/s41598-024-72774-5.
Abstract
The task scheduling problem (TSP) is a major challenge in the cloud computing paradigm, as the number of tasks arriving at a cloud application platform varies over time and the tasks have variable lengths and runtime requirements. These tasks may be generated by various heterogeneous resources, and their arrival at the cloud console directly affects the performance of the cloud paradigm by increasing makespan, energy consumption, and resource costs. Traditional task scheduling algorithms cannot handle such complex workloads. Many authors have developed task scheduling algorithms using metaheuristic techniques and hybrid approaches, but these algorithms yield only near-optimal solutions; the TSP remains a highly challenging and dynamic problem because it is NP-hard. Therefore, to tackle the TSP and schedule tasks effectively in the cloud paradigm, we formulated an Adaptive Task Scheduler that segments all tasks arriving at the cloud console into sub-tasks and feeds them to a scheduler modeled with an Improved Asynchronous Advantage Actor-Critic algorithm (IA3C) to generate schedules. This scheduling process is carried out in two stages. In the first stage, all incoming tasks are segmented into sub-tasks; these sub-tasks are then grouped according to their size, execution time, and communication time and fed to the (ATSIA3C) scheduler. In the second stage, the scheduler checks the above constraints and dispatches the sub-tasks to VMs with suitable processing capacity residing in datacenters. The proposed ATSIA3C is simulated on CloudSim. Extensive simulations are conducted using both fabricated worklogs and real-time supercomputing worklogs. Our proposed mechanism is evaluated against the baseline algorithms RATS-HM, AINN-BPSO, and MOABCQ. The results show that the proposed ATSIA3C outperforms existing task schedulers, improving makespan by 70.49%, resource cost by 77.42%, and energy consumption by 74.24% in a multi-cloud environment.
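As a rough illustration of the actor-critic dispatch idea sketched in this abstract, the following is a minimal, single-worker advantage actor-critic loop in Python. It is not the authors' ATSIA3C (which is asynchronous and evaluated in CloudSim); the VM speeds, feature scaling, and negative-makespan-increment reward are assumptions made purely for illustration.

```python
# Minimal advantage actor-critic sketch for mapping sub-tasks to VMs.
# NOT the authors' ATSIA3C: a single-worker simplification with a linear
# policy/value model and a toy reward = negative increase in makespan.
import numpy as np

rng = np.random.default_rng(0)

NUM_VMS = 4
VM_MIPS = np.array([500.0, 1000.0, 1500.0, 2000.0])  # assumed VM speeds
GAMMA, LR_PI, LR_V = 0.99, 0.01, 0.01

feat_dim = NUM_VMS + 1                      # VM ready times + task length
theta_pi = np.zeros((feat_dim, NUM_VMS))    # actor (policy) weights
theta_v = np.zeros(feat_dim)                # critic (value) weights

def features(ready_times, task_len):
    return np.concatenate([ready_times / 100.0, [task_len / 10000.0]])

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def run_episode(task_lengths):
    """Schedule one batch of sub-tasks, updating actor and critic online."""
    global theta_pi, theta_v
    ready = np.zeros(NUM_VMS)               # per-VM finish times
    for length in task_lengths:
        s = features(ready, length)
        probs = softmax(s @ theta_pi)
        a = rng.choice(NUM_VMS, p=probs)    # pick a VM stochastically

        old_makespan = ready.max()
        ready[a] += length / VM_MIPS[a]     # execution time on chosen VM
        reward = -(ready.max() - old_makespan)

        s_next = features(ready, length)
        # One-step TD error serves as the advantage estimate.
        td_error = reward + GAMMA * (s_next @ theta_v) - (s @ theta_v)

        # Critic update (TD(0)) and actor update (policy gradient).
        theta_v += LR_V * td_error * s
        grad_logp = -probs
        grad_logp[a] += 1.0
        theta_pi += LR_PI * td_error * np.outer(s, grad_logp)
    return ready.max()

if __name__ == "__main__":
    tasks = rng.integers(1000, 20000, size=50).astype(float)  # lengths in MI
    for episode in range(200):
        makespan = run_episode(tasks)
    print(f"final makespan: {makespan:.2f} s")
```

In the full asynchronous setting, several such workers would run in parallel against a shared parameter set; the sketch keeps one worker so the update rule stays visible.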
Affiliation(s)
- Sudheer Mangalampalli: Department of CSE, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India.
- Ganesh Reddy Karri: School of Computer Science and Engineering, VIT-AP University, Amaravati, AP, 522237, India.
- M V Ratnamani: Aditya Institute of Technology and Management, Tekkali, Srikakulam, AP, 530021, India.
- Sachi Nandan Mohanty: School of Computer Science and Engineering, VIT-AP University, Amaravati, AP, 522237, India.
- Bander A Jabr: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, 11543, Riyadh, Saudi Arabia.
- Yasser A Ali: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, 11543, Riyadh, Saudi Arabia.
- Shahid Ali: Battery Management System, Research and Development Center, EVE Lithium Energy Company, Huizhou, People's Republic of China.
- Barno Sayfutdinovna Abdullaeva: Department of Mathematics and Information Technologies, Vice-Rector for Scientific Affairs, Tashkent State Pedagogical University, Tashkent, Uzbekistan.
2
Mangalampalli S, Karri GR, Elngar AA. An Efficient Trust-Aware Task Scheduling Algorithm in Cloud Computing Using Firefly Optimization. Sensors (Basel, Switzerland) 2023; 23:1384. PMID: 36772424; PMCID: PMC9918964; DOI: 10.3390/s23031384.
Abstract
Task scheduling in the cloud computing paradigm poses a challenge for researchers, as the workloads that arrive on cloud platforms are dynamic and heterogeneous, and scheduling these heterogeneous tasks to appropriate virtual resources is therefore a huge challenge. The inappropriate assignment of tasks to virtual resources degrades the quality of service, leads to violations of SLA metrics, and ultimately erodes the cloud user's trust in the cloud provider. Therefore, to preserve trust in the cloud provider and to improve the scheduling process in the cloud paradigm, we propose an efficient task scheduling algorithm that considers the priorities of tasks as well as of virtual machines, thereby scheduling tasks accurately to appropriate VMs. This scheduling algorithm is modeled using firefly optimization. The workloads consist of fabricated datasets with different distributions as well as the real-time worklogs of HPC2N and NASA. The algorithm was implemented in the CloudSim simulation environment, and the proposed approach is compared against the baseline approaches ACO, PSO, and GA. The simulation results reveal that our proposed approach significantly outperforms the baselines in terms of makespan, availability, success rate, and turnaround efficiency.
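As a toy illustration of how firefly optimization can search task-to-VM assignments, a minimal sketch follows. It is not the paper's priority-aware scheduler: the task lengths, VM speeds, firefly parameters, and the makespan-only fitness are assumptions made for illustration.

```python
# Minimal firefly-optimization sketch for a task-to-VM assignment.
# Fireflies move toward brighter (lower-makespan) fireflies with an
# attractiveness that decays with distance, plus a small random step.
import numpy as np

rng = np.random.default_rng(1)

TASK_MI = rng.integers(1000, 20000, size=30).astype(float)  # assumed lengths
VM_MIPS = np.array([500.0, 1000.0, 2000.0])                 # assumed speeds
N_FIREFLIES, N_ITER = 20, 100
BETA0, GAMMA, ALPHA = 1.0, 0.1, 0.2

def makespan(position):
    """Decode a continuous position into a task->VM mapping and score it."""
    assignment = np.clip(np.rint(position), 0, len(VM_MIPS) - 1).astype(int)
    finish = np.zeros(len(VM_MIPS))
    for task, vm in zip(TASK_MI, assignment):
        finish[vm] += task / VM_MIPS[vm]
    return finish.max()

# Each firefly is a continuous vector with one dimension per task.
pop = rng.uniform(0, len(VM_MIPS) - 1, size=(N_FIREFLIES, len(TASK_MI)))
fitness = np.array([makespan(p) for p in pop])

for _ in range(N_ITER):
    for i in range(N_FIREFLIES):
        for j in range(N_FIREFLIES):
            if fitness[j] < fitness[i]:          # j is brighter (lower makespan)
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = BETA0 * np.exp(-GAMMA * r2)
                pop[i] += beta * (pop[j] - pop[i]) \
                    + ALPHA * (rng.random(len(TASK_MI)) - 0.5)
                pop[i] = np.clip(pop[i], 0, len(VM_MIPS) - 1)
                fitness[i] = makespan(pop[i])

best = pop[np.argmin(fitness)]
print("best makespan:", makespan(best))
```

The paper additionally weights task and VM priorities into the fitness; here the fitness is deliberately reduced to makespan so the movement rule stays readable.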
Affiliation(s)
- Sudheer Mangalampalli: School of Computer Science and Engineering, VIT-AP University, Amaravati 522237, India.
- Ganesh Reddy Karri: School of Computer Science and Engineering, VIT-AP University, Amaravati 522237, India.
- Ahmed A. Elngar: Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef 62511, Egypt.
3
Multi Objective Trust aware task scheduling algorithm in cloud computing using Whale Optimization. Journal of King Saud University - Computer and Information Sciences 2023. DOI: 10.1016/j.jksuci.2023.01.016.
4
Jangu N, Raza Z. Improved Jellyfish Algorithm-based multi-aspect task scheduling model for IoT tasks over fog integrated cloud environment. Journal of Cloud Computing: Advances, Systems and Applications 2022. DOI: 10.1186/s13677-022-00376-5.
Abstract
Corporations and enterprises creating IoT-based systems frequently use fog computing integrated with cloud computing to harness the benefits offered by both. These computing paradigms use virtualization and a pay-as-you-go strategy to provide IT resources, including CPU, memory, network, and storage. Resource management in such a hybrid environment becomes a challenging task. This problem is exacerbated in the IoT environment, which generates deadline-driven, heterogeneous data demanding real-time processing. This work proposes an efficient two-step scheduling algorithm comprising a bi-factor task classification phase based on deadline and priority, and a scheduling phase using an enhanced artificial Jellyfish Search Optimizer (JS) proposed as the Improved Jellyfish Algorithm (IJFA). The model considers a variety of cloud and fog resource parameters, including speed, capacity, task size, number of tasks, and number of virtual machines, for resource provisioning in a fog-integrated cloud environment. The model has been tested on real-time task scenarios with task counts covering both smaller workloads and relatively larger workloads that match real-world situations. The model addresses the Quality of Service (QoS) goals of minimizing the batch makespan, lowering batch execution costs, and increasing resource utilization. Simulation results prove the effectiveness of the proposed model.
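A minimal sketch of the bi-factor classification phase described above follows; the deadline window, priority threshold, and task fields are illustrative assumptions, and the IJFA scheduling phase itself is not reproduced here.

```python
# Minimal sketch of a bi-factor classification step: tasks are bucketed by
# deadline urgency and priority before being handed to the optimizer.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    length_mi: float     # task size in million instructions
    deadline_s: float    # absolute deadline in seconds
    priority: int        # higher value = more important

def classify(tasks, now, urgent_window_s=30.0, high_priority=5):
    """Split tasks into four queues: urgent/high, urgent/low, lax/high, lax/low."""
    buckets = {"urgent_high": [], "urgent_low": [], "lax_high": [], "lax_low": []}
    for t in tasks:
        urgent = (t.deadline_s - now) <= urgent_window_s
        high = t.priority >= high_priority
        key = ("urgent_" if urgent else "lax_") + ("high" if high else "low")
        buckets[key].append(t)
    # Each bucket would be handed to the scheduling phase in this order.
    order = ["urgent_high", "urgent_low", "lax_high", "lax_low"]
    return [buckets[k] for k in order]

if __name__ == "__main__":
    demo = [Task(1, 8000, 20, 7), Task(2, 3000, 120, 2), Task(3, 5000, 25, 1)]
    for name, queue in zip(["UH", "UL", "LH", "LL"], classify(demo, now=0.0)):
        print(name, [t.task_id for t in queue])
```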
5
Dynamic Load Balancing Techniques in the IoT: A Review. Symmetry (Basel) 2022. DOI: 10.3390/sym14122554.
Abstract
The Internet of things (IoT) extends the Internet space by allowing smart things to sense and/or interact with the physical environment and communicate with other physical objects (or things) around us. In IoT, sensors, actuators, smart devices, cameras, protocols, and cloud services are used to support many intelligent applications such as environmental monitoring, traffic monitoring, remote monitoring of patients, security surveillance, and smart home automation. To optimize the usage of an IoT network, certain challenges must be addressed such as energy constraints, scalability, reliability, heterogeneity, security, privacy, routing, quality of service (QoS), and congestion. To avoid congestion in IoT, efficient load balancing (LB) is needed for distributing traffic loads among different routes. To this end, this survey presents the IoT architectures and the networking paradigms (i.e., edge–fog–cloud paradigms) adopted in these architectures. Then, it analyzes and compares previous related surveys on LB in the IoT. It reviews and classifies dynamic LB techniques in the IoT for cloud and edge/fog networks. Lastly, it presents some lessons learned and open research issues.
6
Energy-Aware Bag-of-Tasks Scheduling in the Cloud Computing System Using Hybrid Oppositional Differential Evolution-Enabled Whale Optimization Algorithm. Energies 2022. DOI: 10.3390/en15134571.
Abstract
Bag-of-Tasks (BoT) scheduling over cloud computing resources, called the Cloud Bag-of-Tasks Scheduling (CBS) problem, is a well-known NP-hard optimization problem. The Whale Optimization Algorithm (WOA) is an effective method for CBS problems but still requires further improvement in exploration ability, solution diversity, convergence speed, and ensuring an adequate exploration–exploitation tradeoff to produce superior scheduling solutions. To remove these limitations of WOA, a hybrid oppositional differential evolution-enabled WOA (called h-DEWOA) approach is introduced to tackle CBS problems and minimize workload makespan and energy consumption. The proposed h-DEWOA incorporates chaotic maps, opposition-based learning (OBL), differential evolution (DE), and a fitness-based balancing mechanism into the standard WOA method, resulting in enhanced exploration, faster convergence, and an adequate exploration–exploitation tradeoff throughout the algorithm's execution. In addition, an efficient allocation heuristic is added to the h-DEWOA method to improve resource assignment. CEA-Curie and HPC2N real cloud workloads are used for performance evaluation of the scheduling algorithms in the CloudSim simulator. Two series of experiments were conducted for performance comparison: one with WOA-based heuristics and another with non-WOA-based metaheuristics. Results of the first series reveal that h-DEWOA improves makespan by 5.79–13.38% (CEA-Curie workloads) and 5.03–13.80% (HPC2N workloads), and energy consumption by 3.21–14.70% (CEA-Curie workloads) and 10.84–19.30% (HPC2N workloads) over well-known WOA-based metaheuristics. Similarly, h-DEWOA also achieved significant gains over recent state-of-the-art non-WOA-based metaheuristics in the second series of experiments. Statistical tests and box plots also revealed the robustness of the proposed h-DEWOA algorithm.
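One of the named ingredients, opposition-based learning (OBL) initialization, can be sketched as follows; the population size, bounds, and toy makespan model are assumptions, and the full chaotic-map/DE/WOA hybrid is not reproduced here.

```python
# Minimal sketch of opposition-based learning (OBL) initialization: generate
# a random population plus its opposite points, then keep the best half.
import numpy as np

rng = np.random.default_rng(2)

TASK_MI = rng.integers(1000, 20000, size=40).astype(float)  # assumed lengths
VM_MIPS = np.array([500.0, 1000.0, 1500.0, 2000.0])         # assumed speeds
LOW, HIGH = 0.0, float(len(VM_MIPS) - 1)

def makespan(position):
    vm = np.clip(np.rint(position), LOW, HIGH).astype(int)
    finish = np.zeros(len(VM_MIPS))
    for mi, v in zip(TASK_MI, vm):
        finish[v] += mi / VM_MIPS[v]
    return finish.max()

def obl_init(pop_size, dim):
    """Random population plus its opposite population, best half retained."""
    pop = rng.uniform(LOW, HIGH, size=(pop_size, dim))
    opposite = LOW + HIGH - pop              # opposite point of each candidate
    combined = np.vstack([pop, opposite])
    fitness = np.array([makespan(p) for p in combined])
    best_idx = np.argsort(fitness)[:pop_size]
    return combined[best_idx], fitness[best_idx]

if __name__ == "__main__":
    population, fit = obl_init(pop_size=20, dim=len(TASK_MI))
    print("best initial makespan:", fit.min())
```

The resulting population would then be evolved by the WOA/DE search loop; only the initialization step is shown here.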
7
A Cloud Computing-Based Modified Symbiotic Organisms Search Algorithm (AI) for Optimal Task Scheduling. Sensors (Basel, Switzerland) 2022; 22:1674. PMID: 35214574; PMCID: PMC8878445; DOI: 10.3390/s22041674.
Abstract
The search algorithm based on symbiotic organisms' interactions is a relatively recent bio-inspired algorithm from the swarm intelligence field for solving numerical optimization problems. It optimizes applications by simulating the symbiotic relationships among distinct species in an ecosystem. The task scheduling problem is NP-complete, which makes it hard to obtain an optimal solution, especially for large-scale task sets. This paper proposes a modified symbiotic organisms search-based scheduling algorithm for the efficient mapping of heterogeneous tasks to cloud resources of different capacities. The significant contribution of this technique is the simplified representation of the algorithm's mutualism process, which uses equity as a measure of the relationship characteristics or efficiency of species in the current ecosystem moving to the next generation. These relational characteristics are achieved by replacing the original mutual vector, which uses an arithmetic mean to measure the mutual characteristics, with a geometric mean that enhances the survival advantage of two distinct species. The modified symbiotic organisms search algorithm (G_SOS) aims to minimize task execution time (makespan), cost, response time, and degree of imbalance, and to improve the convergence speed toward an optimal solution in an IaaS cloud. The performance of the proposed technique was evaluated using the CloudSim toolkit simulator; the improvement of G_SOS over classical SOS and PSO-SA in terms of makespan minimization ranges from 0.61% to 20.08% and from 1.92% to 25.68%, respectively, over large-scale tasks spanning 100 to 1000 Million Instructions (MI). The solutions are found to be better than those of the standard SOS technique and PSO-SA.
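The modified mutualism phase, with a geometric-mean mutual vector in place of the arithmetic mean, can be sketched as follows; the bounds and benefit factors follow the common SOS formulation, and the rest of G_SOS is not reproduced here.

```python
# Minimal sketch of a mutualism phase where the mutual vector is the
# element-wise geometric mean of two organisms rather than the arithmetic
# mean used in standard SOS.
import numpy as np

rng = np.random.default_rng(3)

def mutualism_step(x_i, x_j, x_best, low, high, use_geometric=True):
    """Return updated copies of organisms i and j after one mutualism phase."""
    if use_geometric:
        mutual = np.sqrt(np.maximum(x_i * x_j, 0.0))   # geometric-mean mutual vector
    else:
        mutual = (x_i + x_j) / 2.0                     # classical SOS mutual vector
    bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # benefit factors in {1, 2}
    new_i = x_i + rng.random(x_i.size) * (x_best - mutual * bf1)
    new_j = x_j + rng.random(x_j.size) * (x_best - mutual * bf2)
    return np.clip(new_i, low, high), np.clip(new_j, low, high)

if __name__ == "__main__":
    dim, low, high = 10, 0.0, 3.0          # e.g. 10 tasks mapped onto 4 VMs
    x_i, x_j = rng.uniform(low, high, dim), rng.uniform(low, high, dim)
    x_best = rng.uniform(low, high, dim)
    ni, nj = mutualism_step(x_i, x_j, x_best, low, high)
    print("updated organism i:", np.round(ni, 2))
```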
8
Abstract
Cloud computing systems are a kind of shared infrastructure that has been in demand since its inception. In these systems, clients are able to access existing services based on their needs, without knowing where a service is located or how it is delivered, and they pay only for the services they use. Like other systems, cloud computing faces challenges. Because of the wide array of clients and the variety of services available, scheduling and, of course, energy consumption are essential challenges of this system. Services should therefore be provisioned to users in a way that minimizes both provider and consumer costs as well as energy consumption, which requires an optimal scheduling algorithm. In this paper, we present a two-step, energy- and time-aware hybrid method for scheduling tasks that combines a Genetic Algorithm with the Energy-Conscious Scheduling (ECS) heuristic. The first step prioritizes tasks, and the second step assigns tasks to processors. We prioritized tasks and generated the initial chromosomes, and used the Energy-Conscious Scheduling heuristic, an energy-aware model, to assign tasks to processors. As the simulation results show, the proposed algorithm is able to outperform the other methods.
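A minimal sketch of the two-step idea, a GA over task priority orders decoded by an energy-conscious greedy assignment, follows; the task lengths, processor speeds and power values, and the cost weighting are illustrative assumptions rather than the paper's exact ECS model.

```python
# Minimal sketch: a GA evolves task priority orders (permutation chromosomes);
# each order is decoded by a greedy, energy-aware assignment to processors.
import numpy as np

rng = np.random.default_rng(4)

TASK_MI = rng.integers(1000, 20000, size=20).astype(float)  # assumed lengths
PROC_MIPS = np.array([800.0, 1200.0, 2000.0])               # assumed speeds
PROC_WATT = np.array([40.0, 70.0, 130.0])                   # assumed active power
W_TIME, W_ENERGY = 0.5, 0.5                                 # assumed cost weighting

def decode(order):
    """Assign tasks in priority order to the processor with the lowest weighted cost."""
    ready = np.zeros(len(PROC_MIPS))
    energy = 0.0
    for t in order:
        exec_t = TASK_MI[t] / PROC_MIPS
        cost = W_TIME * (ready + exec_t) + W_ENERGY * (exec_t * PROC_WATT) / 1000.0
        p = int(np.argmin(cost))
        ready[p] += exec_t[p]
        energy += exec_t[p] * PROC_WATT[p]
    return W_TIME * ready.max() + W_ENERGY * energy / 1000.0

def order_crossover(a, b):
    """Order-preserving crossover for permutation chromosomes."""
    n = len(a)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    child = np.full(n, -1)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child[i:j]]
    child[[k for k in range(n) if child[k] == -1]] = fill
    return child

pop = [rng.permutation(len(TASK_MI)) for _ in range(30)]
for _ in range(100):
    elite = sorted(pop, key=decode)[:10]
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        c = order_crossover(a, b)
        if rng.random() < 0.2:                  # swap mutation
            i, j = rng.choice(len(c), size=2, replace=False)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children

print("best weighted cost:", round(decode(min(pop, key=decode)), 3))
```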