1. Kim J, Bang J, Lee J. Adaptive Dataset Management Scheme for Lightweight Federated Learning in Mobile Edge Computing. Sensors (Basel) 2024; 24:2579. PMID: 38676197; PMCID: PMC11053995; DOI: 10.3390/s24082579.
Abstract
Federated learning (FL) in mobile edge computing has emerged as a promising machine-learning paradigm for the Internet of Things, enabling distributed training without exposing private data. It allows multiple mobile devices (MDs) to collaboratively build a global model. FL not only addresses the issue of private data exposure but also alleviates the burden on a centralized server that is common in conventional centralized learning. However, a critical issue in FL is the computing load that local training imposes on MDs, which often have limited computing capabilities. This limitation makes it difficult for MDs to contribute actively to the training process. To tackle this problem, this paper proposes an adaptive dataset management (ADM) scheme that aims to reduce the burden of local training on MDs. Through an empirical study of how dataset size influences accuracy improvement over communication rounds, we confirm that the impact of dataset size on accuracy gain diminishes as training progresses. Based on this finding, we introduce a discount factor that captures this diminishing impact over communication rounds. We then present a theoretical framework for the ADM problem, which determines how much the dataset should be reduced per class while accounting for both the proposed discount factor and the Kullback-Leibler divergence (KLD). Since the ADM problem is a non-convex optimization problem, we propose a greedy heuristic algorithm that finds a suboptimal solution with low complexity. Simulation results demonstrate that our proposed scheme effectively alleviates the training burden on MDs while maintaining acceptable training accuracy.
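The abstract scores class-wise dataset reductions with the Kullback-Leibler divergence. As a reference point only (the paper's actual scoring rule is not reproduced here), the standard discrete KLD it invokes can be computed as follows; the distributions are illustrative, not from the paper:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    class distributions, in nats. eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy example: a device's skewed local label distribution vs. the
# uniform global distribution over 4 classes. A larger KLD indicates
# a more non-IID local dataset.
local = [0.7, 0.1, 0.1, 0.1]
uniform = [0.25, 0.25, 0.25, 0.25]
print(kl_divergence(local, uniform))
```

A reduction strategy that keeps the post-reduction distribution's KLD to the global distribution small preserves the statistical usefulness of the local data while shrinking it.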
Affiliation(s)
- Joohyung Lee
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
2. Shen L, Li B, Zhu X. Robust Offloading for Edge Computing-Assisted Sensing and Communication Systems: A Deep Reinforcement Learning Approach. Sensors (Basel) 2024; 24:2489. PMID: 38676106; PMCID: PMC11054745; DOI: 10.3390/s24082489.
Abstract
In this paper, we consider an integrated sensing, communication, and computation (ISCC) system to alleviate the spectrum congestion and computation burden problem. Specifically, while serving communication users, a base station (BS) actively engages in sensing targets and collaborates seamlessly with the edge server to concurrently process the acquired sensing data for efficient target recognition. A significant challenge in edge computing systems arises from the inherent uncertainty in computations, mainly stemming from the unpredictable complexity of tasks. With this consideration, we address the computation uncertainty by formulating a robust communication and computing resource allocation problem in ISCC systems. The primary goal of the system is to minimize total energy consumption while adhering to perception and delay constraints. This is achieved through the optimization of transmit beamforming, offloading ratio, and computing resource allocation, effectively managing the trade-offs between local execution and edge computing. To overcome this challenge, we employ a Markov decision process (MDP) in conjunction with the proximal policy optimization (PPO) algorithm, establishing an adaptive learning strategy. The proposed algorithm stands out for its rapid training speed, ensuring compliance with latency requirements for perception and computation in applications. Simulation results highlight its robustness and effectiveness within ISCC systems compared to baseline approaches.
Affiliation(s)
- Li Shen
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Bin Li
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiaojie Zhu
- Division of Computer Science, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia
3. Bai J, Zhu S, Ji H. Blockchain Based Decentralized and Proactive Caching Strategy in Mobile Edge Computing Environment. Sensors (Basel) 2024; 24:2279. PMID: 38610489; PMCID: PMC11014043; DOI: 10.3390/s24072279.
Abstract
In the mobile edge computing (MEC) environment, edge caching can provide timely data response services for intelligent scenarios. However, the limited storage capacity of edge nodes and the possibility of malicious node behavior make it challenging to select the cached contents and to realize decentralized, secure data caching. This paper proposes a blockchain-based decentralized and proactive caching strategy for the MEC environment to address this problem. The novelty lies in adopting blockchain in an MEC environment together with a proactive caching strategy based on node utility, for which the corresponding optimization problem is formulated; the blockchain provides a secure and reliable service environment. The optimal caching strategy is obtained using linear relaxation and the interior point method. A content caching system involves a trade-off between cache space and node utility, which the proposed caching strategy resolves, as well as a trade-off between the consensus delay of the blockchain and the caching latency of content, for which an offline consensus authentication method is adopted to reduce the influence of consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Finally, simulation experiments showed that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% increases, respectively, compared to the greedy content caching algorithm.
Affiliation(s)
- Houling Ji
- School of Computer Science, Yangtze University, Jingzhou 434023, China
4. Zhuang W, Xing F, Lu Y. Task Offloading Strategy for Unmanned Aerial Vehicle Power Inspection Based on Deep Reinforcement Learning. Sensors (Basel) 2024; 24:2070. PMID: 38610282; PMCID: PMC11014296; DOI: 10.3390/s24072070.
Abstract
With the ongoing advancement of the electric power Internet of Things (IoT), traditional power inspection methods face challenges such as low efficiency and high risk. Unmanned aerial vehicles (UAVs) have emerged as a more efficient way to inspect power facilities thanks to their high maneuverability, excellent line-of-sight communication capabilities, and strong adaptability. However, UAVs typically have limited computational power and energy, which constrains their effectiveness on computation-intensive and latency-sensitive inspection tasks. In response to this issue, we propose a UAV task offloading strategy based on deep reinforcement learning (DRL), designed for power inspection scenarios consisting of mobile edge computing (MEC) servers and multiple UAVs. First, we propose a UAV-edge-server collaborative computing architecture that fully exploits the mobility of UAVs and the high-performance computing capabilities of MEC servers. Second, we establish a computational model of energy consumption and task processing latency in the UAV power inspection system, clarifying the trade-offs involved in UAV offloading strategies. Finally, we formalize task offloading as a multi-objective optimization problem, model it as a Markov Decision Process (MDP), and propose a task offloading algorithm based on the Deep Deterministic Policy Gradient (OTDDPG) to obtain the optimal task offloading strategy for the UAVs. Simulation results demonstrate that this approach outperforms baseline methods, with significant improvements in task processing latency and energy consumption.
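The abstract mentions a computational model of energy consumption and task processing latency but does not state it. A common textbook formulation of the local-vs-offload trade-off such systems optimize (Shannon-rate uplink, cycle-based computation, effective-capacitance local energy) is sketched below; every parameter value is an illustrative assumption, not a number from the paper:

```python
import math

def shannon_rate(bandwidth_hz, snr_linear):
    """Achievable uplink rate (bits/s) under the Shannon capacity model."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def offload_cost(data_bits, cpu_cycles, f_edge_hz, p_tx_w, rate_bps):
    """Latency (s) and UAV-side energy (J) when offloading: transmit
    the input data, then compute on the edge server."""
    t_tx = data_bits / rate_bps
    t_comp = cpu_cycles / f_edge_hz
    return t_tx + t_comp, p_tx_w * t_tx

def local_cost(cpu_cycles, f_local_hz, kappa=1e-27):
    """Latency and energy when the task runs on board; energy of
    kappa * f^2 per cycle is a common effective-capacitance assumption."""
    t = cpu_cycles / f_local_hz
    return t, kappa * (f_local_hz ** 2) * cpu_cycles

rate = shannon_rate(1e6, 100)                     # 1 MHz channel, 20 dB SNR
t_off, e_off = offload_cost(2e6, 1e9, 10e9, 0.5, rate)
t_loc, e_loc = local_cost(1e9, 1e9)               # 1 GHz on-board CPU
```

With these toy numbers, offloading wins on both latency and energy; a DRL offloading policy is useful precisely because the winner flips as channel state, task size, and server load vary.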
Affiliation(s)
- Wei Zhuang
- School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
5. Liu X, Huang Z, Zhang Y, Jia Y, Wen W. CNN and Attention-Based Joint Source Channel Coding for Semantic Communications in WSNs. Sensors (Basel) 2024; 24:957. PMID: 38339674; PMCID: PMC10857329; DOI: 10.3390/s24030957.
Abstract
Wireless Sensor Networks (WSNs) have emerged as an efficient solution for numerous real-time applications, attributable to their compactness, cost-effectiveness, and ease of deployment. The rapid advancement of 5G technology and mobile edge computing (MEC) in recent years has catalyzed the transition towards large-scale deployment of WSN devices. However, the resulting data proliferation and the dynamics of communication environments introduce new challenges for WSN communication: (1) ensuring robust communication in adverse environments and (2) effectively alleviating bandwidth pressure from massive data transmission. In response to the aforementioned challenges, this paper proposes a semantic communication solution. Specifically, considering the limited computational and storage resources of WSN devices, we propose a flexible Attention-based Adaptive Coding (AAC) module. This module integrates window and channel attention mechanisms, dynamically adjusts semantic information in response to the current channel state, and facilitates adaptation of a single model across various Signal-to-Noise Ratio (SNR) environments. Furthermore, to validate the effectiveness of this approach, the paper introduces an end-to-end Joint Source Channel Coding (JSCC) scheme for image semantic communication, employing the AAC module. Experimental results demonstrate that the proposed scheme surpasses existing deep JSCC schemes across datasets of varying resolutions; furthermore, they validate the efficacy of the proposed AAC module, which is capable of dynamically adjusting critical information according to the current channel state. This enables the model to be trained over a range of SNRs and obtain better results.
Affiliation(s)
- Wanli Wen
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 401331, China
6. Sun Z, Chen G. Enhancing Data Freshness in Air-Ground Collaborative Heterogeneous Networks through Contract Theory and Generative Diffusion-Based Mobile Edge Computing. Sensors (Basel) 2023; 24:74. PMID: 38202936; PMCID: PMC10781220; DOI: 10.3390/s24010074.
Abstract
Mobile edge computing is critical for improving the user experience of latency-sensitive and freshness-based applications. This paper examines how converging non-orthogonal multiple access (NOMA) with heterogeneous air-ground collaborative networks can improve system throughput and spectral efficiency. Coordinated resource allocation between UAVs and MEC servers, especially under the NOMA framework, is a key challenge. Since it is unrealistic to assume that edge nodes contribute resources indiscriminately, we introduce a two-stage incentive mechanism. The model is based on contract theory and optimizes the utility of the service provider (SP) under the individual rationality (IR) and incentive compatibility (IC) constraints of the mobile users. The block coordinate descent method refines the contract design, complemented by a generative diffusion model that improves the efficiency of the contract search. For deployment, the study emphasizes positioning the UAVs to maximize SP utility, and an improved differential evolutionary algorithm is introduced to optimize their placement. Extensive evaluation shows that our approach is effective and robust in both deterministic and unpredictable scenarios.
Affiliation(s)
- Guifen Chen
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130000, China
7. Cheng Z, Ji X, You W, Bai Y, Chen Y, Qin X. FLPP: A Federated-Learning-Based Scheme for Privacy Protection in Mobile Edge Computing. Entropy (Basel) 2023; 25:1551. PMID: 37998243; PMCID: PMC10670361; DOI: 10.3390/e25111551.
Abstract
Sharing and analyzing data among different devices in mobile edge computing is valuable for social innovation and development, but the risk to data privacy limits what can be achieved. Existing studies therefore focus mainly on strengthening privacy protection. On the one hand, federated learning avoids direct data leakage by converting raw data into model parameters for transmission. On the other hand, privacy-protection techniques further harden federated learning against inference attacks. However, such techniques may reduce training accuracy while improving security, and trading off data security against accuracy is a major challenge in dynamic mobile edge computing scenarios. To address this issue, we propose FLPP, a federated-learning-based privacy-protection scheme. We then build a layered adaptive differential privacy model to dynamically adjust the privacy-protection level in different situations. Finally, we design a differential evolutionary algorithm to derive the privacy-protection policy that achieves the best overall performance. Simulation results show that FLPP has an advantage of 8-34% in overall performance, demonstrating that our scheme enables data to be shared both securely and accurately.
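FLPP's layered model adjusts the differential-privacy level dynamically; the standard primitive underneath differential privacy is the Laplace mechanism, sketched below for reference. This is the textbook mechanism, not the paper's layered scheme, and all parameter values are illustrative:

```python
import random

def laplace_noise(scale, rng=random):
    """Laplace(0, scale) sample, built as the difference of two
    exponential variates (a standard identity)."""
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def privatize(value, sensitivity, epsilon, rng=random):
    """Laplace mechanism: releasing value + Laplace(sensitivity/epsilon)
    noise satisfies epsilon-differential privacy. Smaller epsilon means
    stronger privacy but a noisier, less accurate release."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
# Same true value released under a strict and a loose privacy budget.
strict = [privatize(100.0, 1.0, 0.1, rng) for _ in range(2000)]
loose = [privatize(100.0, 1.0, 10.0, rng) for _ in range(2000)]
```

The accuracy/privacy tension the abstract describes is visible directly: the strict-budget releases scatter far more widely around the true value than the loose-budget ones, which is exactly what an adaptive scheme like FLPP tunes per situation.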
Affiliation(s)
- Zhimo Cheng
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
- Xinsheng Ji
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
- Purple Mountain Laboratories, Nanjing 211111, China
- Wei You
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
- Yi Bai
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
- Yunjie Chen
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
- Xiaogang Qin
- Department of Next-Generation Mobile Communication and Cyber Space Security, Information Engineering University, Zhengzhou 450002, China
8. Nugroho AK, Shioda S, Kim T. Optimal Resource Provisioning and Task Offloading for Network-Aware and Federated Edge Computing. Sensors (Basel) 2023; 23:9200. PMID: 38005586; PMCID: PMC10674318; DOI: 10.3390/s23229200.
Abstract
Compared to cloud computing, mobile edge computing (MEC) is a promising solution for delay-sensitive applications due to its proximity to end users. By offloading resource-intensive tasks to nearby edge servers, MEC allows a diverse range of compute- and storage-intensive applications to run on resource-constrained devices. Optimal utilization of MEC can enhance responsiveness and quality of service, but it requires careful design of user-base station association, virtualized resource provisioning, and task distribution. Moreover, the federation concept has seen limited exploration in the existing literature, and its impact on resource allocation and management remains under-recognized. In this paper, we study the network and MEC resource scheduling problem in a setting where some edge servers are federated, limiting resource expansion to within the same federation. The integration of network and MEC is crucial, making a joint approach necessary. We present NAFEOS, a two-stage algorithm that integrates association optimization with vertical and horizontal scaling. The Stage-1 problem optimizes the user-base station association and federation assignment so that the edge servers are utilized in a balanced manner; Stage 2 then dynamically schedules both vertical and horizontal scaling so that the fluctuating task-offloading demands from users are fulfilled. Extensive evaluations and comparisons show that the proposed approach effectively achieves optimal resource utilization.
Affiliation(s)
- Shigeo Shioda
- Graduate School of Engineering, Chiba University, Inage-ku, Chiba 263-8522, Japan
- Taewoon Kim
- School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea
9. Zainudin H, Koufos K, Lee G, Jiang L, Dianati M. Impact analysis of cooperative perception on the performance of automated driving in unsignalized roundabouts. Front Robot AI 2023; 10:1164950. PMID: 37649809; PMCID: PMC10464950; DOI: 10.3389/frobt.2023.1164950.
Abstract
This paper reports a simulation-based analysis of the impact of cloud/edge-enabled cooperative perception on the performance of automated driving in unsignalized roundabouts. The analysis compares automated driving assisted by cooperative perception against a baseline system in which the automated vehicle relies only on its onboard sensing and perception for motion planning and control. The paper first describes the implemented simulation model, which integrates the SUMO road traffic generator with the CARLA simulator, covering both the baseline and the cooperative perception-assisted automated driving systems. We then define a set of relevant key performance indicators for traffic efficiency, safety, and ride comfort, along with simulation scenarios for collecting the data for our analysis. This is followed by the presentation of the results and a discussion of the insights learned from them.
Affiliation(s)
- Konstantinos Koufos
- Warwick Manufacturing Group (WMG) at The University of Warwick, Coventry, United Kingdom
10. Huang X, Lei B, Ji G, Zhang B. Energy Criticality Avoidance-Based Delay Minimization Ant Colony Algorithm for Task Assignment in Mobile-Server-Assisted Mobile Edge Computing. Sensors (Basel) 2023; 23:6041. PMID: 37447890; DOI: 10.3390/s23136041.
Abstract
Mobile edge computing has been an important computing paradigm for providing delay-sensitive and computation-intensive services to mobile users. In this paper, we study the problem of the joint optimization of task assignment and energy management in a mobile-server-assisted edge computing network, where mobile servers can provide assisted task offloading services on behalf of the fixed servers at the network edge. The design objective is to minimize the system delay. As far as we know, our paper presents the first work that improves the quality of service of the whole system from a long-term aspect by prolonging the operational time of assisted mobile servers. We formulate the system delay minimization problem as a mixed-integer programming (MIP) problem. Due to the NP-hardness of this problem, we propose a dynamic energy criticality avoidance-based delay minimization ant colony algorithm (EACO), which strives for a balance between delay minimization for offloaded tasks and operational time maximization for mobile servers. We present a detailed algorithm design and deduce its computational complexity. We conduct extensive simulations, and the results demonstrate the high performance of the proposed algorithm compared to the benchmark algorithms.
Affiliation(s)
- Xiaoyao Huang
- Research Institute China Telecom, Beijing 102209, China
- Bo Lei
- Research Institute China Telecom, Beijing 102209, China
- Guoliang Ji
- No. 208 Research Institute of China Ordnance Industries, Beijing 102227, China
- Baoxian Zhang
- University of Chinese Academy of Sciences, Beijing 100049, China
11. Sun Z, Chen G. Contract-Optimization Approach (COA): A New Approach for Optimizing Service Caching, Computation Offloading, and Resource Allocation in Mobile Edge Computing Network. Sensors (Basel) 2023; 23:4806. PMID: 37430721; DOI: 10.3390/s23104806.
Abstract
An optimal resource allocation method based on contract theory is proposed to improve energy utilization. In heterogeneous networks (HetNets), distributed heterogeneous network architectures are designed to balance different computing capacities, and MEC server gains are modeled on the amount of computing tasks allocated. An objective function based on contract theory is developed to optimize the revenue of MEC servers under constraints on service caching, computation offloading, and the amount of resources allocated. As the objective function poses a complex problem, it is solved using equivalent transformations and relaxations of the constraints, with a greedy algorithm applied to the resulting optimization. A comparative resource allocation experiment is conducted, and energy utilization metrics are computed to compare the effectiveness of the proposed algorithm against the baseline algorithm. The results show that the proposed incentive mechanism significantly improves the utility of the MEC server.
Affiliation(s)
- Zhiyao Sun
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130000, China
- Guifen Chen
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130000, China
12. Hadjkouider AM, Kerrache CA, Korichi A, Sahraoui Y, Calafate CT. Stackelberg Game Approach for Service Selection in UAV Networks. Sensors (Basel) 2023; 23:4220. PMID: 37177424; PMCID: PMC10180695; DOI: 10.3390/s23094220.
Abstract
Nowadays, mobile devices are expected to perform a growing number of increasingly complex tasks. However, despite great technological improvements in the last decade, such devices still have limitations in terms of processing power and battery lifetime. In this context, mobile edge computing (MEC) emerges as a possible solution, able to provide on-demand services to the customer and to bring services published in the cloud closer, at reduced cost and with fewer security concerns. Meanwhile, Unmanned Aerial Vehicle (UAV) networking has emerged as a paradigm offering flexible services and new ephemeral applications such as safety and disaster management, mobile crowd-sensing, and fast delivery, to name a few. To use these services efficiently, however, discovery and selection strategies must be taken into account: discovering the services made available by a UAV-MEC network and selecting the best among them in a timely and efficient manner is a challenging task. Game-theoretic methods proposed in the literature suit the UAV-MEC case well, modeling this challenge as a Stackelberg game and using existing approaches to solve it for efficient service discovery and selection. Hence, this paper proposes Stackelberg-game-based solutions for service discovery and selection in the context of UAV-based mobile edge computing. Simulation results obtained with the NS-3 simulator highlight the efficiency of our proposed game in terms of price and QoS metrics.
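The core mechanic of a Stackelberg game, as invoked by the abstract, is backward induction: the leader moves first but optimizes against the follower's anticipated best response. A toy linear pricing game illustrates this; the demand model and all numbers are illustrative, not the paper's formulation:

```python
def follower_best_response(price, a=10.0, b=1.0):
    """Follower's demand given the leader's posted price (toy linear model)."""
    return max(0.0, a - b * price)

def leader_utility(price, cost=2.0):
    """Leader's profit, anticipating the follower's best response."""
    q = follower_best_response(price)
    return (price - cost) * q

# Backward induction by grid search over the leader's price: each
# candidate price is evaluated against the follower's reaction to it.
best_price = max((p / 100 for p in range(0, 1001)), key=leader_utility)
```

For this instance the analytic Stackelberg equilibrium price is 6.0 (maximizing (p - 2)(10 - p)), which the grid search recovers; in the UAV-MEC setting the "price" and "demand" would instead be service prices and user selections.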
Affiliation(s)
- Abdessalam Mohammed Hadjkouider
- LINATI Laboratory, Department of Computer Science and Information Technology, Kasdi Merbah University of Ouargla, 30000 Ouargla, Algeria
- Chaker Abdelaziz Kerrache
- Laboratoire d'Informatique et de Mathématiques, Université Amar Telidji de Laghouat, 03000 Laghouat, Algeria
- Ahmed Korichi
- LINATI Laboratory, Department of Computer Science and Information Technology, Kasdi Merbah University of Ouargla, 30000 Ouargla, Algeria
- Yesin Sahraoui
- LINATI Laboratory, Department of Computer Science and Information Technology, Kasdi Merbah University of Ouargla, 30000 Ouargla, Algeria
- Carlos T Calafate
- Computer Engineering Department (DISCA), Universitat Politècnica de València, 46022 Valencia, Spain
13. Park J, Chung K. Distributed DRL-Based Computation Offloading Scheme for Improving QoE in Edge Computing Environments. Sensors (Basel) 2023; 23:4166. PMID: 37112505; PMCID: PMC10144645; DOI: 10.3390/s23084166.
Abstract
Various edge collaboration schemes that rely on reinforcement learning (RL) have been proposed to improve the quality of experience (QoE). Deep RL (DRL) maximizes cumulative rewards through large-scale exploration and exploitation. However, existing DRL schemes do not consider temporal states using a fully connected layer, they learn the offloading policy regardless of the importance of each experience, and they learn too little from their limited experience in distributed environments. To solve these problems, we propose a distributed DRL-based computation offloading scheme for improving QoE in edge computing environments. The proposed scheme selects the offloading target by modeling task service time and load balance. We implement three methods to improve learning performance. First, the DRL scheme uses least absolute shrinkage and selection operator (LASSO) regression and an attention layer to consider temporal states. Second, we learn the optimal policy based on the importance of experience, using the TD error and the loss of the critic network. Finally, we adaptively share experience between agents, based on the strategy gradient, to solve the data sparsity problem. Simulation results show that the proposed scheme achieves lower variation and higher rewards than existing schemes.
14. Alharbi HA, Aldossary M, Almutairi J, Elgendy IA. Energy-Aware and Secure Task Offloading for Multi-Tier Edge-Cloud Computing Systems. Sensors (Basel) 2023; 23:3254. PMID: 36991964; PMCID: PMC10055840; DOI: 10.3390/s23063254.
Abstract
Nowadays, Unmanned Aerial Vehicle (UAV) devices and their services and applications are gaining popularity and attracting considerable attention in different fields of daily life. Nevertheless, most of these applications require substantial computational resources and energy, and the limited battery capacity and processing power of a single device make it difficult to run them locally. Edge-Cloud Computing (ECC) is emerging as a paradigm to cope with these challenges: it moves computing resources to the edge of the network and the remote cloud, alleviating the overhead through task offloading. Even though ECC offers substantial benefits for these devices, the bandwidth bottleneck that arises when multiple devices offload simultaneously over the same channel has not been adequately addressed, and protecting data during transmission remains a significant concern. Therefore, in this paper, to bypass the limited bandwidth and address potential security threats, a new compression, security, and energy-aware task offloading framework is proposed for the ECC system environment. Specifically, we first introduce an efficient compression layer to smartly reduce the data transmitted over the channel. In addition, to address the security issue, a new security layer based on the Advanced Encryption Standard (AES) cryptographic technique is presented to protect offloaded, sensitive data from different vulnerabilities. Subsequently, task offloading, data compression, and security are jointly formulated as a mixed-integer problem whose objective is to reduce the overall energy of the system under latency constraints. Finally, simulation results reveal that our model is scalable and achieves significant reductions in energy consumption (i.e., 19%, 18%, 21%, 14.5%, 13.1% and 12%) with respect to other benchmarks (i.e., local, edge, cloud and further benchmark models).
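The compression layer described in this abstract can be sketched in a few lines; `zlib` here merely stands in for whatever codec the framework actually employs, the payload is invented for illustration, and the AES security layer is omitted:

```python
import zlib

def prepare_offload(payload: bytes, level: int = 6) -> bytes:
    """Compress a task payload before transmission to cut channel load."""
    return zlib.compress(payload, level)

def receive_offload(blob: bytes) -> bytes:
    """Decompress on the edge/cloud side before executing the task."""
    return zlib.decompress(blob)

# Toy payload: redundant sensor readings compress well.
payload = b"temp=21.5;humidity=40;" * 500
blob = prepare_offload(payload)
assert receive_offload(blob) == payload  # lossless round trip
print(f"original {len(payload)} B -> compressed {len(blob)} B")
```

In the paper's formulation, whether to compress is itself a decision variable of the mixed-integer problem; this sketch compresses unconditionally.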
Affiliation(s)
- Hatem A. Alharbi
- Department of Computer Engineering, College of Computer Science and Engineering, Taibah University, Al-Madinah 42353, Saudi Arabia
- Mohammad Aldossary
- Department of Computer Science, College of Arts and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
- Jaber Almutairi
- Department of Computer Science, College of Computer Science and Engineering, Taibah University, Al-Madinah 42353, Saudi Arabia
- Ibrahim A. Elgendy
- Department of Computer Science, Faculty of Computers and Information, Menoufia University, Shibin El Kom 32511, Egypt
15
Kwon Y, Kim W, Jung I. Neural Network Models for Driving Control of Indoor Autonomous Vehicles in Mobile Edge Computing. Sensors (Basel) 2023; 23:2575. PMID: 36904779; PMCID: PMC10007646; DOI: 10.3390/s23052575.
Abstract
Mobile edge computing has been proposed as a solution to the latency problem of traditional cloud computing. In particular, mobile edge computing is needed in areas such as autonomous driving, which requires large amounts of data to be processed without latency for safety. Indoor autonomous driving is attracting attention as one such mobile edge computing service. Because an indoor autonomous vehicle cannot use a GPS device, as is the case in outdoor driving, it relies on its own sensors for location recognition. While the vehicle is being driven, real-time processing of external events and correction of errors are required for safety, and an efficient autonomous driving system is required because it runs in a mobile environment with resource constraints. This study proposes neural network models as a machine-learning method for autonomous driving in an indoor environment. The neural network model predicts the most appropriate driving command for the current location based on the range data measured with the LiDAR sensor. We designed six neural network models to be evaluated according to the number of input data points. In addition, we built a Raspberry Pi-based autonomous vehicle for driving and learning, and an indoor circular driving track for data collection and performance evaluation. Finally, we evaluated the six neural network models in terms of confusion matrix, response time, battery consumption, and driving command accuracy, and confirmed the effect of the number of inputs on resource usage. These results can guide the choice of an appropriate neural network model for an indoor autonomous vehicle.
16
Liu S, Yang S, Zhang H, Wu W. A Federated Learning and Deep Reinforcement Learning-Based Method with Two Types of Agents for Computation Offload. Sensors (Basel) 2023; 23:2243. PMID: 36850846; PMCID: PMC9964467; DOI: 10.3390/s23042243.
Abstract
With the rise of latency-sensitive and computationally intensive applications in mobile edge computing (MEC) environments, the computation offloading strategy has been widely studied to meet the low-latency demands of these applications. However, the uncertainty of various tasks and the time-varying conditions of wireless networks make it difficult for mobile devices to make efficient decisions. Existing methods also face the problems of long-delay decisions and user data privacy disclosure. In this paper, we present FDRT, a federated learning and deep reinforcement learning-based method with two types of agents for computation offloading, to minimize the system latency. FDRT uses a multi-agent collaborative computation offloading strategy, namely DRT. DRT divides the offloading decision into whether to compute tasks locally and whether to offload tasks to MEC servers. The designed DDQN agent considers the task information, the mobile device's own resources, and its network conditions, and the designed D3QN agent considers these conditions for all MEC servers in the collaborative cloud-edge-end MEC system; both jointly learn the optimal decision. FDRT also applies federated learning to reduce communication overhead and optimize the model training of DRT by designing a new parameter aggregation method, while protecting user data privacy. The simulation results showed that DRT effectively reduced the average task execution delay by up to 50% compared with several baseline and state-of-the-art offloading strategies. FDRT also accelerates the convergence of multi-agent training and reduces the training time of DRT by 61.7%.
Affiliation(s)
- Weiguo Wu
- Correspondence: Tel.: +86-13193399337
17
da Silva JCF, Silva MC, Luz EJS, Delabrida S, Oliveira RAR. Using Mobile Edge AI to Detect and Map Diseases in Citrus Orchards. Sensors (Basel) 2023; 23:2165. PMID: 36850763; PMCID: PMC9959271; DOI: 10.3390/s23042165.
Abstract
Deep Learning models have presented promising results when applied to Agriculture 4.0. Among other applications, these models can be used in disease detection and fruit counting. Deep Learning models usually have many layers in the architecture and millions of parameters. This aspect hinders the use of Deep Learning on mobile devices, as they require a large amount of processing power for inference. In addition, the lack of high-quality Internet connectivity in the field impedes the usage of cloud computing, pushing the processing towards edge devices. This work proposes an edge AI application to detect and map diseases in citrus orchards. The proposed system has low computational demand, enabling the use of low-footprint models for both the detection and classification tasks. We initially compared AI algorithms to detect fruits on trees, specifically analyzing and comparing YOLO and Faster R-CNN. Then, we studied lean AI models to perform the classification task, testing and comparing the performance of MobileNetV2, EfficientNetV2-B0, and NASNet-Mobile. In the detection task, YOLO and Faster R-CNN had similar AI performance metrics, but YOLO was significantly faster. In the image classification task, MobileNetV2 and EfficientNetV2-B0 obtained an accuracy of 100%, while NASNet-Mobile reached 98%. As for timing performance, MobileNetV2 and EfficientNetV2-B0 were the best candidates, while NASNet-Mobile was significantly worse; furthermore, MobileNetV2 showed 10% better timing performance than EfficientNetV2-B0. Finally, we provide a method to evaluate the results from these algorithms towards describing the disease spread, using statistical parametric models and a genetic algorithm to perform the parameter regression. With these results, we validated the proposed pipeline, enabling the usage of adequate AI models to develop a mobile edge AI solution.
18
Rodriguez-Conde I, Campos C, Fdez-Riverola F. Horizontally Distributed Inference of Deep Neural Networks for AI-Enabled IoT. Sensors (Basel) 2023; 23:1911. PMID: 36850508; PMCID: PMC9958567; DOI: 10.3390/s23041911.
Abstract
Motivated by the pervasiveness of artificial intelligence (AI) and the Internet of Things (IoT) in the current "smart everything" scenario, this article provides a comprehensive overview of the most recent research at the intersection of both domains. It focuses on the design and development of specific mechanisms for enabling collaborative inference across edge devices towards the in situ execution of highly complex state-of-the-art deep neural networks (DNNs), despite the resource-constrained nature of such infrastructures. In particular, the review discusses the most salient approaches conceived along those lines, elaborating on the specificities of the partitioning schemes and the parallelism paradigms explored. It provides an organized and schematic discussion of the underlying workflows and associated communication patterns, as well as the architectural aspects of the DNNs that have driven the design of such techniques, while also highlighting the primary challenges encountered at the design and operational levels and the specific adjustments or enhancements explored in response to them.
Affiliation(s)
- Ivan Rodriguez-Conde
- Department of Computer Science, University of Arkansas at Little Rock, 2801 South University Avenue, Little Rock, AR 72204, USA
- Celso Campos
- Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- Florentino Fdez-Riverola
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
19
Huang YY, Wang PC. Computation Offloading and User-Clustering Game in Multi-Channel Cellular Networks for Mobile Edge Computing. Sensors (Basel) 2023; 23:1155. PMID: 36772194; PMCID: PMC9919130; DOI: 10.3390/s23031155.
Abstract
Mobile devices may use mobile edge computing to improve energy efficiency and responsiveness by offloading computation tasks to edge servers. However, the transmissions of mobile devices may cause interference that decreases the upload rate and prolongs transmission delay. Clustering has been shown to be an effective approach for improving transmission efficiency among dense devices, but there is no distributed algorithm for the joint optimization of clustering and computation offloading. In this work, we study the optimization problem of computation offloading to minimize the energy consumption of mobile devices in mobile edge computing by adaptively clustering devices to improve transmission efficiency. To address the optimization problem in a distributed manner, the decision problem of clustering and computation offloading for mobile devices is formulated as a potential game. We introduce the construction of the potential game and show the existence of a Nash equilibrium via the game's finite improvement property. Then, we propose a distributed algorithm for clustering and computation offloading based on game theory. We conducted a simulation to evaluate the proposed algorithm. The numerical results show that our algorithm can improve offloading efficiency for mobile devices in mobile edge computing by improving transmission efficiency. By offloading more tasks to edge servers, both the energy efficiency of mobile devices and the responsiveness of computation-intensive applications can be improved simultaneously.
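The convergence argument in this abstract rests on the finite improvement property of potential games: every unilateral improving move strictly decreases a potential function, so best-response dynamics must terminate at a Nash equilibrium. A minimal sketch with a toy congestion game (all numbers invented; the paper's actual game also models offloading decisions):

```python
import random

def best_response_dynamics(n_devices=8, n_channels=3, seed=0):
    """Toy congestion game: each device picks an uplink channel, and its cost
    is the number of devices sharing that channel. Because unilateral
    improving moves strictly decrease a potential function (the finite
    improvement property), repeated best responses reach a Nash equilibrium
    in finitely many steps."""
    rng = random.Random(seed)
    choice = [rng.randrange(n_channels) for _ in range(n_devices)]

    def load(ch):
        return sum(1 for c in choice if c == ch)

    improved = True
    while improved:
        improved = False
        for i, cur in enumerate(choice):
            # Cost on another channel counts the devices there plus ourselves.
            costs = {ch: load(ch) + (ch != cur) for ch in range(n_channels)}
            best = min(costs, key=costs.get)
            if costs[best] < costs[cur]:
                choice[i] = best
                improved = True
    return choice
```

At equilibrium no device can lower its cost by switching, which in this toy game forces channel loads to differ by at most one.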
20
Feng M, Yao H, Li J. A Task Scheduling Optimization Method for Vehicles Serving as Obstacles in Mobile Edge Computing Based IoV Systems. Entropy (Basel) 2023; 25:139. PMID: 36673280; PMCID: PMC9857856; DOI: 10.3390/e25010139.
Abstract
In recent years, as more and more vehicles request service from roadside units (RSU), the vehicle-to-infrastructure (V2I) communication links are under tremendous pressure. This paper first proposes a dynamic dense traffic flow model under fading channel conditions. Based on this, reliability is redefined according to the real-time location information of vehicles. The on-board units (OBU) migrate intensive computing tasks to appropriate RSUs to optimize execution time and computing cost at the same time. In addition, competitive delay is introduced into the execution time model, which describes channel resource contention and data conflicts in dynamic internet of vehicles (IoV) scenarios. Next, task scheduling for RSUs is formulated as a multi-objective optimization problem. To solve it, a task scheduling algorithm based on a reliability constraint (TSARC) is proposed to select the optimal RSU for task transmission. Compared with the genetic algorithm (GA), TSARC offers several improvements: first, quick non-dominated sorting is applied to layer the population and reduce complexity; second, an elite strategy with excellent nonlinear optimization ability is introduced, which ensures the diversity of optimal individuals and provides different preference choices for passengers; third, a reference point mechanism is introduced to retain individuals that are non-dominated and close to the reference points. TSARC's Pareto-based multi-objective optimization can comprehensively measure the overall state of the system and flexibly schedule system resources, and it overcomes defects of the GA method such as the determination of linear weight values, the non-uniformity of dimensions among objectives, and poor robustness. Finally, numerical simulation results based on the British Highway Traffic Flow Data Set show that TSARC achieves better scalability and efficiency than other methods under different numbers of tasks and traffic flow densities, which verifies the theoretical derivation.
21
Miao J, Chen H, Li H, Bai S. Secrecy Energy Efficiency Enhancement in UAV-Assisted MEC System. Sensors (Basel) 2023; 23:723. PMID: 36679520; PMCID: PMC9864342; DOI: 10.3390/s23020723.
Abstract
A secrecy energy efficiency optimization scheme for a multifunctional unmanned aerial vehicle (UAV)-assisted mobile edge computing system is proposed to address the computing power and security issues in the Internet-of-Things scenario. The UAV can switch roles between a computing UAV and a jamming UAV based on the channel conditions. To ensure the security of the content and the energy efficiency of the system while offloading computing tasks, the UAV trajectory, uplink transmit power, user scheduling, and offloaded tasks are jointly optimized using an updated-rate-assisted block coordinate descent (BCD) algorithm. Simulation results show that this scheme efficiently improves the secrecy performance and energy efficiency of the system; compared with the benchmark scheme, the secrecy energy efficiency is improved by 38.5%.
22
Tong M, Li S, Wang X, Wei P. Inter-Satellite Cooperative Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks. Sensors (Basel) 2023; 23:668. PMID: 36679460; PMCID: PMC9864525; DOI: 10.3390/s23020668.
Abstract
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide task computing services for Internet of Things (IoT) devices. However, since some applications' tasks require huge amounts of computing resources, sometimes the computing resources of a local satellite's MEC server are insufficient, but the computing resources of neighboring satellites' MEC servers are redundant. Therefore, we investigated inter-satellite cooperation in MEC-enabled STNs. First, we designed a system model of the MEC-enabled STN architecture, where the local satellite and the neighboring satellites assist IoT devices in computing tasks through inter-satellite cooperation. The local satellite migrates some tasks to the neighboring satellites to utilize their idle resources. Next, the task completion delay minimization problem for all IoT devices is formulated and decomposed. Then, we propose an inter-satellite cooperative joint offloading decision and resource allocation optimization scheme, which consists of a task offloading decision algorithm based on the Grey Wolf Optimizer (GWO) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. The optimal solution is obtained by continuous iterations. Finally, simulation results demonstrate that the proposed scheme achieves relatively better performance than other baseline schemes.
Affiliation(s)
- Minglei Tong
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Song Li
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xiaoxiang Wang
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Peng Wei
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
23
Yu H, Liu J, Hu C, Zhu Z. Privacy-Preserving Task Offloading Strategies in MEC. Sensors (Basel) 2022; 23:95. PMID: 36616692; PMCID: PMC9823524; DOI: 10.3390/s23010095.
Abstract
In mobile edge computing (MEC), mobile devices can choose to offload their tasks to edge servers for execution, thereby effectively reducing task completion time and the energy consumption of mobile devices. However, most of the data transfer brought by offloading relies on wireless communication technology, making the private information of mobile devices vulnerable to eavesdropping and monitoring. Privacy leakage, especially of location and association privacy, can pose a significant risk to users of mobile devices, so protecting their privacy during task offloading is important and cannot be ignored. This paper considers both the location privacy and association privacy of mobile devices during task offloading in MEC and aims to reduce the leakage of location and association privacy while minimizing the average completion time of tasks. To achieve these goals, we design a privacy-preserving task offloading scheme with two main parts. First, we adopt a proxy forwarding mechanism to keep the location privacy of mobile devices from being leaked. Second, we select a proxy server and an edge server for each task that needs to be offloaded. The proxy server selection policy chooses based on the location information of proxy servers, to reduce the leakage risk of location privacy. The edge server selection strategy considers the privacy conflict between tasks and the computing ability and location of edge servers, to reduce the leakage risk of association privacy and the average completion time of tasks as much as possible. Simulated experimental results demonstrate that our scheme is effective in protecting the location and association privacy of mobile devices and in reducing the average completion time of tasks compared with state-of-the-art techniques.
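A toy version of the proxy-selection idea: an eavesdropper observes traffic at the proxy, so picking a proxy far from the device makes the observed position less informative about the true one. The function and scoring rule below are illustrative assumptions, not the paper's actual policy (which also weighs server load and delay):

```python
import math

def pick_proxy(device_xy, proxies):
    """Illustrative proxy-selection rule: prefer the proxy farthest from the
    device, so the location an eavesdropper observes (the proxy's) reveals
    less about the device's true position. Coordinates are arbitrary units."""
    def dist(p):
        return math.hypot(p[0] - device_xy[0], p[1] - device_xy[1])
    return max(proxies, key=dist)

# A device at the origin picks the most distant of three candidate proxies.
print(pick_proxy((0, 0), [(1, 1), (5, 5), (2, 0)]))
```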
Affiliation(s)
- Haijian Yu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Jing Liu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Chunjie Hu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Ziqi Zhu
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
24
Sarfraz M, Alshahrani HM, Tarmissi K, Alshahrani H, Elfaki MA, Hamza MA, Nauman A, Khurshaid T. Intelligent Reflecting Surfaces Enhanced Mobile Edge Computing: Minimizing the Maximum Computational Time. Sensors (Basel) 2022; 22:8719. PMID: 36433313; PMCID: PMC9699166; DOI: 10.3390/s22228719.
Abstract
Intelligent reflecting surfaces (IRS) and mobile edge computing (MEC) have recently attracted significant attention in academia and industry. Without consuming any external energy, an IRS can extend wireless coverage by smartly reconfiguring the phase shift of a signal towards the receiver with the help of passive elements. On the other hand, MEC can reduce latency by providing extensive computational facilities to users. This paper proposes a new optimization scheme for IRS-enhanced mobile edge computing to minimize the maximum computational time of the end users' tasks. The optimization problem is formulated to simultaneously optimize the task segmentation and transmission power of users, the phase shift design of the IRS, and the computational resources of the mobile edge. The problem is non-convex and coupled across multiple variables, which makes it very complex; we therefore transform it into a convex problem by decoupling it into sub-problems and then obtain an efficient solution. In particular, closed-form solutions for task segmentation and edge computational resources are achieved through the monotonic relation of time and the Karush-Kuhn-Tucker conditions, while the transmission power of users and the phase shift design of the IRS are computed using convex optimization techniques. The proposed IRS-enhanced optimization scheme is compared with naïve offloading, binary offloading, and edge computing, respectively. Numerical results demonstrate the benefits of the proposed scheme compared to the other benchmark schemes.
Affiliation(s)
- Mubashar Sarfraz
- Department of Electrical Engineering, National University of Modern Languages, Islamabad 44000, Pakistan
- Haya Mesfer Alshahrani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Khaled Tarmissi
- Department of Computer Sciences, College of Computing and Information System, Umm Al-Qura University, Mecca 24382, Saudi Arabia
- Hussain Alshahrani
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Mohamed Ahmed Elfaki
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Manar Ahmed Hamza
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam Bin Abdulaziz University, AlKharj 11671, Saudi Arabia
- Ali Nauman
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Korea
- Tahir Khurshaid
- Department of Electrical Engineering, Yeungnam University, Gyeongsan 38541, Korea
25
Huang S, Zhang J, Wu Y. Altitude Optimization and Task Allocation of UAV-Assisted MEC Communication System. Sensors (Basel) 2022; 22:8061. PMID: 36298409; PMCID: PMC9607876; DOI: 10.3390/s22208061.
Abstract
Unmanned aerial vehicles (UAVs) are widely used in wireless communication systems due to their flexible mobility and high maneuverability. The combination of UAVs and mobile edge computing (MEC) is regarded as a promising technology for providing high-quality computing services to latency-sensitive applications. In this paper, a novel UAV-assisted MEC uplink maritime communication system is proposed, where an MEC server mounted on the UAV provides flexible assistance to the maritime user. In particular, the user's task can be divided into two parts: one portion is offloaded to the UAV and the remaining portion is offloaded to an onshore base station for computing. We formulate an optimization problem to minimize the total system latency by designing the optimal flying altitude of the UAV and the optimal task allocation ratio, deriving a semi-closed-form expression for the former and a closed-form expression for the latter. Simulation results demonstrate the precision of the theoretical analyses and reveal some interesting insights.
Affiliation(s)
- Shuqi Huang
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Jun Zhang
- Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Yi Wu
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
26
Guan X, Lv T, Lin Z, Huang P, Zeng J. D2D-Assisted Multi-User Cooperative Partial Offloading in MEC Based on Deep Reinforcement Learning. Sensors (Basel) 2022; 22:7004. PMID: 36146350; PMCID: PMC9502189; DOI: 10.3390/s22187004.
Abstract
Mobile edge computing (MEC) and device-to-device (D2D) communication can alleviate the resource constraints of mobile devices and reduce communication latency. In this paper, we construct a D2D-MEC framework and study multi-user cooperative partial offloading and computing resource allocation. We maximize the number of served devices under the application's maximum delay constraints and the limited computing resources. In the considered system, each user can offload its tasks to an edge server and a nearby D2D device. We first formulate the optimization problem, which is NP-hard, and then decouple it into two subproblems. The convex optimization method is used to solve the first subproblem, and the second subproblem is defined as a Markov decision process (MDP). A deep reinforcement learning algorithm based on a deep Q network (DQN) is developed to maximize the number of tasks the system can compute. Extensive simulation results demonstrate the effectiveness and superiority of the proposed scheme.
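The decision loop behind a DQN offloading agent can be illustrated with a tabular stand-in (a real DQN replaces the table with a neural network). Everything below is a toy MDP with invented delays, not the paper's system model: the state is a queue-length bucket and the action is local execution versus offloading.

```python
import random

def train_offloading_policy(episodes=2000, seed=1):
    """Tabular stand-in for a DQN offloading agent. State: the device's queue
    length (0..4). Action 0: compute locally (slower as the queue grows);
    action 1: offload to the MEC server (fixed transmission delay). Rewards
    are negative delays with invented magnitudes."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    alpha, gamma, eps = 0.1, 0.9, 0.1
    for _ in range(episodes):
        s = rng.randrange(5)
        for _ in range(20):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: Q[(s, x)])
            reward = -(1 + s) if a == 0 else -2
            s2 = min(4, max(0, s + rng.choice((-1, 0, 1))))  # queue drift
            # One-step Q-learning (temporal-difference) update.
            Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
            s = s2
    return Q
```

After training, the greedy policy offloads when the queue is long and computes locally when it is empty, matching the toy reward structure.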
Affiliation(s)
- Xin Guan
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Tiejun Lv
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Zhipeng Lin
- Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing 211106, China
- Pingmu Huang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China
- Jie Zeng
- School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
27
Liu X, Zhao X, Liu G, Huang F, Huang T, Wu Y. Collaborative Task Offloading and Service Caching Strategy for Mobile Edge Computing. Sensors (Basel) 2022; 22:6760. PMID: 36146113; PMCID: PMC9502834; DOI: 10.3390/s22186760.
Abstract
Mobile edge computing (MEC), which brings the functions of cloud servers down to the network edge, has become an emerging paradigm for resolving the contradiction between delay-sensitive tasks and resource-constrained terminals. Task offloading assisted by service caching in a collaborative manner can reduce delay and balance the edge load in MEC. Because the storage resources of edge servers are limited, developing a dynamic service caching strategy that follows actual, variable user demands during task offloading is a significant issue. This paper therefore investigates the collaborative task offloading problem assisted by a dynamic caching strategy in MEC. Furthermore, a two-level computing strategy called joint task offloading and service caching (JTOSC) is proposed to solve the optimization problem. The outer layer of JTOSC iteratively updates the service caching decisions based on Gibbs sampling. The inner layer adopts a fairness-aware allocation algorithm and an offloading-revenue-preference-based bilateral matching algorithm to obtain an effective computing resource allocation and task offloading scheme. Simulation results indicate that the proposed strategy outperforms four comparison strategies in terms of maximum offloading delay, service cache hit rate, and edge load balance.
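The outer-layer Gibbs sampling can be sketched as a single-site update loop: propose flipping one service's caching bit, keep improving flips, and keep worsening flips only with a Boltzmann probability. The cost model and all numbers below are invented stand-ins for the paper's objective:

```python
import math
import random

def gibbs_cache(popularity, capacity, iters=20000, temp=0.2, seed=0):
    """Gibbs-sampling sketch of an outer caching loop: each iteration proposes
    flipping one service's cache bit; improving flips are always kept, and
    worsening ones are kept with probability exp(-delta/temp). The toy cost is
    the total popularity of uncached services, and at most `capacity` services
    fit in the cache (not the paper's exact objective)."""
    rng = random.Random(seed)
    cached = [False] * len(popularity)

    def cost(state):
        return sum(p for p, hit in zip(popularity, state) if not hit)

    for _ in range(iters):
        j = rng.randrange(len(popularity))
        proposal = cached[:]
        proposal[j] = not proposal[j]
        if sum(proposal) > capacity:
            continue  # infeasible: would exceed cache capacity
        delta = cost(proposal) - cost(cached)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cached = proposal
    return cached
```

With a low temperature the chain concentrates on low-cost states, so the most popular services end up cached.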
Affiliation(s)
- Xiang Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Xu Zhao
- Beijing Smart-Chip Microelectronics Technology Co., Ltd., Beijing 100005, China
- Guojin Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Correspondence:
- Fei Huang
- State Grid Chongqing Electric Power Company Electric Power Research Institute, Chongqing 401123, China
- Tiancong Huang
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Yucheng Wu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
28
Li D, Mao Y, Chen X, Li J, Liu S. Deployment and Allocation Strategy for MEC Nodes in Complex Multi-Terminal Scenarios. Sensors (Basel) 2022; 22:6719. [PMID: 36146069] [PMCID: PMC9505643] [DOI: 10.3390/s22186719] [Citation(s) in RCA: 0] [Received: 08/13/2022] [Revised: 09/01/2022] [Accepted: 09/02/2022]
Abstract
Mobile edge computing (MEC) has become an effective solution for the insufficient computing and communication capabilities of Internet of Things (IoT) applications, owing to its rich computing resources on the edge side. In multi-terminal scenarios, the deployment scheme of edge nodes has an important impact on system performance and has become an essential issue in end-edge-cloud architecture. In this article, we consider specific factors such as the spatial location, power supply, and urgency requirements of terminals to build an evaluation model that solves the allocation problem. An evaluation model based on reward, energy consumption, and cost factors is proposed, and a genetic algorithm is applied to determine the optimal edge node deployment and allocation strategies. Moreover, we compare the proposed method with the k-means and ant colony algorithms. The results show that the obtained strategies achieve good evaluation results under the problem constraints. Furthermore, we conduct comparison tests with different attributes to further assess the performance of the proposed method.
Affiliation(s)
- Danyang Li
- State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
- Yuxing Mao
- State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
- Xueshuo Chen
- State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
- Jian Li
- State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
- Siyang Liu
- State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
- Electric Power Research Institute, Yunnan Power Grid Co., Ltd., Yundaxilu, Kunming 650217, China
29
Chen J, Chang Z, Guo W, Guo X. Resource Allocation and Computation Offloading for Wireless Powered Mobile Edge Computing. Sensors (Basel) 2022; 22:6002. [PMID: 36015762] [PMCID: PMC9412292] [DOI: 10.3390/s22166002] [Citation(s) in RCA: 0] [Received: 07/05/2022] [Revised: 07/27/2022] [Accepted: 08/09/2022]
Abstract
In this paper, we investigate a resource allocation and computation offloading problem in a heterogeneous mobile edge computing (MEC) system. In the considered system, a wireless power transfer (WPT) base station (BS) with an MEC server is able to deliver wireless energy to the mobile devices (MDs), and the MDs can utilize the harvested energy for local computing or for task offloading to the WPT BS or to a macro BS (MBS) with a stronger computing server. In particular, we consider that the WPT BS can utilize a full- or half-duplex wireless energy transmission mode to empower the MDs. This work focuses on jointly optimizing the offloading decision, the full/half-duplex energy harvesting mode, and the energy harvesting (EH) time allocation with the objective of minimizing the energy consumption of the MDs. As the formulated problem has a non-convex mixed-integer programming structure, we use quadratically constrained quadratic programming (QCQP) and semi-definite relaxation (SDR) methods to solve it. The simulation results demonstrate the effectiveness of the proposed scheme.
Affiliation(s)
- Jun Chen
- The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, College of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
- Zheng Chang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, 40014 Jyväskylä, Finland
- Wenlong Guo
- The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, College of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
- Xijuan Guo
- The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, College of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
30
Chen X, Liu G. Federated Deep Reinforcement Learning-Based Task Offloading and Resource Allocation for Smart Cities in a Mobile Edge Network. Sensors (Basel) 2022; 22:4738. [PMID: 35808234] [PMCID: PMC9269392] [DOI: 10.3390/s22134738] [Citation(s) in RCA: 0] [Received: 05/26/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022]
Abstract
Mobile edge computing (MEC) has become an indispensable part of the Industry 4.0 era of intelligent manufacturing. In the smart city, computation-intensive tasks can be offloaded to the MEC server or the central cloud server for execution. However, a privacy disclosure issue may arise when raw data are migrated to other MEC servers or the central cloud server. Since federated learning protects privacy while improving training performance, it is introduced to solve this issue. In this article, we formulate the joint optimization problem of task offloading and resource allocation to minimize the energy consumption of all Internet of Things (IoT) devices subject to a delay threshold and limited resources. A two-timescale federated deep reinforcement learning algorithm based on the Deep Deterministic Policy Gradient framework (FL-DDPG) is proposed. Simulation results show that the proposed algorithm can greatly reduce the energy consumption of all IoT devices.
31
Mu L, Ge B, Xia C, Wu C. Multi-Task Offloading Based on Optimal Stopping Theory in Edge Computing Empowered Internet of Vehicles. Entropy (Basel) 2022; 24:814. [PMID: 35741535] [DOI: 10.3390/e24060814] [Citation(s) in RCA: 1] [Received: 05/16/2022] [Revised: 06/04/2022] [Accepted: 06/09/2022]
Abstract
Vehicular edge computing is a new computing paradigm. By introducing edge computing into the Internet of Vehicles (IoV), service providers are able to serve users with low-latency services, as edge computing deploys resources (e.g., computation, storage, and bandwidth) at the side close to the IoV users. When mobile nodes are moving and generating structured tasks, they can connect with the roadside units (RSUs) and then choose a proper time and several suitable Mobile Edge Computing (MEC) servers to offload the tasks. However, how to offload tasks in sequence efficiently is challenging. In response to this problem, in this paper, we propose a time-optimized, multi-task-offloading model adopting the principles of Optimal Stopping Theory (OST) with the objective of maximizing the probability of offloading to the optimal servers. When the server utilization is close to uniformly distributed, we propose another OST-based model with the objective of minimizing the total offloading delay. The proposed models are experimentally compared and evaluated with related OST models using simulated data sets and real data sets, and sensitivity analysis is performed. The results show that the proposed offloading models can be efficiently implemented in the mobile nodes and significantly reduce the total expected processing time of the tasks.
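The core idea of sequentially choosing when to stop and offload can be illustrated with the classical 1/e stopping rule, a rough sketch rather than the paper's OST model: skip an initial observation window of candidate servers, then commit to the first later server that beats everything seen so far. The scores below are invented for the example.

```python
import math

def ost_offload(server_scores):
    """Classic 1/e stopping rule: observe the first n/e servers without
    committing, then offload to the first subsequent server whose score
    beats everything seen so far (fall back to the last one)."""
    n = len(server_scores)
    cutoff = max(1, int(n / math.e))        # observation-only phase
    best_seen = max(server_scores[:cutoff])
    for i in range(cutoff, n):
        if server_scores[i] > best_seen:
            return i
    return n - 1                             # forced to take the last server

# Stops at index 4 (score 5), the first server beating the window's best (4).
choice = ost_offload([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
```

The rule maximizes the probability of picking the single best candidate when candidates arrive in random order, which mirrors the paper's objective of maximizing the probability of offloading to the optimal servers.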
32
Yuan X, Xie Z, Tan X. Computation Offloading in UAV-Enabled Edge Computing: A Stackelberg Game Approach. Sensors (Basel) 2022; 22:3854. [PMID: 35632262] [DOI: 10.3390/s22103854] [Citation(s) in RCA: 4] [Received: 03/30/2022] [Revised: 04/29/2022] [Accepted: 05/17/2022]
Abstract
This paper studies an efficient computing resource offloading mechanism for UAV-enabled edge computing. Considering the interests of three different roles (base station, UAV, and user), we comprehensively account for factors such as time delay and operation and transmission energy consumption in a multi-layer game to improve overall system performance. Firstly, we construct a Stackelberg multi-layer game model to obtain appropriate resource pricing and computing offload allocation strategies through iteration: base stations and UAVs are the leaders, and users are the followers. Then, we analyze the equilibrium of the Stackelberg game and prove that it exists and is unique. Finally, the algorithm's feasibility is verified by simulation; compared with the benchmark strategy, the Stackelberg game algorithm (SGA) shows superiority and robustness.
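A leader-follower equilibrium of the kind this abstract describes can be sketched with a deliberately tiny pricing game; the linear demand curve and all constants are assumptions of this sketch, not the paper's model. The leader enumerates prices while anticipating the follower's best response.

```python
def follower_offload(price, a=10.0, b=1.0):
    """Follower's best response: the amount of computation the user
    offloads falls linearly as the leader's unit price rises."""
    return max(0.0, a - b * price)

def leader_price(a=10.0, b=1.0, steps=1000):
    """Leader (BS/UAV) anticipates the follower's response and
    grid-searches for the revenue-maximising price -- the Stackelberg
    equilibrium of this toy game."""
    best_p, best_rev = 0.0, -1.0
    for i in range(steps + 1):
        p = (a / b) * i / steps             # sweep prices over [0, a/b]
        rev = p * follower_offload(p, a, b)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p

p_star = leader_price()  # analytic optimum of p*(a - b*p) is a/(2b) = 5.0
```

Because the leader commits first and the follower's reaction is known, the leader's problem reduces to a one-dimensional search; the grid recovers the analytic optimum exactly here.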
33
Hu Z, Gao H, Wang T, Han D, Lu Y. Joint Optimization for Mobile Edge Computing-Enabled Blockchain Systems: A Deep Reinforcement Learning Approach. Sensors (Basel) 2022; 22:3217. [PMID: 35590907] [PMCID: PMC9100848] [DOI: 10.3390/s22093217] [Citation(s) in RCA: 0] [Received: 02/09/2022] [Revised: 04/15/2022] [Accepted: 04/19/2022]
Abstract
A mobile edge computing (MEC)-enabled blockchain system is proposed in this study for secure data storage and sharing in internet of things (IoT) networks, with the MEC acting as an overlay system to provide dynamic computation offloading services. Considering latency-critical, resource-limited, and dynamic IoT scenarios, an adaptive system resource allocation and computation offloading scheme is designed to optimize the scalability performance for MEC-enabled blockchain systems, wherein the scalability is quantified as MEC computational efficiency and blockchain system throughput. Specifically, we jointly optimize the computation offloading policy and block generation strategy to maximize the scalability of MEC-enabled blockchain systems and meanwhile guarantee data security and system efficiency. In contrast to existing works that ignore frequent user movement and dynamic task requirements in IoT networks, the joint performance optimization scheme is formulated as a Markov decision process (MDP). Furthermore, we design a deep deterministic policy gradient (DDPG)-based algorithm to solve the MDP problem and define the multiple and variable number of consecutive time slots as a decision epoch to conduct model training. Specifically, DDPG can solve an MDP problem with a continuous action space and it only requires a straightforward actor-critic architecture, making it suitable for tackling the dynamics and complexity of the MEC-enabled blockchain system. As demonstrated by simulations, the proposed scheme can achieve performance improvements over the deep Q network (DQN)-based scheme and some other greedy schemes in terms of long-term transactional throughput.
Affiliation(s)
- Zhuoer Hu
- Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; (Z.H.); (D.H.); (Y.L.)
- Hui Gao
- Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; (Z.H.); (D.H.); (Y.L.)
- Taotao Wang
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China;
- Daoqi Han
- Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; (Z.H.); (D.H.); (Y.L.)
- Yueming Lu
- Key Laboratory of Trustworthy Distributed Computing and Service, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China; (Z.H.); (D.H.); (Y.L.)
34
Guo S. LightEyes: A Lightweight Fundus Segmentation Network for Mobile Edge Computing. Sensors (Basel) 2022; 22:3112. [PMID: 35590802] [PMCID: PMC9104959] [DOI: 10.3390/s22093112] [Citation(s) in RCA: 0] [Received: 03/29/2022] [Revised: 04/15/2022] [Accepted: 04/16/2022]
Abstract
The fundus is the only structure in the human body that can be observed without trauma. By analyzing color fundus images, a diagnostic basis for various diseases can be obtained. Recently, fundus image segmentation has witnessed vast progress with the development of deep learning. However, improved segmentation accuracy has come at the cost of model complexity; as a result, these models show low inference speed and high memory usage when deployed to mobile edges. To promote the deployment of deep fundus segmentation models on mobile devices, we aim to design a lightweight fundus segmentation network. Our observation comes from the fact that high-resolution representations can boost the segmentation of tiny fundus structures, and that the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network to learn high-resolution representations, so that the spatial relationships between feature maps are always retained. Meanwhile, since high-resolution features imply high memory usage, we use at most 16 convolutional filters per layer to reduce memory usage and training difficulty. LightEyes has been verified on three fundus segmentation tasks (hard exudate, microaneurysm, and vessel) across five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and speed compared with state-of-the-art fundus segmentation models, running at 1.6 images/s on a Cambricon-1A and 51.3 images/s on a GPU with only 36k parameters.
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
35
Huang J, Xu S, Zhang J, Wu Y. Resource Allocation and 3D Deployment of UAVs-Assisted MEC Network with Air-Ground Cooperation. Sensors (Basel) 2022; 22:2590. [PMID: 35408207] [PMCID: PMC9003303] [DOI: 10.3390/s22072590] [Citation(s) in RCA: 0] [Received: 02/24/2022] [Revised: 03/24/2022] [Accepted: 03/25/2022]
Abstract
Equipping an unmanned aerial vehicle (UAV) with a mobile edge computing (MEC) server is an interesting technique for assisting terminal devices (TDs) to complete their delay sensitive computing tasks. In this paper, we investigate a UAV-assisted MEC network with air–ground cooperation, where both UAV and ground access point (GAP) have a direct link with TDs and undertake computing tasks cooperatively. We set out to minimize the maximum delay among TDs by optimizing the resource allocation of the system and by three-dimensional (3D) deployment of UAVs. Specifically, we propose an iterative algorithm by jointly optimizing UAV–TD association, UAV horizontal location, UAV vertical location, bandwidth allocation, and task split ratio. However, the overall optimization problem will be a mixed-integer nonlinear programming (MINLP) problem, which is hard to deal with. Thus, we adopt successive convex approximation (SCA) and block coordinate descent (BCD) methods to obtain a solution. The simulation results have shown that our proposed algorithm is efficient and has a great performance compared to other benchmark schemes.
Affiliation(s)
- Jinming Huang
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China; (J.H.); (S.X.)
- Sijie Xu
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China; (J.H.); (S.X.)
- Jun Zhang
- Jiangsu Key Laboratory of Wireless Communications, Nanjing University of Posts and Telecommunications, Nanjing 210003, China;
- Yi Wu
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China; (J.H.); (S.X.)
- Correspondence:
36
Kihtir F, Yazici MA, Oztoprak K, Alpaslan FN. Next-Generation Payment System for Device-to-Device Content and Processing Sharing. Sensors (Basel) 2022; 22:2451. [PMID: 35408066] [DOI: 10.3390/s22072451] [Citation(s) in RCA: 0] [Received: 02/08/2022] [Revised: 03/06/2022] [Accepted: 03/18/2022]
Abstract
Recent developments in the telecommunications world have allowed customers to share the storage and processing capabilities of their devices by providing services through fast and reliable connections. This evolution, however, requires building an incentive system to encourage information exchange in future telecommunication networks. In this study, we propose a mechanism to share bandwidth and processing resources among subscribers using smart contracts and a blockchain-based incentive mechanism, which encourages subscribers to share their resources. We demonstrate the applicability of the proposed method through two use cases: (i) exchanging multimedia data and (ii) CPU sharing. We propose a universal user-to-user and user-to-operator payment system, named TelCash, which provides a solution for current roaming problems and establishes trust in X2X communications. TelCash has great potential for solving the roaming-charge and reputation-management (trust) problems in the telecommunications sector. We also show, through a simulation study, that encouraging D2D communication leads to a significant increase in content quality, and that there is a threshold after which downloading from the base station drops dramatically and can be kept as low as 10%.
37
Antonić M, Antonić A, Podnar Žarko I. Bloom Filter Approach for Autonomous Data Acquisition in the Edge-Based MCS Scenario. Sensors (Basel) 2022; 22:879. [PMID: 35161626] [DOI: 10.3390/s22030879] [Citation(s) in RCA: 0] [Received: 12/21/2021] [Revised: 01/20/2022] [Accepted: 01/20/2022]
Abstract
Mobile crowdsensing (MCS) is a sensing paradigm that allows ordinary citizens to use mobile and wearable technologies and become active observers of their surroundings. MCS services generate a massive amount of data due to the vast number of devices engaging in MCS tasks, and the intrinsic mobility of users can quickly make information obsolete, requiring efficient data processing. Our previous work shows that the Bloom filter (BF) is a promising technique to reduce the quantity of redundant data in a hierarchical edge-based MCS ecosystem, allowing users engaging in MCS tasks to make autonomous informed decisions on whether or not to transmit data. This paper extends the proposed BF algorithm to accept multiple data readings of the same type at an exact location if the MCS task requires such functionality. In addition, we thoroughly evaluate the overall behavior of our approach by taking into account the overhead generated in communication between edge servers and end-user devices on a real-world dataset. Our results indicate that using the proposed algorithm makes it possible to significantly reduce the amount of transmitted data and achieve energy savings up to 62% compared to a baseline approach.
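The redundancy check at the heart of this approach can be sketched with a minimal Bloom filter; the parameters and the reading tuple below are illustrative, not values from the paper. A device transmits a reading only if the filter has probably not seen it before.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    No false negatives; false-positive rate governed by m and k."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _probes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}|{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def probably_contains(self, item):
        return all(self.bits[p] for p in self._probes(item))

# An MCS device autonomously decides whether a reading is worth sending.
bf = BloomFilter()
reading = ("noise_dB", 45.81, 15.97)          # (type, lat, lon) -- made up
first_seen = not bf.probably_contains(reading)  # True: transmit it
bf.add(reading)                                 # record after transmitting
second_seen = bf.probably_contains(reading)     # True: suppress the duplicate
```

Because membership tests never yield false negatives, a duplicate reading is always suppressed; occasional false positives only cost a dropped (redundant-looking) transmission, which matches the paper's goal of trading a little data for large energy savings.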
38
Xiao S, Wang S, Zhuang J, Wang T, Liu J. Research on a Task Offloading Strategy for the Internet of Vehicles Based on Reinforcement Learning. Sensors (Basel) 2021; 21:6058. [PMID: 34577265] [PMCID: PMC8468814] [DOI: 10.3390/s21186058] [Citation(s) in RCA: 4] [Received: 07/16/2021] [Revised: 09/07/2021] [Accepted: 09/08/2021]
Abstract
Today, vehicles are increasingly being connected to the Internet of Things, which enables them to obtain high-quality services. However, the numerous vehicular applications and time-varying network status make it challenging for onboard terminals to achieve efficient computing. Therefore, based on a three-stage model of local-edge clouds and reinforcement learning, we propose a task offloading algorithm for the Internet of Vehicles (IoV). First, we establish communication methods between vehicles and their cost functions. In addition, according to the real-time state of vehicles, we analyze their computing requirements and the price function. Finally, we propose an experience-driven offloading strategy based on multi-agent reinforcement learning. The simulation results show that the algorithm increases the probability of success for the task and achieves a balance between the task vehicle delay, expenditure, task vehicle utility and service vehicle utility under various constraints.
Affiliation(s)
- Shuo Xiao
- School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, China; (S.X.); (S.W.); (T.W.)
- Shengzhi Wang
- School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, China; (S.X.); (S.W.); (T.W.)
- Jiayu Zhuang
- Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100080, China;
- Key Laboratory of Agri-Information Service Technology, Ministry of Agriculture, Beijing 100080, China
- Correspondence:
- Tianyu Wang
- School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221000, China; (S.X.); (S.W.); (T.W.)
- Jiajia Liu
- Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100080, China;
- Key Laboratory of Agri-Information Service Technology, Ministry of Agriculture, Beijing 100080, China
39
Elgendy IA, Muthanna A, Hammoudeh M, Shaiba H, Unal D, Khayyat M. Advanced Deep Learning for Resource Allocation and Security Aware Data Offloading in Industrial Mobile Edge Computing. Big Data 2021; 9:265-278. [PMID: 33656352] [DOI: 10.1089/big.2020.0284] [Citation(s) in RCA: 6]
Abstract
The Internet of Things (IoT) is permeating our daily lives through continuous environmental monitoring and data collection. The promise of low-latency communication, enhanced security, and efficient bandwidth utilization leads to the shift from mobile cloud computing to mobile edge computing. In this study, we propose an advanced deep reinforcement resource allocation and security-aware data offloading model that considers the constrained computation and radio resources of industrial IoT devices to guarantee efficient sharing of resources between multiple users. This model is formulated as an optimization problem with the goal of decreasing energy consumption and computation delay. The problem is NP-hard due to the curse of dimensionality, so a deep learning optimization approach is presented to find an optimal solution. In addition, a 128-bit Advanced Encryption Standard (AES)-based cryptographic approach is proposed to satisfy the data security requirements. Experimental evaluation results show that the proposed model can reduce offloading overhead in terms of energy and time by up to 64.7% compared with the local execution approach. It also outperforms the full offloading scenario by up to 13.2%, as it can select some computation tasks to offload while optimally rejecting others. Finally, it is adaptable and scalable to a large number of mobile devices.
Affiliation(s)
- Ibrahim A Elgendy
- Department of Computer Science and Technology, School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Department of Computer Science, Faculty of Computers and Information, Menoufia University, Menoufia, Egypt
- Ammar Muthanna
- Department of Communication Networks and Data Transmission, St. Petersburg State University of Telecommunication, St. Petersburg, Russia
- Applied Mathematics and Communications Technology Institute, Peoples' Friendship University of Russia (RUDN University), Moscow, Russia
- Mohammad Hammoudeh
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom
- Hadil Shaiba
- Department of Computer Science, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Devrim Unal
- Department of Electrical Engineering, KINDI Center for Computing Research, College of Engineering, Qatar University, Doha, Qatar
- Mashael Khayyat
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
40
Abbas ZH, Ali Z, Abbas G, Jiao L, Bilal M, Suh DY, Piran MJ. Computational Offloading in Mobile Edge with Comprehensive and Energy Efficient Cost Function: A Deep Learning Approach. Sensors (Basel) 2021; 21:3523. [PMID: 34069364] [PMCID: PMC8158712] [DOI: 10.3390/s21103523] [Citation(s) in RCA: 4] [Received: 04/20/2021] [Revised: 05/08/2021] [Accepted: 05/13/2021]
Abstract
In mobile edge computing (MEC), partial computational offloading can reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into components, some of which execute locally on the UE while the rest are offloaded to a mobile edge server (MES). In this paper, we investigate partial offloading in MEC using a supervised deep learning approach. The proposed technique, the comprehensive and energy-efficient deep-learning-based offloading technique (CEDOT), intelligently selects both the partial offloading policy and the size of each task component to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task together with the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset generated from our mathematical model; although evaluating this model directly is computationally expensive, the trained DNN makes decisions quickly and at low cost. We propose a comprehensive cost function that depends on various delays, energy consumption, radio resources, and computation resources, including the energy and delay introduced by the task-division process itself in partial offloading. Existing work does not consider task partitioning jointly with the computational offloading policy, and hence ignores the time and energy consumed by the task-division process in the cost function.
Simulation results demonstrate the superior performance of the proposed technique, with high DNN accuracy in deciding the offloading policy and partitioning of a task with minimum delay and energy consumption for the UE. The trained DNN achieves more than 70% accuracy on the comprehensive training dataset. The simulation results also show that DNN accuracy remains constant while UEs are moving, which means the offloading-policy and partitioning decisions are not affected by UE mobility.
Affiliation(s)
- Ziaul Haq Abbas: Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan
- Zaiwar Ali: Telecommunications and Networking Research Center, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan
- Ghulam Abbas: Faculty of Computer Science and Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, Pakistan
- Lei Jiao: Department of Information and Communication Technology, University of Agder (UiA), 4898 Grimstad, Norway
- Muhammad Bilal (corresponding author): Department of Computer Engineering, Hankuk University of Foreign Studies, Yongin-si 17035, Korea
- Doug-Young Suh (corresponding author): Department of Electronics and Software Convergence, Kyung Hee University, Yongin-si 17035, Korea
- Md. Jalil Piran: Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
41
dos Anjos JCS, Gross JLG, Matteussi KJ, González GV, Leithardt VRQ, Geyer CFR. An Algorithm to Minimize Energy Consumption and Elapsed Time for IoT Workloads in a Hybrid Architecture. Sensors (Basel) 2021; 21:2914. [PMID: 33919222] [PMCID: PMC8122349] [DOI: 10.3390/s21092914] [Received: 02/15/2021] [Revised: 04/10/2021] [Accepted: 04/16/2021]
Abstract
Advances in communication technologies have enabled the interaction of small devices, such as smartphones, wearables, and sensors, scattered across the Internet, bringing a whole new set of complex applications with ever greater task-processing needs. These Internet of Things (IoT) devices run on batteries under strict energy restrictions and tend to offload task processing to remote servers, usually to cloud computing (CC) datacenters geographically distant from the IoT device. In this context, this work proposes a dynamic cost model to minimize energy consumption and task-processing time for IoT scenarios in mobile edge computing environments. Our approach provides a detailed cost model and an algorithm, called TEMS, that considers the energy and time consumed during processing, the cost of data transmission, and the energy of idle devices. The task scheduler chooses among the cloud, a mobile edge computing (MEC) server, or the local IoT device to achieve a better execution time at a lower cost. Evaluation in a simulated environment shows energy-consumption savings of up to 51.6% and task-completion-time improvements of up to 86.6%.
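The scheduling decision described here (choose cloud, MEC server, or local device to minimize a time/energy cost) can be sketched minimally. This is not the paper's TEMS algorithm; all CPU speeds, rates, and powers below are assumed example values.

```python
# Illustrative site-selection sketch for a cost-driven task scheduler.
# (cpu_hz, uplink_bps, tx_power_w, active_power_w); None = no transfer.
SITES = {
    "local": (1e9,  None,  0.0, 2.0),
    "mec":   (8e9,  50e6,  0.5, 0.0),
    "cloud": (30e9, 10e6,  0.5, 0.0),
}

def site_cost(site, data_bits, cycles, w_time=0.5, w_energy=0.5):
    cpu, rate, p_tx, p_active = SITES[site]
    t_tx = 0.0 if rate is None else data_bits / rate  # no transfer for local
    t_cpu = cycles / cpu
    energy = p_tx * t_tx + p_active * t_cpu           # device-side energy only
    return w_time * (t_tx + t_cpu) + w_energy * energy

def schedule(task):
    """Pick the cheapest execution site for a (data_bits, cycles) task."""
    data_bits, cycles = task
    return min(SITES, key=lambda s: site_cost(s, data_bits, cycles))
```

Data-heavy, compute-light tasks stay local; compute-heavy, data-light tasks get offloaded, mirroring the trade-off the cost model captures.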
Affiliation(s)
- Julio C. S. dos Anjos (corresponding author): Institute of Informatics, UFRGS/PPGC, Federal University of Rio Grande do Sul, RS, Porto Alegre 91501-970, Brazil
- João L. G. Gross: Institute of Informatics, UFRGS/PPGC, Federal University of Rio Grande do Sul, RS, Porto Alegre 91501-970, Brazil
- Kassiano J. Matteussi: Institute of Informatics, UFRGS/PPGC, Federal University of Rio Grande do Sul, RS, Porto Alegre 91501-970, Brazil
- Gabriel V. González: Faculty of Science, Expert Systems and Applications Laboratory, University of Salamanca, 37008 Salamanca, Spain
- Valderi R. Q. Leithardt: COPELABS, Universidade Lusófona de Humanidades e Tecnologias, 1749-024 Lisboa, Portugal; VALORIZA, Research Center for Endogenous Resource Valorization, Polytechnic Institute of Portalegre, 7300-555 Portalegre, Portugal
- Claudio F. R. Geyer: Institute of Informatics, UFRGS/PPGC, Federal University of Rio Grande do Sul, RS, Porto Alegre 91501-970, Brazil
42
Sedar R, Vázquez-Gallego F, Casellas R, Vilalta R, Muñoz R, Silva R, Dizambourg L, Fernández Barciela AE, Vilajosana X, Datta SK, Härri J, Alonso-Zarate J. Standards-Compliant Multi-Protocol On-Board Unit for the Evaluation of Connected and Automated Mobility Services in Multi-Vendor Environments. Sensors (Basel) 2021; 21:2090. [PMID: 33802669] [PMCID: PMC8002513] [DOI: 10.3390/s21062090] [Received: 01/26/2021] [Revised: 03/08/2021] [Accepted: 03/09/2021]
Abstract
Vehicle-to-everything (V2X) communications enable real-time information exchange between vehicles and infrastructure, which extends the perception range of vehicles beyond the limits of on-board sensors and thus facilitates the realisation of cooperative, connected, and automated mobility (CCAM) services that improve road safety and traffic efficiency. In the context of CCAM, successful deployments of cooperative intelligent transport system (C-ITS) use cases, integrating advanced wireless communication technologies, are making transport safer and more efficient. However, evaluating multi-vendor, multi-protocol CCAM service architectures can be challenging and complex, and conducting on-demand field trials of such architectures with real vehicles is prohibitively expensive and time-consuming. To overcome these obstacles, in this paper we present the development of a standards-compliant experimental vehicular on-board unit (OBU) that supports the integration of multiple V2X protocols from different vendors to communicate with heterogeneous cloud-based services offered by several original equipment manufacturers (OEMs). We experimentally demonstrate the functionalities of the OBU in a real-world deployment of a cooperative collision-avoidance service infrastructure based on edge and cloud servers. In addition, we measure end-to-end application-level latencies of multi-protocol V2X information flows to show the effectiveness of interoperability in V2X communications between different vehicle OEMs.
Affiliation(s)
- Roshan Sedar (corresponding author): Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
- Francisco Vázquez-Gallego: Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
- Ramon Casellas: Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
- Ricard Vilalta: Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
- Raul Muñoz: Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
- Rodrigo Silva: Peugeot Citroën Automobiles, 78943 Velizy-Villacoublay, France
- Laurent Dizambourg: Peugeot Citroën Automobiles, 78943 Velizy-Villacoublay, France
- Xavier Vilajosana: Worldsensing S.L., 08014 Barcelona, Spain; Computer Science, Telecommunications and Multimedia Department, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
- Jérôme Härri: EURECOM, 06904 Sophia Antipolis, France
- Jesus Alonso-Zarate: Centre Tecnològic de Telecomunicacions de Catalunya, 08860 Castelldefels, Spain
43
Fang J, Shi J, Lu S, Zhang M, Ye Z. An Efficient Computation Offloading Strategy with Mobile Edge Computing for IoT. Micromachines (Basel) 2021; 12:204. [PMID: 33671142] [PMCID: PMC7923021] [DOI: 10.3390/mi12020204] [Received: 01/06/2021] [Revised: 02/06/2021] [Accepted: 02/15/2021]
Abstract
With the rapid development of mobile cloud computing (MCC), the Internet of Things (IoT), and artificial intelligence (AI), user equipment (UE) is growing explosively. To effectively address the insufficient capacity UEs may face when handling computation-intensive and delay-sensitive applications, we take mobile edge computing (MEC) for the IoT as our starting point and study the computation-offloading strategy of UEs. First, we model an application generated by a UE as a directed acyclic graph (DAG) to achieve fine-grained task-offloading scheduling, which makes parallel processing of tasks possible and speeds up execution. Then, we propose a multi-population cooperative elite genetic algorithm (MCE-GA) based on the standard genetic algorithm, which solves the offloading problem for tasks with dependencies in MEC so as to minimize the execution delay and energy consumption of applications. Experimental results show that MCE-GA outperforms the baseline algorithms, with overhead reductions of up to 72.4%, 38.6%, and 19.3% against the respective baselines, which proves its effectiveness and reliability.
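A single-population toy version of the genetic search described above can be sketched as follows: each gene decides whether a DAG task runs locally or on the MEC server, and fitness is the precedence-respecting finish time. The toy DAG, execution times, and GA settings are assumptions; the paper's MCE-GA additionally uses multiple cooperating populations.

```python
import random

# task -> (local_time, edge_time incl. transfer), plus its predecessors
TASKS = {0: (4.0, 2.0), 1: (3.0, 1.5), 2: (5.0, 2.5), 3: (2.0, 1.0)}
PREDS = {0: [], 1: [0], 2: [0], 3: [1, 2]}

def finish_time(genome):
    """Earliest finish times respecting DAG precedence (keys are in
    topological order for this toy DAG); genome[t]=0 local, 1 edge."""
    done = {}
    for t in sorted(TASKS):
        ready = max((done[p] for p in PREDS[t]), default=0.0)
        done[t] = ready + TASKS[t][genome[t]]
    return max(done.values())

def ga(pop_size=20, gens=50, pm=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=finish_time)
        elite = pop[:pop_size // 2]            # keep the elite half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(TASKS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < pm:              # bit-flip mutation
                i = rng.randrange(len(TASKS))
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=finish_time)

best = ga()
```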
Affiliation(s)
- Juan Fang (corresponding author; Tel.: +86-139-1129-6256)
- Shuaibing Lu (corresponding author)
44
Li D, Xu S, Li P. Deep Reinforcement Learning-Empowered Resource Allocation for Mobile Edge Computing in Cellular V2X Networks. Sensors (Basel) 2021; 21:372. [PMID: 33430386] [PMCID: PMC7826838] [DOI: 10.3390/s21020372] [Received: 12/12/2020] [Revised: 12/30/2020] [Accepted: 01/05/2021]
Abstract
With the rapid development of vehicular networks, vehicle-to-everything (V2X) communications involve a huge number of computation tasks, which strains scarce network resources. Cloud servers can alleviate the lack of computing capability of vehicular user equipment (VUE), but the limited resources, the dynamic vehicular environment, and the long distances between cloud servers and VUE introduce potential issues such as extra communication delay and energy consumption. Fortunately, mobile edge computing (MEC), a promising computing paradigm, can ameliorate these problems by enhancing the computing capability of VUE through the allocation of computational resources. In this paper, we propose a joint optimization algorithm based on a deep reinforcement learning method, the double deep Q network (double DQN), to minimize a cost composed of energy consumption and the latency of computation and communication under a proper policy. The proposed algorithm is well suited to dynamic, low-latency vehicular scenarios in the real world. Compared with other reinforcement learning algorithms, our algorithm improves convergence, the defined cost, and speed by around 30%, 15%, and 17%, respectively.
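The core double-DQN idea this work builds on is that the online network selects the next action while the target network evaluates it, which reduces overestimation. A minimal sketch, with Q-tables standing in for the neural networks and a tiny assumed state/action space:

```python
# Double-DQN target computation sketch (tabular stand-in for the DNNs).
N_STATES, N_ACTIONS, GAMMA = 4, 3, 0.9

q_online = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
q_target = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def double_dqn_target(reward, next_state, done):
    if done:
        return reward
    # online net picks the action, target net evaluates it (decoupled)
    a_star = max(range(N_ACTIONS), key=lambda a: q_online[next_state][a])
    return reward + GAMMA * q_target[next_state][a_star]

def update(state, action, reward, next_state, done, lr=0.1):
    td_target = double_dqn_target(reward, next_state, done)
    q_online[state][action] += lr * (td_target - q_online[state][action])
```

In the actual algorithm the target network's weights are periodically copied from the online network; here that step is omitted for brevity.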
Affiliation(s)
- Dongji Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Shaoyi Xu (corresponding author): School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China
- Pengyu Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
45
Tian X, Zhu J, Xu T, Li Y. Mobility-Included DNN Partition Offloading from Mobile Devices to Edge Clouds. Sensors (Basel) 2021; 21:229. [PMID: 33401409] [DOI: 10.3390/s21010229] [Received: 11/27/2020] [Revised: 12/21/2020] [Accepted: 12/28/2020]
Abstract
The latest results from deep neural networks (DNNs) have greatly improved the accuracy and performance of a variety of intelligent applications. However, running such computation-intensive DNN-based applications on resource-constrained mobile devices leads to long latency and large energy consumption. The traditional approach is to run DNNs in the central cloud, but this requires significant amounts of data to be transferred to the cloud over the wireless network and also results in long latency. To solve this problem, offloading part of the DNN computation to edge clouds has been proposed, realizing collaborative execution between mobile devices and edge clouds. In addition, the mobility of mobile devices can easily cause computation offloading to fail. In this paper, we develop a mobility-included DNN partition offloading algorithm (MDPO) that adapts to user mobility. The objective of MDPO is to minimize the total latency of completing a DNN job while the mobile user is moving. The MDPO algorithm is suitable for DNNs with both chain and graph topologies. We evaluate MDPO against local-only and edge-only execution; experiments show that MDPO significantly reduces total latency, improves DNN performance, and adjusts well to different network conditions.
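For the chain-topology case, partition offloading reduces to choosing a cut point: run the first k layers on the device, transmit the intermediate feature map, and run the rest on the edge. A minimal sketch with made-up layer times and data sizes (the paper's MDPO additionally models user mobility, which is omitted here):

```python
# Chain-topology DNN partition sketch: per-layer
# (device_ms, edge_ms, output_bits); k is the cut point.
LAYERS = [(30.0, 3.0, 8e6), (50.0, 5.0, 4e6), (40.0, 4.0, 1e6), (20.0, 2.0, 1e5)]
INPUT_BITS = 16e6

def chain_latency(k, rate_bps=100e6):
    """Total latency when layers [0, k) run locally and [k, n) on the edge."""
    t_dev = sum(l[0] for l in LAYERS[:k])
    out_bits = INPUT_BITS if k == 0 else LAYERS[k - 1][2]
    # no uplink transfer needed if everything runs locally
    t_tx = 0.0 if k == len(LAYERS) else out_bits / rate_bps * 1e3  # ms
    t_edge = sum(l[1] for l in LAYERS[k:])
    return t_dev + t_tx + t_edge

best_k = min(range(len(LAYERS) + 1), key=chain_latency)
```

With these numbers the best cut is after the first layer: its output is small enough that transmitting it beats sending the raw input, while the remaining heavy layers run on the faster edge.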
46
Li L, Wen X, Lu Z, Jing W. An Energy Efficient Design of Computation Offloading Enabled by UAV. Sensors (Basel) 2020; 20:3363. [PMID: 32545823] [PMCID: PMC7348790] [DOI: 10.3390/s20123363] [Received: 05/08/2020] [Revised: 06/07/2020] [Accepted: 06/11/2020]
Abstract
Data volume is exploding due to newly developed applications that impose stringent communication requirements on 5th-generation wireless systems. Fortunately, mobile edge computing (MEC) makes it possible to relieve the heavy computation pressure on ground users and to decrease latency and energy consumption. Moreover, unmanned aerial vehicles (UAVs) have the advantages of agility and easy deployment, which gives UAV-enabled MEC systems the opportunity to fly toward areas with communication demand, such as hotspots. However, the limited endurance time of a UAV affects the performance of MEC services, leading to incomplete service under the time limit. Consequently, this paper concerns the energy-efficient design of a UAV that provides high-quality offloading services for ground users, particularly in regions where ground communication infrastructure is overloaded or damaged after natural disasters. First, the energy-efficiency model of the UAV is set up, taking into account the constraints of the UAV's energy limitation, data causality, and speed. Aiming to maximize the energy efficiency of the UAV in the UAV-enabled MEC system, the bit allocation in each time slot and the UAV trajectory are jointly optimized. Second, a successive-convex-approximation-based alternating algorithm is put forward to handle the non-convex energy-efficiency maximization problem. Finally, simulation results show that the proposed energy-efficient design is superior to other benchmark schemes, and the performance of the proposed design under different parameters is discussed.
Affiliation(s)
- Linpei Li: School of Information and Communication Engineering, Beijing Key Laboratory of Network System Architecture and Convergence, and Beijing Laboratory of Advanced Information Networks, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xiangming Wen: School of Information and Communication Engineering, Beijing Key Laboratory of Network System Architecture and Convergence, and Beijing Laboratory of Advanced Information Networks, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Zhaoming Lu (corresponding author): School of Information and Communication Engineering, Beijing Key Laboratory of Network System Architecture and Convergence, and Beijing Laboratory of Advanced Information Networks, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Wenpeng Jing: School of Information and Communication Engineering, Beijing Key Laboratory of Network System Architecture and Convergence, and Beijing Laboratory of Advanced Information Networks, Beijing University of Posts and Telecommunications, Beijing 100876, China
|
47
|
Liu T, Luo R, Xu F, Fan C, Zhao C. Distributed Learning Based Joint Communication and Computation Strategy of IoT Devices in Smart Cities. Sensors (Basel) 2020; 20:973. [PMID: 32059343] [PMCID: PMC7070816] [DOI: 10.3390/s20040973] [Received: 12/30/2019] [Revised: 01/23/2020] [Accepted: 02/10/2020]
Abstract
With the development of global urbanization, the Internet of Things (IoT) and smart cities are becoming hot research topics. As an emerging model, edge computing can play an important role in smart cities because of its low latency and good performance. IoT devices can reduce time consumption with the help of a mobile edge computing (MEC) server, but if too many IoT devices simultaneously offload their computation tasks to the MEC server via the limited wireless channel, channel congestion may occur, increasing the time overhead. Moreover, with the large number of IoT devices in smart cities, a centralized resource-allocation algorithm requires extensive signaling exchange, resulting in low efficiency. To solve these problems, this paper studies the joint communication and computation policy of IoT devices in edge computing through game theory and proposes distributed Q-learning algorithms with two learning policies. Simulation results show that the algorithms converge quickly to a balanced solution.
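The distributed setting described here can be sketched as stateless Q-learning: each device independently learns over the actions {local, channel 0, channel 1}, and a channel's reward drops with the number of devices that pick it (the game-theoretic congestion effect). The reward values and learning settings are illustrative assumptions, not the paper's.

```python
import random

ACTIONS = ["local", "ch0", "ch1"]

def reward(action, counts):
    if action == "local":
        return 0.4                       # fixed local-execution utility
    return 1.0 / counts[action]          # channel utility drops with congestion

def run(n_devices=6, rounds=300, lr=0.1, eps=0.1, seed=1):
    rng = random.Random(seed)
    q = [{a: 0.0 for a in ACTIONS} for _ in range(n_devices)]
    for _ in range(rounds):
        # each device picks epsilon-greedily from its own Q-values
        picks = [rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=q[i].get) for i in range(n_devices)]
        counts = {a: picks.count(a) for a in ACTIONS}
        for i, a in enumerate(picks):    # stateless (single-state) Q update
            q[i][a] += lr * (reward(a, counts) - q[i][a])
    return [max(ACTIONS, key=qi.get) for qi in q]

policy = run()
```

No central coordinator is involved: each device only observes its own reward, which is the low-signaling property the paper targets.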
Affiliation(s)
- Tianyi Liu: International School, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Ruyu Luo: School of Information and Telecommunication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Fangmin Xu (corresponding author): Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Chaoqiong Fan: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Chenglin Zhao: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
48
Wei H, Luo H, Sun Y. Mobility-Aware Service Caching in Mobile Edge Computing for Internet of Things. Sensors (Basel) 2020; 20:610. [PMID: 31979135] [DOI: 10.3390/s20030610] [Received: 12/30/2019] [Revised: 01/19/2020] [Accepted: 01/20/2020]
Abstract
The mobile edge computing architecture addresses the high-latency problem of cloud computing. However, current research focuses on computation offloading and pays little attention to service caching. To solve the service-caching problem, especially in high-mobility sensor-network scenarios, we study a mobility-aware service-caching mechanism. Our goal is to maximize the number of users served by the local edge cloud, and we predict the user's target location to avoid invalid service requests. First, we propose an idealized geometric model to predict the target area of a user's movement. Since it is difficult to obtain all the data the model needs in practical applications, we use frequent-pattern mining on local movement-track information. Then, combining the trajectory-mining results and the proposed geometric model, we predict the user's target location. Based on the prediction and the existing service cache, the service request is forwarded to an appropriate base station by a service-allocation algorithm. Finally, to train and predict the most popular services online, we propose a service-cache selection algorithm based on a back-propagation (BP) neural network. Simulation experiments show that our service-cache algorithm reduces service response time by about 13.21% on average compared with other algorithms and increases the local-service proportion by about 15.19% on average compared with the algorithm without mobility prediction.
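The prediction-then-forward step can be sketched crudely: extrapolate the user's next position from recent track points (a toy stand-in for the paper's geometric model plus frequent-pattern mining), then forward the request to whichever base station covers the predicted position. Coordinates and coverage radii below are assumed example values.

```python
import math

# base station -> (center, coverage radius in meters); example values
STATIONS = {"bs_a": ((0.0, 0.0), 500.0), "bs_b": ((900.0, 0.0), 500.0)}

def predict_next(track):
    """Linear extrapolation from the last two track points."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def serving_station(track):
    """Forward the request to a station covering the predicted position."""
    px, py = predict_next(track)
    for name, ((cx, cy), r) in STATIONS.items():
        if math.hypot(px - cx, py - cy) <= r:
            return name
    return None  # predicted point uncovered: fall back to the cloud

# a user moving east along the x-axis is predicted into bs_b's cell
target = serving_station([(300.0, 0.0), (500.0, 0.0)])
```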
49
Wang T, Lu Y, Cao Z, Shu L, Zheng X, Liu A, Xie M. When Sensor-Cloud Meets Mobile Edge Computing. Sensors (Basel) 2019; 19:5324. [PMID: 31816927] [PMCID: PMC6928901] [DOI: 10.3390/s19235324] [Received: 10/21/2019] [Revised: 11/28/2019] [Accepted: 11/29/2019]
Abstract
Sensor-clouds are a combination of wireless sensor networks (WSNs) and cloud computing. The emergence of sensor-clouds has greatly enhanced the computing power and storage capacity of traditional WSNs by exploiting the advantages of cloud computing in resource utilization. However, many problems remain to be solved in sensor-clouds, such as the communication and energy limitations of WSNs, high latency, and the security and privacy issues that arise from using a cloud platform as the data processing and control center. In recent years, mobile edge computing has received increasing attention from industry and academia. Its core idea is to migrate some or all of the computing tasks of the original cloud computing center to the vicinity of the data source, which gives mobile edge computing great potential for addressing the shortcomings of sensor-clouds. In this paper, the latest research on sensor-clouds is briefly analyzed and the characteristics of existing sensor-clouds are summarized. We then discuss open issues in sensor-clouds and propose several applications, in particular a trust-evaluation mechanism and trustworthy data collection, that use mobile edge computing to solve problems in sensor-clouds. Finally, we discuss research challenges and future directions in leveraging mobile edge computing for sensor-clouds.
Affiliation(s)
- Tian Wang: College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China; Key Laboratory of Computer Vision and Machine Learning (Huaqiao University), Fujian Province University, Xiamen 361021, China
- Yucheng Lu: College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Zhihan Cao: College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Lei Shu: College of Engineering, Nanjing Agricultural University, Nanjing 210095, China
- Xi Zheng: Department of Computing, Macquarie University, Macquarie Park, NSW 2109, Australia
- Anfeng Liu: College of Computer Science and Engineering, Central South University, Changsha 410083, China
- Mande Xie (corresponding author; Tel.: +135-1672-6165): School of Computer Science and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
50
Cui T, Hu Y, Shen B, Chen Q. Task Offloading Based on Lyapunov Optimization for MEC-Assisted Vehicular Platooning Networks. Sensors (Basel) 2019; 19:4974. [PMID: 31731622] [PMCID: PMC6891471] [DOI: 10.3390/s19224974] [Received: 08/24/2019] [Revised: 11/05/2019] [Accepted: 11/12/2019]
Abstract
Due to the limited computation resources of a vehicle terminal, it is impossible to meet the demands of some applications and services, especially computation-intensive ones, which not only creates computation burden and delay but also consumes more energy. Mobile edge computing (MEC) is an emerging architecture in which computation and storage services are extended to the edge of the network, an advanced technology for supporting applications and services that require ultra-low latency. In this paper, a task-offloading approach for MEC-assisted vehicle platooning is proposed, in which the Lyapunov optimization algorithm is employed to solve the optimization problem under the condition that the task queues remain stable. The proposed approach dynamically adjusts the offloading decision for each task according to the current task's parameters and judges whether the task is executed locally, on another platoon member, or at an MEC server. Simulation results show that the proposed algorithm effectively reduces the energy consumption of task execution and greatly improves offloading efficiency compared with the shortest-queue-waiting-time algorithm and full offloading to an MEC server.
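The drift-plus-penalty pattern behind Lyapunov-based offloading can be sketched per slot: pick the action minimizing queue backlog pressure plus V times the energy penalty, then update the queue. The action set, service rates, energies, and V below are illustrative assumptions, not the paper's parameters.

```python
# Drift-plus-penalty offloading sketch for a single task queue.
# action -> (bits_served_per_slot, energy_per_slot); example values.
ACTIONS = {"local": (1e6, 0.8), "platoon": (3e6, 0.5), "mec": (6e6, 1.2)}

def choose(queue_bits, v=1e6):
    """Minimize Q(t) * remaining_backlog + V * energy over the actions."""
    def penalty(a):
        served, energy = ACTIONS[a]
        backlog = max(queue_bits - served, 0.0)
        return queue_bits * backlog + v * energy
    return min(ACTIONS, key=penalty)

def step(queue_bits, arrivals_bits, v=1e6):
    """One slot: decide, serve, then admit new arrivals into the queue."""
    a = choose(queue_bits, v)
    served = ACTIONS[a][0]
    return a, max(queue_bits - served, 0.0) + arrivals_bits

a, q = step(8e6, 2e6)
```

When the queue is long, backlog dominates and the fast MEC server is chosen; when the queue is short, the energy term dominates and the cheapest action wins, which is the queue-stability/energy trade-off Lyapunov optimization formalizes.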
Affiliation(s)
- Taiping Cui (corresponding author; Tel.: +86-187-1628-5097): School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Nan-An District, Chongqing 400065, China; Chongqing Key Labs of Mobile Communications, Chongqing 400065, China
- Yuyu Hu (corresponding author): School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Nan-An District, Chongqing 400065, China; Chongqing Key Labs of Mobile Communications, Chongqing 400065, China
- Bin Shen: School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Nan-An District, Chongqing 400065, China; Chongqing Key Labs of Mobile Communications, Chongqing 400065, China
- Qianbin Chen: Chongqing Key Labs of Mobile Communications, Chongqing 400065, China