1. Zhao X, Xie G, Luo Y, Chen J, Liu F, Bai H. Optimizing storage on fog computing edge servers: A recent algorithm design with minimal interference. PLoS One 2024; 19:e0304009. PMID: 38985790; PMCID: PMC11236131; DOI: 10.1371/journal.pone.0304009.
Abstract
The burgeoning field of fog computing introduces a transformative computing paradigm with extensive applications across diverse sectors. At the heart of this paradigm lies the pivotal role of edge servers, which are entrusted with critical computing and storage functions. The optimization of these servers' storage capacities emerges as a crucial factor in augmenting the efficacy of fog computing infrastructures. This paper presents a novel storage optimization algorithm, dubbed LIRU (Low Interference Recently Used), which synthesizes the strengths of the LIRS (Low Interference Recency Set) and LRU (Least Recently Used) replacement algorithms. Set against the backdrop of constrained storage resources, this research endeavours to formulate an algorithm that optimizes storage space utilization, elevates data access efficiency, and diminishes access latencies. The investigation begins with a comprehensive analysis of the storage resources available on edge servers, pinpointing the essential considerations for optimization algorithms: storage resource utilization and data access frequency. The study then constructs an optimization model that harmonizes data frequency with cache capacity, employing optimization theory to discern the optimal solution for storage maximization. Subsequent experimental validation of the LIRU algorithm underscores its superiority over conventional replacement algorithms, showcasing significant improvements in storage utilization, data access efficiency, and access delays. Notably, the LIRU algorithm registers a 5% increment in one-hop hit ratio relative to the LFU algorithm, a 66% enhancement over the LRU algorithm, and a 14% elevation in system hit ratio against the LRU algorithm. Moreover, it curtails the average system response time by 2.4% and 16.5% compared to the LRU and LFU algorithms, respectively, particularly in scenarios involving large cache sizes. This research not only sheds light on the intricacies of edge server storage optimization but also significantly propels the performance and efficiency of the broader fog computing ecosystem. Through these insights, the study contributes a valuable framework for enhancing data management strategies within fog computing architectures, marking a noteworthy advancement in the field.
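The abstract does not give the LIRU policy itself, but the LRU baseline it is measured against can be sketched in a few lines; the class name and `capacity` parameter below are illustrative, not taken from the paper:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evicts the entry untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

LIRS-style refinements such as the one the paper builds on additionally track inter-reference recency so that cold, rarely reused blocks do not displace hot ones.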
Affiliation(s)
- Xumin Zhao: Zhejiang Yuexiu University, Shaoxing, China; Key Laboratory for Data Open Integration in Zhejiang Province, Hangzhou, China; Philippine Christian University, Manila, Philippines
- Guojie Xie: Key Laboratory for Data Open Integration in Zhejiang Province, Hangzhou, China
- Yi Luo: Zhejiang Yuexiu University, Shaoxing, China; Key Laboratory for Data Open Integration in Zhejiang Province, Hangzhou, China
- Jingyuan Chen: Zhejiang Mingren Health Culture Development Co., LTD, Hangzhou, China
- Fenghua Liu: Huzhou Vocational and Technical College, Huzhou, China
- HongPeng Bai: School of Intelligence and Computing, Tianjin University, Tianjin, China

2. Martín-Martín A, Padial-Allué R, Castillo E, Parrilla L, Parellada-Serrano I, Morán A, García A. Hardware Implementations of a Deep Learning Approach to Optimal Configuration of Reconfigurable Intelligence Surfaces. Sensors (Basel) 2024; 24:899. PMID: 38339618; PMCID: PMC10857622; DOI: 10.3390/s24030899.
Abstract
Reconfigurable intelligent surfaces (RIS) offer the potential to customize the radio propagation environment for wireless networks, and will be a key element for 6G communications. However, due to the unique constraints in these systems, the optimization problems associated with RIS configuration are challenging to solve. This paper illustrates a new approach to the RIS configuration problem, based on the use of artificial intelligence (AI) and deep learning (DL) algorithms. Concretely, a custom convolutional neural network (CNN) intended for edge computing is presented, and implementations on different representative edge devices are compared, including the use of commercial AI-oriented devices and a field-programmable gate array (FPGA) platform. This FPGA option provides the best performance, with a ×20 performance increase over the closest FP32, GPU-accelerated option, and an almost ×3 performance advantage when compared with the INT8-quantized, TPU-accelerated implementation. More noticeably, this is achieved even when high-level synthesis (HLS) tools are used and no custom accelerators are developed. At the same time, the inherent reconfigurability of FPGAs opens a new field for their use as enabler hardware in RIS applications.
Affiliation(s)
- Alberto Martín-Martín: eesy-Innovation GmbH, 82008 Unterhaching, Germany; Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain
- Rubén Padial-Allué: Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain
- Encarnación Castillo: Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain
- Luis Parrilla: Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain
- Ignacio Parellada-Serrano: Department of Signal Theory, Telematics and Communications, University of Granada, 18071 Granada, Spain
- Alejandro Morán: Department of Industrial Engineering & Construction, University of Balearic Islands, 07120 Palma, Spain
- Antonio García: Department of Electronics and Computer Technology, University of Granada, 18071 Granada, Spain

3. Shi J, Sun D, Kieu M, Guo B, Gao M. An Enhanced Detector for Vulnerable Road Users Using Infrastructure-Sensors-Enabled Device. Sensors (Basel) 2023; 24:59. PMID: 38202921; PMCID: PMC10780687; DOI: 10.3390/s24010059.
Abstract
The precise and real-time detection of vulnerable road users (VRUs) using infrastructure-sensors-enabled devices is crucial for the advancement of intelligent traffic monitoring systems. To overcome the prevalent inefficiencies in VRU detection, this paper introduces an enhanced detector that utilizes a lightweight backbone network integrated with a parameterless attention mechanism. This integration significantly enhances the feature extraction capability for small targets within high-resolution images. Additionally, the design features a streamlined 'neck' and a dynamic detection head, both augmented with a pruning algorithm to reduce the model's parameter count and ensure a compact architecture. Trained on the specialized engineering dataset De_VRU, the model was deployed on the Hisilicon_Hi3516DV300 platform, specifically designed for infrastructure units. Rigorous ablation studies, employing YOLOv7-tiny as the baseline, confirm the detector's efficacy on the BDD100K and LLVIP datasets. The model not only achieved an improvement of over 12% in the mAP@50 metric but also realized a reduction in parameter count by more than 40%, and a 50% decrease in inference time. Visualization outcomes and a case study illustrate the detector's proficiency in conducting real-time detection with high-resolution imagery, underscoring its practical applicability.
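The abstract does not detail the pruning algorithm; a common baseline that such parameter-count reductions resemble is magnitude pruning, sketched below. The function name, the global-threshold criterion, and the `sparsity` parameter are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    A simple one-shot pruning criterion: weights whose absolute value falls
    at or below the k-th smallest magnitude are set to zero (ties at the
    threshold may prune slightly more than k entries).
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

In practice, pruning a detector like the one described is followed by fine-tuning so the remaining weights recover the lost accuracy.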
Affiliation(s)
- Jian Shi: School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Dongxian Sun: School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Minh Kieu: Department of Civil and Environmental Engineering, University of Auckland, Auckland 1010, New Zealand
- Baicang Guo: School of Vehicle and Energy, Yanshan University, Qinhuangdao 066004, China
- Ming Gao: College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China; School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China

4. Liu Q, Dong L, Zeng Z, Zhu W, Zhu Y, Meng C. SSD with multi-scale feature fusion and attention mechanism. Sci Rep 2023; 13:21387. PMID: 38049437; PMCID: PMC10695922; DOI: 10.1038/s41598-023-41373-1.
Abstract
In the Internet of Things, image acquisition devices are essential equipment that generate large amounts of invalid data during real-time monitoring. By analyzing the data collected at the terminal with edge computing, invalid frames can be removed and the accuracy of system detection improved. The SSD algorithm is comparatively lightweight and fast, but it does not take full advantage of both the shallow and deep information in the data. This paper therefore proposes a structure based on the SSD algorithm that combines multi-scale feature fusion with an attention mechanism. The adjacent feature layers of each detection layer are fused to improve the expressive ability of the feature information, and an attention mechanism is then added to increase the attention paid to the feature-map channels. Experimental results show that the detection accuracy of the optimized model is improved, as is the reliability of edge computation.
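As a rough illustration of the channel-attention step described above, a squeeze-and-excitation-style gate can be written in a few lines of NumPy. The function name, the ReLU bottleneck, and the externally supplied weight matrices are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Reweight channels of a (C, H, W) feature map by learned importance.

    w1: (C//r, C) bottleneck weights, w2: (C, C//r) expansion weights,
    where r is a channel reduction ratio.
    """
    squeeze = feature_map.mean(axis=(1, 2))       # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU bottleneck -> (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates in (0, 1) -> (C,)
    return feature_map * gates[:, None, None]     # per-channel reweighting
```

Because the gates lie strictly between 0 and 1, the mechanism can only attenuate uninformative channels relative to informative ones; in a trained network the two weight matrices are learned end to end with the detector.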
Affiliation(s)
- Qiang Liu: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China
- Lijun Dong: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China
- Zhigao Zeng: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China
- Wenqiu Zhu: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China
- Yanhui Zhu: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China
- Chen Meng: College of Computer Science, Hunan University of Technology, Zhuzhou, Hunan, China; Intelligent Information Perception and Processing Technology Hunan Province Key Laboratory, Hunan University of Technology, Zhuzhou, Hunan, China

5. Zhu Y, Wen H, Wu J, Zhao R. Online data poisoning attack against edge AI paradigm for IoT-enabled smart city. Math Biosci Eng 2023; 20:17726-17746. PMID: 38052534; DOI: 10.3934/mbe.2023788.
Abstract
The deep integration of edge computing and Artificial Intelligence (AI) in IoT (Internet of Things)-enabled smart cities has given rise to new edge AI paradigms that are more vulnerable to attacks such as data poisoning, model poisoning, and evasion attacks. This work proposes an online poisoning attack framework for the edge AI environment of IoT-enabled smart cities that takes limited storage space into account, using a rehearsal-based buffer mechanism to manipulate the model by incrementally polluting the sample data stream that arrives at an appropriately sized cache. A maximum-gradient-based sample selection strategy is presented, which converts the operation of traversing historical sample gradients into an online iterative computation to overcome the periodic overwriting of the sample data cache after training. Additionally, a maximum-loss-based sample pollution strategy is proposed to solve the problem of each poisoning sample being updated only once in basic online attacks, transforming the bi-level optimization problem from offline mode to online mode. Finally, the proposed online gray-box poisoning attack algorithms are implemented and evaluated on edge devices of IoT-enabled smart cities using an online data stream simulated with offline open-grid datasets. The results show that the proposed method outperforms the existing baseline methods in both attack effectiveness and overhead.
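A maximum-score rehearsal buffer of the kind described can be sketched with a min-heap: the buffer retains only the samples with the largest scores seen so far, so high-gradient samples survive the periodic overwriting of the cache. The function name, the `(score, sample)` tuple layout, and the caller-supplied `grad_norm` scorer are all illustrative assumptions:

```python
import heapq

def update_poison_buffer(buffer, stream_batch, grad_norm, capacity):
    """Keep the `capacity` stream samples with the largest gradient score.

    `buffer` is a min-heap of (score, sample) tuples, so buffer[0] is always
    the weakest retained sample and is the first candidate for replacement.
    """
    for sample in stream_batch:
        score = grad_norm(sample)
        if len(buffer) < capacity:
            heapq.heappush(buffer, (score, sample))
        elif score > buffer[0][0]:
            # New sample outscores the weakest retained one: swap them in O(log n).
            heapq.heapreplace(buffer, (score, sample))
    return buffer
```

Each arriving batch costs O(b log k) for batch size b and buffer capacity k, which is what makes this kind of selection feasible on a resource-constrained edge device.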
Affiliation(s)
- Yanxu Zhu: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China; Intelligent IoT Communication Technology Engineering Research Center, Chengdu 611731, China
- Hong Wen: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China; Intelligent IoT Communication Technology Engineering Research Center, Chengdu 611731, China
- Jinsong Wu: School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 510004, China; Department of Electrical Engineering, University of Chile, Santiago 8370451, Chile
- Runhui Zhao: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China; Intelligent IoT Communication Technology Engineering Research Center, Chengdu 611731, China

6. McDonnell KJ. Leveraging the Academic Artificial Intelligence Silecosystem to Advance the Community Oncology Enterprise. J Clin Med 2023; 12:4830. PMID: 37510945; PMCID: PMC10381436; DOI: 10.3390/jcm12144830.
Abstract
Over the last 75 years, artificial intelligence has evolved from a theoretical concept and novel paradigm describing the role that computers might play in our society to a tool with which we daily engage. In this review, we describe AI in terms of its constituent elements, the synthesis of which we refer to as the AI Silecosystem. Herein, we provide an historical perspective of the evolution of the AI Silecosystem, conceptualized and summarized as a Kuhnian paradigm. This manuscript focuses on the role that the AI Silecosystem plays in oncology and its emerging importance in the care of the community oncology patient. We observe that this important role arises out of a unique alliance between the academic oncology enterprise and community oncology practices. We provide evidence of this alliance by illustrating the practical establishment of the AI Silecosystem at the City of Hope Comprehensive Cancer Center and its team utilization by community oncology providers.
Affiliation(s)
- Kevin J McDonnell: Center for Precision Medicine, Department of Medical Oncology & Therapeutics Research, City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA

7. Molęda M, Małysiak-Mrozek B, Ding W, Sunderam V, Mrozek D. From Corrective to Predictive Maintenance - A Review of Maintenance Approaches for the Power Industry. Sensors (Basel) 2023; 23:5970. PMID: 37447820; DOI: 10.3390/s23135970.
Abstract
Appropriate maintenance of industrial equipment keeps production systems in good health and ensures the stability of production processes. In specific production sectors, such as the electrical power industry, equipment failures are rare but may lead to high costs and substantial economic losses not only for the power plant but for consumers and the larger society. Therefore, the power production industry relies on a variety of approaches to maintenance tasks, ranging from traditional solutions and engineering know-how to smart, AI-based analytics to avoid potential downtimes. This review shows the evolution of maintenance approaches to support maintenance planning, equipment monitoring and supervision. We present older techniques traditionally used in maintenance tasks and those that rely on IT analytics to automate tasks and perform the inference process for failure detection. We analyze prognostics and health-management techniques in detail, including their requirements, advantages and limitations. The review focuses on the power-generation sector. However, some of the issues addressed are common to other industries. The article also presents concepts and solutions that utilize emerging technologies related to Industry 4.0, touching on prescriptive analysis, Big Data and the Internet of Things. The primary motivation and purpose of the article are to present the existing practices and classic methods used by engineers, as well as modern approaches drawing from Artificial Intelligence and the concept of Industry 4.0. The summary of existing practices and the state of the art in the area of predictive maintenance provides two benefits. On the one hand, it leads to improving processes by matching existing tools and methods. On the other hand, it shows researchers potential directions for further analysis and new developments.
Affiliation(s)
- Marek Molęda: TAURON Wytwarzanie S.A., Promienna 51, 43-603 Jaworzno, Poland
- Bożena Małysiak-Mrozek: Department of Distributed Systems and Informatic Devices, Silesian University of Technology, 44-100 Gliwice, Poland
- Weiping Ding: School of Information Science and Technology, Nantong University, No. 9 Seyuan Road, Nantong 226019, China
- Vaidy Sunderam: Department of Computer Science, Emory University, Atlanta, GA 30322, USA
- Dariusz Mrozek: Department of Applied Informatics, Silesian University of Technology, 44-100 Gliwice, Poland

8. Zhu Y, Wen H, Zhao R, Jiang Y, Liu Q, Zhang P. Research on Data Poisoning Attack against Smart Grid Cyber-Physical System Based on Edge Computing. Sensors (Basel) 2023; 23:4509. PMID: 37177713; PMCID: PMC10181508; DOI: 10.3390/s23094509.
Abstract
Data poisoning is a well-known attack against machine learning models, in which malicious attackers contaminate the training data to manipulate critical models and predictive outcomes by masquerading as terminal devices. As this type of attack can be fatal to the operation of a smart grid, addressing data poisoning is of utmost importance. However, this attack requires solving an expensive two-level optimization problem, which can be challenging to implement in resource-constrained edge environments of the smart grid. To mitigate this issue, it is crucial to enhance efficiency and reduce the costs of the attack. This paper proposes an online data poisoning attack framework based on the online regression task model. The framework achieves the goal of manipulating the model by polluting the sample data stream that arrives at the cache incrementally. Furthermore, a point selection strategy based on sample loss is proposed in this framework. Compared to the traditional random point selection strategy, this strategy makes the attack more targeted, thereby enhancing the attack's efficiency. Additionally, a batch-polluting strategy is proposed, which synchronously updates the poisoning points based on the direction of gradient ascent. This strategy reduces the number of iterations required for inner optimization and thus reduces the time overhead. Finally, multiple experiments are conducted to compare the proposed method with the baseline method, and the evaluation index of loss over time is proposed to demonstrate the effectiveness of the method. The results show that the proposed method outperforms the existing baseline method in both attack effectiveness and overhead.
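The bilevel structure behind such gradient-ascent poisoning can be illustrated on a one-dimensional least-squares victim: the attacker differentiates the validation loss through the victim's single SGD step and ascends it with respect to the poison label. The function name, learning rates, and one-step victim model below are illustrative simplifications, not the paper's algorithm:

```python
import numpy as np

def poison_step(x_p, y_p, w, x_val, y_val, lr_victim=0.1, lr_attack=0.5):
    """One online poisoning iteration for a 1-D least-squares victim.

    Inner level: the victim takes one SGD step on the (poisoned) sample.
    Outer level: the attacker ascends the validation loss w.r.t. y_p,
    differentiating through the victim's update via the chain rule.
    """
    # Inner step: victim minimizes (w*x_p - y_p)^2 on the poisoned sample.
    w_next = w - lr_victim * 2.0 * (w * x_p - y_p) * x_p
    # d(w_next)/d(y_p): sensitivity of the victim's update to the poison label.
    dw_dyp = lr_victim * 2.0 * x_p
    # d(validation loss)/d(w_next), with loss = mean((w_next*x - y)^2).
    dloss_dw = np.mean(2.0 * (w_next * x_val - y_val) * x_val)
    # Outer step: gradient ASCENT on the validation loss.
    y_p_new = y_p + lr_attack * dloss_dw * dw_dyp
    return y_p_new, w_next
```

A clean poison label (consistent with the true model) is a stationary point of this ascent, while an already-corrupted label is pushed further from the clean value, which is exactly the direction that increases the victim's validation loss.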
Affiliation(s)
- Yanxu Zhu: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
- Hong Wen: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
- Runhui Zhao: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
- Yixin Jiang: Electric Power Research Institute, China Southern Power Grid Co., Ltd., Guangzhou 510663, China
- Qiang Liu: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China
- Peng Zhang: School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu 611731, China; Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, Chengdu 611731, China

9. Rasha AH, Li T, Huang W, Gu J, Li C. Federated Learning in Smart Cities: Privacy and Security Survey. Inf Sci (N Y) 2023. DOI: 10.1016/j.ins.2023.03.033.

10. Biswas A, Wang HC. Autonomous Vehicles Enabled by the Integration of IoT, Edge Intelligence, 5G, and Blockchain. Sensors (Basel) 2023; 23:1963. PMID: 36850560; PMCID: PMC9963447; DOI: 10.3390/s23041963.
Abstract
The wave of modernization around us has put the automotive industry on the brink of a paradigm shift. Leveraging the ever-evolving technologies, vehicles are steadily transitioning towards automated driving to constitute an integral part of the intelligent transportation system (ITS). The term autonomous vehicle (AV) has become ubiquitous in our lives, owing to the extensive research and development that frequently make headlines. Nonetheless, the flourishing of AVs hinges on many factors due to the extremely stringent demands for safety, security, and reliability. Cutting-edge technologies play critical roles in tackling complicated issues. Assimilating trailblazing technologies such as the Internet of Things (IoT), edge intelligence (EI), 5G, and Blockchain into the AV architecture will unlock the potential of an efficient and sustainable transportation system. This paper provides a comprehensive review of the state-of-the-art in the literature on the impact and implementation of the aforementioned technologies into AV architectures, along with the challenges faced by each of them. We also provide insights into the technological offshoots concerning their seamless integration to fulfill the requirements of AVs. Finally, the paper sheds light on future research directions and opportunities that will spur further developments. Exploring the integration of key enabling technologies in a single work will serve as a valuable reference for the community interested in the relevant issues surrounding AV research.
Affiliation(s)
- Anushka Biswas: Department of Power Engineering, Jadavpur University, Kolkata 700056, India
- Hwang-Cheng Wang: Department of Electronic Engineering, National Ilan University, Yilan 260007, Taiwan

11. Rodriguez-Conde I, Campos C, Fdez-Riverola F. Horizontally Distributed Inference of Deep Neural Networks for AI-Enabled IoT. Sensors (Basel) 2023; 23:1911. PMID: 36850508; PMCID: PMC9958567; DOI: 10.3390/s23041911.
Abstract
Motivated by the pervasiveness of artificial intelligence (AI) and the Internet of Things (IoT) in the current "smart everything" scenario, this article provides a comprehensive overview of the most recent research at the intersection of both domains. It focuses on the design and development of specific mechanisms for enabling collaborative inference across edge devices, towards the in situ execution of highly complex state-of-the-art deep neural networks (DNNs), despite the resource-constrained nature of such infrastructures. In particular, the review discusses the most salient approaches conceived along those lines, elaborating on the specificities of the partitioning schemes and the parallelism paradigms explored. It provides an organized and schematic discussion of the underlying workflows and associated communication patterns, as well as the architectural aspects of the DNNs that have driven the design of such techniques, while also highlighting both the primary challenges encountered at the design and operational levels and the specific adjustments or enhancements explored in response to them.
Affiliation(s)
- Ivan Rodriguez-Conde: Department of Computer Science, University of Arkansas at Little Rock, 2801 South University Avenue, Little Rock, AR 72204, USA
- Celso Campos: Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- Florentino Fdez-Riverola: CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain

12. Bourechak A, Zedadra O, Kouahla MN, Guerrieri A, Seridi H, Fortino G. At the Confluence of Artificial Intelligence and Edge Computing in IoT-Based Applications: A Review and New Perspectives. Sensors (Basel) 2023; 23:1639. PMID: 36772680; PMCID: PMC9920982; DOI: 10.3390/s23031639.
Abstract
Given its advantages in low latency, fast response, context-aware services, mobility, and privacy preservation, edge computing has emerged as the key support for intelligent applications and 5G/6G Internet of things (IoT) networks. This technology extends the cloud by providing intermediate services at the edge of the network and improving the quality of service for latency-sensitive applications. Many AI-based solutions with machine learning, deep learning, and swarm intelligence have exhibited the high potential to perform intelligent cognitive sensing, intelligent network management, big data analytics, and security enhancement for edge-based smart applications. Despite its many benefits, there are still concerns about the required capabilities of intelligent edge computing to deal with the computational complexity of machine learning techniques for big IoT data analytics. Resource constraints of edge computing, distributed computing, efficient orchestration, and synchronization of resources are all factors that require attention for quality of service improvement and cost-effective development of edge-based smart applications. In this context, this paper aims to explore the confluence of AI and edge in many application domains in order to leverage the potential of the existing research around these factors and identify new perspectives. The confluence of edge computing and AI improves the quality of user experience in emergency situations, such as in the Internet of vehicles, where critical inaccuracies or delays can lead to damage and accidents. These are the same factors that most studies have used to evaluate the success of an edge-based application. In this review, we first provide an in-depth analysis of the state of the art of AI in edge-based applications with a focus on eight application areas: smart agriculture, smart environment, smart grid, smart healthcare, smart industry, smart education, smart transportation, and security and privacy. 
We then present a qualitative comparison that emphasizes the main objective of the confluence, the roles and use of artificial intelligence at the network edge, and the key enabling technologies for edge analytics. Open challenges, future research directions, and perspectives are then identified and discussed, and some conclusions are drawn.
Collapse
Affiliation(s)
- Amira Bourechak
- LabSTIC Laboratory, Department of Computer Science, 8 Mai 1945 University, P.O. Box 401, Guelma 24000, Algeria
- Ouarda Zedadra
- LabSTIC Laboratory, Department of Computer Science, 8 Mai 1945 University, P.O. Box 401, Guelma 24000, Algeria
- Mohamed Nadjib Kouahla
- LabSTIC Laboratory, Department of Computer Science, 8 Mai 1945 University, P.O. Box 401, Guelma 24000, Algeria
- Antonio Guerrieri
- ICAR-CNR, Institute for High Performance Computing and Networking, National Research Council of Italy, Via P. Bucci 8/9C, 87036 Rende, CS, Italy
- Hamid Seridi
- LabSTIC Laboratory, Department of Computer Science, 8 Mai 1945 University, P.O. Box 401, Guelma 24000, Algeria
- Giancarlo Fortino
- ICAR-CNR, Institute for High Performance Computing and Networking, National Research Council of Italy, Via P. Bucci 8/9C, 87036 Rende, CS, Italy
- DIMES, University of Calabria, Via P. Bucci 41C, 87036 Rende, CS, Italy
13
Hoffpauir K, Simmons J, Schmidt N, Pittala R, Briggs I, Makani S, Jararweh Y. A Survey on Edge Intelligence and Lightweight Machine Learning Support for Future Applications and Services. ACM JOURNAL OF DATA AND INFORMATION QUALITY 2023. [DOI: 10.1145/3581759] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
As the number of devices connected to the internet has grown, so too has the intensity of the tasks that these devices need to perform. Modern networks increasingly perform computationally intensive tasks on low-power devices and low-end hardware. Current architectures and platforms tend towards centralized, resource-rich cloud computing approaches to address these deficits. However, edge computing presents a much more viable and flexible alternative. Edge computing refers to a distributed and decentralized network architecture in which demanding tasks such as image recognition, smart city services, and high-intensity data processing can be distributed over a number of integrated network devices. In this paper, we provide a comprehensive survey of emerging edge intelligence applications, lightweight machine learning algorithms, and their support for future applications and services. We start by analyzing the rise of cloud computing, discussing its weak points, and identifying situations in which edge computing provides advantages over traditional cloud computing architectures. We then delve into the survey: the first section identifies opportunities and domains for edge computing growth, the second identifies algorithms and approaches that can be used to enhance edge intelligence implementations, and the third analyzes situations in which edge intelligence can be enhanced using any of the aforementioned algorithms or approaches. In this third section, lightweight machine learning approaches are detailed. A more in-depth analysis and discussion of future developments follows. The primary aim of this piece is to ensure that appropriate approaches, chiefly lightweight machine learning methods, are applied adequately to artificial intelligence implementations in edge systems.
14
Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. A Survey on Optimization Techniques for Edge Artificial Intelligence (AI). SENSORS (BASEL, SWITZERLAND) 2023; 23:1279. [PMID: 36772319 PMCID: PMC9919555 DOI: 10.3390/s23031279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 01/12/2023] [Accepted: 01/19/2023] [Indexed: 06/18/2023]
Abstract
Artificial Intelligence (AI) models are being produced and used to solve a variety of current and future business and technical problems. Therefore, AI model engineering processes, platforms, and products are acquiring special significance across industry verticals. To achieve deeper automation, the number of data features used while generating highly promising and productive AI models is large, and hence the resulting AI models are bulky. Such heavyweight models consume a lot of computation, storage, networking, and energy resources. On the other hand, AI models are increasingly being deployed in IoT devices to ensure real-time knowledge discovery and dissemination. Real-time insights are of paramount importance in producing and releasing real-time, intelligent services and applications. Thus, edge intelligence through on-device data processing has laid a stimulating foundation for real-time intelligent enterprises and environments. With these emerging requirements, the focus has turned towards unearthing competent and cognitive techniques for maximally compressing huge AI models without sacrificing model performance. AI researchers have therefore come up with a number of powerful optimization techniques and tools to optimize AI models. This paper digs deep to describe the various kinds of model optimization at different levels and layers and, having surveyed the optimization methods, highlights the importance of an enabling AI model optimization framework.
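Quantization is one of the compression techniques such surveys typically cover. As an illustrative sketch only (not code from the paper), 8-bit affine post-training quantization of a weight tensor can look like this:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) 8-bit quantization of a weight tensor.

    Maps the float range [w.min(), w.max()] onto int8 [-128, 127] and
    returns the quantized tensor plus the (scale, zero_point) pair
    needed to dequantize.  Generic sketch of one optimization the
    survey discusses, not the paper's own code.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid zero scale for constant tensors
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale
```

The reconstruction error is bounded by roughly one quantization step (the scale), which is why such models shrink 4× from float32 with little accuracy loss.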
Affiliation(s)
- Chellammal Surianarayanan
- Centre for Distance and Online Education, Bharathidasan University, Tiruchirappalli 620024, Tamilnadu, India
- Pethuru Raj Chelliah
- Edge AI Division, Reliance Jio Platforms Ltd., Bangalore 560103, Karnataka, India
- Edmond Prakash
- Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
- Chaminda Hewage
- Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
15
SENSIPLUS-LM: A Low-Cost EIS-Enabled Microchip Enhanced with an Open-Source Tiny Machine Learning Toolchain. COMPUTERS 2023. [DOI: 10.3390/computers12020023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The technological step towards sensors' miniaturization, low-cost platforms, and evolved communication paradigms is rapidly moving monitoring and computation tasks to the edge, leading to the massive joint deployment of the Internet of Things (IoT) and machine learning (ML). Edge devices are often composed of sensors and actuators, and their behavior depends on rapid inference of specific conditions; communicating raw data and leaving processing to a centralized system therefore makes computation and decision-making slow and ineffective. This paper responds to this need by proposing an integrated architecture able to host both the sensing part and the ML-empowered learning and classifying mechanisms directly on board, and thus able to overcome some of the limitations of off-the-shelf solutions. The presented system is based on a proprietary platform named SENSIPLUS, a multi-sensor device especially devoted to performing electrical impedance spectroscopy (EIS) over a wide frequency interval. The measurement acquisition, data processing, and embedded classification techniques are supported by a system capable of generating and compiling code automatically, which uses a toolchain to run inference routines on the edge. As a case study, this work exploits the capabilities of such a platform for water quality assessment. The joint system, composed of the measurement platform and the developed toolchain, is named SENSIPLUS-LM, standing for SENSIPLUS Learning Machine. The toolchain empowers the SENSIPLUS platform by moving the inference phase of the machine learning algorithm to the edge, thus limiting the need for external computing platforms. The software part, i.e., the developed toolchain, is available for free download from GitLab, as reported in this paper.
16
Zhang J, Zhang W, Xu J. StegEdge: Privacy protection of unknown sensitive attributes in edge intelligence via deception. JOURNAL OF COMPUTER SECURITY 2022. [DOI: 10.3233/jcs-220042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Due to the limited capabilities of user devices, such as smartphones and the Internet of Things (IoT), edge intelligence is being recognized as a promising paradigm to enable effective analysis of the data generated by these devices with complex artificial intelligence (AI) models, and it often entails fully or partially offloading the computation of neural networks from user devices to edge computing servers. To protect users' data privacy in the process, most existing research assumes that the private (sensitive) attributes of user data are known in advance when designing privacy-protection measures. This assumption is restrictive in real life and thus limits the application of these methods. Inspired by research in image steganography and cyber deception, in this paper we propose StegEdge, a conceptually novel approach to this challenge. StegEdge takes as input the user-generated image and a randomly selected "cover" image that does not pose any privacy concern (e.g., downloaded from the Internet), and extracts features such that the utility tasks can still be conducted by the edge computing servers, while potential adversaries seeking to reconstruct/recover the original user data or analyze sensitive attributes from the extracted features sent from users to the server will largely acquire information about the cover image. Thus, users' data privacy is protected via a form of deception. Empirical results on the CelebA and ImageNet datasets show that, at the same level of accuracy for utility tasks, StegEdge reduces the adversaries' accuracy of predicting sensitive attributes by up to 38% compared with other methods, while also defending against adversaries seeking to reconstruct user data from the extracted features.
Affiliation(s)
- Jianfeng Zhang
- College of Computer Science, Nankai University, Tianjin, China
- Wensheng Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jingdong Xu
- College of Computer Science, Nankai University, Tianjin, China
17
Ahmed M, Liu J, Mirza MA, Khan WU, Al-Wesabi FN. MARL based resource allocation scheme leveraging vehicular cloudlet in automotive-industry 5.0. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2022.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
18
Li D, Song L. Multi-Agent Multi-View Collaborative Perception Based on Semi-Supervised Online Evolutive Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:6893. [PMID: 36146246 PMCID: PMC9502217 DOI: 10.3390/s22186893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 09/03/2022] [Accepted: 09/08/2022] [Indexed: 06/16/2023]
Abstract
In the edge intelligence environment, multiple sensing devices perceive and recognize the current scene in real time to provide specific user services. However, the generalizability of a fixed recognition model will gradually weaken due to the time-varying perception scene. To ensure the stability of the perception and recognition service, each edge model/agent needs to continuously learn from new perception data unassisted, to adapt to perception environment changes and jointly build the online evolutive learning (OEL) system. The generalization-degradation problem can be addressed by deploying the semi-supervised learning (SSL) method on multi-view agents and continuously tuning each discriminative model by collaborative perception. This paper proposes a multi-view agents' collaborative perception (MACP) semi-supervised online evolutive learning method. First, each view model is initialized based on self-supervised learning methods, so that each initialized model can learn differentiated feature-extraction patterns with a degree of discriminative independence. Then, through discriminative information fusion of the multi-view models' predictions on the unlabeled perceptual data, reliable pseudo-labels are obtained for the consistency-regularization process of SSL. Moreover, we introduce additional critical parameter constraints to continuously improve the discriminative independence of each view model during training. We compare our method with multiple representative multi-model and single-model SSL methods on various benchmarks. Experimental results show the superiority of MACP in terms of convergence efficiency and performance. Meanwhile, we construct an ideal multi-view experiment to demonstrate the application potential of MACP in practical perception scenarios.
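The core fusion step — turning multi-view predictions on unlabeled data into reliable pseudo-labels for consistency regularization — can be sketched as follows. This is a simplified, hypothetical illustration of the general SSL idea, not the authors' exact MACP fusion rule:

```python
import numpy as np

def fuse_pseudo_labels(view_probs, threshold=0.9):
    """Fuse per-view class-probability predictions on unlabeled samples.

    view_probs: list of (N, C) arrays, one per view/agent.
    Returns (labels, mask): the pseudo-label per sample taken from the
    averaged distribution, and a boolean mask keeping only confident
    fusions, as in typical consistency-regularization SSL.
    """
    mean = np.mean(view_probs, axis=0)      # (N, C) fused distribution
    labels = mean.argmax(axis=1)            # hard pseudo-label per sample
    mask = mean.max(axis=1) >= threshold    # keep only high-confidence samples
    return labels, mask
```

Samples where the views disagree yield a flat fused distribution and are masked out, which is how such schemes avoid reinforcing noisy labels.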
Affiliation(s)
- Di Li
- College of Information Engineering, Henan University of Science and Technology, Luoyang 471000, China
- Liang Song
- Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
19
Tang X, Xu L, Chen G. Research on the Rapid Diagnostic Method of Rolling Bearing Fault Based on Cloud-Edge Collaboration. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1277. [PMID: 36141163 PMCID: PMC9497659 DOI: 10.3390/e24091277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Revised: 09/05/2022] [Accepted: 09/07/2022] [Indexed: 06/16/2023]
Abstract
Recent deep-learning methods for fault diagnosis of rolling bearings need a significant amount of computing time and resources, and most cannot meet the requirements of real-time fault diagnosis under the cloud computing framework. This paper proposes a quick cloud-edge collaborative bearing fault diagnostic method based on a tradeoff between the advantages and disadvantages of cloud and edge computing. First, a collaborative cloud-based framework and an improved DSCNN-GAP algorithm are proposed to build a general model using the public bearing fault dataset. Second, the general model is distributed to each edge node, and a limited number of unique fault samples acquired by each edge node are used to quickly adjust the parameters of the model before running diagnostic tests. Finally, a fused result is produced from the diagnostic results of each edge node by DS evidence theory. Experiment results show that the proposed method not only improves diagnostic accuracy through DSCNN-GAP and multi-sensor fusion, but also decreases diagnosis time through transfer learning within the cloud-edge collaborative framework. Additionally, the method can effectively enhance data security and privacy protection.
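The final fusion step relies on Dempster-Shafer (DS) evidence theory. A minimal sketch of Dempster's combination rule for two edge nodes' diagnostic mass functions, simplified to singleton fault hypotheses (the paper's exact formulation may differ and the fault names here are hypothetical):

```python
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments over the same singleton
    hypotheses with Dempster's rule.

    Simplification: all mass sits on singletons, so any pairing of two
    different hypotheses is conflicting evidence; the surviving agreeing
    mass is renormalized by (1 - K), where K is the total conflict.
    """
    hyps = m1.keys()
    # Conflict K: mass assigned to incompatible hypothesis pairs.
    k = sum(m1[a] * m2[b] for a in hyps for b in hyps if a != b)
    if k >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    return {a: m1[a] * m2[a] / (1.0 - k) for a in hyps}
```

Combining agreeing nodes sharpens the dominant fault hypothesis, which is the effect the fusion stage exploits across edge nodes.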
Affiliation(s)
- Xianghong Tang
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Lei Xu
- Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China
- Gongsheng Chen
- Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China
20
DISSEC: A distributed deep neural network inference scheduling strategy for edge clusters. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.05.084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
21
Yang H, Lam KY, Xiao L, Xiong Z, Hu H, Niyato D, Vincent Poor H. Lead federated neuromorphic learning for wireless edge artificial intelligence. Nat Commun 2022; 13:4269. [PMID: 35879326 PMCID: PMC9314401 DOI: 10.1038/s41467-022-32020-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 07/13/2022] [Indexed: 12/02/2022] Open
Abstract
In order to realize the full potential of wireless edge artificial intelligence (AI), very large and diverse datasets will often be required for energy-demanding model training on resource-constrained edge devices. This paper proposes a lead federated neuromorphic learning (LFNL) technique: a decentralized, energy-efficient, brain-inspired computing method based on spiking neural networks. The proposed technique enables edge devices to exploit brain-like biophysiological structure to collaboratively train a global model while helping preserve privacy. Experimental results show that, under uneven dataset distribution among edge devices, LFNL achieves a recognition accuracy comparable to existing edge AI techniques, while substantially reducing data traffic by >3.5× and computational latency by >2.0×. Furthermore, LFNL significantly reduces energy consumption by >4.5× compared to standard federated learning, with a slight accuracy loss of up to 1.5%. The proposed LFNL can therefore facilitate the development of brain-inspired computing and edge AI. Designing energy-efficient computing solutions for the implementation of AI algorithms in edge devices remains a challenge. Yang et al. propose a decentralized brain-inspired computing method enabling multiple edge devices to collaboratively train a global model without a fixed central coordinator.
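The aggregation performed at an elected lead device can be sketched as dataset-size-weighted model averaging in the style of federated averaging (FedAvg). This is a generic illustration of that aggregation step only; LFNL's spiking-neural-network specifics and leader-election mechanism are omitted:

```python
import numpy as np

def lead_federated_average(local_weights, n_samples):
    """Aggregate per-device model weights at the elected lead device.

    local_weights: list (one entry per device) of lists of per-layer
    numpy arrays.  Each device's contribution is weighted by the size
    of its local dataset, so devices with more data pull the global
    model harder (FedAvg-style aggregation).
    """
    total = sum(n_samples)
    return [
        sum(w_dev[layer] * (n / total)
            for w_dev, n in zip(local_weights, n_samples))
        for layer in range(len(local_weights[0]))
    ]
```

In a leaderless round the devices would each send weights to whichever peer currently holds the lead role, which replaces the fixed central coordinator of standard federated learning.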
Affiliation(s)
- Helin Yang
- Department of Information and Communication Engineering, School of Informatics, Xiamen University, Xiamen, China
- Strategic Centre for Research in Privacy-Preserving Technologies and Systems, Nanyang Technological University, Singapore, Singapore
- Kwok-Yan Lam
- Strategic Centre for Research in Privacy-Preserving Technologies and Systems, Nanyang Technological University, Singapore, Singapore
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Liang Xiao
- Department of Information and Communication Engineering, School of Informatics, Xiamen University, Xiamen, China
- Zehui Xiong
- Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore, Singapore
- Hao Hu
- Department of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Dusit Niyato
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- H Vincent Poor
- Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA
22
Ahmed M, Raza S, Mirza MA, Aziz A, Khan MA, Khan WU, Li J, Han Z. A survey on vehicular task offloading: Classification, issues, and challenges. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2022.05.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
23
Himeur Y, Sohail SS, Bensaali F, Amira A, Alazab M. Latest trends of security and privacy in recommender systems: A comprehensive review and future perspectives. Comput Secur 2022. [DOI: 10.1016/j.cose.2022.102746] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
24
Edge AI and Blockchain for Smart Sustainable Cities: Promise and Potential. SUSTAINABILITY 2022. [DOI: 10.3390/su14137609] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Modern cities worldwide are undergoing radical changes to foster a clean, sustainable and secure environment, install smart infrastructures, deliver intelligent services to residents, and facilitate access for vulnerable groups. The adoption of new technologies is at the heart of implementing many initiatives to address critical concerns in urban mobility, healthcare, water management, clean energy production and consumption, energy saving, housing, safety, and accessibility. Given the advancements in sensing and communication technologies over the past few decades, exploring the adoption of recent and innovative technologies is critical to addressing these concerns and making cities smarter, more sustainable, and safer. This article provides a broad understanding of the current urban challenges faced by smart cities. It highlights two new technological advances, edge artificial intelligence (edge AI) and Blockchain, and analyzes their transformative potential to make our cities smarter. In addition, it explores the multiple uses of edge AI and Blockchain technologies in the fields of smart mobility and smart energy and reviews relevant research efforts in these two critical areas of modern smart cities. It surveys the various algorithms for vehicle detection, counting, and speed identification that address the problem of traffic congestion, as well as the different use-cases of Blockchain for trustworthy communication and trading between vehicles and for smart energy trading. This review paper is expected to serve as a guideline for future research on adopting edge AI and Blockchain in other smart city domains.
25
Áika: A Distributed Edge System for AI Inference. BIG DATA AND COGNITIVE COMPUTING 2022. [DOI: 10.3390/bdcc6020068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Video monitoring and surveillance of commercial fisheries in world oceans has been proposed by the governing bodies of several nations as a response to crimes such as overfishing. Traditional video monitoring systems may not be suitable due to limitations in the offshore fishing environment, including low bandwidth, unstable satellite network connections and issues of preserving the privacy of crew members. In this paper, we present Áika, a robust system for executing distributed Artificial Intelligence (AI) applications on the edge. Áika provides engineers and researchers with several building blocks in the form of Agents, which enable the expression of computation pipelines and distributed applications with robustness and privacy guarantees. Agents are continuously monitored by dedicated monitoring nodes, and provide applications with a distributed checkpointing and replication scheme. Áika is designed for monitoring and surveillance in privacy-sensitive and unstable offshore environments, where flexible access policies at the storage level can provide privacy guarantees for data transfer and access.
26
Abstract
The main goal of this paper is to survey the influential research on distributed learning technologies playing a key role in the 6G world. Upcoming 6G technology is expected to create an intelligent, highly scalable, dynamic, and programmable wireless communication network able to serve many heterogeneous wireless devices. Various machine learning (ML) techniques are expected to be deployed over the intelligent 6G wireless network to provide solutions to highly complex networking problems. To this end, various 6G nodes and devices are expected to generate vast amounts of data through external sensors, and data analysis will be needed. With such massive and distributed data, and various innovations in computing hardware, distributed ML techniques are expected to play an important role in 6G. Though they have several advantages over centralized ML techniques, implementing distributed ML algorithms over resource-constrained wireless environments can be challenging. Therefore, it is important to select a proper ML algorithm based upon the characteristics of the wireless environment and the resource requirements of the learning process. In this work, we survey the recently introduced distributed ML techniques with their characteristics and possible benefits, focusing our attention on the most influential papers in the area. We finally give our perspective on the main challenges and advantages for telecommunication networks, along with the main scenarios that could arise.
27
Johnson M, Albizri A, Harfouche A, Fosso-Wamba S. Integrating human knowledge into artificial intelligence for complex and ill-structured problems: Informed artificial intelligence. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2022. [DOI: 10.1016/j.ijinfomgt.2022.102479] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
28
Abstract
The end of Moore's Law, aligned with data privacy concerns, is forcing machine learning (ML) to shift from the cloud to the deep edge. In next-generation ML systems, the inference and part of the training process will be performed at the edge, while the cloud stays responsible for major updates. This new computing paradigm, called federated learning (FL), alleviates the cloud and network infrastructure while increasing data privacy. Recent advances have empowered the inference pass of quantized artificial neural networks (ANNs) on Arm Cortex-M and RISC-V microcontroller units (MCUs). Nevertheless, training remains confined to the cloud, imposing the transaction of high volumes of private data over a network and leading to unpredictable delays when ML applications attempt to adapt to adversarial environments. To fill this gap, we make the first attempt to evaluate the feasibility of ANN training on Arm Cortex-M MCUs. Of the available optimization algorithms, stochastic gradient descent (SGD) has the best trade-off between accuracy, memory footprint, and latency. However, its original form and the variants available in the literature still do not fit the stringent requirements of Arm Cortex-M MCUs. We propose L-SGD, a lightweight implementation of SGD optimized for maximum speed and minimal memory footprint in this class of MCUs. We developed a floating-point version and another that operates over quantized weights. For a fully-connected ANN trained on the MNIST dataset, L-SGD (float-32) is 4.20× faster than SGD while requiring only 2.80% of the memory, with negligible accuracy loss. Results also show that quantized training is still unfeasible for training an ANN from scratch, but is a lightweight solution for performing minor model fixes and counteracting the fairness problem in typical FL systems.
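For reference, the baseline update that L-SGD restructures is the plain stochastic-gradient-descent step, w ← w − η·g per parameter tensor. A minimal sketch of that baseline (the Cortex-M memory-footprint optimizations that distinguish L-SGD are not reproduced here):

```python
import numpy as np

def sgd_step(weights, grads, lr=0.01):
    """One in-place stochastic gradient descent update: w <- w - lr * g.

    weights and grads are parallel lists of per-layer numpy arrays.
    Updating in place matters on MCUs, where a second copy of the
    weights may simply not fit in RAM.
    """
    for w, g in zip(weights, grads):
        w -= lr * g  # in-place update, no temporary weight copy
    return weights
```

On an MCU this loop would run over fixed, statically allocated buffers; the in-place formulation is what keeps the memory footprint near the size of the model itself.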
29
Wu X, Qi L, Gao J, Ji G, Xu X. An ensemble of random decision trees with local differential privacy in edge computing. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.01.145] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
30
Construction of Smart Grid Load Forecast Model by Edge Computing. ENERGIES 2022. [DOI: 10.3390/en15093028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
This research aims to minimize unnecessary resource consumption by intelligent Power Grid Systems (PGSs). Edge Computing (EC) technology is used to forecast PGS load and optimize the PGS load forecasting model. Following a literature review of EC and Internet of Things (IoT)-native edge devices, an intelligent PGS-oriented Resource Management Scheme (RMS) and PGS load forecasting model are proposed based on task offloading. Simultaneously, an online delay-aware power Resource Allocation Algorithm (RAA) is developed for the EC architecture. Finally, a comparison of three algorithms corroborates that the system overhead decreases significantly with model iteration, stabilizing from the 40th iteration. Moreover, given no more than 50 users, the average user delay of the proposed delay-aware power RAA is less than 13 s, better than that of the other two algorithms. This research contributes to optimizing intelligent PGSs in smart cities and improving power transmission efficiency.
31
Abstract
In the 5G intelligent edge scenario, more and more accelerator-based single-board computers (SBCs) with low power consumption and high performance are being used as edge devices to run the inference part of artificial intelligence (AI) models and deploy intelligent applications. In this paper, we investigate the inference workflow and performance of the You Only Look Once (YOLO) network, the most popular object detection model, on three different accelerator-based SBCs: NVIDIA Jetson Nano, NVIDIA Jetson Xavier NX, and Raspberry Pi 4B (RPi) with Intel Neural Compute Stick 2 (NCS2). Different video contents with different input resize windows are detected and benchmarked using four different versions of the YOLO model across the above three SBCs. Comparing the inference performance of the three SBCs shows that RPi + NCS2 is friendlier to lightweight models; for example, the FPS of detected videos on RPi + NCS2 running YOLOv3-tiny is 7.6 times higher than with YOLOv3. However, in terms of detection accuracy, we found that, in the process of realizing edge intelligence, adapting an AI model to run on RPi + NCS2 is much more complex than on Jetson devices. The analysis results indicate that Jetson Nano is a good trade-off SBC in terms of performance and cost; it achieves up to 15 FPS of detected videos when running YOLOv4-tiny, and this result can be further increased by using TensorRT.
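The frames-per-second figures reported in such benchmarks come down to timing a per-frame inference call over a video. A minimal, hypothetical benchmarking helper, where `infer` is a stand-in for any detector forward pass (not the paper's actual harness):

```python
import time

def measure_fps(infer, frames):
    """Measure end-to-end inference throughput (frames per second).

    infer: a callable run once per frame (e.g. a YOLO forward pass).
    frames: an iterable of input frames.
    Returns processed frames divided by wall-clock seconds, the same
    headline metric the SBC comparison reports.
    """
    frames = list(frames)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")
```

Timing many frames in one measurement, rather than one frame at a time, amortizes timer overhead and first-call warm-up effects.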
32
Hayyolalam V, Otoum S, Özkasap Ö. Dynamic QoS/QoE-aware reliable service composition framework for edge intelligence. CLUSTER COMPUTING 2022; 25:1695-1713. [PMID: 35368911 PMCID: PMC8959554 DOI: 10.1007/s10586-022-03572-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 02/06/2022] [Accepted: 02/16/2022] [Indexed: 06/06/2023]
Abstract
Edge intelligence has become popular recently since it brings smartness and copes with some shortcomings of conventional technologies such as cloud computing, Internet of Things (IoT), and centralized AI adoptions. However, although utilizing edge intelligence contributes to providing smart systems such as automated driving systems, smart cities, and connected healthcare systems, it is not free from limitations. There exist various challenges in integrating AI and edge computing, one of which is addressed in this paper. Our main focus is to handle the adoption of AI methods on resource-constrained edge devices. In this regard, we introduce the concept of Edge devices as a Service (EdaaS) and propose a quality of service (QoS)- and quality of experience (QoE)-aware dynamic and reliable framework for AI subtask composition. The proposed framework is evaluated using three well-known meta-heuristics in terms of various metrics for a connected healthcare application scenario. The experimental results confirm the applicability of the proposed framework. Moreover, the results reveal that black widow optimization (BWO) can handle the issue more efficiently than particle swarm optimization (PSO) and simulated annealing (SA): BWO outperforms SA in all experiments and PSO in 95% of them, for overall efficiencies of 100% over SA and 95% over PSO.
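Of the three meta-heuristics compared, simulated annealing is the simplest to sketch. A generic minimization loop under assumed `cost` and `neighbor` functions (a hypothetical illustration of the technique, not the paper's configuration or its BWO competitor):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        iters=500, seed=0):
    """Minimal simulated annealing for minimization.

    cost: maps a candidate solution (e.g. a service composition) to a scalar.
    neighbor: proposes a perturbed candidate given (x, rng).
    Downhill moves are always accepted; uphill moves are accepted with
    probability exp(-delta / temperature), which shrinks as the
    temperature cools geometrically.
    """
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_c = x0, cost(x0)
    for _ in range(iters):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_c:
                best, best_c = x, cost(x)
        t *= cooling  # geometric cooling schedule
    return best, best_c
```

For service composition, `x` would be an assignment of AI subtasks to edge devices and `cost` a QoS/QoE penalty; the same loop applies unchanged.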
Affiliation(s)
- Safa Otoum
- College of Technological Innovation (CTI), Zayed University, Abu Dhabi, United Arab Emirates
- Öznur Özkasap
- Department of Computer Engineering, Koç University, Istanbul, Turkey
33
AI on the edge: a comprehensive review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10141-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
34
|
Loseto G, Scioscia F, Ruta M, Gramegna F, Ieva S, Fasciano C, Bilenchi I, Loconte D. Osmotic Cloud-Edge Intelligence for IoT-Based Cyber-Physical Systems. SENSORS 2022; 22:s22062166. [PMID: 35336335 PMCID: PMC8955238 DOI: 10.3390/s22062166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 02/28/2022] [Accepted: 03/03/2022] [Indexed: 11/22/2022]
Abstract
Artificial Intelligence (AI) in Cyber-Physical Systems allows machine learning inference on acquired data with ever greater accuracy, thanks to models trained with massive amounts of information generated by Internet of Things devices. Edge Intelligence is increasingly adopted to execute inference on data at the border of local networks, exploiting models trained in the Cloud. However, training tasks on Edge nodes are not yet supported with flexible dynamic migration between Edge and Cloud. This paper proposes a Cloud-Edge AI microservice architecture based on Osmotic Computing principles. Notable features include: (i) a containerized architecture enabling training and inference on the Edge, the Cloud, or both, exploiting computational resources opportunistically to reach the best prediction accuracy; and (ii) microservice encapsulation of each architectural module, allowing a direct mapping onto Commercial-Off-The-Shelf (COTS) components. Building on the proposed architecture, a prototype has been realized with commodity hardware leveraging open-source software technologies, and it has then been used for experiments in a small-scale intelligent manufacturing case study. The obtained results validate the feasibility and key benefits of the approach.
Affiliation(s)
- Giuseppe Loseto
- Department of Management, Finance and Technology, LUM University “Giuseppe Degennaro”, Strada Statale 100 km 18, I-70010 Casamassima, Italy
- Floriano Scioscia
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Michele Ruta
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Correspondence: Tel.: +39-080-596-3316
- Filippo Gramegna
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Saverio Ieva
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Corrado Fasciano
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Exprivia S.p.A., Via A. Olivetti 11, I-70056 Molfetta, Italy
- Ivano Bilenchi
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
- Davide Loconte
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via E. Orabona 4, I-70125 Bari, Italy
35
Janbi N, Mehmood R, Katib I, Albeshri A, Corchado JM, Yigitcanlar T. Imtidad: A Reference Architecture and a Case Study on Developing Distributed AI Services for Skin Disease Diagnosis over Cloud, Fog and Edge. SENSORS (BASEL, SWITZERLAND) 2022; 22:1854. [PMID: 35271000 PMCID: PMC8914788 DOI: 10.3390/s22051854] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/17/2022] [Accepted: 02/21/2022] [Indexed: 06/14/2023]
Abstract
Several factors are motivating the development of preventive, personalized, connected, virtual, and ubiquitous healthcare services. These factors include declining public health, an increase in chronic diseases, an ageing population, rising healthcare costs, the need to bring intelligence near the user for privacy, security, performance, and cost reasons, as well as COVID-19. Motivated by these drivers, this paper proposes, implements, and evaluates a reference architecture called Imtidad that provides Distributed Artificial Intelligence (AI) as a Service (DAIaaS) over cloud, fog, and edge using a service catalog case study containing 22 AI skin disease diagnosis services. These services belong to four service classes that are distinguished by software platform (containerized gRPC, gRPC, Android, and Android Nearby) and are executed on a range of hardware platforms (Google Cloud, HP Pavilion Laptop, NVIDIA Jetson Nano, Raspberry Pi Model B, Samsung Galaxy S9, and Samsung Galaxy Note 4) and four network types (Fiber, Cellular, Wi-Fi, and Bluetooth). The AI models for the diagnosis include two standard Deep Neural Networks and two Tiny AI deep models to enable execution at the edge, trained and tested using 10,015 real-life dermatoscopic images. The services are evaluated using several benchmarks including model service value, response time, energy consumption, and network transfer time. A DL service on a local smartphone provides the best service in terms of both energy and speed, followed by a Raspberry Pi edge device and a laptop in fog. The services are designed to enable different use cases, such as patient diagnosis at home or sending diagnosis requests to travelling medical professionals through a fog device or cloud. This is pioneering work in providing a reference architecture and such a detailed implementation and treatment of DAIaaS services, and it is expected to have an extensive impact on developing smart distributed service infrastructures for healthcare and other sectors.
Affiliation(s)
- Nourah Janbi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Aiiad Albeshri
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Juan M. Corchado
- Bisite Research Group, University of Salamanca, 37007 Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
- Tan Yigitcanlar
- School of Architecture and Built Environment, Queensland University of Technology, 2 George Street, Brisbane, QLD 4000, Australia
36
Zhu Z, Geng J, Zhou M, Fang B. Module Against Power Consumption Attacks for Trustworthiness of Vehicular AI Chips in Wide Temperature Range. INT J PATTERN RECOGN 2022. [DOI: 10.1142/s0218001422500124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Monitoring power consumption attacks on artificial intelligence (AI) chips plays a critical role in vehicular AI systems. However, most current monitoring and management methods focus on the trustworthiness of industrial equipment rather than resource-constrained edge devices. To address this problem, this paper proposes a closed-loop module, based on fitting and filtering, for monitoring and managing vehicular AI chips so as to resist power consumption attacks. First, considering the characteristics of power, we propose a raw-data correction approach for power monitoring to detect abnormal power consumption. Second, we address the challenging problem of precisely monitoring abnormal chip temperature, especially across a wide temperature range. Finally, the established method is applied to attack surveillance and cast as a power consumption management problem solved with dynamic voltage and frequency scaling (DVFS) technology. The experimental results reveal that, compared with existing methods for power and temperature monitoring and for power consumption control over a wide temperature range, our method achieves significantly better monitoring and management performance.
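The closed-loop monitor-and-throttle idea, detecting excessive power draw and responding through DVFS, can be caricatured in a few lines (hypothetical thresholds and frequency ladder; the paper's actual fitting-and-filtering controller is more elaborate):

```python
# Caricature of a DVFS-based closed loop: when measured power exceeds a
# budget (possible power-consumption attack or thermal stress), step down
# the frequency ladder; when power stays well below it, step back up.

FREQ_LADDER_MHZ = [600, 900, 1200, 1500]   # hypothetical operating points
POWER_LIMIT_W = 5.0                        # hypothetical power budget

def dvfs_step(level, measured_power_w):
    """Return the next frequency-ladder index for one control period."""
    if measured_power_w > POWER_LIMIT_W and level > 0:
        return level - 1                   # throttle
    if measured_power_w < 0.8 * POWER_LIMIT_W and level < len(FREQ_LADDER_MHZ) - 1:
        return level + 1                   # recover headroom
    return level

level = 3                                  # start at 1500 MHz
for power in [6.2, 6.0, 4.5, 3.0]:         # simulated power samples
    level = dvfs_step(level, power)
print(FREQ_LADDER_MHZ[level])              # prints 1200
```

The hysteresis band (throttle above the limit, recover only below 80% of it) keeps the controller from oscillating between two frequency steps on noisy measurements.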
Affiliation(s)
- Zongwei Zhu
- School of Software Engineering, Suzhou Research Institute for Advanced Study, University of Science and Technology of China, Suzhou 215000, P. R. China
- Jiawei Geng
- School of Software Engineering, Suzhou Research Institute for Advanced Study, University of Science and Technology of China, Suzhou 215000, P. R. China
- Mingliang Zhou
- The School of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Fang
- The School of Computer Science, Chongqing University, Chongqing 400044, P. R. China
37
Padmasiri H, Shashirangana J, Meedeniya D, Rana O, Perera C. Automated License Plate Recognition for Resource-Constrained Environments. SENSORS 2022; 22:s22041434. [PMID: 35214336 PMCID: PMC8880701 DOI: 10.3390/s22041434] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/26/2021] [Revised: 02/02/2022] [Accepted: 02/11/2022] [Indexed: 12/04/2022]
Abstract
The incorporation of deep-learning techniques in embedded systems has greatly enhanced the capabilities of edge computing. However, most of these solutions rely on high-end hardware and often require a processing capacity that resource-constrained edge computing cannot provide. This study presents a novel approach and a proof of concept for a hardware-efficient automated license plate recognition system for environments with limited resources. The proposed solution is implemented entirely on low-resource edge devices and performs well under extreme illumination changes such as day and nighttime. Generalisability is achieved through a novel set of neural networks tailored to different hardware configurations based on their computational capabilities and low cost. The accuracy, energy efficiency, communication, and computational latency of the proposed models are validated in real time using different daytime and nighttime license plate datasets. The results show performance competitive with state-of-the-art server-grade hardware solutions.
Affiliation(s)
- Heshan Padmasiri
- Department of Computer Science and Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
- Jithmi Shashirangana
- Department of Computer Science and Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
- Dulani Meedeniya
- Department of Computer Science and Engineering, University of Moratuwa, Moratuwa 10400, Sri Lanka
- Omer Rana
- School of Computer Science and Informatics, Cardiff University, Cardiff CF10 3AT, UK
- Charith Perera
- School of Computer Science and Informatics, Cardiff University, Cardiff CF10 3AT, UK
- Correspondence:
38
Bilal K, Shuja J, Erbad A, Alasmary W, Alanazi E, Alourani A. Addressing Challenges of Distance Learning in the Pandemic with Edge Intelligence Enabled Multicast and Caching Solution. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22031092. [PMID: 35161839 PMCID: PMC8839201 DOI: 10.3390/s22031092] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 01/11/2022] [Accepted: 01/19/2022] [Indexed: 05/17/2023]
Abstract
The COVID-19 pandemic has affected the world socially and economically, changing behaviors towards medical facilities, public gatherings, workplaces, and education. Educational institutes have been shut down sporadically across the globe, forcing teachers and students to adopt distance-learning techniques. Due to these closures, work- and learn-from-home methods have burdened network resources and considerably decreased a viewer's Quality of Experience (QoE). The situation calls for innovative techniques to handle the surging load of video traffic on cellular networks. In the distance-learning scenario, there is ample opportunity to realize multicast delivery instead of conventional unicast. However, the existing 5G architecture does not support service-less multicast. In this article, we advance the case for a Virtual Network Function (VNF) based service-less architecture for video multicast. Multicasting a video session for distance learning significantly lowers the burden on the core and Radio Access Networks (RAN), as demonstrated by evaluation over a real-world dataset. We discuss the role of Edge Intelligence (EI) in enabling multicast and edge caching for distance learning to complement the performance of the proposed VNF architecture. EI determines which users are part of a multicast session based on location, session, and cell information. Moreover, user preferences and the network's contextual information can differentiate between live and cached access patterns, optimizing edge caching decisions. While exploring the opportunities of EI-enabled distance learning, we demonstrate a significant reduction in network operator resource utilization and an increase in user QoE for VNF-based multicast transmission.
Affiliation(s)
- Kashif Bilal
- Department of Computer Science, Abbottabad Campus, COMSATS University Islamabad, Abbottabad 22060, Pakistan
- Junaid Shuja
- Department of Computer Science, Abbottabad Campus, COMSATS University Islamabad, Abbottabad 22060, Pakistan
- Correspondence:
- Aiman Erbad
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 5825, Qatar
- Waleed Alasmary
- Computer Engineering Department, College of Computer and Information Systems, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Eisa Alanazi
- Department of Computer Science, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Abdullah Alourani
- Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah 11952, Saudi Arabia
39
Roig PJ, Alcaraz S, Gilly K, Bernad C, Juiz C. Modeling an Edge Computing Arithmetic Framework for IoT Environments. SENSORS 2022; 22:s22031084. [PMID: 35161828 PMCID: PMC8839237 DOI: 10.3390/s22031084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 01/25/2022] [Accepted: 01/26/2022] [Indexed: 02/06/2023]
Abstract
IoT environments are forecast to grow exponentially in the coming years thanks to recent advances in both edge computing and artificial intelligence. In this paper, a model of a remote computing scheme is presented in which three layers of computing nodes are put in place to optimize computing and forwarding tasks. A generic layout has been designed so that communications among the diverse layers are achieved by means of simple arithmetic operations, which may save resources in all nodes involved. Traffic forwarding is normally undertaken with forwarding tables within network devices, which must be searched to find the proper destination, a process that can become resource-consuming as the number of table entries grows. The proposed arithmetic framework may instead speed up traffic forwarding decisions by relying on integer division and modular arithmetic, which are more straightforward. Furthermore, two diverse approaches are proposed to formally describe the design: coding with Spin/Promela, and an algebraic approach with the Algebra of Communicating Processes (ACP), resulting in a state explosion for the former and a specified and verified model for the latter.
Affiliation(s)
- Pedro Juan Roig
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Correspondence: (P.J.R.); (K.G.); Tel.: +34-96-665-8388 (P.J.R.); +34-96-665-8565 (K.G.)
- Salvador Alcaraz
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Katja Gilly
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Correspondence: (P.J.R.); (K.G.); Tel.: +34-96-665-8388 (P.J.R.); +34-96-665-8565 (K.G.)
- Cristina Bernad
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Carlos Juiz
- Mathematics and Computer Science Department, University of the Balearic Islands, 07022 Palma de Mallorca, Spain
40
Roig PJ, Alcaraz S, Gilly K, Bernad C, Juiz C. Arithmetic Framework to Optimize Packet Forwarding among End Devices in Generic Edge Computing Environments. SENSORS 2022; 22:s22020421. [PMID: 35062381 PMCID: PMC8780602 DOI: 10.3390/s22020421] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 12/22/2021] [Accepted: 01/04/2022] [Indexed: 11/29/2022]
Abstract
Multi-access edge computing implementations are ever increasing in both the number of deployments and the areas of application. In this context, easing packet forwarding between two end devices within a particular edge computing infrastructure may allow for more efficient performance. In this paper, an arithmetic framework based on a layered approach has been proposed to optimize packet forwarding actions, such as routing and switching, in generic edge computing environments by exploiting the properties of integer division and modular arithmetic, thus reducing the search for the proper next hop towards the desired destination to simple arithmetic operations instead of lookups in routing or switching tables. The different types of communications within a generic edge computing environment are first studied; afterwards, three diverse case scenarios are described according to the proposed arithmetic framework, all of which are verified both arithmetically, by applying theorems, and algebraically, by searching for behavioural equivalences.
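The integer-division-and-modulo idea described in this abstract can be illustrated with a small sketch (a hypothetical two-tier topology and numbering scheme for illustration only, not the paper's exact framework): with consecutively numbered hosts and a fixed fan-out per edge switch, the attachment switch and local port fall out of `//` and `%`, so no forwarding table is consulted.

```python
# Toy sketch of table-free forwarding in a two-tier edge topology:
# hosts are numbered consecutively and each edge switch serves
# HOSTS_PER_SWITCH of them, so the attachment switch and the local
# port fall out of integer division and the remainder.

HOSTS_PER_SWITCH = 4  # hypothetical fan-out

def locate(host_id):
    """Return (switch, port) for a host using div/mod instead of a table."""
    return host_id // HOSTS_PER_SWITCH, host_id % HOSTS_PER_SWITCH

def next_hop(current_switch, dst_host):
    """Forwarding decision at an edge switch: deliver locally or go up."""
    dst_switch, dst_port = locate(dst_host)
    if dst_switch == current_switch:
        return f"deliver on local port {dst_port}"
    return "forward to aggregation layer"  # one hop up, then down to dst_switch

print(locate(13))       # prints (3, 1): host 13 sits on switch 3, port 1
print(next_hop(3, 13))  # prints deliver on local port 1
print(next_hop(0, 13))  # prints forward to aggregation layer
```

The forwarding decision is O(1) in the number of hosts, whereas a table lookup grows with the number of entries, which is the saving the abstract points to.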
Affiliation(s)
- Pedro Juan Roig
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Correspondence: Tel.: +34-966658388
- Salvador Alcaraz
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Katja Gilly
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Cristina Bernad
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Carlos Juiz
- Mathematics and Computer Science Department, University of the Balearic Islands, 07022 Palma de Mallorca, Spain
41
Edge Deep Learning Towards the Metallurgical Industry: Improving the Hybrid Pelletized Sinter (HPS) Process. ENTERP INF SYST-UK 2022. [DOI: 10.1007/978-3-031-08965-7_8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
42
Bao G, Guo P. Federated learning in cloud-edge collaborative architecture: key technologies, applications and challenges. JOURNAL OF CLOUD COMPUTING (HEIDELBERG, GERMANY) 2022; 11:94. [PMID: 36536803 PMCID: PMC9753079 DOI: 10.1186/s13677-022-00377-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 08/09/2022] [Indexed: 12/23/2022]
Abstract
In recent years, with the rapid growth of edge data, the novel cloud-edge collaborative architecture has been proposed to compensate for the lack of data processing power of traditional cloud computing. On the other hand, on account of the public's increasing demand for data privacy, federated learning has been proposed to compensate for the lack of security of traditional centralized machine learning. Deploying federated learning in a cloud-edge collaborative architecture is widely considered a promising cyber infrastructure for the future. Although cloud-edge collaboration and federated learning are each hot research topics at present, the discussion of deploying federated learning in cloud-edge collaborative architectures is still in its infancy and little research has been conducted. This article aims to fill that gap by describing in detail the critical technologies, challenges, and applications of deploying federated learning in a cloud-edge collaborative architecture, and by providing guidance on future research directions.
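The aggregation step at the heart of such cloud-edge federated learning deployments is federated averaging: edge clients report locally trained weights together with their sample counts, and the cloud aggregator computes a sample-weighted mean without ever seeing raw data. A minimal pure-Python sketch (illustrative only, not code from the surveyed systems):

```python
# Minimal federated-averaging (FedAvg) round: each edge client trains
# locally and reports (weight_vector, num_local_samples); the aggregator
# returns the sample-weighted mean of the weight vectors.

def fed_avg(client_updates):
    """client_updates: list of (weight_vector, num_local_samples)."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_w = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            global_w[i] += w * (n / total)  # weight by share of total data
    return global_w

# Two edge clients: the one holding 3x the data pulls the average its way.
updates = [([1.0, 2.0], 30), ([3.0, 6.0], 10)]
print(fed_avg(updates))  # prints [1.5, 3.0]
```

Only the weight vectors and counts cross the edge-cloud boundary, which is the privacy property that motivates pairing federated learning with the cloud-edge architecture in the first place.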
Affiliation(s)
- Guanming Bao
- School of Computer Science, Nanjing University of Information Science and Technology, Ningliu Road, 210044 Nanjing, China
- Ping Guo
- School of Computer Science, Nanjing University of Information Science and Technology, Ningliu Road, 210044 Nanjing, China
43
Ibn-Khedher H, Laroui M, Moungla H, Afifi H, Abd-Elrahman E. Next-Generation Edge Computing Assisted Autonomous Driving Based Artificial Intelligence Algorithms. IEEE ACCESS 2022; 10:53987-54001. [DOI: 10.1109/access.2022.3174548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Affiliation(s)
- Mohammed Laroui
- Computer Science Department, Djillali Liabes University, Sidi Bel Abbes, Algeria
- Hassine Moungla
- Laboratory of Informatics Paris Descartes (LIPADE), Universite de Paris Cite, Paris, France
- Hossam Afifi
- Telecom SudParis, Institut Polytechnique de Paris, Palaiseau, France
44
Klippel E, Bianchi AGC, Delabrida S, Silva MC, Garrocho CTB, Moreira VDS, Oliveira RAR. Deep Learning Approach at the Edge to Detect Iron Ore Type. SENSORS (BASEL, SWITZERLAND) 2021; 22:169. [PMID: 35009712 PMCID: PMC8749548 DOI: 10.3390/s22010169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 12/14/2021] [Accepted: 12/18/2021] [Indexed: 06/14/2023]
Abstract
There is a constant risk of iron ore collapsing during its transfer between processing stages in beneficiation plants. Existing instrumentation is not only expensive but also complex and challenging to maintain. In this research, we propose using edge artificial intelligence for early detection of landslide risk based on images of iron ore transported on conveyor belts. During this work, we defined the edge device and the deep neural network model. We then built a prototype to collect the images used for training the model, which is subsequently compressed for use on the edge device. The same prototype is used for field tests of the model under operational conditions. In building the prototype, a real-time clock was used to synchronize image records with the plant's process information, ensuring correct classification of the images by the process specialist. The field-test results, with an accuracy of 91% and a recall of 96%, indicate the feasibility of using deep learning at the edge to detect the type of iron ore and prevent its risk of avalanche.
Affiliation(s)
- Emerson Klippel
- Graduate Program in Instrumentation, Control and Automation of Mining Processes, Instituto Tecnológico Vale, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
- VALE S.A., Parauapebas, Para 68516-000, Brazil
- Andrea Gomes Campos Bianchi
- Computing Department, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
- Saul Delabrida
- Computing Department, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
- Mateus Coelho Silva
- Computing Department, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
- Charles Tim Batista Garrocho
- Computing Department, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
- Ricardo Augusto Rabelo Oliveira
- Computing Department, Federal University of Ouro Preto, Ouro Preto 35400-000, Brazil
45
Angel NA, Ravindran D, Vincent PMDR, Srinivasan K, Hu YC. Recent Advances in Evolving Computing Paradigms: Cloud, Edge, and Fog Technologies. SENSORS 2021; 22:s22010196. [PMID: 35009740 PMCID: PMC8749780 DOI: 10.3390/s22010196] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 12/22/2021] [Accepted: 12/23/2021] [Indexed: 11/16/2022]
Abstract
Cloud computing has become integral lately due to the ever-expanding Internet-of-Things (IoT) network. It remains the best practice for implementing complex computational applications that emphasize massive data processing. However, the cloud falls short for novel IoT applications that generate vast data and demand a swift response time with improved privacy. The newest drift is to move computational and storage resources to the edge of the network in a decentralized, distributed architecture: data processing and analytics are performed in proximity to end-users, overcoming the bottleneck of cloud computing. The trend of deploying machine learning (ML) at the network edge to enhance computing applications and services has gained momentum lately, specifically to reduce latency and energy consumption while optimizing security and resource management. Rigorous research efforts are needed to develop and implement machine learning algorithms that deliver the best results in terms of speed, accuracy, storage, and security, with low power consumption. This extensive survey of the prominent computing paradigms in practice highlights the latest innovations resulting from the fusion of ML and the evolving computing paradigms, and discusses the underlying open research challenges and future prospects.
Affiliation(s)
- Nancy A Angel
- Department of Computer Science, St. Joseph’s College (Autonomous), Bharathidasan University, Tiruchirappalli 620002, India
- Dakshanamoorthy Ravindran
- Department of Computer Science, St. Joseph’s College (Autonomous), Bharathidasan University, Tiruchirappalli 620002, India
- P M Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Yuh-Chung Hu
- Department of Mechanical and Electromechanical Engineering, National ILan University, Yilan 26047, Taiwan
- Correspondence:
46
Rocha-Jácome C, Carvajal RG, Chavero FM, Guevara-Cabezas E, Hidalgo Fort E. Industry 4.0: A Proposal of Paradigm Organization Schemes from a Systematic Literature Review. SENSORS (BASEL, SWITZERLAND) 2021; 22:66. [PMID: 35009609 PMCID: PMC8747394 DOI: 10.3390/s22010066] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/18/2021] [Accepted: 12/21/2021] [Indexed: 06/14/2023]
Abstract
Currently, the concept of Industry 4.0 is well known; however, it is extremely complex, as it is constantly evolving and innovating. It involves many disciplines and areas of knowledge as well as the integration of many technologies, both mature and emerging, working in collaboration under the novel criteria of Cyber-Physical Systems. This study starts with an exhaustive search for up-to-date scientific information, on which a bibliometric analysis is carried out, with results presented in tables and graphs. Subsequently, based on a qualitative analysis of the references, we present two proposals for the schematic analysis of Industry 4.0 that will help academia and companies support digital transformation studies. The results allow a simple alternative analysis of Industry 4.0 for understanding the functions and scope of the integrating technologies, enabling better collaboration across each area of knowledge and each professional, considering the potential and limitations of each, and supporting the planning of an appropriate strategy, especially in the management of human resources, for the successful execution of the digital transformation of the industry.
47
Memory-Efficient AI Algorithm for Infant Sleeping Death Syndrome Detection in Smart Buildings. AI 2021. [DOI: 10.3390/ai2040042] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is fundamentally transforming smart buildings by increasing energy efficiency and operational productivity, improving life experience, and providing better healthcare services. Sudden Infant Death Syndrome (SIDS) is the unexpected and unexplained death of infants under one year old. Previous research reports that sleeping on the back can significantly reduce the risk of SIDS. Existing sensor-based wearable or touchable monitors have serious drawbacks, such as inconvenience and false alarms, so they are not attractive for monitoring infant sleeping postures. Several recent studies use a camera, portable electronics, and an AI algorithm to monitor the sleep postures of infants. However, two major bottlenecks prevent AI from detecting potential baby sleeping hazards in smart buildings: the lack of a complete day-and-night dataset and the huge memory demand of existing models. To overcome these bottlenecks, in this work we create a complete dataset containing 10,240 day and night vision samples and use post-training weight quantization to solve the memory demand problem. Experimental results verify the effectiveness and benefits of the proposed idea. Compared with state-of-the-art AI algorithms in the literature, the proposed method reduces the memory footprint by at least 89%, while achieving a similarly high detection accuracy of about 90%. Our proposed AI algorithm requires only 6.4 MB of memory, while other existing AI algorithms for sleep posture detection require 58.2 MB to 275 MB. Memory is thus reduced by at least 9 times without sacrificing detection accuracy. Therefore, the proposed memory-efficient AI algorithm has great potential to be deployed and run on edge devices, such as micro-controllers and Raspberry Pi, which have a low memory footprint, limited power budget, and constrained computing resources.
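Post-training weight quantization of the kind mentioned here typically stores 32-bit float weights as 8-bit integers plus a per-tensor scale and zero-point, roughly a 4x cut per tensor before any further compression. A minimal sketch of that affine scheme (illustrative only, not the paper's exact pipeline):

```python
# Illustrative post-training affine quantization: map float32 weights to
# 8-bit codes (0..255) with a per-tensor scale and zero-point, cutting
# storage roughly 4x per tensor versus float32.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0        # guard against a constant tensor
    zero_point = round(-lo / scale)       # code that represents 0.0
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w = [-0.51, 0.0, 0.27, 1.02]
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)
# Reconstruction error stays within one quantization step of size `scale`.
assert all(abs(a - b) <= s for a, b in zip(w, w_hat))
```

The same idea applied per layer across a network is what shrinks a model from hundreds of megabytes toward something a micro-controller can hold, at the cost of the bounded rounding error shown by the assertion.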
|
48
|
Rago A, Piro G, Boggia G, Dini P. Anticipatory Allocation of Communication and Computational Resources at the Edge Using Spatio-Temporal Dynamics of Mobile Users. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT 2021. [DOI: 10.1109/tnsm.2021.3099472] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
49
|
Erhan L, Di Mauro M, Anjum A, Bagdasar O, Song W, Liotta A. Embedded Data Imputation for Environmental Intelligent Sensing: A Case Study. SENSORS 2021; 21:s21237774. [PMID: 34883778 PMCID: PMC8659818 DOI: 10.3390/s21237774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 11/19/2021] [Accepted: 11/20/2021] [Indexed: 11/25/2022]
Abstract
Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on a Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner.
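The kNN imputation idea referenced above can be sketched in a few lines: for each row with a missing feature, find the k complete rows closest in the observed features and fill the gap with their mean. This is a toy stand-in (pure Python, illustrative names), not the paper's embedded implementation:

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries in each row with the mean of that feature over
    the k nearest complete rows (Euclidean distance on observed features)."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [i for i, v in enumerate(r) if v is not None]
        # rank complete rows by distance computed only on observed features
        nbrs = sorted(complete,
                      key=lambda c: math.dist([r[i] for i in obs],
                                              [c[i] for i in obs]))[:k]
        out.append([v if v is not None else sum(c[i] for c in nbrs) / len(nbrs)
                    for i, v in enumerate(r)])
    return out

data = [[1.0, 2.0], [1.1, 2.1], [10.0, 20.0], [1.05, None]]
print(knn_impute(data)[-1])  # gap filled from the two nearby rows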
Affiliation(s)
- Laura Erhan
- College of Science and Engineering, University of Derby, Derby DE22 1GB, UK; (L.E.); (O.B.)
- Mario Di Mauro
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Ashiq Anjum
- College of Science and Engineering, University of Leicester, Leicester LE1 7RH, UK
- Ovidiu Bagdasar
- College of Science and Engineering, University of Derby, Derby DE22 1GB, UK; (L.E.); (O.B.)
- Department of Computing, Mathematics and Electronics, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
- Wei Song
- College of Information Technology, Shanghai Ocean University, Shanghai 200090, China
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
- Correspondence:
|
50
|
Chmurski M, Mauro G, Santra A, Zubert M, Dagasan G. Highly-Optimized Radar-Based Gesture Recognition System with Depthwise Expansion Module. SENSORS (BASEL, SWITZERLAND) 2021; 21:7298. [PMID: 34770603 PMCID: PMC8588382 DOI: 10.3390/s21217298] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 10/08/2021] [Accepted: 10/26/2021] [Indexed: 11/16/2022]
Abstract
The increasing integration of technology in our daily lives demands the development of more convenient human-computer interaction (HCI) methods. Most current hand-based HCI strategies exhibit various limitations, e.g., sensitivity to variable lighting conditions and restrictions on the operating environment. Moreover, such systems are rarely deployed in resource-constrained contexts. Inspired by the MobileNetV1 deep learning network, this paper presents a novel hand gesture recognition system based on frequency-modulated continuous wave (FMCW) radar, exhibiting a higher recognition accuracy in comparison to state-of-the-art systems. First, the paper introduces a method to simplify radar preprocessing while preserving the main information of the performed gestures. Then, a deep neural classifier with the novel Depthwise Expansion Module, based on depthwise separable convolutions, is presented. The introduced classifier is optimized and deployed on the Coral Edge TPU board. The system defines and adopts eight different hand gestures performed by five users, offering a classification accuracy of 98.13% while operating in a low-power and resource-constrained environment.
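The MobileNetV1-style factorization that the Depthwise Expansion Module builds on replaces one standard k x k convolution with a per-channel depthwise k x k convolution followed by a 1 x 1 pointwise convolution. A short parameter-count comparison makes the saving concrete (illustrative layer sizes, biases ignored; this is the generic factorization, not the paper's exact module):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                 # 3*3*64*128 = 73728
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8192  = 8768
print(std, sep, round(std / sep, 1))          # roughly 8.4x fewer weights
```

For a k x k kernel the reduction factor is about 1/c_out + 1/k^2, which is why such factorized classifiers fit comfortably on accelerators like the Coral Edge TPU.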
Affiliation(s)
- Mateusz Chmurski
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Microelectronics and Computer Science, Lodz University of Technology, 90924 Lodz, Poland
- Gianfranco Mauro
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Electronic and Computer Technology, University of Granada, Avenida de Fuente Nueva s/n, 18071 Granada, Spain
- Avik Santra
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Mariusz Zubert
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Gökberk Dagasan
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
|