151. Janbi N, Mehmood R, Katib I, Albeshri A, Corchado JM, Yigitcanlar T. Imtidad: A Reference Architecture and a Case Study on Developing Distributed AI Services for Skin Disease Diagnosis over Cloud, Fog and Edge. Sensors (Basel) 2022; 22:1854. PMID: 35271000; PMCID: PMC8914788; DOI: 10.3390/s22051854. Received: 01/05/2022; Revised: 02/17/2022; Accepted: 02/21/2022.
Abstract
Several factors are motivating the development of preventive, personalized, connected, virtual, and ubiquitous healthcare services: declining public health, an increase in chronic diseases, an ageing population, rising healthcare costs, the need to bring intelligence near the user for privacy, security, performance, and cost reasons, and COVID-19. Motivated by these drivers, this paper proposes, implements, and evaluates a reference architecture called Imtidad that provides Distributed Artificial Intelligence (AI) as a Service (DAIaaS) over cloud, fog, and edge, using a service catalog case study containing 22 AI skin disease diagnosis services. These services belong to four service classes distinguished by software platform (containerized gRPC, gRPC, Android, and Android Nearby) and are executed on a range of hardware platforms (Google Cloud, HP Pavilion Laptop, NVIDIA Jetson Nano, Raspberry Pi Model B, Samsung Galaxy S9, and Samsung Galaxy Note 4) and four network types (Fiber, Cellular, Wi-Fi, and Bluetooth). The AI models for the diagnosis include two standard Deep Neural Networks and two Tiny AI deep models that enable execution at the edge, trained and tested on 10,015 real-life dermatoscopic images. The services are evaluated using several benchmarks, including model service value, response time, energy consumption, and network transfer time. A DL service on a local smartphone provides the best service in terms of both energy and speed, followed by a Raspberry Pi edge device and a laptop in the fog. The services are designed to enable different use cases, such as patient diagnosis at home or sending diagnosis requests to travelling medical professionals through a fog device or the cloud.
This is pioneering work that provides a reference architecture and a detailed implementation and treatment of DAIaaS services, and it is expected to have an extensive impact on the development of smart distributed service infrastructures for healthcare and other sectors.
Affiliation(s)
- Nourah Janbi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Aiiad Albeshri
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Juan M. Corchado
- Bisite Research Group, University of Salamanca, 37007 Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
- Tan Yigitcanlar
- School of Architecture and Built Environment, Queensland University of Technology, 2 George Street, Brisbane, QLD 4000, Australia
152. Dammak B, Turki M, Cheikhrouhou S, Baklouti M, Mars R, Dhahbi A. LoRaChainCare: An IoT Architecture Integrating Blockchain and LoRa Network for Personal Health Care Data Monitoring. Sensors (Basel) 2022; 22:1497. PMID: 35214404; DOI: 10.3390/s22041497. Received: 12/28/2021; Revised: 01/29/2022; Accepted: 02/04/2022.
Abstract
Over the past several years, the adoption of HealthCare Monitoring Systems (HCS) in health centers and organizations such as hospitals and elderly homes has grown significantly, propelled by advances in IoT and Blockchain technologies. Owing to technological advances in the IoT sensor market, innovations in HCS for monitoring patients' health status have motivated many countries to strengthen their efforts to support their citizens with such care delivery systems, under the direction of a physician who has access to the patient's data. Nevertheless, secure data sharing is a principal concern for patients to feel comfortable using such systems, and current HCS are not able to provide reassuring security policies. One focus of this work is therefore to provide security countermeasures, as well as a cost-efficient solution, for HCS by integrating a storage model based on Blockchain and the InterPlanetary File System (IPFS). Blockchain is an emerging technology in the pharmaceutical industry that is starting to take hold in HCS; it allows healthcare providers to track connected devices and control access to shared data, hence protecting patients' privacy. Furthermore, the addition of Edge and Fog computing has enabled HCS to react in real time and enhanced their reliability. A variety of communication protocols can connect sensor devices to the Edge/Fog layer, and the best choice depends on connectivity requirements: range, bandwidth, power, interoperability, security, and reliability; an inconsistent choice of communication protocol would instead degrade system efficiency. LoRa (Long Range) communication technology is emerging as the leader among the Low-Power Wide-Area Networks (LPWANs) entering the IoT domain, benefiting from features such as long-range communication and low power consumption.
This work proposes LoRaChainCare, an architecture model for HCS that combines Blockchain, Fog/Edge computing, and the LoRa communication protocol. A real implementation of the LoRaChainCare system is presented and evaluated in terms of cost, run time, and power consumption.
153. Lăcătușu F, Ionita AD, Lăcătușu M, Olteanu A. Performance Evaluation of Information Gathering from Edge Devices in a Complex of Smart Buildings. Sensors (Basel) 2022; 22:1002. PMID: 35161745; PMCID: PMC8838296; DOI: 10.3390/s22031002. Received: 12/13/2021; Revised: 01/19/2022; Accepted: 01/24/2022.
Abstract
The use of monitoring systems based on cloud computing has become common for smart buildings. However, the dilemma of centralization versus decentralization, in terms of gathering information and making the right decisions based on it, remains. Performance, dependent on the system design, does matter for emergency detection, where response time and loading behavior become very important. We studied several design options based on edge computing and containers for a smart building monitoring system that sends alerts to the responsible personnel when necessary. The study evaluated performance, including a qualitative analysis and load testing, for our experimental settings. From 700+ edge nodes, we obtained response times that were 30% lower for the public cloud versus the local solution. For up to 100 edge nodes, the values were better for the latter, and in between, they were rather similar. Based on an interpretation of the results, we developed recommendations for five real-world configurations, and we present the design choices adopted in our development for a complex of smart buildings.
154. Ali O, Ishak MK, Bhatti MKL, Khan I, Kim KI. A Comprehensive Review of Internet of Things: Technology Stack, Middlewares, and Fog/Edge Computing Interface. Sensors (Basel) 2022; 22:995. PMID: 35161740; PMCID: PMC8840251; DOI: 10.3390/s22030995. Received: 11/18/2021; Revised: 01/11/2022; Accepted: 01/21/2022.
Abstract
The Internet of Things (IoT) is an extensive network of heterogeneous devices that provides an array of innovative applications and services. IoT networks enable the integration of data and services to seamlessly interconnect cyber and physical systems. However, the heterogeneity of devices, the underlying technologies, and the lack of standardization pose critical challenges in this domain. On account of these challenges, this article aims to provide a comprehensive overview of the enabling technologies and standards that build up the IoT technology stack. First, a layered architecture approach is presented in which state-of-the-art research and open challenges are discussed at every layer. Next, the article focuses on the role of middleware platforms in IoT application development and integration. Furthermore, it addresses the open challenges and provides comprehensive steps towards IoT stack optimization. Finally, the interfacing of Fog/Edge networks to the IoT technology stack is thoroughly investigated by discussing current research and open challenges in this domain. The main scope of this study is to provide a comprehensive review of IoT technology (the horizontal fabric) and the associated middleware and networks required to build future-proof applications (the vertical markets).
Affiliation(s)
- Omer Ali
- School of Electrical and Electronic Engineering, Universiti Sains Malaysia (USM), Nibong Tebal 14300, Malaysia
- Department of Electrical Engineering, NFC Institute of Engineering & Technology (NFC IET), Multan 60000, Pakistan
- Mohamad Khairi Ishak
- School of Electrical and Electronic Engineering, Universiti Sains Malaysia (USM), Nibong Tebal 14300, Malaysia
- Imran Khan
- Department of Electrical Engineering, University of Engineering & Technology Peshawar, Peshawar 21500, Pakistan
- Ki-Il Kim
- Department of Computer Science and Engineering, Chungnam National University, Daejeon 34134, Korea
155. Qafzezi E, Bylykbashi K, Ampririt P, Ikeda M, Matsuo K, Barolli L. An Intelligent Approach for Cloud-Fog-Edge Computing SDN-VANETs Based on Fuzzy Logic: Effect of Different Parameters on Coordination and Management of Resources. Sensors (Basel) 2022; 22:878. PMID: 35161623; DOI: 10.3390/s22030878. Received: 12/27/2021; Revised: 01/20/2022; Accepted: 01/22/2022.
Abstract
The integration of cloud-fog-edge computing in Software-Defined Vehicular Ad hoc Networks (SDN-VANETs) brings a new paradigm that provides the needed resources for supporting a myriad of emerging applications. While an abundance of resources may offer many benefits, it also causes management problems. In this work, we propose an intelligent approach to flexibly and efficiently manage resources in these networks. The proposed approach makes use of an integrated fuzzy logic system that determines the most appropriate resources that vehicles should use when set under various circumstances. These circumstances cover the quality of the network created between the vehicles, its size and longevity, the number of available resources, and the requirements of applications. We evaluated the proposed approach by computer simulations. The results demonstrate the feasibility of the proposed approach in coordinating and managing the available SDN-VANETs resources.
156. Zhang B, Zhou Z, Cao W, Qi X, Xu C, Wen W. A New Few-Shot Learning Method of Bacterial Colony Counting Based on the Edge Computing Device. Biology (Basel) 2022; 11:156. PMID: 35205023; PMCID: PMC8869218; DOI: 10.3390/biology11020156. Received: 12/28/2021; Revised: 01/16/2022; Accepted: 01/17/2022.
Abstract
Bacterial colony counting is a time-consuming but important task in many fields, such as food quality testing and pathogen detection, which demand accurate on-site testing. However, bacterial colonies are often overlapping and adherent to one another, and are difficult to process precisely with traditional algorithms. The development of deep learning has brought new possibilities for bacterial colony counting, but deep learning networks usually require a large amount of training data and highly configured test equipment. Culturing and annotating bacteria is costly, and professional deep learning workstations are too expensive and large to meet portability requirements. To solve these problems, we propose a lightweight improved YOLOv3 network based on a few-shot learning strategy, which achieves high detection accuracy with only five raw images and can be deployed on a low-cost edge device. Compared with traditional methods, our method improved the average accuracy from 64.3% to 97.4% and decreased the False Negative Rate from 32.1% to 1.5%. Our method greatly improves detection accuracy, is portable enough for on-site testing, and reduces the cost of data collection and annotation by over 80%, which brings more potential to bacterial colony counting.
Affiliation(s)
- Beini Zhang
- Advanced Materials Thrust, Department of Physics, The Hong Kong University of Science and Technology, Hong Kong
- Zhentao Zhou
- Clearwaterbay Biomaterials Ltd., Shenzhen 518100, China
- Wenbin Cao
- Clearwaterbay Biomaterials Ltd., Shenzhen 518100, China
- Xirui Qi
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong
- Chen Xu
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong
- Weijia Wen (Correspondence)
- Advanced Materials Thrust, Department of Physics, The Hong Kong University of Science and Technology, Hong Kong
157. Berta R, Bellotti F, De Gloria A, Lazzaroni L. Assessing Versatility of a Generic End-to-End Platform for IoT Ecosystem Applications. Sensors (Basel) 2022; 22:713. PMID: 35161458; DOI: 10.3390/s22030713. Received: 12/31/2021; Revised: 01/15/2022; Accepted: 01/15/2022.
Abstract
Availability of efficient development tools for data-rich IoT applications is becoming ever more important. Such tools should support cross-platform deployment and seamless and effective applicability in a variety of domains. In this view, we assessed the versatility of an edge-to-cloud system featuring Measurify, a framework for managing smart things. The framework exposes to developers a set of measurement-oriented resources that can be used in different contexts. The tool has been assessed in the development of end-to-end IoT applications in six Electronic and Information Technologies Engineering BSc theses that have highlighted the potential of such a system, both from a didactic and a professional point of view. The main design abstractions of the system (i.e., generic sensor configuration, simple language with chainable operations for processing data on the edge, seamless WiFi/GSM communication) allowed developers to be productive and focus on the application requirements and the high-level design choices needed to define the edge system (microcontroller and its sensors), avoiding the large set-up times necessary to start a solution from scratch. The experience also highlighted some usability issues that will be addressed in an upcoming release of the system.
158. Avgeris M, Spatharakis D, Dechouniotis D, Leivadeas A, Karyotis V, Papavassiliou S. ENERDGE: Distributed Energy-Aware Resource Allocation at the Edge. Sensors (Basel) 2022; 22:660. PMID: 35062619; DOI: 10.3390/s22020660. Received: 12/01/2021; Revised: 12/31/2021; Accepted: 01/11/2022.
Abstract
Mobile applications are progressively becoming more sophisticated and complex, increasing their computational requirements. Traditional offloading approaches that use exclusively the Cloud infrastructure are now deemed unsuitable due to the inherent associated delay. Edge Computing can address most of the Cloud limitations at the cost of limited available resources. This bottleneck necessitates an efficient allocation of offloaded tasks from the mobile devices to the Edge. In this paper, we consider a task offloading setting with applications of different characteristics and requirements, and propose an optimal resource allocation framework leveraging the amalgamation of the edge resources. To balance the trade-off between retaining low total energy consumption, respecting end-to-end delay requirements and load balancing at the Edge, we additionally introduce a Markov Random Field based mechanism for the distribution of the excess workload. The proposed approach investigates a realistic scenario, including different categories of mobile applications, edge devices with different computational capabilities, and dynamic wireless conditions modeled by the dynamic behavior and mobility of the users. The framework is complemented with a prediction mechanism that facilitates the orchestration of the physical resources. The efficiency of the proposed scheme is evaluated via modeling and simulation and is shown to outperform a well-known task offloading solution, as well as a more recent one.
159. Abreha HG, Hayajneh M, Serhani MA. Federated Learning in Edge Computing: A Systematic Survey. Sensors (Basel) 2022; 22:450. PMID: 35062410; PMCID: PMC8780479; DOI: 10.3390/s22020450. Received: 11/22/2021; Revised: 12/26/2021; Accepted: 12/31/2021.
Abstract
Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to high bandwidth requirements, legal restrictions, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth loss, data privacy, and legal compliance. FL can co-train models across distributed clients, such as mobile phones, automobiles, hospitals, and more, through a centralized server, while maintaining data localization. FL can therefore be viewed as a stimulating factor in the EC paradigm, as it enables collaborative learning and model optimization. Although existing surveys have considered applications of FL in EC environments, there has not been any systematic survey discussing FL implementation and challenges in the EC paradigm. This paper aims to provide a systematic survey of the literature on the implementation of FL in EC environments, with a taxonomy to identify advanced solutions and other open problems. In this survey, we review the fundamentals of EC and FL, then we review the existing related work on FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in edge FL. Finally, we detail two relevant case studies of applying FL in EC, and we identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
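The co-training idea this survey builds on — clients train locally and a central server aggregates their weight updates without ever seeing raw data — can be sketched in a few lines. The following is a minimal illustration of federated averaging on a one-parameter linear model; the client datasets, learning rate, and round counts are invented for the example and are not taken from the survey:

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on y ≈ w * x."""
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_averaging(w, clients, rounds=10):
    """Server loop: broadcast w, collect each client's locally trained
    weight, and average the results weighted by dataset size (FedAvg)."""
    total = sum(len(c) for c in clients)
    for _ in range(rounds):
        updates = [local_update(w, c) for c in clients]
        w = sum(u * len(c) / total for u, c in zip(updates, clients))
    return w

# Two clients hold private shards of the same relation y = 2x;
# raw samples never leave a client, only the trained weight does.
client_a = [(x / 10, 2 * x / 10) for x in range(1, 31)]
client_b = [(x / 10, 2 * x / 10) for x in range(-40, 0)]
w = federated_averaging(0.0, [client_a, client_b])
print(w)  # converges towards 2.0
```

The size-weighted average is the standard FedAvg aggregation rule; in a real edge deployment the same loop would run over model tensors and a secure transport rather than a single float.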
160. Roig PJ, Alcaraz S, Gilly K, Bernad C, Juiz C. Arithmetic Framework to Optimize Packet Forwarding among End Devices in Generic Edge Computing Environments. Sensors (Basel) 2022; 22:421. PMID: 35062381; PMCID: PMC8780602; DOI: 10.3390/s22020421. Received: 12/03/2021; Revised: 12/22/2021; Accepted: 01/04/2022.
Abstract
Multi-access edge computing deployments are ever increasing, in both number and areas of application. In this context, simplifying packet forwarding between two end devices that are part of a particular edge computing infrastructure can make performance more efficient. In this paper, an arithmetic framework based on a layered approach is proposed to optimize packet forwarding actions, such as routing and switching, in generic edge computing environments. It takes advantage of the properties of integer division and modular arithmetic, reducing the search for the proper next hop towards the desired destination to simple arithmetic operations, as opposed to looking up routing or switching tables. To this end, the different types of communication within a generic edge computing environment are first studied; afterwards, three diverse case scenarios are described according to the proposed arithmetic framework. All of them are then verified by arithmetic means, with the help of applicable theorems, as well as by algebraic means, by searching for behavioral equivalences.
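The integer-division-and-modulo idea can be illustrated with a toy two-tier topology. This sketch, including the numbering scheme and the `next_hop` helper, is an invented example rather than the authors' actual framework: number end devices sequentially and attach each group of `k` consecutive devices to one leaf switch, so that device `d` hangs off switch `d // k`; whether a packet stays local or must go up to the spine then falls out of pure arithmetic, with no table lookup:

```python
def next_hop(src: int, dst: int, k: int) -> str:
    """Decide the next hop for a packet from device src to device dst
    in a two-tier topology where leaf switch i serves devices
    i*k .. i*k + k - 1. The owning leaf is integer division; the
    device's port on that leaf is the remainder."""
    src_leaf, dst_leaf = src // k, dst // k
    if src_leaf == dst_leaf:
        # Same leaf switch: deliver directly on local port dst % k.
        return f"leaf{src_leaf}:port{dst % k}"
    # Different leaf: forward up to the spine towards the destination leaf.
    return f"spine->leaf{dst_leaf}"

# With k = 4, devices 5 and 6 share leaf 1, while device 9 is on leaf 2.
print(next_hop(5, 6, 4))   # leaf1:port2
print(next_hop(5, 9, 4))   # spine->leaf2
```

The appeal of such schemes is that every node can compute forwarding decisions in O(1) from the destination address alone, which is what makes table-free switching attractive on resource-constrained edge hardware.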
Affiliation(s)
- Pedro Juan Roig
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Correspondence: Tel.: +34-966658388
- Salvador Alcaraz
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Katja Gilly
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Cristina Bernad
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
- Carlos Juiz
- Mathematics and Computer Science Department, University of the Balearic Islands, 07022 Palma de Mallorca, Spain
161. Qi C, Gao J, Chen K, Shu L, Pearson S. Tea Chrysanthemum Detection by Leveraging Generative Adversarial Networks and Edge Computing. Front Plant Sci 2022; 13:850606. PMID: 35463441; PMCID: PMC9021924; DOI: 10.3389/fpls.2022.850606. Received: 01/07/2022; Accepted: 03/09/2022.
Abstract
A high-resolution dataset is one of the prerequisites for tea chrysanthemum detection with deep learning algorithms, and is crucial for the further development of a selective chrysanthemum harvesting robot. However, generating high-resolution datasets of tea chrysanthemum in complex, unstructured environments is a challenge. In this context, we propose a novel tea chrysanthemum generative adversarial network (TC-GAN) that attempts to deal with this challenge. First, we designed a non-linear mapping network for untangling the features of the underlying code. Then, a customized regularization method was used to provide fine-grained control over the image details. Finally, a gradient diversion design with multi-scale feature extraction capability was adopted to optimize the training process. The proposed TC-GAN was compared with 12 state-of-the-art generative adversarial networks, showing that an optimal average precision (AP) of 90.09% was achieved with the generated images (512 × 512) on the developed TC-YOLO object detection model under an NVIDIA Tesla P100 GPU environment. Moreover, the detection model was deployed on the embedded NVIDIA Jetson TX2 platform with a 0.1 s inference time; this edge computing device could be further developed into a perception system for selective chrysanthemum picking robots in the future.
Affiliation(s)
- Chao Qi
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Junfeng Gao
- Lincoln Agri-Robotics Centre, Lincoln Institute for Agri-Food Technology, University of Lincoln, Lincoln, United Kingdom
- Kunjie Chen (Correspondence)
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Lei Shu (Correspondence)
- College of Engineering, Nanjing Agricultural University, Nanjing, China
- Simon Pearson
- Lincoln Agri-Robotics Centre, Lincoln Institute for Agri-Food Technology, University of Lincoln, Lincoln, United Kingdom
162. Loke CH, Adam MS, Nordin R, Abdullah NF, Abu-Samah A. Physical Distancing Device with Edge Computing for COVID-19 (PADDIE-C19). Sensors (Basel) 2021; 22:279. PMID: 35009820; PMCID: PMC8749825; DOI: 10.3390/s22010279. Received: 10/22/2021; Revised: 12/22/2021; Accepted: 12/22/2021.
Abstract
The most effective methods of preventing COVID-19 infection include maintaining physical distancing and wearing a face mask while in close contact with people in public places. However, densely populated areas have a greater incidence of COVID-19 dissemination, caused by people who do not comply with standard operating procedures (SOPs). This paper presents a prototype called PADDIE-C19 (Physical Distancing Device with Edge Computing for COVID-19) that implements physical distancing monitoring on a low-cost edge computing device. PADDIE-C19 provides real-time results and responses, as well as notifications and warnings, to anyone who violates the 1-m physical distance rule. In addition, PADDIE-C19 includes temperature screening using an MLX90614 thermometer and ultrasonic sensors to restrict the number of people on specified premises. The Neural Network Processor (KPU) in the Grove Artificial Intelligence Hardware Attached on Top (AI HAT), an edge computing unit, is used to accelerate the neural network model for person detection and achieves up to 18 frames per second (FPS). The results show that the accuracy of person detection with the Grove AI HAT reaches 74.65%, and the average absolute error between measured and actual physical distance is 8.95 cm. Furthermore, the MLX90614 thermometer readings differ by less than 0.5 °C from those of the more common Fluke 59 thermometer. Experimental results also show that, compared with cloud computing, the Grove AI HAT achieves an average performance of 18 FPS for the person detector (kmodel), with an average execution time of 56 ms across different networks, regardless of the network connection type or speed.
Affiliation(s)
- Rosdiadee Nordin
- Department of Electrical, Electronics and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
163. Angel NA, Ravindran D, Vincent PMDR, Srinivasan K, Hu YC. Recent Advances in Evolving Computing Paradigms: Cloud, Edge, and Fog Technologies. Sensors (Basel) 2021; 22:196. PMID: 35009740; PMCID: PMC8749780; DOI: 10.3390/s22010196. Received: 12/06/2021; Revised: 12/22/2021; Accepted: 12/23/2021.
Abstract
Cloud computing has become integral lately due to the ever-expanding Internet of Things (IoT) network, and it remains the best practice for implementing complex computational applications that emphasize massive data processing. However, the cloud falls short under the critical constraints of novel IoT applications, which generate vast data and require a swift response time with improved privacy. The newest drift is moving computational and storage resources to the edge of the network, in a decentralized, distributed architecture. Data processing and analytics are performed in proximity to end-users, overcoming the bottlenecks of cloud computing. The trend of deploying machine learning (ML) at the network edge to enhance computing applications and services has gained momentum lately, specifically to reduce latency and energy consumption while optimizing security and resource management. There is a need for rigorous research efforts oriented towards developing and implementing machine learning algorithms that deliver the best results in terms of speed, accuracy, storage, and security, with low power consumption. This extensive survey of the prominent computing paradigms in practice highlights the latest innovations resulting from the fusion between ML and the evolving computing paradigms, and discusses the underlying open research challenges and future prospects.
Collapse
Affiliation(s)
- Nancy A Angel
- Department of Computer Science, St. Joseph’s College (Autonomous), Bharathidasan University, Tiruchirappalli 620002, India; (N.A.A.); (D.R.)
- Dakshanamoorthy Ravindran
- Department of Computer Science, St. Joseph’s College (Autonomous), Bharathidasan University, Tiruchirappalli 620002, India; (N.A.A.); (D.R.)
- P M Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India;
- Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India;
- Yuh-Chung Hu
- Department of Mechanical and Electromechanical Engineering, National ILan University, Yilan 26047, Taiwan
- Correspondence:
164
Park J, Chung K. Resource Prediction-Based Edge Collaboration Scheme for Improving QoE. Sensors (Basel) 2021; 21:8500. [PMID: 34960593 DOI: 10.3390/s21248500] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/13/2021] [Accepted: 12/17/2021] [Indexed: 11/17/2022]
Abstract
Recent years have witnessed a growth in the Internet of Things (IoT) applications and devices; however, these devices are unable to meet the increased computational resource needs of the applications they host. Edge servers can provide sufficient computing resources. However, when the number of connected devices is large, the task processing efficiency decreases due to limited computing resources. Therefore, an edge collaboration scheme that utilizes other computing nodes to increase the efficiency of task processing and improve the quality of experience (QoE) was proposed. However, existing edge server collaboration schemes have low QoE because they do not consider other edge servers’ computing resources or communication time. In this paper, we propose a resource prediction-based edge collaboration scheme for improving QoE. We estimate computing resource usage based on the tasks received from the devices. According to the predicted computing resources, the edge server probabilistically collaborates with other edge servers. The proposed scheme is based on the delay model, and uses the greedy algorithm. It allocates computing resources to the task considering the computation and buffering time. Experimental results show that the proposed scheme achieves a high QoE compared with existing schemes because of the high success rate and low completion time.
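The greedy, delay-model-based allocation that the abstract describes can be sketched roughly as follows. This is a minimal illustration under our own simplifying assumptions, not the authors' implementation; the server fields (`cpu`, `bandwidth`, `queued_cycles`) and the delay model are hypothetical.

```python
# Hypothetical sketch of a greedy, delay-based edge collaboration scheme:
# each task goes to the edge server with the lowest estimated completion
# time (transfer + buffering + computation). All names are illustrative.

def completion_time(task_cycles, server):
    """Estimated delay for one task on one server."""
    transfer = task_cycles / server["bandwidth"]          # communication time
    buffering = server["queued_cycles"] / server["cpu"]   # wait for queued work
    computation = task_cycles / server["cpu"]
    return transfer + buffering + computation

def greedy_allocate(tasks, servers):
    """Assign each task to the server minimizing its estimated delay."""
    assignment = []
    for cycles in tasks:
        best = min(servers, key=lambda s: completion_time(cycles, s))
        assignment.append(best["name"])
        best["queued_cycles"] += cycles  # that server is now busier
    return assignment
```

Because the buffering term grows as a server accepts work, later tasks spill over to collaborating servers, which is the intuition behind the higher success rate and lower completion time reported above.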
165
Kim SY, Kim YK. An Energy Efficient UAV-Based Edge Computing System with Reliability Guarantee for Mobile Ground Nodes. Sensors (Basel) 2021; 21:8264. [PMID: 34960363 DOI: 10.3390/s21248264] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Revised: 12/09/2021] [Accepted: 12/09/2021] [Indexed: 11/21/2022]
Abstract
An edge computing system is a distributed computing framework that provides execution resources such as computation and storage for applications involving networking close to the end nodes. An unmanned aerial vehicle (UAV)-aided edge computing system can provide a flexible configuration for mobile ground nodes (MGN). However, edge computing systems still require higher guaranteed reliability for computational task completion and more efficient energy management before their widespread usage. To solve these problems, we propose an energy efficient UAV-based edge computing system with energy harvesting capability. In this system, the MGN makes requests for computing service from multiple UAVs, and geographically proximate UAVs determine whether or not to conduct the data processing in a distributed manner. To minimize the energy consumption of UAVs while maintaining a guaranteed level of reliability for task completion, we propose a stochastic game model with constraints for our proposed system. We apply a best response algorithm to obtain a multi-policy constrained Nash equilibrium. The results show that our system can achieve an improved life cycle compared to the individual computing scheme while maintaining a sufficient successful complete computation probability.
166
Alwakeel AM. An Overview of Fog Computing and Edge Computing Security and Privacy Issues. Sensors (Basel) 2021; 21:s21248226. [PMID: 34960320 PMCID: PMC8708798 DOI: 10.3390/s21248226] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 11/22/2021] [Accepted: 11/26/2021] [Indexed: 11/16/2022]
Abstract
With the advancement of technologies such as 5G networks and the IoT, the use of cloud computing has become essential. Cloud computing enables intensive data processing and warehousing solutions. Fog computing and edge computing are two newer technologies that inherit parts of the traditional cloud computing paradigm; they aim to reduce some of the complexity of cloud computing and leverage the computing capabilities within the local network to perform computation tasks rather than carrying them to the cloud. This makes these technologies a good fit for the properties of IoT systems. However, using them introduces several new security and privacy challenges that could be major obstacles to their implementation. In this paper, we survey some of the main security and privacy challenges faced by fog and edge computing, illustrating how these security issues could affect the work and implementation of edge and fog computing. Moreover, we present several countermeasures to mitigate the effect of these security issues.
Affiliation(s)
- Ahmed M. Alwakeel
- Sensor Network and Cellular Systems Research Center, University of Tabuk, Tabuk 71491, Saudi Arabia;
- Department of Information Technology, University of Tabuk, Tabuk 71491, Saudi Arabia
167
Brandão FC, Lima MAT, Pantoja CE, Zahn J, Viterbo J. Engineering Approaches for Programming Agent-Based IoT Objects Using the Resource Management Architecture. Sensors (Basel) 2021; 21:8110. [PMID: 34884114 DOI: 10.3390/s21238110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 11/20/2021] [Accepted: 11/29/2021] [Indexed: 11/16/2022]
Abstract
The Internet of Things (IoT) allows the sharing of information among devices in a network. Hardware evolutions have enabled the deployment of cognitive agents on top of such devices, which can help realize pro-active and autonomous IoT systems. Agents are autonomous entities from Artificial Intelligence capable of sensing (perceiving) the environment where they are situated. With these captured perceptions, they can reason and act pro-actively. However, some agent approaches are created for a specific domain or application when dealing with embedded systems and hardware interfacing. In addition, the agent architecture can compromise the system’s performance because of the number of perceptions that agents can access. This paper presents three engineering approaches for creating IoT Objects using Embedded Multi-Agent Systems (MAS), as cognitive systems at the edge of an IoT network, connecting, acting, and sharing information with a re-engineered IoT architecture based on the Sensor as a Service model. These engineering approaches use Belief-Desire-Intention (BDI) agents and the JaCaMo framework, and they are expected to diversify designers’ choices when applying embedded MAS in IoT systems. We also present a case study to validate the re-engineered architecture and the approaches, along with performance tests and comparisons. The case study shows that each approach is more or less suitable depending on the domain tackled. The performance tests show that the re-engineered IoT architecture is scalable and that there are trade-offs in adopting one approach or another. The contributions of this paper are an architecture for sharing resources in an IoT network, the use of embedded MAS on top of IoT Objects, and three engineering approaches considering agent and artifact dimensions.
168
Sheng Q, Sheng H, Gao P, Li Z, Yin H. Real-Time Detection of Cook Assistant Overalls Based on Embedded Reasoning. Sensors (Basel) 2021; 21:s21238069. [PMID: 34884074 PMCID: PMC8659890 DOI: 10.3390/s21238069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/22/2021] [Accepted: 11/28/2021] [Indexed: 11/16/2022]
Abstract
Currently, target detection based on convolutional neural networks plays an important role in image recognition, speech recognition, and other fields. However, current network models feature complex structures and huge numbers of parameters, and demand considerable resources. These conditions make them difficult to apply in embedded devices with limited computational capabilities and extreme sensitivity to power consumption, which limits the application scenarios of deep learning. This paper proposes a real-time detection scheme for cook assistant overalls based on the Hi3559A embedded processor. With YOLOv3 as the benchmark network, the scheme fully mobilizes the hardware acceleration resources through network model optimization and the processor’s parallel processing technology, improving the network inference speed so that the embedded device can complete the real-time detection task locally. The experimental results show that, through purposeful cropping, segmentation, and in-depth optimization of the neural network for the specific processor, the neural network can recognize the image accurately. In an application environment where the power consumption is only 5.5 W, the recognition speed of the neural network on the embedded end is increased to about 28 frames per second (the design requirement was 25 frames per second or more), so that the optimized network can be effectively applied in the back-kitchen overalls identification scene.
169
Lu J, Lin W, Chen P, Lan Y, Deng X, Niu H, Mo J, Li J, Luo S. Research on Lightweight Citrus Flowering Rate Statistical Model Combined with Anchor Frame Clustering Optimization. Sensors (Basel) 2021; 21:s21237929. [PMID: 34883932 PMCID: PMC8659452 DOI: 10.3390/s21237929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 11/25/2021] [Accepted: 11/26/2021] [Indexed: 11/16/2022]
Abstract
At present, citrus blossom recognition models based on deep learning are highly complicated and have a large number of parameters. In order to estimate citrus flower quantities in natural orchards, this study proposes a lightweight citrus flower recognition model based on improved YOLOv4. To compress the backbone network, we utilize MobileNetv3 as a feature extractor, combined with depthwise separable convolutions for further acceleration. The Cutout data enhancement method is also introduced to simulate citrus in natural conditions for data enhancement. The test results show that the improved model achieves an mAP of 84.84%, is 22% smaller than YOLOv4, and is approximately two times faster. Compared with Faster R-CNN, the improved citrus flowering rate statistical model proposed in this study has the advantages of lower memory usage and fast detection speed while ensuring a certain accuracy. Therefore, our solution can be used as a reference for edge detection of citrus flowering.
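The parameter saving from replacing standard convolutions with depthwise separable ones (the factorization MobileNetv3 is built on) is easy to quantify. A back-of-the-envelope sketch, not code from the paper:

```python
# Parameter counts (bias terms ignored) for one conv layer with
# c_in input channels, c_out output channels and a k x k kernel.

def standard_conv_params(c_in, c_out, k):
    # every output channel has a full k x k x c_in filter
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv to mix channels
    return depthwise + pointwise
```

For a 3 x 3 layer with 32 input and 64 output channels this drops the count from 18,432 to 2,336, roughly an 8x reduction, which is where most of the backbone compression comes from.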
Affiliation(s)
- Jianqiang Lu
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- National International Joint Research Center of Precision Agriculture Aviation Application Technology, Guangzhou 510642, China
- Lingnan Modern Agriculture Guangdong Laboratory, Guangzhou 510642, China
- Weize Lin
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- Pingfu Chen
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- Yubin Lan
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- National International Joint Research Center of Precision Agriculture Aviation Application Technology, Guangzhou 510642, China
- Lingnan Modern Agriculture Guangdong Laboratory, Guangzhou 510642, China
- Xiaoling Deng
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- National International Joint Research Center of Precision Agriculture Aviation Application Technology, Guangzhou 510642, China
- Lingnan Modern Agriculture Guangdong Laboratory, Guangzhou 510642, China
- Correspondence:
- Hongyu Niu
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- Jiawei Mo
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- Jiaxing Li
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
- Shengfu Luo
- College of Electronic Engineering and College of Artificial Intelligence, South China Agricultural University, Guangzhou 510642, China; (J.L.); (W.L.); (P.C.); (Y.L.); (H.N.); (J.M.); (J.L.); (S.L.)
170
Bravo-Arrabal J, Toscano-Moreno M, Fernandez-Lozano JJ, Mandow A, Gomez-Ruiz JA, García-Cerezo A. The Internet of Cooperative Agents Architecture (X-IoCA) for Robots, Hybrid Sensor Networks, and MEC Centers in Complex Environments: A Search and Rescue Case Study. Sensors (Basel) 2021; 21:s21237843. [PMID: 34883848 PMCID: PMC8659820 DOI: 10.3390/s21237843] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/12/2021] [Accepted: 11/21/2021] [Indexed: 11/26/2022]
Abstract
Cloud robotics and advanced communications can foster a step-change in cooperative robots and hybrid wireless sensor networks (H-WSN) for demanding environments (e.g., disaster response, mining, demolition, and nuclear sites) by enabling the timely sharing of data and computational resources between robot and human teams. However, the operational complexity of such multi-agent systems requires defining effective architectures, coping with implementation details, and testing in realistic deployments. This article proposes X-IoCA, an Internet of robotic things (IoRT) and communication architecture consisting of a hybrid and heterogeneous network of wireless transceivers (H2WTN), based on LoRa and BLE technologies, and a robot operating system (ROS) network. The IoRT is connected to a feedback information system (FIS) distributed among multi-access edge computing (MEC) centers. Furthermore, we present SAR-IoCA, an implementation of the architecture for search and rescue (SAR) integrated into a 5G network. The FIS for this application consists of an SAR-FIS (including a path planner for UGVs considering risks detected by a LoRa H-WSN) and an ROS-FIS (for real-time monitoring and processing of information published throughout the ROS network). Moreover, we discuss lessons learned from using SAR-IoCA in a realistic exercise where three UGVs, a UAV, and responders collaborated to rescue victims from a tunnel accessible through rough terrain.
171
Erhan L, Di Mauro M, Anjum A, Bagdasar O, Song W, Liotta A. Embedded Data Imputation for Environmental Intelligent Sensing: A Case Study. Sensors (Basel) 2021; 21:s21237774. [PMID: 34883778 PMCID: PMC8659818 DOI: 10.3390/s21237774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 11/19/2021] [Accepted: 11/20/2021] [Indexed: 11/25/2022]
Abstract
Recent developments in cloud computing and the Internet of Things have enabled smart environments, in terms of both monitoring and actuation. Unfortunately, this often results in unsustainable cloud-based solutions, whereby, in the interest of simplicity, a wealth of raw (unprocessed) data are pushed from sensor nodes to the cloud. Herein, we advocate the use of machine learning at sensor nodes to perform essential data-cleaning operations, to avoid the transmission of corrupted (often unusable) data to the cloud. Starting from a public pollution dataset, we investigate how two machine learning techniques (kNN and missForest) may be embedded on Raspberry Pi to perform data imputation, without impacting the data collection process. Our experimental results demonstrate the accuracy and computational efficiency of edge-learning methods for filling in missing data values in corrupted data series. We find that kNN and missForest correctly impute up to 40% of randomly distributed missing values, with a density distribution of values that is indistinguishable from the benchmark. We also show a trade-off analysis for the case of bursty missing values, with recoverable blocks of up to 100 samples. Computation times are shorter than sampling periods, allowing for data imputation at the edge in a timely manner.
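The kNN imputation idea the abstract refers to can be illustrated in a few lines. This is a toy pure-Python version under our own simplifications (the study presumably used a library implementation; the `knn_impute` name and its neighbour rule are ours): a row with missing entries borrows the mean of each missing column from its k nearest complete rows, measured on the jointly observed columns.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k nearest
    complete rows (Euclidean distance on the jointly observed columns)."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        neighbours = sorted(
            complete,
            key=lambda c: math.dist([r[j] for j in obs], [c[j] for j in obs]),
        )[:k]
        filled = [
            v if v is not None
            else sum(n[j] for n in neighbours) / len(neighbours)
            for j, v in enumerate(r)
        ]
        out.append(filled)
    return out
```

Even this naive version runs comfortably within a sampling period on a Raspberry Pi-class device for small sensor batches, which is the point the study makes about timely imputation at the edge.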
Affiliation(s)
- Laura Erhan
- College of Science and Engineering, University of Derby, Derby DE22 1GB, UK; (L.E.); (O.B.)
- Mario Di Mauro
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy;
- Ashiq Anjum
- College of Science and Engineering, University of Leicester, Leicester LE1 7RH, UK;
- Ovidiu Bagdasar
- College of Science and Engineering, University of Derby, Derby DE22 1GB, UK; (L.E.); (O.B.)
- Department of Computing, Mathematics and Electronics, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
- Wei Song
- College of Information Technology, Shanghai Ocean University, Shanghai 200090, China;
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, 39100 Bolzano, Italy
- Correspondence:
172
Fondo-Ferreiro P, Candal-Ventureira D, González-Castaño FJ, Gil-Castiñeira F. Latency Reduction in Vehicular Sensing Applications by Dynamic 5G User Plane Function Allocation with Session Continuity. Sensors (Basel) 2021; 21:7744. [PMID: 34833821 DOI: 10.3390/s21227744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 11/03/2021] [Accepted: 11/18/2021] [Indexed: 11/25/2022]
Abstract
Vehicle automation is driving the integration of advanced sensors and new applications that demand high-quality information, such as collaborative sensing for enhanced situational awareness. In this work, we considered a vehicular sensing scenario supported by 5G communications, in which vehicle sensor data need to be sent to edge computing resources with stringent latency constraints. To ensure low latency with the resources available, we propose an optimization framework that deploys User Plane Functions (UPFs) dynamically at the edge to minimize the number of network hops between the vehicles and them. The proposed framework relies on a practical Software-Defined-Networking (SDN)-based mechanism that allows seamless re-assignment of vehicles to UPFs while maintaining session and service continuity. We propose and evaluate different UPF allocation algorithms that reduce communications latency compared to static, random, and centralized deployment baselines. Our results demonstrated that the dynamic allocation of UPFs can support latency-critical applications that would be unfeasible otherwise.
173
Shah SC. Design of a Machine Learning-Based Intelligent Middleware Platform for a Heterogeneous Private Edge Cloud System. Sensors (Basel) 2021; 21:7701. [PMID: 34833792 DOI: 10.3390/s21227701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 11/13/2021] [Accepted: 11/17/2021] [Indexed: 11/17/2022]
Abstract
Recent advances in mobile technologies have facilitated the development of a new class of smart city and fifth-generation (5G) network applications. These applications have diverse requirements, such as low latencies, high data rates, significant amounts of computing and storage resources, and access to sensors and actuators. A heterogeneous private edge cloud system was proposed to address the requirements of these applications. The proposed heterogeneous private edge cloud system is characterized by a complex and dynamic multilayer network and computing infrastructure. Efficient management and utilization of this infrastructure may increase data rates and reduce data latency, data privacy risks, and traffic to the core Internet network. A novel intelligent middleware platform is proposed in the current study to manage and utilize heterogeneous private edge cloud infrastructure efficiently. The proposed platform aims to provide computing, data collection, and data storage services to support emerging resource-intensive and non-resource-intensive smart city and 5G network applications. It aims to leverage regression analysis and reinforcement learning methods to solve the problem of efficiently allocating heterogeneous resources to application tasks. This platform adopts parallel transmission techniques, dynamic interface allocation techniques, and machine learning-based algorithms in a dynamic multilayer network infrastructure to improve network and application performance. Moreover, it uses container and device virtualization technologies to address problems related to heterogeneous hardware and execution environments.
174
Andreadis A, Giambene G, Zambon R. Monitoring Illegal Tree Cutting through Ultra-Low-Power Smart IoT Devices. Sensors (Basel) 2021; 21:s21227593. [PMID: 34833669 PMCID: PMC8624687 DOI: 10.3390/s21227593] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 10/31/2021] [Accepted: 11/11/2021] [Indexed: 11/16/2022]
Abstract
Forests play a fundamental role in preserving the environment and fighting global warming. Unfortunately, they are continuously reduced by human interventions such as deforestation, fires, etc. This paper proposes and evaluates a framework for automatically detecting illegal tree-cutting activity in forests through audio event classification. We envisage ultra-low-power tiny devices, embedding edge-computing microcontrollers and long-range wireless communication to cover vast areas in the forest. To reduce the energy footprint and resource consumption for effective and pervasive detection of illegal tree cutting, an efficient and accurate audio classification solution based on convolutional neural networks is proposed, designed specifically for resource-constrained wireless edge devices. With respect to previous works, the proposed system allows for recognizing a wider range of threats related to deforestation through a distributed and pervasive edge-computing technique. Different pre-processing techniques have been evaluated, focusing on the trade-off between classification accuracy and computational resources, memory, and energy footprint. Furthermore, experimental long-range communication tests have been conducted in real environments. The experimental results show that the proposed solution can detect and notify tree-cutting events for efficient and cost-effective forest monitoring through smart IoT, with an accuracy of 85%.
175
Abdullahi Yari I, Dehling T, Kluge F, Geck J, Sunyaev A, Eskofier B. Security Engineering of Patient-Centered Health Care Information Systems in Peer-to-Peer Environments: Systematic Review. J Med Internet Res 2021; 23:e24460. [PMID: 34779788 PMCID: PMC8663665 DOI: 10.2196/24460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 05/20/2021] [Accepted: 08/02/2021] [Indexed: 11/13/2022] Open
Abstract
Background: Patient-centered health care information systems (PHSs) enable patients to take control and become knowledgeable about their own health, preferably in a secure environment. Current and emerging PHSs use either a centralized database, peer-to-peer (P2P) technology, or distributed ledger technology for PHS deployment. The evolving COVID-19 decentralized Bluetooth-based tracing systems are examples of disease-centric P2P PHSs. Although using P2P technology for the provision of PHSs can be flexible, scalable, resilient to a single point of failure, and inexpensive for patients, the use of health information on P2P networks poses major security issues as users must manage information security largely by themselves.
Objective: This study aims to identify the inherent security issues for PHS deployment in P2P networks and how they can be overcome. In addition, this study reviews different P2P architectures and proposes a suitable architecture for P2P PHS deployment.
Methods: A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Thematic analysis was used for data analysis. We searched the following databases: IEEE Digital Library, PubMed, Science Direct, ACM Digital Library, Scopus, and Semantic Scholar. The search was conducted on articles published between 2008 and 2020. The Common Vulnerability Scoring System was used as a guide for rating security issues.
Results: Our findings are consolidated into 8 key security issues associated with PHS implementation and deployment on P2P networks and 7 factors promoting them. Moreover, we propose a suitable architecture for P2P PHSs and guidelines for the provision of PHSs while maintaining information security.
Conclusions: Despite the clear advantages of P2P PHSs, the absence of centralized controls and inconsistent views of the network on some P2P systems have profound adverse impacts in terms of security. The security issues identified in this study need to be addressed to increase patients’ intention to use PHSs on P2P networks by making them safe to use.
Affiliation(s)
- Imrana Abdullahi Yari
- Department of Artificial Intelligence in Biomedical Engineering, Machine Learning and Data Analytics Lab, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
- Tobias Dehling
- Institute of Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology, Karlsruhe, Germany; KASTEL Security Research Labs, Karlsruhe, Germany
- Felix Kluge
- Department of Artificial Intelligence in Biomedical Engineering, Machine Learning and Data Analytics Lab, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
- Ali Sunyaev
- Institute of Applied Informatics and Formal Description Methods, Karlsruhe Institute of Technology, Karlsruhe, Germany; KASTEL Security Research Labs, Karlsruhe, Germany
- Bjoern Eskofier
- Department of Artificial Intelligence in Biomedical Engineering, Machine Learning and Data Analytics Lab, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
176
Ademola OA, Leier M, Petlenkov E. Evaluation of Deep Neural Network Compression Methods for Edge Devices Using Weighted Score-Based Ranking Scheme. Sensors (Basel) 2021; 21:7529. [PMID: 34833610 PMCID: PMC8622199 DOI: 10.3390/s21227529] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 11/04/2021] [Accepted: 11/05/2021] [Indexed: 12/02/2022]
Abstract
The demand for object detection capability in edge computing systems has surged. As such, the need for lightweight Convolutional Neural Network (CNN)-based object detection models has become a focal point. Current models have large memory footprints, which makes deployment on edge devices demanding; the models therefore need to be optimized for the hardware without performance degradation. Several model compression methods exist; however, determining the most efficient method is a major concern. Our goal was to rank the performance of these methods using our application, a real-time vehicle tracking system for cargo ships, as a case study. To address this, we developed a weighted score-based ranking scheme that utilizes the model performance metrics. We demonstrated the effectiveness of this method by applying it to the baseline, compressed, and micro-CNN models trained on our dataset. The result showed that quantization is the most efficient compression method for the application, having the highest rank with an average weighted score of 9.00, followed by binarization with an average weighted score of 8.07. Our proposed method is extendable and can be used as a framework for the selection of suitable model compression methods for edge devices in different applications.
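The weighted score-based ranking itself reduces to a weighted average of per-metric scores, sorted best-first. A hedged sketch of that idea; the metric names, weights, and 0-10 scale below are illustrative assumptions, not the paper's actual values:

```python
def weighted_scores(methods, weights):
    """Average weighted score per compression method.

    methods: {method_name: {metric: score on a common scale}}
    weights: {metric: importance weight}
    """
    total = sum(weights.values())
    return {
        name: sum(scores[m] * w for m, w in weights.items()) / total
        for name, scores in methods.items()
    }

def rank(methods, weights):
    """Method names ordered best-first by their weighted score."""
    scores = weighted_scores(methods, weights)
    return sorted(scores, key=scores.get, reverse=True)
```

The scheme is extendable in exactly the sense the abstract claims: adding a metric (say, energy per inference) only means adding one entry to each score dict and one weight.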
Affiliation(s)
- Olutosin Ajibola Ademola
- Embedded AI Research Laboratory, Department of Computer Systems, Tallinn University of Technology, Ehitajate tee 5, 19086 Tallinn, Estonia;
- Mairo Leier
- Embedded AI Research Laboratory, Department of Computer Systems, Tallinn University of Technology, Ehitajate tee 5, 19086 Tallinn, Estonia;
- Eduard Petlenkov
- Centre for Intelligent Systems, Department of Computer Systems, Tallinn University of Technology, Ehitajate tee 5, 19086 Tallinn, Estonia;
177
Zhang R, Li X. Edge Computing Driven Data Sensing Strategy in the Entire Crop Lifecycle for Smart Agriculture. Sensors (Basel) 2021; 21:s21227502. [PMID: 34833575 PMCID: PMC8619343 DOI: 10.3390/s21227502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Revised: 11/01/2021] [Accepted: 11/04/2021] [Indexed: 11/16/2022]
Abstract
In the context of smart agriculture, high-value data sensing over the entire crop lifecycle is fundamental for realizing crop cultivation control. However, existing data sensing methods suffer from low sensing-data value, poor data correlation, and high data collection cost. The main problem for data sensing over the entire crop lifecycle is how to sense high-value data according to the crop growth stage at a low cost. To solve this problem, a data sensing framework was developed by combining edge computing with the Internet of Things, and a novel data sensing strategy for the entire crop lifecycle is proposed in this paper. The proposed strategy includes four phases. In the first phase, the crop growth stages are divided by Gath-Geva (GG) fuzzy clustering, and the key growth parameters corresponding to each growth stage are extracted. In the second phase, based on current crop growth information, a prediction method for the current crop growth stage is constructed using a Takagi-Sugeno (T-S) fuzzy neural network. In the third phase, based on Deng's grey relational analysis method, the environmental sensing parameters of the corresponding crop growth stage are optimized. In the fourth phase, an adaptive sensing method for sensing nodes with effective sensing area constraints is established. Finally, based on actual historical crop growth data, an entire-crop-lifecycle dataset is established to test the performance and prediction accuracy of the proposed method for crop growth stage division. Based on the historical data, a simulated data sensing environment is established, and the proposed algorithm is tested and compared with traditional algorithms. The comparison results show that the proposed strategy can divide and predict a crop growth cycle with high accuracy, significantly reduce sensing and data collection times and energy consumption, and significantly improve the value of the sensed data.
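Deng's grey relational analysis, used in the third phase to rank environmental sensing parameters, can be sketched as follows. This is a minimal illustration with the customary distinguishing coefficient rho = 0.5; the function and variable names are ours, and the sequences are assumed already normalized to a comparable scale.

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Deng's grey relational grade of each candidate sequence against a
    reference sequence. Sequences are assumed pre-normalized; rho is the
    usual distinguishing coefficient (0.5). Higher grade = closer relation."""
    # Absolute differences between each candidate and the reference
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    d_min = min(min(d) for d in deltas)
    d_max = max(max(d) for d in deltas)
    grades = []
    for d in deltas:
        # Grey relational coefficient at each point, averaged into a grade
        coeffs = [(d_min + rho * d_max) / (dk + rho * d_max) for dk in d]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# A candidate identical to the reference gets the maximal grade of 1.0
grades = grey_relational_grades([1.0, 2.0, 3.0],
                                [[1.0, 2.0, 3.0], [3.0, 1.0, 5.0]])
```

Parameters whose sequences track the reference (e.g., crop growth) most closely receive the highest grades and would be retained for that growth stage.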
178
Weng YK, Huang SH, Kao HY. Block-Based Compression and Corresponding Hardware Circuits for Sparse Activations. Sensors (Basel) 2021; 21:7468. [PMID: 34833543 DOI: 10.3390/s21227468] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 11/03/2021] [Accepted: 11/09/2021] [Indexed: 11/17/2022]
Abstract
In a CNN (convolutional neural network) accelerator, there is a need to exploit the sparsity of activation values to reduce memory traffic and power consumption. Therefore, some research efforts have been devoted to skipping ineffectual computations (i.e., multiplications by zero). Differently from previous works, in this paper we point out the similarity of activation values: (1) in the same layer of a CNN model, most feature maps are either highly dense or highly sparse; (2) in the same layer of a CNN model, feature maps in different channels are often similar. Based on these two observations, we propose a block-based compression approach that utilizes both the sparsity and the similarity of activation values to further reduce the data volume. Moreover, we design an encoder, a decoder, and an indexing module to support the proposed approach. The encoder translates output activations into the proposed block-based compression format, while the decoder and the indexing module align nonzero values for effectual computations. Compared with previous works, benchmark data consistently show that the proposed approach greatly reduces both memory traffic and power consumption.
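A software sketch of the block-based idea (store dense blocks raw; store sparse blocks as a bitmap plus their nonzero values) might look like this. It is a toy model of the format only, not the authors' hardware encoding, and the density threshold is an assumption.

```python
# Toy block-based compression for sparse activations: dense blocks are kept
# raw, sparse blocks become (bitmap, nonzero values). Threshold is illustrative.

def compress_block(block, dense_threshold=0.5):
    """Encode one activation block depending on its density."""
    nnz = sum(1 for v in block if v != 0)
    if nnz / len(block) > dense_threshold:
        return ("dense", list(block))           # raw copy, no overhead
    bitmap = [1 if v != 0 else 0 for v in block]
    values = [v for v in block if v != 0]
    return ("sparse", (bitmap, values))         # bitmap + packed nonzeros

def decompress_block(encoded):
    """Invert compress_block, restoring the original block."""
    kind, payload = encoded
    if kind == "dense":
        return list(payload)
    bitmap, values = payload
    it = iter(values)
    return [next(it) if b else 0 for b in bitmap]
```

In the paper's hardware setting the decoder and indexing module play the role of `decompress_block`, aligning nonzeros so only effectual multiplications are issued.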
179
Chmurski M, Mauro G, Santra A, Zubert M, Dagasan G. Highly-Optimized Radar-Based Gesture Recognition System with Depthwise Expansion Module. Sensors (Basel) 2021; 21:7298. [PMID: 34770603 PMCID: PMC8588382 DOI: 10.3390/s21217298] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 10/08/2021] [Accepted: 10/26/2021] [Indexed: 11/16/2022]
Abstract
The increasing integration of technology into our daily lives demands the development of more convenient human-computer interaction (HCI) methods. Most current hand-based HCI strategies exhibit various limitations, e.g., sensitivity to variable lighting conditions and constraints on the operating environment. Further, such systems are often not deployable in resource-constrained contexts. Inspired by the MobileNetV1 deep learning network, this paper presents a novel hand gesture recognition system based on frequency-modulated continuous wave (FMCW) radar, exhibiting higher recognition accuracy than state-of-the-art systems. First, the paper introduces a method to simplify radar preprocessing while preserving the main information of the performed gestures. Then, a deep neural classifier with a novel Depthwise Expansion Module based on depthwise separable convolutions is presented. The classifier is optimized and deployed on the Coral Edge TPU board. The system defines and adopts eight different hand gestures performed by five users, offering a classification accuracy of 98.13% while operating in a low-power, resource-constrained environment.
Affiliation(s)
- Mateusz Chmurski
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Microelectronics and Computer Science, Lodz University of Technology, 90924 Lodz, Poland
- Gianfranco Mauro
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Department of Electronic and Computer Technology, University of Granada, Avenida de Fuente Nueva s/n, 18071 Granada, Spain
- Avik Santra
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Mariusz Zubert
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
- Gökberk Dagasan
- Infineon Technologies AG, 85579 Neubiberg, Germany; (G.M.); (A.S.); (M.Z.); (G.D.)
180
Roig PJ, Alcaraz S, Gilly K, Bernad C, Juiz C. Modeling of a Generic Edge Computing Application Design. Sensors (Basel) 2021; 21:s21217276. [PMID: 34770582 PMCID: PMC8587040 DOI: 10.3390/s21217276] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 10/25/2021] [Accepted: 10/27/2021] [Indexed: 12/29/2022]
Abstract
Edge computing applications leverage advances in edge computing along with the latest trends in convolutional neural networks in order to achieve the ultra-low-latency, high-speed-processing, and low-power-consumption scenarios necessary for deploying real-time Internet of Things systems efficiently. As the importance of such scenarios grows by the day, we build two different kinds of models: an algebraic model, using the process algebra ACP, and a coding model, using the modeling language Promela. Both approaches have been used to model an edge infrastructure with a cloud backup, further extended with the addition of extra fog nodes, and all models have been duly verified with the appropriate techniques. Specifically, a generic edge computing design has been specified algebraically with ACP, followed by its corresponding algebraic verification, and it has also been specified in Promela code, which has been verified with the model checker Spin.
Affiliation(s)
- Pedro Juan Roig
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain; (S.A.); (C.B.)
- Correspondence: (P.J.R.); (K.G.); Tel.: +34-966658388 (P.J.R.); +34-966658565 (K.G.)
- Salvador Alcaraz
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain; (S.A.); (C.B.)
- Katja Gilly
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain; (S.A.); (C.B.)
- Cristina Bernad
- Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain; (S.A.); (C.B.)
- Carlos Juiz
- Mathematics and Computer Science Department, University of the Balearic Islands, 07022 Palma de Mallorca, Spain;
181
Kong X, Wang K, Wang S, Wang X, Jiang X, Guo Y, Shen G, Chen X, Ni Q. Real-Time Mask Identification for COVID-19: An Edge-Computing-Based Deep Learning Framework. IEEE Internet Things J 2021; 8:15929-15938. [PMID: 35782184 PMCID: PMC8768989 DOI: 10.1109/jiot.2021.3051844] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 12/20/2020] [Accepted: 01/09/2021] [Indexed: 05/20/2023]
Abstract
The outbreak of Coronavirus disease 2019 (COVID-19), besides posing serious threats to the world, reminds us that precautions are needed to control the transmission of the virus. The rise of the Internet of Medical Things (IoMT) has made related data collection and processing, including healthcare monitoring systems, more convenient on the one hand, while the requirements of public health prevention are changing and becoming more challenging on the other. One of the most effective nonpharmaceutical medical intervention measures is mask wearing. Therefore, there is an urgent need for an automatic real-time mask detection method to help prevent public epidemics. In this article, we put forward an edge-computing-based mask (ECMask) identification framework to support public health precautions, which can ensure real-time performance on the low-power camera devices of buses. Our ECMask consists of three main stages: 1) video restoration; 2) face detection; and 3) mask identification. The related models are trained and evaluated on our bus-driving monitoring dataset and on a public dataset. We conduct extensive experiments to validate the good performance on real video data, in terms of detection accuracy and execution-time efficiency of the whole video analysis, which has valuable applications in COVID-19 prevention.
Affiliation(s)
- Xiangjie Kong
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Kailai Wang
- School of Software, Dalian University of Technology, Dalian 116620, China
- Shupeng Wang
- Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100864, China
- Xiaojie Wang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xin Jiang
- Second Clinical Medical College (Shenzhen People's Hospital), Jinan University, Guangzhou 510632, China
- Yi Guo
- Second Clinical Medical College (Shenzhen People's Hospital), Jinan University, Guangzhou 510632, China
- Guojiang Shen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
- Xin Chen
- School of Software, Dalian University of Technology, Dalian 116620, China
- Qichao Ni
- School of Software, Dalian University of Technology, Dalian 116620, China
182
Simić M, Sladić G, Zarić M, Markoski B. Infrastructure as Software in Micro Clouds at the Edge. Sensors (Basel) 2021; 21:s21217001. [PMID: 34770308 PMCID: PMC8588097 DOI: 10.3390/s21217001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 09/18/2021] [Accepted: 10/12/2021] [Indexed: 11/26/2022]
Abstract
Edge computing offers cloud services closer to data sources and end-users, laying the foundation for novel applications. Infrastructure deployment is taking off, bringing new challenges: how can geo-distribution be used properly, and how can the advantages of having resources at a specific location be harnessed? New real-time applications require a multi-tier infrastructure, preferably doing data preprocessing locally but using the cloud for heavy workloads. We present a model able to organize geo-distributed nodes into micro clouds dynamically, allowing resource reorganization to best serve population needs. Such elasticity is achieved by relying on cloud organization principles, adapted for a different environment. The desired state is specified descriptively, and the system handles the rest. As such, infrastructure is abstracted to the software level, thus enabling “infrastructure as software” at the edge. We argue for blending the proposed model into existing tools, allowing cloud providers to offer future micro clouds as a service.
Affiliation(s)
- Miloš Simić
- Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia; (G.S.); (M.Z.)
- Goran Sladić
- Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia; (G.S.); (M.Z.)
- Miroslav Zarić
- Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia; (G.S.); (M.Z.)
- Branko Markoski
- Technical Faculty Mihajlo Pupin, University of Novi Sad, Đure Đakovića bb, 23000 Zrenjanin, Serbia;
183
Alkinani MH, Almazroi AA, Jhanjhi NZ, Khan NA. 5G and IoT Based Reporting and Accident Detection (RAD) System to Deliver First Aid Box Using Unmanned Aerial Vehicle. Sensors (Basel) 2021; 21:6905. [PMID: 34696118 DOI: 10.3390/s21206905] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 09/26/2021] [Accepted: 10/01/2021] [Indexed: 11/16/2022]
Abstract
Internet of Things (IoT) and 5G are enabling intelligent transportation systems (ITSs). ITSs promise to improve road safety in smart cities, and they are therefore gaining serious attention in industry as well as academia. Due to rapid population growth, vehicle numbers are increasing, resulting in a large number of road accidents. Much of the time, casualties are not discovered and reported to hospitals and relatives promptly. This lack of rapid care and first aid can cost lives within minutes. To address these challenges, an intelligent system is necessary. Although several information and communication technology (ICT)-based solutions for accident detection and rescue operations have been proposed, they are not compatible with all vehicles and are also costly. Therefore, we propose a reporting and accident detection (RAD) system for a smart city that is compatible with any vehicle and less expensive. Our strategy aims to improve the transportation system at a low cost. In this context, we developed an Android application that collects data related to sound, gravitational force, pressure, speed, and location of the accident from the smartphone. The speed value helps to improve accident detection accuracy. The collected information is further processed for accident identification. Additionally, a navigation system is designed to inform the relatives, the police station, and the nearest hospital. The hospital dispatches a UAV (i.e., a drone with a first aid box) and an ambulance to the accident spot. An actual dataset from the Road Safety Open Repository is used for results generation through simulation. The proposed scheme shows promising results in terms of accuracy and response time compared to existing techniques.
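As a toy illustration of fusing the smartphone signals mentioned above, a rule-based detector might look like the following. The rule and every threshold are purely illustrative assumptions, not the paper's actual detection logic, which processes the collected data more elaborately.

```python
# Purely illustrative rule-based accident detector. Thresholds and the
# fusion rule are assumptions, not the RAD system's actual algorithm.

def detect_accident(g_force, sound_db, speed_drop_kmh,
                    g_th=4.0, sound_th=120.0, drop_th=40.0):
    """Flag a candidate accident when a high-impact g-force coincides with
    either a loud noise or a sudden speed drop (speed improves accuracy,
    as the abstract notes)."""
    impact = g_force >= g_th
    corroboration = sound_db >= sound_th or speed_drop_kmh >= drop_th
    return impact and corroboration
```

Requiring corroboration from a second signal is one simple way to cut false positives from, e.g., a dropped phone registering a high g-force alone.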
184
Dębski R, Dreżewski R. Adaptive Segmentation of Streaming Sensor Data on Edge Devices. Sensors (Basel) 2021; 21:s21206884. [PMID: 34696096 PMCID: PMC8538390 DOI: 10.3390/s21206884] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 10/06/2021] [Accepted: 10/12/2021] [Indexed: 11/22/2022]
Abstract
Sensor data streams often represent signals/trajectories which are twice differentiable (e.g., to give a continuous velocity and acceleration), and this property must be reflected in their segmentation. An adaptive streaming algorithm for this problem is presented. It is based on the greedy look-ahead strategy and is built on the concept of a cubic splinelet. A characteristic feature of the proposed algorithm is the real-time simultaneous segmentation, smoothing, and compression of data streams. The segmentation quality is measured in terms of the signal approximation accuracy and the corresponding compression ratio. The numerical results show the relatively high compression ratios (from 135 to 208, i.e., compressed stream sizes up to 208 times smaller) combined with the approximation errors comparable to those obtained from the state-of-the-art global reference algorithm. The proposed algorithm can be applied to various domains, including online compression and/or smoothing of data streams coming from sensors, real-time IoT analytics, and embedded time-series databases.
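The greedy look-ahead strategy can be sketched as follows. For brevity the sketch uses a straight-line fit error as a stand-in for the paper's cubic-splinelet fit, so it illustrates the control flow (grow a segment until the error budget is exceeded, then restart at the last good breakpoint) rather than the actual spline mathematics.

```python
# Greedy look-ahead segmentation sketch. line_dev is a simple stand-in for
# the cubic splinelet fitting error used in the paper.

def line_dev(seg):
    """Max deviation of a segment from the straight line through its endpoints."""
    n = len(seg)
    return max(abs(seg[i] - (seg[0] + (seg[-1] - seg[0]) * i / (n - 1)))
               for i in range(n))

def greedy_segment(stream, max_err, fit_error=line_dev):
    """Grow each segment until the fit error would exceed max_err, then
    start the next segment at the last index that still fit (shared
    breakpoints keep the piecewise fit continuous)."""
    breaks = [0]
    start, end = 0, 2
    while end <= len(stream):
        if fit_error(stream[start:end]) > max_err:
            start = end - 2          # last sample that still fit
            breaks.append(start)
        end += 1
    breaks.append(len(stream) - 1)
    return breaks

# A ramp up then down should break exactly at the peak
breaks = greedy_segment([0, 1, 2, 3, 4, 3, 2, 1, 0], 0.1)
```

The compression ratio reported in the abstract then follows from the segment count: storing a few coefficients per segment instead of every sample.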
185
Prasad A, Mofjeld C, Peng Y. A Joint Model Provisioning and Request Dispatch Solution for Low-Latency Inference Services on Edge. Sensors (Basel) 2021; 21:s21196594. [PMID: 34640914 PMCID: PMC8513104 DOI: 10.3390/s21196594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/24/2021] [Accepted: 09/28/2021] [Indexed: 11/24/2022]
Abstract
With the advancement of machine learning, a growing number of mobile users rely on machine learning inference for making time-sensitive and safety-critical decisions. Therefore, the demand for high-quality and low-latency inference services at the network edge has become the key to modern intelligent society. This paper proposes a novel solution that jointly provisions machine learning models and dispatches inference requests to reduce inference latency on edge nodes. Existing solutions either direct inference requests to the nearest edge node to save network latency or balance edge nodes’ workload by reducing queuing and computing time. The proposed solution provisions each edge node with the optimal number and type of inference instances under a holistic consideration of networking, computing, and memory resources. Mobile users can thus be directed to utilize inference services on the edge nodes that offer minimal serving latency. The proposed solution has been implemented using TensorFlow Serving and Kubernetes on an edge cluster. Through simulation and testbed experiments under various system settings, the evaluation results showed that the joint strategy could consistently achieve lower latency than simply searching for the best edge node to serve inference requests.
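The holistic latency comparison can be sketched as a simple dispatch rule. The node fields and the additive latency model below are simplifying assumptions; the paper's solution also provisions the number and type of model instances per node, which this sketch omits.

```python
# Simplified dispatch rule: send each inference request to the edge node
# with minimal estimated serving latency (network + queuing + compute).
# Node fields and the latency model are illustrative assumptions.

def estimated_latency_ms(node, request_mb):
    """Network transfer + queuing + inference time, in milliseconds."""
    transfer = request_mb * 8.0 / node["bandwidth_mbps"] * 1000.0
    queuing = node["queued_requests"] * node["infer_ms"]
    return transfer + queuing + node["infer_ms"]

def dispatch(nodes, request_mb):
    """Pick the node minimizing total serving latency, not just proximity."""
    return min(nodes, key=lambda n: estimated_latency_ms(n, request_mb))

nodes = [
    {"name": "near", "bandwidth_mbps": 100,  "queued_requests": 0,  "infer_ms": 50},
    {"name": "far",  "bandwidth_mbps": 1000, "queued_requests": 10, "infer_ms": 20},
]
chosen = dispatch(nodes, 1.0)
```

Here the nearer node wins despite slower inference because the faster node's queue dominates, which is exactly the trade-off the abstract says pure nearest-node or pure load-balancing strategies each miss.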
186
Qin J, Mei G, Ma Z, Piccialli F. General Paradigm of Edge-Based Internet of Things Data Mining for Geohazard Prevention. Big Data 2021; 9:373-389. [PMID: 34227850 DOI: 10.1089/big.2020.0392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Geological hazards (geohazards) are geological processes or phenomena formed under externally induced factors that cause losses to human life and property. Geohazards are sudden, cause great harm, and have broad ranges of influence, which brings considerable challenges to geohazard prevention. Monitoring and early warning are the most common strategies to prevent geohazards. With the development of the internet of things (IoT), IoT-based monitoring devices provide rich and fine-grained data, making geohazard monitoring and early warning more accurate and effective. IoT-based monitoring data can be transmitted to a cloud center for processing to provide credible data references for geohazard early warning. However, the massive number of IoT devices occupies most resources of the cloud center, which increases the data processing delay. Moreover, limited bandwidth restricts the transmission of large amounts of geohazard monitoring data. Thus, in some cases, cloud computing is not able to meet the real-time requirements of geohazard early warning. Edge computing technology processes data closer to the data source than the cloud center does, which provides the opportunity for rapid processing of monitoring data. This article presents a general paradigm of edge-based IoT data mining for geohazard prevention, especially monitoring and early warning. The paradigm mainly includes data acquisition, data mining and analysis, and data interpretation. Moreover, a real case is used to illustrate the details of the presented general paradigm. Finally, this article discusses several key problems for the general paradigm of edge-based IoT data mining for geohazard prevention.
Affiliation(s)
- Jiayu Qin
- School of Engineering and Technology, China University of Geosciences (Beijing), Beijing, China
- Gang Mei
- School of Engineering and Technology, China University of Geosciences (Beijing), Beijing, China
- Zhengjing Ma
- School of Engineering and Technology, China University of Geosciences (Beijing), Beijing, China
- Francesco Piccialli
- Department of Mathematics and Applications "R.Caccioppoli," University of Naples Federico II, Napoli, Italy
187
Li S, Hu X, Du Y. Deep Reinforcement Learning for Computation Offloading and Resource Allocation in Unmanned-Aerial-Vehicle Assisted Edge Computing. Sensors (Basel) 2021; 21:s21196499. [PMID: 34640820 PMCID: PMC8512227 DOI: 10.3390/s21196499] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Revised: 09/16/2021] [Accepted: 09/25/2021] [Indexed: 11/16/2022]
Abstract
Computation offloading technology extends cloud computing to the edge of the access network close to users, bringing many benefits to terminal devices with limited battery and computational resources. Nevertheless, the existing computation offloading approaches are challenging to apply to specific scenarios, such as the dense distribution of end-users and the sparse distribution of network infrastructure. The technological revolution in the unmanned aerial vehicle (UAV) and chip industry has granted UAVs more computing resources and promoted the emergence of UAV-assisted mobile edge computing (MEC) technology, which could be applied to those scenarios. However, in the MEC system with multiple users and multiple servers, making reasonable offloading decisions and allocating system resources is still a severe challenge. This paper studies the offloading decision and resource allocation problem in the UAV-assisted MEC environment with multiple users and servers. To ensure the quality of service for end-users, we set the weighted total cost of delay, energy consumption, and the size of discarded tasks as our optimization objective. We further formulate the joint optimization problem as a Markov decision process and apply the soft actor-critic (SAC) deep reinforcement learning algorithm to optimize the offloading policy. Numerical simulation results show that the offloading policy optimized by our proposed SAC-based dynamic computing offloading (SACDCO) algorithm effectively reduces the delay, energy consumption, and size of discarded tasks for the UAV-assisted MEC system. Compared with the fixed local-UAV scheme in the specific simulation setting, our proposed approach reduces system delay and energy consumption by approximately 50% and 200%, respectively.
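The weighted-cost trade-off driving the offloading decision can be illustrated with a toy local-versus-UAV model. All parameter values and the cost form are illustrative assumptions, and the paper optimizes the policy with SAC deep reinforcement learning rather than this closed-form comparison.

```python
# Toy cost model for the local-vs-offload decision. All parameters are
# illustrative; the paper learns the policy with SAC instead.

def local_cost(task_bits, cpu_hz=1e9, cycles_per_bit=1000, power_w=0.5):
    """Time (s) and device energy (J) to run the task on the terminal itself."""
    t = task_bits * cycles_per_bit / cpu_hz
    return t, power_w * t

def offload_cost(task_bits, rate_bps=1e6, tx_power_w=0.1,
                 uav_cpu_hz=4e9, cycles_per_bit=1000):
    """Time and device-side energy when the task is sent to the UAV server
    (device only pays transmission energy; the UAV computes faster)."""
    t_tx = task_bits / rate_bps
    t_exec = task_bits * cycles_per_bit / uav_cpu_hz
    return t_tx + t_exec, tx_power_w * t_tx

def weighted_cost(t, e, w_time=0.5, w_energy=0.5):
    """Weighted total of delay and energy, as in the optimization objective."""
    return w_time * t + w_energy * e

bits = 1e6
c_local = weighted_cost(*local_cost(bits))
c_offload = weighted_cost(*offload_cost(bits))
```

With these numbers offloading is slightly slower but much cheaper in device energy, so the weighted objective prefers it, showing why a single-metric policy can choose differently from a weighted one.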
188
Daher AW, Rizik A, Muselli M, Chible H, Caviglia DD. Porting Rulex Software to the Raspberry Pi for Machine Learning Applications on the Edge. Sensors (Basel) 2021; 21:s21196526. [PMID: 34640846 PMCID: PMC8512253 DOI: 10.3390/s21196526] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 09/16/2021] [Accepted: 09/27/2021] [Indexed: 11/28/2022]
Abstract
Edge computing enables measurement and cognitive decision-making outside a central server by performing data storage, manipulation, and processing on the Internet of Things (IoT) node. Moreover, Artificial Intelligence (AI) and Machine Learning applications have become routine in virtually every industrial system. Consequently, we adopted the Raspberry Pi, a low-cost computing platform that is profitably applied in the field of IoT. As for the software part, among the plethora of Machine Learning (ML) paradigms reported in the literature, we identified Rulex as a good ML platform suitable for implementation on the Raspberry Pi. In this paper, we present the porting of the Rulex ML platform to the board to perform ML forecasts in an IoT setup. Specifically, we explain the porting of Rulex's libraries to 32-bit Windows, 64-bit Ubuntu, and 32-bit Raspbian. With the aim of carrying out an in-depth verification of the application possibilities, we perform forecasts on five unrelated datasets from five different applications, with varying sizes in terms of number of records, skewness, and dimensionality. These include a small urban classification dataset, three larger datasets concerning human activity detection, a biomedical dataset related to mental state, and a vehicle activity recognition dataset. The overall accuracies for the forecasts performed are 84.13%, 99.29% (for SVM), 95.47% (for SVM), and 95.27% (for KNN), respectively. Finally, an image-based gender classification dataset is employed to perform image classification on the edge, and a novel image pre-processing algorithm was developed that converts images into time series by relying on statistical contour-based detection techniques.
Even though the dataset contains inconsistent and random images in terms of subjects and settings, Rulex achieves an overall accuracy of 96.47%, remaining competitive with the literature, which is dominated by forward-facing and mugshot images. Additionally, power consumption for the Raspberry Pi in a client/server setup was compared with an HP laptop: the board takes more time but consumes less energy for the same ML task.
Affiliation(s)
- Ali Walid Daher
- COSMIC Lab, Department of Electrical, Electronic and Telecommunications Engineering and Naval Architecture (DITEN), University of Genoa, 16145 Genoa, Italy; (A.W.D.); (A.R.)
- MECRL Laboratory, Ph.D. School for Sciences and Technology, Lebanese University, Beirut 6573/14, Lebanon;
- Consiglio Nazionale delle Ricerche, Institute of Electronics Computer and Telecommunication Engineering (IEIIT), 16149 Genoa, Italy;
- Rulex Innovation Labs, Rulex Inc., 16122 Genoa, Italy
- Ali Rizik
- COSMIC Lab, Department of Electrical, Electronic and Telecommunications Engineering and Naval Architecture (DITEN), University of Genoa, 16145 Genoa, Italy; (A.W.D.); (A.R.)
- MECRL Laboratory, Ph.D. School for Sciences and Technology, Lebanese University, Beirut 6573/14, Lebanon;
- Marco Muselli
- Consiglio Nazionale delle Ricerche, Institute of Electronics Computer and Telecommunication Engineering (IEIIT), 16149 Genoa, Italy;
- Rulex Innovation Labs, Rulex Inc., 16122 Genoa, Italy
- Hussein Chible
- MECRL Laboratory, Ph.D. School for Sciences and Technology, Lebanese University, Beirut 6573/14, Lebanon;
- Daniele D. Caviglia
- COSMIC Lab, Department of Electrical, Electronic and Telecommunications Engineering and Naval Architecture (DITEN), University of Genoa, 16145 Genoa, Italy; (A.W.D.); (A.R.)
- Correspondence: ; Tel.: +39-010-33-56-587
189
Abstract
There is an ever-growing mismatch between the proliferation of data-intensive, power-hungry deep learning solutions in the machine learning (ML) community and the need for agile, portable solutions in resource-constrained devices, particularly for intelligence at the edge. In this paper, we present a fundamentally novel approach that leverages data-driven intelligence with biologically-inspired efficiency. The proposed Sparse Embodiment Neural-Statistical Architecture (SENSA) decomposes the learning task into two distinct phases: a training phase and a hardware embedment phase where prototypes are extracted from the trained network and used to construct fast, sparse embodiment for hardware deployment at the edge. Specifically, we propose the Sparse Pulse Automata via Reproducing Kernel (SPARK) method, which first constructs a learning machine in the form of a dynamical system using energy-efficient spike or pulse trains, commonly used in neuroscience and neuromorphic engineering, then extracts a rule-based solution in the form of automata or lookup tables for rapid deployment in edge computing platforms. We propose to use the theoretically-grounded unifying framework of the Reproducing Kernel Hilbert Space (RKHS) to provide interpretable, nonlinear, and nonparametric solutions, compared to the typical neural network approach. In kernel methods, the explicit representation of the data is of secondary nature, allowing the same algorithm to be used for different data types without altering the learning rules. To showcase SPARK’s capabilities, we carried out the first proof-of-concept demonstration on the task of isolated-word automatic speech recognition (ASR) or keyword spotting, benchmarked on the TI-46 digit corpus. Together, these energy-efficient and resource-conscious techniques will bring advanced machine learning solutions closer to the edge.
Affiliation(s)
- Kan Li
- Computational NeuroEngineering Laboratory (CNEL), Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- José C Príncipe
- Computational NeuroEngineering Laboratory (CNEL), Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
190
Velichko A. A Method for Medical Data Analysis Using the LogNNet for Clinical Decision Support Systems and Edge Computing in Healthcare. Sensors (Basel) 2021; 21:6209. [PMID: 34577414 PMCID: PMC8473446 DOI: 10.3390/s21186209] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 09/10/2021] [Accepted: 09/13/2021] [Indexed: 11/17/2022]
Abstract
Edge computing is a fast-growing and much-needed technology in healthcare. The main obstacle to implementing artificial intelligence on edge devices is the complexity and high resource intensity of the best-known neural network data analysis methods and algorithms. The difficulty of implementing these methods on low-power microcontrollers with small memory calls for the development of new, efficient neural network algorithms. This study presents a new method for analyzing medical data based on the LogNNet neural network, which uses chaotic mappings to transform input information. The method effectively solves classification problems and calculates risk factors for the presence of a disease in a patient from a set of medical health indicators. The efficiency of LogNNet in assessing perinatal risk is illustrated on cardiotocogram data obtained from the UC Irvine machine learning repository. The classification accuracy reaches ~91% using ~3-10 kB of RAM on an Arduino microcontroller. Using the LogNNet network trained on a publicly available database of the Israeli Ministry of Health, a service concept for COVID-19 express testing is provided; a classification accuracy of ~95% is achieved using ~0.6 kB of RAM. In all examples, the model is evaluated using standard classification quality metrics: precision, recall, and F1-measure. The LogNNet architecture allows the implementation of artificial intelligence on medical Internet of Things peripherals with low RAM resources and can be used in clinical decision support systems.
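The core trick — filling an untrained projection layer from a chaotic map so that almost no weights need to be stored in RAM — can be sketched in a few lines. The map parameters, rescaling, and tanh readout below are illustrative assumptions by the editor, not LogNNet's exact design:

```python
import math

def logistic_weights(rows, cols, r=3.9, x0=0.1):
    """Fill a weight matrix with values of the chaotic logistic map
    x_{n+1} = r * x_n * (1 - x_n). Because the sequence is deterministic,
    a microcontroller can regenerate the matrix on the fly from (r, x0)
    instead of storing it."""
    w, x = [], x0
    for _ in range(rows):
        row = []
        for _ in range(cols):
            x = r * x * (1 - x)
            row.append(2 * x - 1)  # rescale map output to [-1, 1]
        w.append(row)
    return w

def transform(features, w):
    """Project the input through the fixed chaotic layer; only a small
    trained readout on top of this transform would need stored weights."""
    return [math.tanh(sum(wi * f for wi, f in zip(row, features)))
            for row in w]
```

A conventional classifier trained on the transformed features then provides the risk-factor output; only the readout weights consume memory.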
Affiliation(s)
- Andrei Velichko
- Institute of Physics and Technology, Petrozavodsk State University, 31 Lenina Str., 185910 Petrozavodsk, Russia
191
Corches C, Daraban M, Miclea L. Availability of an RFID Object-Identification System in IoT Environments. Sensors (Basel) 2021; 21:6220. [PMID: 34577425 PMCID: PMC8472853 DOI: 10.3390/s21186220] [Citation(s) in RCA: 5] [Received: 08/13/2021] [Revised: 09/09/2021] [Accepted: 09/14/2021] [Indexed: 11/16/2022]
Abstract
Through the latest technological and conceptual developments, the centralized cloud-computing approach has moved toward structures such as edge, fog, and the Internet of Things (IoT), bringing computation closer to end users. As mobile network operators (MNOs) implement the new 5G standards, enterprise computing functions shift to the edge. In parallel with these interconnection topics, there is the issue of global environmental impact, and one goal is to develop IoT devices that reduce the greenhouse-gas footprint of current applications. Radio-frequency identification (RFID) has this potential, and it can be used in applications ranging from identifying a person to granting access to a building. Past studies have focused on improving RFID communication or achieving maximal throughput. However, for many applications, system latency and availability are the critical aspects. This paper examines, through stochastic Petri nets (SPNs), the availability, dependability, and latency of an object-identification system that uses RFID tags. The analysis identifies the optimal balance between latency and throughput, and examining multiple communication scenarios reveals the availability of such a system when deployed at the edge layer.
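The paper evaluates these quantities with stochastic Petri nets; as a far simpler illustration of the kind of steady-state numbers involved, a two-state Markov availability model and an M/M/1 latency formula can be computed directly (these toy models are the editor's stand-ins, not the paper's SPN analysis):

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov model:
    A = mu / (lambda + mu). An SPN generalizes this to many states, but
    the computed quantity is the same kind of long-run probability."""
    return repair_rate / (failure_rate + repair_rate)

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue, T = 1 / (mu - lambda):
    a toy stand-in for the latency/throughput trade-off — pushing
    arrival rate (throughput) toward the service rate inflates latency."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable")
    return 1.0 / (service_rate - arrival_rate)
```

For example, a reader that fails once per 1000 h and is repaired in 10 h is available ~99% of the time, and an identification service processing 8 requests/s with 10 requests/s capacity averages 0.5 s per request.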
Affiliation(s)
- Cosmina Corches
- Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
- Mihai Daraban
- Applied Electronics Department, Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
- Liviu Miclea
- Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
- Correspondence:
192
Oh SR, Seo YD, Lee E, Kim YG. A Comprehensive Survey on Security and Privacy for Electronic Health Data. Int J Environ Res Public Health 2021; 18:9668. [PMID: 34574593 PMCID: PMC8465695 DOI: 10.3390/ijerph18189668] [Citation(s) in RCA: 13] [Received: 07/28/2021] [Revised: 09/01/2021] [Accepted: 09/09/2021] [Indexed: 12/01/2022]
Abstract
Recently, the integration of state-of-the-art technologies, such as modern sensors, networks, and cloud computing, has revolutionized the conventional healthcare system. However, security concerns have increasingly emerged due to this integration of technologies. Therefore, the security and privacy issues associated with e-health data must be properly explored. In this paper, to investigate the security and privacy of e-health systems, we identify the major components of modern e-health systems (i.e., e-health data, medical devices, medical networks, and edge/fog/cloud). We then review recent security and privacy studies that focus on each component. Based on the review, we derive a research taxonomy, security concerns, requirements, solutions, research trends, and open challenges for each component, along with the strengths and weaknesses of the analyzed studies. In particular, edge and fog computing studies for e-health security and privacy are reviewed, since these had mostly not been analyzed in other survey papers.
Affiliation(s)
- Se-Ra Oh
- Miro Corporation, Incheon 21988, Korea;
- Young-Duk Seo
- Department of Computer Engineering, Inha University, Incheon 22212, Korea;
- Euijong Lee
- Department of Computer Science, Chungbuk National University, Cheongju 28644, Korea;
- Young-Gab Kim
- Department of Computer and Information Security, and Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Korea
- Correspondence:
193
Vasileiadis N, Ntinas V, Sirakoulis GC, Dimitrakis P. In-Memory-Computing Realization with a Photodiode/Memristor Based Vision Sensor. Materials (Basel) 2021; 14:5223. [PMID: 34576447 PMCID: PMC8464783 DOI: 10.3390/ma14185223] [Citation(s) in RCA: 6] [Received: 06/28/2021] [Revised: 08/27/2021] [Accepted: 09/07/2021] [Indexed: 11/16/2022]
Abstract
State-of-the-art IoT technologies demand novel design solutions in edge computing, resulting in ever more portable and energy-efficient hardware for in-the-field processing tasks. Vision sensors, processors, and hardware accelerators are among the most demanding IoT applications. Resistance switching (RS) two-terminal devices are suitable for resistive RAMs (RRAMs), a promising technology for realizing storage-class memories. Furthermore, due to their memristive nature, RRAMs are appropriate candidates for in-memory computing architectures. Recently, we demonstrated a CMOS-compatible silicon nitride (SiNx) MIS RS device with memristive properties. In this paper, we report a new photodiode-based vision sensor architecture with in-memory computing capability that relies on this memristive device. The resistance switching dynamics of the device were measured, and a data-fitted behavioral model was extracted. SPICE simulations were performed, highlighting the in-memory computing capabilities of the proposed photodiode-one-memristor pixel vision sensor. Finally, an integration and manufacturing perspective is discussed.
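To give a feel for what a memristor behavioral model computes, here is an Euler-integrated generic linear-drift model: resistance interpolates between on/off values through an internal state that moves with the current through the device. This is a textbook-style toy with made-up parameters, not the data-fitted SiNx model of the paper:

```python
def simulate_memristor(voltages, dt=1e-6, r_on=1e3, r_off=1e5, k=1e4):
    """Simulate a generic linear-drift memristor.
    State x in [0, 1] interpolates R between r_off (x=0) and r_on (x=1);
    dx/dt is proportional to the device current (constant k is arbitrary).
    Returns the resistance trace over the applied voltage samples."""
    x, trace = 0.0, []
    for v in voltages:
        r = r_on * x + r_off * (1.0 - x)
        i = v / r
        x = min(1.0, max(0.0, x + k * i * dt))  # clip state to [0, 1]
        trace.append(r)
    return trace
```

A sustained positive SET pulse drives the state up and the resistance down — the switching behavior that, replicated per pixel, enables the in-memory computation described in the abstract.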
Affiliation(s)
- Nikolaos Vasileiadis
- Institute of Nanoscience and Nanotechnology, National Center for Scientific Research “Demokritos”, 15341 Agia Paraskevi, Greece
- Department of Electrical and Computer Engineering, Democritus University of Thrace (DUTh), 67100 Xanthi, Greece; (V.N.); (G.C.S.)
- Vasileios Ntinas
- Department of Electrical and Computer Engineering, Democritus University of Thrace (DUTh), 67100 Xanthi, Greece; (V.N.); (G.C.S.)
- Georgios Ch. Sirakoulis
- Department of Electrical and Computer Engineering, Democritus University of Thrace (DUTh), 67100 Xanthi, Greece; (V.N.); (G.C.S.)
- Panagiotis Dimitrakis
- Institute of Nanoscience and Nanotechnology, National Center for Scientific Research “Demokritos”, 15341 Agia Paraskevi, Greece
194
Belabed T, Ramos Gomes da Silva V, Quenon A, Valderamma C, Souani C. A Novel Automate Python Edge-to-Edge: From Automated Generation on Cloud to User Application Deployment on Edge of Deep Neural Networks for Low Power IoT Systems FPGA-Based Acceleration. Sensors (Basel) 2021; 21:6050. [PMID: 34577258 PMCID: PMC8467982 DOI: 10.3390/s21186050] [Citation(s) in RCA: 0] [Received: 06/09/2021] [Revised: 08/18/2021] [Accepted: 09/06/2021] [Indexed: 11/21/2022]
Abstract
Deploying Deep Neural Networks (DNNs) for IoT Edge applications requires strong skills in both hardware and software. In this paper, a fully automated design framework for Edge applications is proposed to perform such deployments on System-on-Chips. Based on a high-level Python interface that mimics the leading Deep Learning software frameworks, it offers an easy way to implement a hardware-accelerated DNN on an FPGA. The design methodology covers three main phases: (a) customization, where the user specifies the optimizations needed for each DNN layer; (b) generation, where the framework generates on the Cloud the necessary binaries for both the FPGA and software parts; and (c) deployment, where the SoC on the Edge receives the resulting files used to program the FPGA and the related Python libraries for user applications. Among the case studies, an optimized DNN for the MNIST database runs more than 60x faster than a software version on the ZYNQ 7020 SoC while consuming less than 0.43 W. A comparison with state-of-the-art frameworks demonstrates that this methodology offers the best trade-off between throughput, power consumption, and system cost.
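A typical per-layer optimization a user would specify in the customization phase is fixed-point quantization, which is what makes a DNN layer cheap on FPGA fabric. The Q-format and function names below are the editor's illustration of the concept, not this framework's actual API:

```python
def quantize(values, frac_bits=8, word_bits=16):
    """Symmetric fixed-point quantization of layer weights/activations:
    scale by 2**frac_bits, round, and saturate to a signed word_bits
    integer — the representation an FPGA multiplier consumes directly."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return [max(lo, min(hi, round(v * scale))) for v in values]

def dequantize(q, frac_bits=8):
    """Map fixed-point integers back to real values (for accuracy checks)."""
    return [v / (1 << frac_bits) for v in q]
```

Choosing `frac_bits`/`word_bits` per layer is exactly the kind of throughput-versus-accuracy knob that per-layer customization exposes.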
Affiliation(s)
- Tarek Belabed
- Electronics and Microelectronics Unit (SEMi), University of Mons, 7000 Mons, Belgium; (V.R.G.d.S.); (A.Q.); (C.V.)
- Ecole Nationale d’Ingénieurs de Sousse, Université de Sousse, Sousse 4000, Tunisia
- Laboratoire de Microélectronique et Instrumentation, Faculté des Sciences de Monastir, Université de Monastir, Monastir 5019, Tunisia
- Correspondence:
- Vitor Ramos Gomes da Silva
- Electronics and Microelectronics Unit (SEMi), University of Mons, 7000 Mons, Belgium; (V.R.G.d.S.); (A.Q.); (C.V.)
- Alexandre Quenon
- Electronics and Microelectronics Unit (SEMi), University of Mons, 7000 Mons, Belgium; (V.R.G.d.S.); (A.Q.); (C.V.)
- Carlos Valderamma
- Electronics and Microelectronics Unit (SEMi), University of Mons, 7000 Mons, Belgium; (V.R.G.d.S.); (A.Q.); (C.V.)
- Chokri Souani
- Institut Supérieur des Sciences Appliquées et de Technologie de Sousse, Université de Sousse, Sousse 4003, Tunisia;
195
Romano D, Lapegna M. A GPU-Parallel Image Coregistration Algorithm for InSar Processing at the Edge. Sensors (Basel) 2021; 21:5916. [PMID: 34502805 PMCID: PMC8434671 DOI: 10.3390/s21175916] [Citation(s) in RCA: 4] [Received: 07/01/2021] [Revised: 07/28/2021] [Accepted: 08/25/2021] [Indexed: 11/16/2022]
Abstract
Image coregistration for InSAR processing is a time-consuming procedure that is usually run in batch mode. With the availability of low-energy GPU accelerators, processing at the edge is now a promising prospect. Starting from the identification of the most computationally intensive kernels in existing algorithms, we decomposed the cross-correlation problem from a multilevel point of view, with the aim of designing and implementing an efficient GPU-parallel algorithm for multiple settings, including edge computing. We analyzed the accuracy and performance of the proposed algorithm, also considering power efficiency, and its applicability to the identified settings. Results show that a significant speedup of InSAR processing is possible by exploiting GPU computing in different scenarios with no loss of accuracy, also enabling onboard processing using SoC hardware.
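The computational kernel at the heart of coregistration is cross-correlation: slide a template patch over a reference image and take the offset with the highest correlation score. A minimal serial version makes clear why the exhaustive search parallelizes so well on a GPU (every candidate offset is independent); this sketch is the editor's illustration, not the paper's algorithm:

```python
def best_offset(ref, tpl):
    """Exhaustive normalized cross-correlation of a template over a
    reference image (lists of rows). Returns the (dy, dx) offset with
    the highest score. Each (dy, dx) candidate is independent, which is
    exactly the structure a GPU kernel exploits."""
    H, W = len(ref), len(ref[0])
    h, w = len(tpl), len(tpl[0])
    best, arg = float("-inf"), (0, 0)
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            num = sq = 0.0
            for y in range(h):
                for x in range(w):
                    r = ref[dy + y][dx + x]
                    num += r * tpl[y][x]
                    sq += r * r
            score = num / (sq ** 0.5 + 1e-12)  # normalize by patch energy
            if score > best:
                best, arg = score, (dy, dx)
    return arg
```

Multilevel decompositions of this problem typically tile the offset search space across GPU thread blocks and reduce the per-offset sums in parallel.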
Affiliation(s)
- Diego Romano
- Institute for High Performance Computing and Networking (ICAR), CNR, 80131 Naples, Italy
- Correspondence: Tel.: +39-0816139518
- Marco Lapegna
- Department of Mathematics and Applications, University of Naples Federico II, 80126 Naples, Italy;
196
Kim C, Kim J, Kim KH, Lee SK, Kim K, Shah SAR, Goo YH. ScienceIoT: Evolution of the Wireless Infrastructure of KREONET. Sensors (Basel) 2021; 21:5852. [PMID: 34502742 PMCID: PMC8433646 DOI: 10.3390/s21175852] [Citation(s) in RCA: 0] [Received: 07/15/2021] [Revised: 08/18/2021] [Accepted: 08/26/2021] [Indexed: 11/16/2022]
Abstract
Here, we introduce the current stage and future directions of the wireless infrastructure of the Korea Research Environment Open NETwork (KREONET), a representative national research and education network in Korea. In 2018, ScienceLoRa, a pioneering wireless network infrastructure for scientific applications based on low-power wide-area network technology, was launched. Existing in-service applications in monitoring regions, research facilities, and universities prove the effectiveness of using wireless infrastructure in scientific areas. Furthermore, to support the more stringent requirements of various scientific scenarios, ScienceLoRa is evolving toward ScienceIoT by employing high-performance wireless technology and distributed computing capability. Specifically, by accommodating a private 5G network and an integrated edge computing platform, ScienceIoT is expected to support cutting-edge scientific applications requiring high-throughput and distributed data processing.
Affiliation(s)
- Cheonyong Kim
- Advanced KREONET Center, KISTI, Daejeon 34141, Korea; (C.K.); (K.-H.K.); (S.-K.L.); (K.K.)
- Joobum Kim
- Department of Information Technology, Middle Georgia State University, Macon, GA 31206, USA;
- Ki-Hyeon Kim
- Advanced KREONET Center, KISTI, Daejeon 34141, Korea; (C.K.); (K.-H.K.); (S.-K.L.); (K.K.)
- Sang-Kwon Lee
- Advanced KREONET Center, KISTI, Daejeon 34141, Korea; (C.K.); (K.-H.K.); (S.-K.L.); (K.K.)
- Kiwook Kim
- Advanced KREONET Center, KISTI, Daejeon 34141, Korea; (C.K.); (K.-H.K.); (S.-K.L.); (K.K.)
- Syed Asif Raza Shah
- Department of Computer Science, Sukkur IBA University, Airport Road, Delhi Muslim Housing Society, Sukkur 65200, Sindh, Pakistan;
- Young-Hoon Goo
- Advanced KREONET Center, KISTI, Daejeon 34141, Korea; (C.K.); (K.-H.K.); (S.-K.L.); (K.K.)
- Correspondence:
197
Fraga-Lamas P, Lopes SI, Fernández-Caramés TM. Green IoT and Edge AI as Key Technological Enablers for a Sustainable Digital Transition towards a Smart Circular Economy: An Industry 5.0 Use Case. Sensors (Basel) 2021; 21:5745. [PMID: 34502637 DOI: 10.3390/s21175745] [Citation(s) in RCA: 24] [Received: 08/02/2021] [Revised: 08/20/2021] [Accepted: 08/23/2021] [Indexed: 02/05/2023]
Abstract
The Internet of Things (IoT) can help pave the way to the circular economy and to a more sustainable world by enabling the digitalization of many operations and processes, such as water distribution, preventive maintenance, and smart manufacturing. Paradoxically, IoT technologies and paradigms such as edge computing, despite their huge potential for the digital transition towards sustainability, are not yet contributing to the sustainable development of the IoT sector itself. In fact, the sector has a significant carbon footprint due to its use of scarce raw materials and its energy consumption in manufacturing, operating, and recycling processes. To tackle these issues, the Green IoT (G-IoT) paradigm has emerged as a research area aimed at reducing this carbon footprint; however, its sustainable vision collides directly with the advent of Edge Artificial Intelligence (Edge AI), which imposes additional energy consumption. This article addresses this problem by exploring the different aspects that impact the design and development of Edge-AI G-IoT systems. Moreover, it presents a practical Industry 5.0 use case that illustrates the different concepts analyzed throughout the article. Specifically, the proposed scenario consists of an Industry 5.0 smart workshop that seeks to improve operator safety and operation tracking. The application case makes use of a mist computing architecture composed of AI-enabled IoT nodes. After describing the application case, its energy consumption is evaluated, and the impact it may have on the carbon footprint of different countries is analyzed. Overall, this article provides guidelines that will help future developers face the challenges that will arise when creating the next generation of Edge-AI G-IoT systems.
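The country-dependent carbon analysis mentioned here boils down to a simple chain: node power → fleet energy per year → grid carbon intensity of the deployment country. A sketch of that arithmetic, with deliberately made-up intensity values (real figures vary by country and year and are not taken from the paper):

```python
# Illustrative grid carbon intensities in g CO2-eq per kWh.
# These round numbers are placeholders, NOT real national figures.
CARBON_INTENSITY = {"country_a": 100, "country_b": 400}

def annual_footprint_kg(node_power_w, n_nodes, intensity_g_per_kwh):
    """Yearly CO2-eq footprint (kg) of a fleet of always-on IoT nodes:
    power (W) x nodes x hours/year -> kWh, then scaled by the grid's
    carbon intensity."""
    energy_kwh = node_power_w * n_nodes * 24 * 365 / 1000.0
    return energy_kwh * intensity_g_per_kwh / 1000.0
```

The same Edge-AI workload can thus have a footprint several times larger depending solely on where it is deployed, which is why the article evaluates the use case against different countries' grids.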
198
Sun M, Bao T, Xie D, Lv H, Si G. Towards Application-Driven Task Offloading in Edge Computing Based on Deep Reinforcement Learning. Micromachines (Basel) 2021; 12:1011. [PMID: 34577655 DOI: 10.3390/mi12091011] [Citation(s) in RCA: 3] [Received: 06/20/2021] [Revised: 08/14/2021] [Accepted: 08/16/2021] [Indexed: 11/17/2022]
Abstract
Edge computing is a new paradigm that provides storage, computing, and network resources between the traditional cloud data center and terminal devices. In this paper, we concentrate on the application-driven task offloading problem in edge computing, considering the strong dependencies among sub-tasks for multiple users. Our objective is to jointly optimize the total delay and energy consumption generated by applications while guaranteeing the quality of service for users. First, we formulate the problem for application-driven tasks in edge computing by jointly considering delays and energy consumption. Based on that, we propose a novel Application-driven Task Offloading Strategy (ATOS) based on deep reinforcement learning, which adds a preliminary sorting mechanism to realize the joint optimization. Specifically, we analyze the characteristics of application-driven tasks and propose a heuristic algorithm that introduces a new factor to determine the processing order of parallel sub-tasks. Finally, extensive experiments validate the effectiveness and reliability of the proposed algorithm: compared with the baseline strategies, ATOS reduces the total cost by up to 64.5% on average.
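The preliminary sorting mechanism can be pictured as a dependency-respecting priority order: among the sub-tasks whose predecessors have finished, start the one with the largest cost factor first. The sketch below is an illustrative stand-in by the editor; ATOS's real factor combines the paper's delay and energy terms and feeds a DRL agent:

```python
import heapq

def schedule(tasks, deps, cost):
    """Topologically order dependent sub-tasks, always starting the
    ready sub-task with the largest cost factor first.
    tasks: list of task ids; deps: {task: [prerequisite tasks]};
    cost: {task: cost factor}."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for t, parents in deps.items():
        for p in parents:
            indeg[t] += 1
            children[p].append(t)
    # Max-priority via negated cost on a min-heap of ready tasks.
    ready = [(-cost[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                heapq.heappush(ready, (-cost[c], c))
    return order
```

The ordering produced this way never violates a dependency, while expensive parallel sub-tasks are dispatched as early as possible.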
199
Huang K, Li C, Zhang J, Wang B. Cascade and Fusion: A Deep Learning Approach for Camouflaged Object Sensing. Sensors (Basel) 2021; 21:5455. [PMID: 34450897 PMCID: PMC8400738 DOI: 10.3390/s21165455] [Citation(s) in RCA: 3] [Received: 07/13/2021] [Revised: 07/29/2021] [Accepted: 08/08/2021] [Indexed: 11/16/2022]
Abstract
The demand for sensor-based detection of camouflaged objects is widespread in biological research, remote sensing, and military applications. However, the performance of traditional object detection algorithms is limited, as they are incapable of extracting informative parts from low signal-to-noise-ratio features. To address this problem, we propose Camouflaged Object Detection with Cascade and Feedback Fusion (CODCEF), a deep learning framework based on an RGB optical sensor that leverages a cascaded structure with Feedback Partial Decoders (FPD) instead of a traditional encoder-decoder structure. Through a selective fusion strategy and a feedback loop, FPD reduces the loss of information and the interference of noise in the process of feature interweaving. Furthermore, we introduce the Pixel Perception Fusion (PPF) loss, which pays more attention to local pixels that might become the edges of an object. Experimental results on an edge device show that CODCEF achieves competitive results compared with 10 state-of-the-art methods.
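The idea of a loss that "pays more attention to pixels that might become edges" can be conveyed by an edge-weighted binary cross-entropy: pixels adjacent to a label transition get a larger weight. This captures only the flavor of a pixel-level, edge-aware loss like PPF, not the paper's exact formulation (shown on a 1-D pixel row for brevity):

```python
import math

def edge_weighted_bce(pred, target, edge_weight=5.0):
    """Weighted binary cross-entropy over a row of pixels.
    A pixel whose neighbor has a different label (a candidate object
    edge) contributes edge_weight times more than an interior pixel."""
    n = len(pred)
    total = wsum = 0.0
    for i, (p, t) in enumerate(zip(pred, target)):
        near_edge = (i > 0 and target[i - 1] != t) or \
                    (i < n - 1 and target[i + 1] != t)
        w = edge_weight if near_edge else 1.0
        p = min(max(p, 1e-7), 1 - 1e-7)  # clamp for numerical safety
        total += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
        wsum += w
    return total / wsum
```

During training, errors concentrated along object boundaries are penalized more heavily, steering the network toward sharper contours — the hardest part of camouflaged object segmentation.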
Affiliation(s)
- Kaihong Huang
- Department of Computer Science and Engineering, Southeast University, Nanjing 211189, China;
- Chunshu Li
- Department of Artificial Intelligence, Southeast University, Nanjing 211189, China;
- Jiaqi Zhang
- Department of Computer Science, Brown University, Providence, RI 02860, USA;
- Beilun Wang
- Department of Artificial Intelligence, Southeast University, Nanjing 211189, China;
- Correspondence:
200
Lapegna M, Balzano W, Meyer N, Romano D. Clustering Algorithms on Low-Power and High-Performance Devices for Edge Computing Environments. Sensors (Basel) 2021; 21:5395. [PMID: 34450837 DOI: 10.3390/s21165395] [Citation(s) in RCA: 5] [Received: 06/09/2021] [Revised: 07/28/2021] [Accepted: 08/07/2021] [Indexed: 11/16/2022]
Abstract
The synergy between Artificial Intelligence and the Edge Computing paradigm promises to transfer decision-making processes to the periphery of sensor networks without the involvement of central data servers. For this reason, we have recently witnessed an impetuous development of devices that integrate sensors and computing resources on a single board to process data directly at the point of collection. Because of the particular context in which they are used, the main feature of these boards is reduced energy consumption, even though they do not offer absolute computing power comparable to modern high-end CPUs. Among the most popular Artificial Intelligence techniques, clustering algorithms are practical tools for discovering correlations or affinities within data collected in large datasets, but a parallel implementation is an essential requirement because of their high computational cost. Therefore, in the present work, we investigate how to implement clustering algorithms on parallel, low-energy devices for edge computing environments. In particular, we present experiments on two devices with different features: the quad-core UDOO X86 Advanced+ board and the GPU-based NVIDIA Jetson Nano board, evaluating them from the performance and energy consumption points of view. The experiments show that they realize a more favorable trade-off between these two requirements than other high-end computing devices.
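The kind of clustering kernel one would port to such boards is plain Lloyd's k-means: an assignment step (each point to its nearest centroid, trivially parallel across points/cores) and an update step (centroid means). A dependency-free sketch by the editor, not the paper's implementation:

```python
def kmeans(points, k, iters=20):
    """Lloyd's k-means on lists of coordinate tuples. The assignment
    loop over points is the part that maps naturally onto the cores of
    a low-power multicore board or the threads of an embedded GPU."""
    # Simple seeding: first and last points for k=2, else the first k.
    centroids = [points[0], points[-1]] if k == 2 else list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assignment step (parallelizable per point)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            groups[nearest].append(p)
        for i, g in enumerate(groups):  # update step
            if g:
                centroids[i] = tuple(sum(col) / len(g) for col in zip(*g))
    return centroids
```

On a board like the Jetson Nano, the per-point distance computations would be offloaded to the GPU, which is where the favorable energy/performance trade-off measured in the paper comes from.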