201
Clustering Algorithms on Low-Power and High-Performance Devices for Edge Computing Environments. SENSORS 2021; 21:s21165395. [PMID: 34450837 PMCID: PMC8397962 DOI: 10.3390/s21165395] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 06/09/2021] [Revised: 07/28/2021] [Accepted: 08/07/2021] [Indexed: 11/16/2022]
Abstract
The synergy between Artificial Intelligence and the Edge Computing paradigm promises to transfer decision-making processes to the periphery of sensor networks without the involvement of central data servers. For this reason, we have recently witnessed a rapid development of devices that integrate sensors and computing resources in a single board to process data directly at the collection site. Because of the particular contexts where they are used, the main feature of these boards is reduced energy consumption, even though their raw computing power is not comparable to that of modern high-end CPUs. Among the most popular Artificial Intelligence techniques, clustering algorithms are practical tools for discovering correlations or affinities within data collected in large datasets, but a parallel implementation is an essential requirement because of their high computational cost. Therefore, in the present work, we investigate how to implement clustering algorithms on parallel, low-energy devices for edge computing environments. In particular, we present experiments on two devices with different features: the quad-core UDOO X86 Advanced+ board and the GPU-based NVIDIA Jetson Nano board, evaluating them from both the performance and the energy consumption points of view. The experiments show that they achieve a more favorable trade-off between these two requirements than other high-end computing devices.
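For readers unfamiliar with the workload being benchmarked, the sketch below shows a minimal k-means loop in NumPy. It is illustrative only: the paper's actual parallel implementations for the UDOO X86 and Jetson Nano are not reproduced here, and the vectorized assignment step merely hints at where data-parallel hardware helps.

import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assignment step: distances from every point to every center.
        # This is the data-parallel hot spot that GPUs and multicore CPUs exploit.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid becomes the mean of its members.
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(np.random.rand(1000, 3), k=4)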
202
Privacy-Preserving and Lightweight Selective Aggregation with Fault-Tolerance for Edge Computing-Enhanced IoT. SENSORS 2021; 21:s21165369. [PMID: 34450808 PMCID: PMC8398313 DOI: 10.3390/s21165369] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/12/2021] [Revised: 07/23/2021] [Accepted: 08/06/2021] [Indexed: 11/16/2022]
Abstract
Edge computing has been introduced to the Internet of Things (IoT) to meet the requirements of IoT applications. At the same time, data aggregation is widely used in data processing to reduce the communication overhead and energy consumption in IoT. Most existing schemes aggregate the overall data without filtering. In addition, aggregation schemes face substantial challenges, such as preserving the privacy of individual IoT devices' data and meeting fault-tolerance and lightweight requirements. In this paper, we present a privacy-preserving and lightweight selective aggregation scheme with fault tolerance (PLSA-FT) for edge computing-enhanced IoT. In PLSA-FT, selective aggregation is achieved by constructing Boolean responses and numerical responses according to specific query conditions of the cloud center. Furthermore, we modified the basic Paillier homomorphic encryption to guarantee data privacy and to tolerate malfunctions of IoT devices. An online/offline signature mechanism is utilized to reduce computation costs. The system characteristic analyses prove that the PLSA-FT scheme achieves confidentiality, privacy preservation, source authentication, integrity verification, fault tolerance, and dynamic membership management. Moreover, performance evaluation results show that PLSA-FT is lightweight, with low computation costs and communication overheads.
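As background on why Paillier-style encryption suits aggregation, here is a toy additively homomorphic demo in pure Python. The tiny fixed primes are insecure and for illustration only; PLSA-FT's modified Paillier, selective queries, and fault-tolerance logic are not reproduced.

import math, random

p, q = 293, 433                       # toy primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
lam, g = math.lcm(p - 1, q - 1), p * q + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

readings = [17, 5, 42]                # per-device measurements
agg = 1
for m in readings:
    agg = (agg * enc(m)) % n2         # multiplying ciphertexts adds plaintexts
assert dec(agg) == sum(readings)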
203
Application of Edge Computing Technology in Hydrological Spatial Analysis and Ecological Planning. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18168382. [PMID: 34444132 PMCID: PMC8394889 DOI: 10.3390/ijerph18168382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/29/2021] [Revised: 07/28/2021] [Accepted: 08/04/2021] [Indexed: 12/03/2022]
Abstract
The process of rapid urbanization causes many water security issues, such as urban waterlogging, environmental water pollution, and water shortages. It is, therefore, necessary to integrate a variety of theories, methods, measures, and means to conduct ecological problem diagnosis, ecological function demand assessment, and ecological security pattern planning. Here, Edge Computing (EC) technology is applied to analyze the hydrological spatial structure characteristics and the ecological planning method of waterfront green space. First, various information is collected and scientifically analyzed around the core element of ecological planning: water. Then, in-depth research is conducted on previous hydrological spatial analysis methods to identify their defects. Subsequently, given these defects, EC technology is introduced to design a bottom-up overall architecture of an intelligent ecological planning gateway, which is divided into field devices, the EC intelligent planning gateway, the transmission system, and the cloud processing platform. Finally, the performance of the overall architecture of the intelligent ecological planning gateway is tested. The study aims to optimize the performance of the hydrological spatial analysis and ecological planning methods in Xianglan town of Jiamusi city. The results show that the system supports flood-control safety planning and the analysis of water-source pollution. Additionally, using EC technology, the system can predict the composition and dosage of the treatment agents that need to be added for sludge and pollutant treatment, based on pollutant types and hydrological characteristics, thereby protecting the public health of residents near water sources. Compared with previous hydrological spatial analysis and ecological planning methods, the system is more scientific, efficient, and expandable. The results provide a technical basis for research in related fields.
204
Residential Water Meters as Edge Computing Nodes: Disaggregating End Uses and Creating Actionable Information at the Edge. SENSORS 2021; 21:s21165310. [PMID: 34450752 PMCID: PMC8399262 DOI: 10.3390/s21165310] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 07/13/2021] [Revised: 08/02/2021] [Accepted: 08/03/2021] [Indexed: 11/16/2022]
Abstract
We present a new, open source, computationally capable datalogger for collecting and analyzing high temporal resolution residential water use data. Using this device, execution of water end use disaggregation algorithms or other data analytics can be performed directly on existing, analog residential water meters without disrupting their operation, effectively transforming existing water meters into smart, edge computing devices. Computation of water use summaries and classified water end use events directly on the meter minimizes data transmission requirements, reduces requirements for centralized data storage and processing, and reduces latency between data collection and generation of decision-relevant information. The datalogger couples an Arduino microcontroller board for data acquisition with a Raspberry Pi computer that serves as a computational resource. The computational node was developed and calibrated at the Utah Water Research Laboratory (UWRL) and was deployed for testing on the water meter for a single-family residential home in Providence City, UT, USA. Results from field deployments are presented to demonstrate the data collection accuracy, computational functionality, power requirements, communication capabilities, and applicability of the system. The computational node’s hardware design and software are open source, available for potential reuse, and can be adapted to specific research needs.
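As a flavor of the on-meter computation involved, the sketch below converts raw pulse counts into flow rates. The pulse weight and window length are hypothetical; the actual open-source firmware for the Arduino/Raspberry Pi datalogger differs in detail.

LITERS_PER_PULSE = 0.0417   # assumed meter resolution (hypothetical)
WINDOW_S = 4                # aggregation window in seconds (hypothetical)

def flow_rates_lpm(pulse_counts):
    # pulses per window -> liters per minute, computable on the meter itself
    return [c * LITERS_PER_PULSE * (60 / WINDOW_S) for c in pulse_counts]

print(flow_rates_lpm([0, 3, 12, 11, 2]))   # e.g., a short water-use event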
205
Suitability of NB-IoT for Indoor Industrial Environment: A Survey and Insights. SENSORS 2021; 21:5284. [PMID: 34450725 PMCID: PMC8399864 DOI: 10.3390/s21165284] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 06/16/2021] [Revised: 07/22/2021] [Accepted: 07/28/2021] [Indexed: 11/17/2022]
Abstract
The Internet of Things (IoT) and its applications in industrial settings are set to bring in the fourth industrial revolution. The industrial environment, consisting of high-profile manufacturing plants and a variety of equipment, is inherently characterized by high reflectiveness, causing significant multi-path components that affect the propagation of wireless communications, a challenge among others that needs to be resolved. This paper provides a detailed insight into Narrow-Band IoT (NB-IoT), Industrial IoT (IIoT), and Wireless Sensor Networks (WSN) within the context of indoor industrial environments. It presents the applications of NB-IoT in industrial settings, as well as the challenges associated with these applications. Furthermore, future research directions are put forth in the areas of NB-IoT network management using self-organizing network (SON) technology, edge computing for scalability enhancement, security of NB-IoT-generated data, and a suitable propagation model for reliable wireless communications.
206
Deep Q-Learning and Preference Based Multi-Agent System for Sustainable Agricultural Market. SENSORS 2021; 21:s21165276. [PMID: 34450717 PMCID: PMC8402225 DOI: 10.3390/s21165276] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 07/08/2021] [Revised: 07/28/2021] [Accepted: 07/30/2021] [Indexed: 11/20/2022]
Abstract
Yearly population growth will lead to a significant increase in agricultural production in the coming years. Twenty-first-century agricultural producers will face the challenge of achieving food security and efficiency. This must be achieved while ensuring sustainable agricultural systems and overcoming the problems posed by climate change, depletion of water resources, and the potential for increased erosion and loss of productivity due to extreme weather conditions. These environmental consequences will directly affect the price-setting process. In view of price oscillations and the lack of transparent information for buyers, a multi-agent system (MAS) is presented in this article. It supports decision-making in the purchase of sustainable agricultural products. The proposed MAS consists of a system that supports the choice of a supplier on the basis of preference-based parameters aimed at measuring the sustainability of a supplier, and a deep Q-learning agent for forecasting prices on agricultural futures markets. Accordingly, different agri-environmental indicators (AEIs) have been considered, as well as the use of edge computing technologies to reduce the costs of data transfer to the cloud. The presented MAS combines price-setting optimization and user preferences in regard to accessing, filtering, and integrating information. The agents filter and fuse information relevant to a user according to supplier attributes and a dynamic environment. The results presented in this paper allow a user to choose the supplier that best suits their preferences, as well as to gain insight into price oscillations on agricultural futures markets through a deep Q-learning agent.
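To make the reinforcement-learning ingredient concrete, here is the tabular Q-learning update that deep Q-learning generalizes with a neural network. The toy environment and all constants are invented; the paper's actual deep Q-network for futures-price forecasting is not shown.

import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration

def step(s, a):                        # hypothetical market environment
    return (s + a + 1) % n_states, float(np.random.randn())

rng = np.random.default_rng(0)
s = 0
for _ in range(1000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Bellman update
    s = s2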
207
A Sensing System Based on Public Cloud to Monitor Indoor Environment of Historic Buildings. SENSORS 2021; 21:s21165266. [PMID: 34450715 PMCID: PMC8398254 DOI: 10.3390/s21165266] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 06/29/2021] [Revised: 07/25/2021] [Accepted: 07/30/2021] [Indexed: 11/16/2022]
Abstract
Monitoring the indoor environment of historic buildings helps to identify potential risks, provides guidelines for improving regular maintenance, and preserves cultural artifacts. However, most of the existing monitoring systems proposed for historic buildings are not designed for general digitization purposes, i.e., providing data for smart services that employ, e.g., artificial intelligence and machine learning. In addition, considering that preserving historic buildings is a long-term process that demands preventive maintenance, a monitoring system requires stable and scalable storage and computing resources. In this paper, a digitalization framework is proposed for the smart preservation of historic buildings. A sensing system following the architecture of this framework is implemented by integrating various advanced digitalization techniques, such as the Internet of Things, edge computing, and cloud computing. The sensing system realizes remote data collection, enables viewing of real-time and historical data, and provides the capability to perform real-time analysis to achieve preventive maintenance of historic buildings in future research. Field testing results show that the implemented sensing system has a 2% end-to-end loss rate when collecting data samples, and this loss rate can be decreased to 0.3%. The low loss rate indicates that the proposed sensing system has high stability and meets the requirements for long-term monitoring of historic buildings.
208
ESCOVE: Energy-SLA-Aware Edge-Cloud Computation Offloading in Vehicular Networks. SENSORS 2021; 21:s21155233. [PMID: 34372471 PMCID: PMC8347678 DOI: 10.3390/s21155233] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 05/31/2021] [Revised: 07/16/2021] [Accepted: 07/29/2021] [Indexed: 01/16/2023]
Abstract
The vehicular network is an emerging technology in the Intelligent Transportation era. The network provides mechanisms for running different applications, such as accident prevention, publishing and consuming services, and traffic flow management. In such scenarios, edge and cloud computing come into the picture to offload computation from vehicles that have limited processing capabilities. Optimizing the energy consumption of the edge and cloud servers becomes crucial. However, existing research efforts focus on either vehicle or edge energy optimization and do not account for the quality of service of vehicular applications. In this paper, we address this void by proposing a novel offloading algorithm, ESCOVE, which optimizes the energy of the edge-cloud computing platform. The proposed algorithm respects the Service Level Agreement (SLA) in terms of latency, processing time, and total execution time. The experimental results show that ESCOVE is a promising approach to saving energy while preserving SLAs compared with the state-of-the-art approach.
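The core decision ESCOVE-style algorithms make can be sketched as follows: among the servers whose estimated completion time meets the latency SLA, pick the one with the lowest energy cost. The cost model and every number below are invented for illustration.

SERVERS = [                    # (name, joules per megacycle, megacycles per ms)
    ("edge", 0.8, 2.0),
    ("cloud", 0.5, 8.0),
]
NET_DELAY_MS = {"edge": 5.0, "cloud": 60.0}

def offload(task_megacycles, sla_ms):
    feasible = []
    for name, jpm, speed in SERVERS:
        latency = NET_DELAY_MS[name] + task_megacycles / speed
        if latency <= sla_ms:                       # SLA check first
            feasible.append((jpm * task_megacycles, name, latency))
    return min(feasible) if feasible else None      # then cheapest energy

print(offload(60, sla_ms=80))   # cloud wins: meets the SLA at lower energy
print(offload(60, sla_ms=50))   # only the edge meets the tighter SLA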
209
Wearable Edge AI Applications for Ecological Environments. SENSORS 2021; 21:5082. [PMID: 34372319 PMCID: PMC8347733 DOI: 10.3390/s21155082] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 05/28/2021] [Revised: 07/05/2021] [Accepted: 07/23/2021] [Indexed: 11/16/2022]
Abstract
Research on ecological environments helps to assess the impacts on forests and to manage them. The use of novel software and hardware technologies supports the solution of tasks related to this problem. In addition, the lack of connectivity for large data throughput raises the demand for edge-computing-based solutions toward this goal. Therefore, in this work, we evaluate the opportunity of using a wearable edge AI concept in a forest environment. To this end, we propose a new approach to the hardware/software co-design process. We also address the possibility of creating wearable edge AI, where wireless personal and body area networks are platforms for building applications using edge AI. Finally, we evaluate a case study to test the possibility of performing an edge AI task in a wearable-based environment. Thus, we evaluate the system's ability to achieve the desired task, the hardware resources and performance, and the network latency associated with each part of the process. Through this work, we validated both the design pattern review and the case study. In the case study, the developed algorithms could classify diseased leaves with circa 90% accuracy with the proposed technique in the field; these results could be improved in the laboratory with more modern models that reached up to 96% global accuracy. The system could also perform the desired tasks with a quality factor of 0.95, considering the usage of three devices. Finally, it detected a disease epicenter with an offset of circa 0.5 m in a 6 m × 6 m × 12 m space. These results support the use of the proposed methods in the targeted environment and the proposed changes to the co-design pattern.
210
Towards Smart Home Automation Using IoT-Enabled Edge-Computing Paradigm. SENSORS 2021; 21:s21144932. [PMID: 34300671 PMCID: PMC8309767 DOI: 10.3390/s21144932] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Received: 05/21/2021] [Revised: 07/14/2021] [Accepted: 07/15/2021] [Indexed: 11/16/2022]
Abstract
Smart home applications are ubiquitous and have gained popularity due to the overwhelming use of Internet of Things (IoT)-based technology. The revolution in technologies has made homes more convenient, efficient, and even more secure. Advancement in smart home technology is necessary due to the scarcity of intelligent home applications that cater to several aspects of the home simultaneously, i.e., automation, security, safety, and reduced energy consumption, using less bandwidth, computation, and cost. Our research work provides a solution to these problems by deploying a smart home automation system with the applications mentioned above over a resource-constrained Raspberry Pi (RPI) device. The RPI is used as a central controlling unit, which provides a cost-effective platform for interconnecting a variety of devices and various sensors in a home via the Internet. We propose a cost-effective integrated system for the smart home based on the IoT and edge-computing paradigm. The proposed system provides remote and automatic control of home appliances, ensuring security and safety. Additionally, the proposed solution uses the edge-computing paradigm to store sensitive data in a local cloud to preserve the customer's privacy. Moreover, visual and scalar sensor-generated data are processed and held on the edge device (RPI) to reduce bandwidth, computation, and storage costs. In comparison with state-of-the-art solutions, the proposed system is 5% faster in detecting motion and 5 ms and 4 ms faster in switching a relay on and off, respectively. It is also 6% more efficient than the existing solutions with respect to energy consumption.
211
Analysis of Machine Learning Algorithms for Anomaly Detection on Edge Devices. SENSORS 2021; 21:4946. [PMID: 34300686 PMCID: PMC8309800 DOI: 10.3390/s21144946] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 06/01/2021] [Revised: 07/11/2021] [Accepted: 07/16/2021] [Indexed: 11/16/2022]
Abstract
The Internet of Things (IoT) consists of small devices or networks of sensors, which permanently generate huge amounts of data. Usually, they have limited resources, either computing power or memory, which means that raw data are transferred to central systems or the cloud for analysis. Lately, the idea of moving intelligence to the IoT is becoming feasible, with machine learning (ML) moved to edge devices. The aim of this study is to provide an experimental analysis of processing a large imbalanced dataset (DS2OS), split into a training dataset (80%) and a test dataset (20%). The training dataset was reduced by randomly selecting a smaller number of samples to create new datasets Di (i = 1, 2, 5, 10, 15, 20, 40, 60, 80%). Afterwards, they were used with several machine learning algorithms to identify the size at which the performance metrics saturate and classification results stop improving, with an F1 score equal to 0.95 or higher, which happened at 20% of the training dataset. Further on, two solutions for reducing the number of samples to provide a balanced dataset are given. In the first, datasets DRi consist of all anomalous samples in seven classes and a reduced majority class ('NL') with i = 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20 percent of randomly selected samples. In the second, datasets DCi are generated from representative samples determined with clustering from the training dataset. All three dataset reduction methods showed comparable performance results. Further evaluation of training times and memory usage on a Raspberry Pi 4 shows that it is possible to run ML algorithms with datasets of limited size on edge devices.
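The experiment's core loop can be sketched with scikit-learn: train on growing random fractions of the training set and watch the macro F1 score saturate. Synthetic data stands in for DS2OS here, so the saturation point will differ.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_classes=3, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)
for frac in (0.01, 0.05, 0.2, 1.0):            # growing training fractions
    idx = rng.choice(len(X_tr), int(frac * len(X_tr)), replace=False)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_tr[idx], y_tr[idx])
    print(frac, round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))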
212
Method to Increase Dependability in a Cloud-Fog-Edge Environment. SENSORS 2021; 21:s21144714. [PMID: 34300454 PMCID: PMC8309580 DOI: 10.3390/s21144714] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 06/11/2021] [Revised: 06/28/2021] [Accepted: 07/07/2021] [Indexed: 11/23/2022]
Abstract
Robots can be very different, from humanoids to intelligent self-driving cars or just IoT systems that collect and process local sensors' information. This paper presents a way to increase the dependability of information exchange and processing in systems with Cloud-Fog-Edge architectures. In an ideal interconnected world, recognized and registered robots must be able to communicate with each other if they are close enough, or through the Fog access points, without overloading the Cloud. In essence, the presented work addresses the Edge area and how devices can communicate in a safe and secure environment using cryptographic methods for structured systems. The presented work emphasizes the importance of security in a system's dependability and offers a communication mechanism for several robots without overburdening the Cloud. This solution is ideal where various monitoring and control aspects demand extra degrees of safety. The extra private keys employed by this procedure further increase algorithmic complexity, limiting the probability that the method may be broken by brute-force or systematic attacks.
213
TIP4.0: Industrial Internet of Things Platform for Predictive Maintenance. SENSORS 2021; 21:s21144676. [PMID: 34300415 PMCID: PMC8309552 DOI: 10.3390/s21144676] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 06/15/2021] [Revised: 07/02/2021] [Accepted: 07/04/2021] [Indexed: 11/16/2022]
Abstract
Industry 4.0, allied with the growth and democratization of Artificial Intelligence (AI) and the advent of the IoT, is paving the way for the complete digitization and automation of industrial processes. Maintenance is one of these processes, where the introduction of a predictive approach, as opposed to traditional techniques, is expected to considerably improve industrial maintenance strategies, with gains such as reduced downtime, improved equipment effectiveness, lower maintenance costs, increased return on assets, risk mitigation, and, ultimately, profitable growth. With predictive maintenance, dedicated sensors monitor the critical points of assets. The sensor data then feed into machine learning algorithms that can infer the asset health status and inform operators and decision-makers. With this in mind, in this paper, we present TIP4.0, a platform for predictive maintenance based on a modular software solution for edge computing gateways. TIP4.0 is built around Yocto, which makes it readily available and compliant with Commercial Off-the-Shelf (COTS) or proprietary hardware. TIP4.0 was conceived with an industry mindset, with communication interfaces that allow it to serve sensor networks on the shop floor and a modular software architecture that allows it to be easily adjusted to new deployment scenarios. To showcase its potential, the TIP4.0 platform was validated over COTS hardware, and we considered a public dataset for the simulation of predictive maintenance scenarios. We used a Convolutional Neural Network (CNN) architecture, which provided competitive performance relative to state-of-the-art approaches while being approximately four times and two times faster than uncompressed model inference on the Central Processing Unit (CPU) and Graphics Processing Unit (GPU), respectively. These results highlight the capabilities of distributed large-scale edge computing over industrial scenarios.
214
Intelligent Platform Based on Smart PPE for Safety in Workplaces. SENSORS 2021; 21:s21144652. [PMID: 34300392 PMCID: PMC8309589 DOI: 10.3390/s21144652] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 04/30/2021] [Revised: 06/29/2021] [Accepted: 06/30/2021] [Indexed: 11/16/2022]
Abstract
It is estimated that we spend one-third of our lives at work. It is therefore vital to adapt the traditional equipment and systems used in the working environment to the new technological paradigm so that industry is connected and, at the same time, workers are as safe and protected as possible. Thanks to Smart Personal Protective Equipment (PPE) and wearable technologies, information about workers and their environment can be extracted to reduce the rate of accidents and occupational illness, leading to a significant improvement. This article proposes an architecture that employs three pieces of PPE: a helmet, a bracelet, and a belt, which process the collected information using artificial intelligence (AI) techniques through edge computing. The proposed system guarantees workers' safety and integrity through the early prediction and notification of anomalies detected in their environment. Models such as convolutional neural networks, long short-term memory networks, and Gaussian models were combined by interpreting the information as a graph, in which different heuristics were used to weight the outputs as a whole; finally, a support vector machine weighted the votes of the models, achieving an area under the curve of 0.81.
215
Gait-Based Implicit Authentication Using Edge Computing and Deep Learning for Mobile Devices. SENSORS 2021; 21:s21134592. [PMID: 34283149 PMCID: PMC8271781 DOI: 10.3390/s21134592] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 05/25/2021] [Revised: 06/28/2021] [Accepted: 07/02/2021] [Indexed: 11/16/2022]
Abstract
Implicit authentication mechanisms are expected to prevent security and privacy threats for mobile devices by using behavior modeling. However, researchers have recently demonstrated that the performance of behavioral biometrics is insufficiently accurate. Furthermore, the unique characteristics of mobile devices, such as limited storage and energy, constrain their capacity for data collection and processing. In this paper, we propose an implicit authentication architecture based on edge computing, coined Edge computing-based mobile Device Implicit Authentication (EDIA), which exploits edge-based gait biometric identification using a deep learning model to authenticate users. The gait data captured by a device's accelerometer and gyroscope sensors are utilized as the input of our optimized model, which consists of a CNN and an LSTM in tandem. In particular, we extract the features of the gait signal in a two-dimensional domain by converting the original signal into an image, which is then fed into our network. In addition, to reduce the computation overhead of mobile devices, the model for implicit authentication is generated on the cloud server, while the user authentication process takes place on the edge devices. We evaluate the performance of EDIA under different scenarios, and the results show that (i) EDIA achieves a true positive rate of 97.77% and a false positive rate of 2%; and (ii) EDIA still reaches high accuracy with a limited dataset size.
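The signal-to-image idea can be sketched as follows: slice the 1-D accelerometer stream to a fixed length and reshape it into a normalized 2-D array that a CNN can consume. The exact encoding EDIA uses is not reproduced; this is only the general pattern.

import numpy as np

def to_image(signal, rows=32, cols=32):
    window = np.resize(signal, rows * cols)     # crop/repeat to rows*cols samples
    img = window.reshape(rows, cols).astype(float)
    span = np.ptp(img) + 1e-9                   # avoid division by zero
    return (img - img.min()) / span             # grayscale image in [0, 1]

stream = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
print(to_image(stream).shape)                   # (32, 32)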
216
Deep-Framework: A Distributed, Scalable, and Edge-Oriented Framework for Real-Time Analysis of Video Streams. SENSORS 2021; 21:s21124045. [PMID: 34208327 PMCID: PMC8231160 DOI: 10.3390/s21124045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/12/2021] [Revised: 06/08/2021] [Accepted: 06/09/2021] [Indexed: 11/17/2022]
Abstract
Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most of the recent advances regarding the extraction of information from images and video rely on computation heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resources allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks and also provides high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running on browsers or any other web-based platform.
217
Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge. SENSORS 2021; 21:3992. [PMID: 34207851 PMCID: PMC8227618 DOI: 10.3390/s21123992] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 05/26/2021] [Revised: 06/07/2021] [Accepted: 06/07/2021] [Indexed: 11/29/2022]
Abstract
There have been significant advances regarding target detection in the autonomous vehicle context. To develop more robust systems that can overcome weather hazards as well as sensor problems, the sensor fusion approach is taking the lead in this context. Laser Imaging Detection and Ranging (LiDAR) and camera sensors are two of the most used sensors for this task, since they can accurately provide important features such as the target's depth and shape. However, most of the current state-of-the-art target detection algorithms for autonomous cars do not take into consideration the hardware limitations of the vehicle, such as its reduced computing power in comparison with cloud servers, or the need for reduced latency. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support, due to their computing capabilities for machine learning algorithms as well as their reduced power consumption. We developed an accurate and small target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained while reducing the memory consumption as well as the latency of the system on the challenging KITTI dataset.
218
Fuzzy-Based Microservice Resource Management Platform for Edge Computing in the Internet of Things. SENSORS 2021; 21:s21113800. [PMID: 34072637 PMCID: PMC8197891 DOI: 10.3390/s21113800] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/13/2021] [Revised: 05/26/2021] [Accepted: 05/28/2021] [Indexed: 11/16/2022]
Abstract
Edge computing exhibits the advantages of real-time operation, low latency, and low network cost. It has become a key technology for realizing smart Internet of Things applications. Microservices are being used by an increasing number of edge computing networks because of their sufficiently small code, reduced program complexity, and flexible deployment. However, edge computing has more limited resources than cloud computing, and thus edge computing networks have higher requirements for the overall resource scheduling of running microservices. Accordingly, the resource management of microservice applications in edge computing networks is a crucial issue. In this study, we developed and implemented a microservice resource management platform for edge computing networks. We designed a fuzzy-based microservice computing resource scaling (FMCRS) algorithm that can dynamically control the resource expansion scale of microservices, and we proposed and implemented two microservice resource expansion methods based on the resource usage of edge network computing nodes. We conducted an experimental analysis in six scenarios, and the results proved that the designed platform can reduce the response time of microservice resource adjustments and dynamically expand microservices horizontally and vertically. Compared with other state-of-the-art microservice resource management methods, FMCRS can reduce sudden surges in overall network resource allocation, and thus it is more suitable for the edge computing microservice management environment.
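A toy fuzzy scaling rule conveys the flavor of such an algorithm; the membership functions, thresholds, and rule base below are invented, not FMCRS's actual design.

def high(u):   # membership of 'high utilization' on [0, 1]
    return max(0.0, min(1.0, (u - 0.5) / 0.5))

def low(u):    # membership of 'low utilization' on [0, 1]
    return max(0.0, min(1.0, (0.5 - u) / 0.5))

def scale_decision(cpu_util, mem_util):
    pressure = max(high(cpu_util), high(mem_util))
    slack = min(low(cpu_util), low(mem_util))
    if pressure > 0.6:
        return +1          # add a replica (scale out)
    if slack > 0.6:
        return -1          # remove a replica (scale in)
    return 0               # keep the current scale

print(scale_decision(0.9, 0.4))   # +1
print(scale_decision(0.1, 0.1))   # -1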
219
Blockchain-Enabled Asynchronous Federated Learning in Edge Computing. SENSORS 2021; 21:s21103335. [PMID: 34064942 PMCID: PMC8151195 DOI: 10.3390/s21103335] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 03/12/2021] [Revised: 04/23/2021] [Accepted: 05/04/2021] [Indexed: 11/23/2022]
Abstract
The fast proliferation of edge computing devices brings an increasing growth of data, which directly promotes the development of machine learning (ML) technology. However, privacy issues during data collection for ML tasks raise extensive concerns. To solve this issue, synchronous federated learning (FL) has been proposed, which enables central servers and end devices to maintain the same ML models by exchanging only model parameters. However, the diversity of computing power and data sizes leads to significant differences in local training data consumption, and thereby causes the inefficiency of FL. Besides, the centralized processing of FL is vulnerable to single-point failure and poisoning attacks. Motivated by this, we propose an innovative method, federated learning with asynchronous convergence (FedAC), which considers a staleness coefficient and uses a blockchain network instead of the classic central server to aggregate the global model. This avoids real-world issues such as interruption by abnormal local device training failures, dedicated attacks, etc. We implemented the proposed method on a real-world dataset, MNIST, and, in comparison with baseline models, achieved accuracy rates of 98.96% and 95.84% in the horizontal and vertical FL modes, respectively. Extensive evaluation results show that FedAC outperforms most existing models.
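The staleness idea can be sketched in a few lines: an asynchronous update is blended into the global model with a weight that shrinks as the update gets older. The discount form below is illustrative, not the paper's exact coefficient, and the blockchain aggregation step is omitted.

import numpy as np

def aggregate(global_w, update_w, staleness, base_lr=0.5):
    alpha = base_lr / (1.0 + staleness)     # older updates count for less
    return (1 - alpha) * global_w + alpha * update_w

w = np.zeros(4)
for update, staleness in [(np.ones(4), 0), (2 * np.ones(4), 3)]:
    w = aggregate(w, update, staleness)
print(w)    # the stale update moved the model less per unit of change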
220
Abstract
Wearable devices are a fast-growing technology with impact on personal healthcare for both society and the economy. Due to the widespread deployment of sensors in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital in future smart wearable devices. The visioning and forecasting of how to bring computation to the edge in smart sensors have already begun, with an aspiration to provide adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions toward smart wearable devices that can provide guidance to research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline in which prospective low-power and low-latency scenarios of wearable sensors on neuromorphic platforms are expected. We successively describe vital potential landscapes of neuromorphic processors exploiting complementary metal-oxide semiconductors (CMOS) and emerging memory technologies (e.g., memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. We additionally investigate the challenges beyond neuromorphic computing hardware, algorithms, and devices that could impede the enhancement of adaptive edge computing in smart wearable devices.
221
Spiking Neural Network with Linear Computational Complexity for Waveform Analysis in Amperometry. SENSORS 2021; 21:s21093276. [PMID: 34068538 PMCID: PMC8125990 DOI: 10.3390/s21093276] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/15/2021] [Revised: 05/03/2021] [Accepted: 05/08/2021] [Indexed: 11/17/2022]
Abstract
The paper describes the architecture of a Spiking Neural Network (SNN) for time-waveform analysis using edge computing. The network model was based on the principles of signal preprocessing in the diencephalon and uses the tonic spiking and inhibition-induced spiking models typical of the thalamus area. The research focused on a significant reduction of the complexity of the SNN algorithm by eliminating most synaptic connections and ensuring zero dispersion of weight values for connections between neuron layers. The paper describes a network mapping and learning algorithm in which the number of variables in the learning process is linearly dependent on the size of the patterns. The work included testing the stability of the accuracy parameter for various network sizes. The described approach uses the ability of spiking neurons to process currents of less than 100 pA, typical of amperometric techniques. An example of a practical application is the analysis of vesicle fusion signals using an amperometric system based on Carbon NanoTube (CNT) sensors. The paper concludes with a discussion of the costs of implementing the network as a semiconductor structure.
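For readers unfamiliar with the neuron models mentioned, below is the standard Izhikevich model in its published tonic-spiking parameterization, simulated with simple Euler steps; it is background material, not the paper's network.

# Izhikevich neuron, tonic-spiking parameters (a, b, c, d) from the literature
a, b, c, d = 0.02, 0.2, -65.0, 6.0
v, u = -70.0, 0.2 * -70.0
dt, I = 0.25, 14.0            # time step (ms) and constant input current
spike_times = []
for step in range(4000):      # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:             # spike detected: reset membrane state
        spike_times.append(step * dt)
        v, u = c, u + d
print(len(spike_times), "spikes in 1000 ms")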
222
A Compact High-Quality Image Demosaicking Neural Network for Edge-Computing Devices. SENSORS 2021; 21:s21093265. [PMID: 34066794 PMCID: PMC8125912 DOI: 10.3390/s21093265] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 03/24/2021] [Revised: 05/03/2021] [Accepted: 05/05/2021] [Indexed: 11/16/2022]
Abstract
Image demosaicking is an essential and challenging problem and one of the most crucial steps of the image processing pipeline behind image sensors. Due to the rapid development of intelligent processors based on deep learning, several demosaicking methods based on convolutional neural networks (CNNs) have been proposed. However, because of their large numbers of model parameters, it is difficult for these networks to run in real time on edge computing devices. This paper presents a compact demosaicking neural network based on the UNet++ structure. The network inserts densely connected layer blocks and adopts Gaussian smoothing layers instead of down-sampling operations before the backbone network. The densely connected blocks can extract mosaic image features efficiently by utilizing the correlation between feature maps. Furthermore, the blocks adopt depthwise separable convolutions to reduce the model parameters; the Gaussian smoothing layer can expand the receptive fields without down-sampling the image size and discarding image information. The size constraints on the input and output images can also be relaxed, and the quality of the demosaicked images is improved. Experimental results show that the proposed network can improve the running speed by 42% compared with the fastest CNN-based method and achieve comparable reconstruction quality on four mainstream datasets. Moreover, when inference is carried out on the demosaicked images with typical deep CNNs, MobileNet v1 and SSD, the accuracy reaches 85.83% (top-5) and 75.44% (mAP), respectively, which is comparable to the existing methods. The proposed network has the highest computing efficiency and the lowest parameter count among all the compared methods, demonstrating that it is well suited for applications on modern edge computing devices.
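The parameter saving from depthwise separable convolutions is simple arithmetic, sketched below for one 3x3 layer; the channel counts are arbitrary examples, not the network's actual configuration.

def conv_params(cin, cout, k):
    return cin * cout * k * k            # standard convolution

def separable_params(cin, cout, k):
    return cin * k * k + cin * cout      # depthwise + 1x1 pointwise

cin, cout, k = 64, 64, 3
print(conv_params(cin, cout, k))         # 36864
print(separable_params(cin, cout, k))    # 4672, roughly 8x fewer parameters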
223
Edge deep learning for neural implants: a case study of seizure detection and prediction. J Neural Eng 2021; 18. [PMID: 33794507 DOI: 10.1088/1741-2552/abf473] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 12/01/2020] [Accepted: 04/01/2021] [Indexed: 11/12/2022]
Abstract
Objective. Implanted devices providing real-time neural activity classification and control are increasingly used to treat neurological disorders, such as epilepsy and Parkinson's disease. Classification performance is critical to identifying brain states appropriate for the therapeutic action (e.g., neural stimulation). However, advanced algorithms that have shown promise in offline studies, in particular deep learning (DL) methods, have not been deployed on resource-constrained neural implants. Here, we designed and optimized three DL models for edge deployment and evaluated their inference performance in a case study of seizure detection. Approach. A deep neural network (DNN), a convolutional neural network (CNN), and a long short-term memory (LSTM) network were designed and trained with TensorFlow to classify ictal, preictal, and interictal phases from the CHB-MIT scalp EEG database. A sliding-window-based weighted majority voting algorithm was developed to detect seizure events based on each DL model's classification results. After iterative model compression and coefficient quantization, the algorithms were deployed on a general-purpose, off-the-shelf microcontroller for real-time testing. Inference sensitivity, false positive rate (FPR), execution time, memory size, and power consumption were quantified. Main results. For seizure event detection, the sensitivity and FPR for the DNN, CNN, and LSTM models were 87.36%/0.169 h⁻¹, 96.70%/0.102 h⁻¹, and 97.61%/0.071 h⁻¹, respectively. Predicting seizures for early warnings was also feasible. The LSTM model achieved the best overall performance at the expense of the highest power. The DNN model achieved the shortest execution time. The CNN model showed advantages in balanced performance and power with the minimum memory requirement. The implemented model compression and quantization achieved significant savings of power and memory with an accuracy degradation of less than 0.5%. Significance. Inference with embedded DL models achieved performance comparable to many prior implementations that had no time or computational resource limitations. Generic microcontrollers can provide the required memory and computational resources, while model designs can be migrated to application-specific integrated circuits for further optimization and power saving. The results suggest that edge DL inference is a feasible option for future neural implants to improve classification performance and therapeutic outcomes.
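The sliding-window weighted majority vote can be sketched as below; the window length, weights, and threshold are hypothetical stand-ins for the paper's tuned values.

from collections import deque

def detect_events(labels, ictal=1, window=5, threshold=0.6):
    weights = [0.5, 0.75, 1.0, 1.25, 1.5]     # favor the most recent windows
    buf, events = deque(maxlen=window), []
    for t, lab in enumerate(labels):
        buf.append(lab)
        if len(buf) == window:
            score = sum(w for w, l in zip(weights, buf) if l == ictal)
            if score / sum(weights) >= threshold:
                events.append(t)              # seizure event flagged here
    return events

print(detect_events([0, 0, 1, 1, 0, 1, 1, 1, 0, 0]))   # -> [5, 6, 7, 8]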
224
Progressive Traffic-Oriented Resource Management for Reducing Network Congestion in Edge Computing. ENTROPY 2021; 23:e23050532. [PMID: 33925902 PMCID: PMC8146102 DOI: 10.3390/e23050532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/30/2021] [Revised: 04/21/2021] [Accepted: 04/23/2021] [Indexed: 11/24/2022]
Abstract
Edge computing can deliver network services with low latency and real-time processing by providing cloud services at the network edge. Edge computing has a number of advantages such as low latency, locality, and network traffic distribution, but the associated resource management has become a significant challenge because of its inherent hierarchical, distributed, and heterogeneous nature. Various cloud-based network services such as crowd sensing, hierarchical deep learning systems, and cloud gaming each have their own traffic patterns and computing requirements. To provide a satisfactory user experience for these services, resource management that comprehensively considers service diversity, client usage patterns, and network performance indicators is required. In this study, an algorithm that simultaneously considers computing resources and network traffic load when deploying servers that provide edge services is proposed. The proposed algorithm generates candidate deployments based on factors that affect traffic load, such as the number of servers, server location, and client mapping according to service characteristics and usage. A final deployment plan is then established using a partial vector bin packing scheme that considers both the generated traffic and computing resources in the network. The proposed algorithm is evaluated using several simulations that consider actual network service and device characteristics.
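As background, the following first-fit sketch shows the two-dimensional (compute, traffic) vector bin packing flavor of the deployment problem; capacities and demands are invented, and the paper's partial vector bin packing scheme adds logic not shown here.

def first_fit(items, capacity):
    bins = []
    for item in items:                        # item = (cpu demand, traffic demand)
        for b in bins:
            if all(u + d <= c for u, d, c in zip(b, item, capacity)):
                for i, d in enumerate(item):  # place item in the first bin that fits
                    b[i] += d
                break
        else:
            bins.append(list(item))           # open a new server/bin
    return bins

placements = first_fit([(0.4, 0.3), (0.5, 0.6), (0.3, 0.2), (0.6, 0.5)],
                       capacity=(1.0, 1.0))
print(len(placements), placements)            # 2 bins used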
225
Managing the Cloud Continuum: Lessons Learnt from a Real Fog-to-Cloud Deployment. SENSORS 2021; 21:s21092974. [PMID: 33922751 PMCID: PMC8123038 DOI: 10.3390/s21092974] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 04/01/2021] [Revised: 04/20/2021] [Accepted: 04/21/2021] [Indexed: 11/30/2022]
Abstract
The wide adoption of the recently coined fog and edge computing paradigms, alongside conventional cloud computing, creates a novel scenario, known as the cloud continuum, where services may benefit from the overall set of resources to optimize their execution. To operate successfully, such a cloud continuum scenario demands novel management strategies, enabling coordinated and efficient management of the entire set of resources, from the edge up to the cloud, designed in particular to address key edge characteristics such as mobility, heterogeneity, and volatility. The design of such a management framework poses many research challenges and has already prompted many initiatives worldwide at different levels. In this paper we present the results of one of these experiences, driven by an EU H2020 project, focusing on the lessons learnt from a real deployment of the proposed management solution in three different industrial scenarios. We believe that such a description may help in understanding the benefits brought by holistic cloud continuum management and may also help other initiatives in their design and development processes.
226
A Smart Home Energy Management System Using Two-Stage Non-Intrusive Appliance Load Monitoring over Fog-Cloud Analytics Based on Tridium's Niagara Framework for Residential Demand-Side Management. SENSORS 2021; 21:s21082883. [PMID: 33924090 PMCID: PMC8074283 DOI: 10.3390/s21082883] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 03/24/2021] [Revised: 04/12/2021] [Accepted: 04/19/2021] [Indexed: 11/16/2022]
Abstract
Electricity is a vital resource for various human activities, supporting customers' lifestyles in today's modern, technologically driven society. Effective demand-side management (DSM) can alleviate the ever-increasing electricity demands that arise from customers in the downstream sectors of a smart grid. Compared with traditional energy management systems, non-intrusive appliance load monitoring (NIALM) monitors relevant electrical appliances in a non-intrusive manner. Fog (edge) computing addresses the need to capture, process, and analyze data generated and gathered by Internet of Things (IoT) end devices, and it is an advanced IoT paradigm in which resources of a central data center acting as cloud computing, such as computing capability, are placed at the edge of the network. The literature has rarely addressed NIALM developed over fog-cloud computing and conducted as part of a home energy management system (HEMS). In this study, a smart HEMS (SHEMS) prototype based on Tridium's Niagara Framework® has been established over fog (edge)-cloud computing, where NIALM has also been investigated as an IoT application in energy management. The SHEMS prototype established in this study utilizes an artificial neural network-based NIALM approach to non-intrusively monitor relevant electrical appliances without an intrusive deployment of plug-load power meters (smart plugs), completing a two-stage NIALM approach. The core entity of the SHEMS prototype is a compact, cognitive, embedded IoT controller that connects IoT end devices, such as sensors and meters, and serves as a gateway in a smart house/smart building for residential DSM. As demonstrated and reported in this study, the established SHEMS prototype using the investigated two-stage NIALM approach is feasible and usable.
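The two-stage NIALM intuition can be sketched as follows: stage one detects on/off events as steps in the aggregate power signal, and stage two labels each event (the paper uses an ANN; a nearest-signature lookup stands in here). All appliance signatures and numbers are invented.

import numpy as np

SIGNATURES = {"kettle": 1800.0, "fridge": 120.0, "tv": 90.0}   # watts, invented

def detect_and_label(power, threshold=50.0):
    events = []
    for t, delta in enumerate(np.diff(power)):
        if abs(delta) > threshold:                    # stage 1: step detected
            name = min(SIGNATURES, key=lambda k: abs(SIGNATURES[k] - abs(delta)))
            events.append((t + 1, "on" if delta > 0 else "off", name))
    return events

power = np.array([100, 100, 1900, 1900, 1900, 100, 220, 220], dtype=float)
print(detect_and_label(power))   # kettle on/off, then a fridge-sized step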
227
Autonomous Marine Robot Based on AI Recognition for Permanent Surveillance in Marine Protected Areas. SENSORS 2021; 21:s21082664. [PMID: 33920075 PMCID: PMC8070409 DOI: 10.3390/s21082664] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 03/11/2021] [Revised: 03/30/2021] [Accepted: 04/07/2021] [Indexed: 11/30/2022]
Abstract
The world’s oceans are one of the most valuable sources of biodiversity and resources on the planet, although there are areas where the marine ecosystem is threatened by human activities. Marine protected areas (MPAs) are distinctive spaces protected by law due to their unique characteristics, such as being the habitat of endangered marine species. Even with this protection, there are still illegal activities, such as poaching or anchoring, that threaten the survival of different marine species. In this context, we propose an autonomous surface vehicle (ASV) model system for the surveillance of marine areas by detecting and recognizing vessels through artificial intelligence (AI)-based image recognition services, in search of those carrying out illegal activities. Cloud and edge AI computing technologies were used for computer vision. These technologies have proven to be accurate and reliable in detecting shapes and objects for which they have been trained. Azure edge and cloud vision services offer the best option in terms of accuracy for this task. Due to the lack of 4G and 5G coverage in offshore marine environments, it is necessary to use radio links with a coastal base station to ensure communications, which may result in a high response time due to the high latency involved. Since the analysis of on-board images alone may not be sufficiently accurate, we proposed a smart algorithm for autonomy optimization that selects the proper AI technology according to the current scenario (SAAO), choosing the best AI source in real time according to the required recognition accuracy or latency. The SAAO optimizes the execution, efficiency, risk reduction, and results of each stage of the surveillance mission, taking appropriate decisions by selecting either cloud or edge vision models without human intervention.
228
Simulator for Interactive and Effective Organization of Things in Edge Cluster Computing. SENSORS 2021; 21:s21082616. [PMID: 33917883 PMCID: PMC8068243 DOI: 10.3390/s21082616] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/15/2021] [Revised: 04/03/2021] [Accepted: 04/06/2021] [Indexed: 12/04/2022]
Abstract
Edge computing is intended to process events that occur at the endpoint of the Internet of Things (IoT) network quickly and intelligently. Edge regions must be organized effectively to facilitate cooperation so that the intention of edge computing can be realized. However, inevitably, many human and material resources are required in the process of arranging things in the edge area to confirm the appropriateness of the thing operation. To address this problem, we proposed a simulator that created a virtual space for edge computing and provided an interactive role and effective organization for edge things. The proposed simulator was aimed at Raspberry Pi as the physical hardware target. To prove the accuracy of the proposed simulator, the similarity between the proposed simulator and the physical target Raspberry Pi was evaluated based on three metrics while executing several applications. In the experiment, several edge-computing service applications were performed in various cluster architecture types formed by the proposed simulator. To support effective resource usage and fast real-time response for edge computing, the proposed simulator identified a suitable number of things in forming the edge cluster.
229
Multi-Objective Whale Optimization Algorithm for Computation Offloading Optimization in Mobile Edge Computing. SENSORS 2021; 21:s21082628. [PMID: 33918037 PMCID: PMC8070405 DOI: 10.3390/s21082628] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 03/08/2021] [Revised: 04/03/2021] [Accepted: 04/06/2021] [Indexed: 11/16/2022]
Abstract
Computation offloading is one of the most important problems in edge computing. Through computation offloading, devices can transmit computation tasks to servers for execution. However, not all computation tasks can be offloaded to servers, given the limitations of network conditions. Therefore, it is very important to decide quickly how many tasks should be executed on servers and how many should be executed locally. Only computation tasks that are properly offloaded can improve the Quality of Service (QoS). Some existing methods focus only on a single objective, and others have high computational complexity; there is still no method that balances the objectives and the complexity for universal application. In this study, a Multi-Objective Whale Optimization Algorithm (MOWOA) based on time and energy consumption is proposed to solve the optimal offloading mechanism of computation offloading in mobile edge computing. It is the first time that MOWOA has been applied in this area. To improve the quality of the solution set, crowding degrees are introduced and all solutions are sorted by crowding degree. Additionally, an improved MOWOA (MOWOA2) using the gravity reference point method is proposed to obtain better diversity of the solution set. Compared with some typical approaches, such as the Grid-Based Evolutionary Algorithm (GrEA), the Cluster-Gradient-based Artificial Immune System Algorithm (CGbAIS), and the Non-dominated Sorting Genetic Algorithm III (NSGA-III), MOWOA2 performs better in terms of the quality of the final solutions.
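The crowding degree referred to here is the standard NSGA-II crowding distance; a minimal implementation for a small (time, energy) objective matrix is sketched below as background, not as the paper's code.

import numpy as np

def crowding_distance(F):
    n, m = F.shape                         # n solutions, m objectives
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j]
        span = span if span > 0 else 1.0
        dist[order[0]] = dist[order[-1]] = np.inf    # keep boundary solutions
        for i in range(1, n - 1):
            dist[order[i]] += (F[order[i + 1], j] - F[order[i - 1], j]) / span
    return dist

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])  # time, energy
print(crowding_distance(F))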
Collapse
|
230
|
A Multi-Layer LoRaWAN Infrastructure for Smart Waste Management. SENSORS 2021; 21:s21082600. [PMID: 33917255 PMCID: PMC8068086 DOI: 10.3390/s21082600] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 03/26/2021] [Accepted: 04/02/2021] [Indexed: 11/17/2022]
Abstract
Long Range Wide Area Network (LoRaWAN) has rapidly become one of the key enabling technologies for the development of Internet of Things (IoT) architectures. A wide range of solutions relying on this communication technology can be found in the literature; nevertheless, most of these architectures focus on single-task systems. Conversely, the aim of this paper is to present the architecture of a LoRaWAN infrastructure that gathers different types of services under the same network within one of the most significant sub-systems of the Smart City ecosystem, namely Smart Waste Management. The proposed architecture exploits the whole range of LoRaWAN device classes, integrating nodes of growing complexity according to their functions. The lowest level of the architecture is occupied by smart bins that simply collect data about their status. Moving to upper levels, smart drop-off containers allow interaction with users as well as the implementation of asynchronous downlink queries. At the top level, Video Surveillance Units (VSUs) are provided with machine learning capabilities for detecting fires near bins or drop-off containers, thus fully implementing the Edge Computing paradigm. The proposed network infrastructure and its subsystems have been tested both in a laboratory and in the field. This study has raised the readiness of the proposed technology to Technology Readiness Level (TRL) 3.
Collapse
|
231
|
Health-BlockEdge: Blockchain-Edge Framework for Reliable Low-Latency Digital Healthcare Applications. SENSORS 2021; 21:s21072502. [PMID: 33916700 PMCID: PMC8038371 DOI: 10.3390/s21072502] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 03/26/2021] [Accepted: 03/27/2021] [Indexed: 11/16/2022]
Abstract
The rapid evolution of technology allows the healthcare sector to adopt intelligent, context-aware, secure, and ubiquitous healthcare services. Together with the global trend of an aging population, it has become highly important to propose value-creating yet cost-efficient digital solutions for healthcare systems that serve both hospital and home care scenarios. In this paper, we focus on the latter case, where the goal is to provide easy-to-use, reliable, and secure remote monitoring and aid for elderly persons in their homes. We propose a framework that integrates the capabilities of edge computing and blockchain technology to address key requirements of smart remote healthcare systems, such as long operating times, low cost, resilience to network problems, security, and trust in highly dynamic network conditions. To assess the feasibility of our approach, we evaluated the performance of the framework in terms of latency, power consumption, network utilization, and computational load, compared to a scenario where no blockchain is used.
Collapse
|
232
|
EDISON: An Edge-Native Method and Architecture for Distributed Interpolation. SENSORS (BASEL, SWITZERLAND) 2021; 21:2279. [PMID: 33805187 PMCID: PMC8037329 DOI: 10.3390/s21072279] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 03/18/2021] [Accepted: 03/22/2021] [Indexed: 11/30/2022]
Abstract
Spatio-temporal interpolation provides estimates of observations at unobserved locations and time slots. In smart cities, interpolation helps provide a fine-grained contextual and situational understanding of the urban environment, in terms of both short-term (e.g., weather, air quality, traffic) and long-term (e.g., crime, demographics) spatio-temporal phenomena. Various initiatives improve spatio-temporal interpolation results by including additional data sources such as vehicle-fitted sensors, mobile phones, or the micro weather stations of, for example, smart homes. However, the underlying computing paradigm in such initiatives is predominantly centralized, with all data collected and analyzed in the cloud. This does not scale: as the spatial and temporal density of sensor data grows, the required transmission bandwidth and computational capacity become infeasible. To address the scaling problem, we propose EDISON: algorithms for distributed learning and inference, and an edge-native architecture for distributing spatio-temporal interpolation models, their computations, and the observed data vertically and horizontally between the device, edge, and cloud layers. We demonstrate EDISON's functionality in a controlled, simulated spatio-temporal setup with 1 M artificial data points. While the main motivation of EDISON is the distribution of the heavy computations, the results show that it also improves on alternative approaches, reaching at best a 10% smaller RMSE than a global interpolation and a 6% smaller RMSE than a baseline distributed approach.
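As a toy picture of the horizontal-distribution idea, the sketch below partitions observations between two edge nodes, each answering interpolation queries for its own region with plain inverse-distance weighting; EDISON's actual interpolation models and partitioning scheme are not reproduced, and all coordinates, values, and the routing rule are assumptions.

```python
def idw(query, points, power=2):
    """Inverse-distance-weighted estimate at `query` from (x, y, value) points."""
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                      # exact hit on an observation
        w = 1.0 / d2 ** (power / 2)
        num, den = num + w * v, den + w
    return num / den

# Horizontal distribution: each edge node holds only its region's observations
# and answers queries for that region; the cloud merely routes queries.
edge_nodes = {
    "north": [(0, 9, 4.1), (2, 8, 3.9), (1, 7, 4.4)],
    "south": [(0, 1, 7.2), (2, 0, 6.8), (1, 2, 7.0)],
}

def route(query):                         # assumed partitioning rule
    return "north" if query[1] >= 5 else "south"

q = (1.0, 8.0)
print(idw(q, edge_nodes[route(q)]))
```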
Collapse
|
233
|
A Smart and Secure Logistics System Based on IoT and Cloud Technologies. SENSORS 2021; 21:s21062231. [PMID: 33806770 PMCID: PMC8005061 DOI: 10.3390/s21062231] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 03/18/2021] [Accepted: 03/20/2021] [Indexed: 12/22/2022]
Abstract
Recently, one of the hottest topics in the logistics sector has been the traceability of goods and the monitoring of their condition during transportation. Perishable goods, such as fresh food, have attracted particular attention, and researchers have already proposed different solutions to guarantee the quality and freshness of food throughout the cold chain. In this regard, Internet of Things (IoT)-enabling technologies, and specifically the branch called edge computing, are bringing enhancements that allow easy remote and real-time monitoring of transported goods. Given fast-changing requirements and the difficulties researchers encounter in proposing new solutions, fast prototyping can rapidly advance both research and the commercial sector. Different platforms and tools for fast prototyping have been proposed in recent years, yet guaranteeing end-to-end security at all levels with such platforms remains difficult. For this reason, building on experiments reported in the literature and aiming to support fast prototyping with end-to-end security in the logistics sector, this work presents a solution demonstrating how the advantages offered by the Azure Sphere platform, a dedicated hardware device (the MT3620 microcontroller unit), and the Azure Sphere Security Service can be used to quickly prototype the tracing of fresh-food conditions during transportation. The proposed solution guarantees end-to-end security and can be exploited by future work in other sectors as well.
Collapse
|
234
|
A Blockchain-Based Trusted Edge Platform in Edge Computing Environment. SENSORS 2021; 21:s21062126. [PMID: 33803561 PMCID: PMC8003011 DOI: 10.3390/s21062126] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 03/10/2021] [Accepted: 03/16/2021] [Indexed: 11/18/2022]
Abstract
Edge computing is a product of the evolution of IoT and the development of cloud computing technology, providing computing, storage, network, and other infrastructure close to users. Compared with the centralized deployment model of traditional cloud computing, edge computing avoids extended communication times and high convergence traffic, providing better support for low-latency and high-bandwidth services. With the increasing amount of data generated by users and devices in IoT, security and privacy issues in the edge computing environment have become a concern. Blockchain, a security technology that has developed rapidly in recent years, has been adopted by many industries, such as finance and insurance, and deploying blockchain platforms and applications on edge computing platforms can provide security services for network edge environments. Although solutions for integrating edge computing with blockchain already exist for many IoT application scenarios, they fall short in scalability, portability, and heterogeneous data processing. In this paper, we propose a trusted edge platform that integrates an edge computing framework with a blockchain network to build a secure edge environment aimed at preserving the data privacy of edge computing clients. A design based on a microservice architecture keeps the platform lightweight. To improve portability, we introduce the EdgeX Foundry framework and design an edge application module on the platform to improve the business capability of EdgeX. We also design a series of well-defined security authentication microservices, which use a Hyperledger Fabric blockchain network to build a reliable security mechanism in the edge environment. Finally, we build an edge computing network using different hardware devices and deploy the trusted edge platform on multiple network nodes. The usability of the platform is demonstrated by measuring the round-trip time (RTT) of several important workflows, and the experimental results show that it can meet the availability requirements of real-world usage scenarios.
Collapse
|
235
|
Resource Management Techniques for Cloud/Fog and Edge Computing: An Evaluation Framework and Classification. SENSORS 2021; 21:s21051832. [PMID: 33808037 PMCID: PMC7961768 DOI: 10.3390/s21051832] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 02/20/2021] [Accepted: 02/24/2021] [Indexed: 11/16/2022]
Abstract
Processing IoT applications directly in the cloud may not be the most efficient solution for every IoT scenario, especially for time-sensitive applications. A promising alternative is to use fog and edge computing, which address the issue of managing the large data bandwidth needed by end devices. These paradigms require the large amounts of generated data to be processed close to the data sources rather than in the cloud. A central consideration in cloud-based IoT environments is resource management, which typically revolves around resource allocation, workload balance, resource provisioning, task scheduling, and QoS to achieve performance improvements. In this paper, we review resource management techniques applicable to cloud, fog, and edge computing. The goal of this review is to provide an evaluation framework of metrics for resource management algorithms aimed at cloud/fog and edge environments. To this end, we first address the research challenges of resource management techniques in that domain, and then classify current research contributions to support the construction of such an evaluation framework. One of the main contributions is an overview and analysis of research papers addressing resource management techniques. Finally, this review highlights opportunities for using resource management techniques within the cloud/fog/edge paradigm, a practice that is still in early development and for which barriers remain to be overcome.
Collapse
|
236
|
Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing. SENSORS 2021; 21:s21051666. [PMID: 33671072 PMCID: PMC7957605 DOI: 10.3390/s21051666] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 02/24/2021] [Accepted: 02/24/2021] [Indexed: 11/23/2022]
Abstract
Edge computing (EC) has recently emerged as a promising paradigm that supports resource-hungry Internet of Things (IoT) applications with low-latency services at the network edge. However, the limited capacity of computing resources at the edge server poses great challenges for scheduling application tasks. In this paper, a task scheduling problem is studied in the EC scenario, where multiple tasks are scheduled to virtual machines (VMs) configured at the edge server so as to maximize the long-term task satisfaction degree (LTSD). The problem is formulated as a Markov decision process (MDP) for which the state, action, state transition, and reward are designed. We leverage deep reinforcement learning (DRL) to solve both time scheduling (i.e., the task execution order) and resource allocation (i.e., which VM a task is assigned to), considering the diversity of the tasks and the heterogeneity of the available resources. A policy-based REINFORCE algorithm is proposed for the task scheduling problem, with a fully-connected neural network (FCN) used to extract features. Simulation results show that the proposed DRL-based task scheduling algorithm outperforms existing methods in the literature in terms of average task satisfaction degree and success ratio.
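To make the policy-gradient idea concrete, here is a minimal REINFORCE sketch for assigning one task to one of several VMs; a linear softmax policy stands in for the paper's FCN, and the reward function, feature sizes, and learning rate are all assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_VMS = 4, 3            # task-feature and VM counts (assumed)
W = np.zeros((N_FEATURES, N_VMS))   # linear policy standing in for the FCN
ALPHA = 0.05                        # learning rate (assumed)

def policy(x):
    logits = x @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()

def satisfaction(task, vm):         # toy stand-in for the satisfaction degree
    return 1.0 if task[vm] > 0.5 else 0.1

for episode in range(2000):
    task = rng.random(N_FEATURES)
    p = policy(task)
    vm = rng.choice(N_VMS, p=p)
    reward = satisfaction(task, vm)
    # REINFORCE: step along reward * grad(log pi(vm | task))
    grad = -np.outer(task, p)
    grad[:, vm] += task
    W += ALPHA * reward * grad
```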
Collapse
|
237
|
A Fully Open-Source Approach to Intelligent Edge Computing: AGILE's Lesson. SENSORS (BASEL, SWITZERLAND) 2021; 21:1309. [PMID: 33673065 PMCID: PMC7918801 DOI: 10.3390/s21041309] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Revised: 01/23/2021] [Accepted: 02/10/2021] [Indexed: 01/21/2023]
Abstract
In this paper, we describe the main outcomes of AGILE (an acronym for "Adaptive Gateways for dIverse muLtiple Environments"), an EU-funded project that recently delivered a modular hardware and software framework conceived to address the fragmented market of embedded, multi-service, adaptive gateways for the Internet of Things (IoT). Its main goal is to provide a low-cost solution capable of supporting proof-of-concept implementations and rapid prototyping methodologies for both the consumer and industrial IoT markets. AGILE allows developers to implement and deliver a complete (software and hardware) IoT solution for managing non-IP IoT devices through a multi-service gateway. Moreover, it simplifies the access of startups to the IoT market, not only providing an efficient and cost-effective solution for industry but also allowing end-users to customize and extend it according to their specific requirements. This flexibility is the result of the joint experience of established organizations in the project consortium that already promote the principles of openness at both the software and hardware levels. We illustrate how the AGILE framework can provide a cost-effective yet solid and highly customizable technological foundation supporting the configuration, deployment, and assessment of two distinct showcases, namely a quantified-self application for individual consumers and an air pollution monitoring station for industrial settings.
Collapse
|
238
|
Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks. Front Neurosci 2021; 15:629892. [PMID: 33642986 PMCID: PMC7902857 DOI: 10.3389/fnins.2021.629892] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 01/06/2021] [Indexed: 11/13/2022] Open
Abstract
While the backpropagation of error algorithm enables deep neural network training, it implies (i) bidirectional synaptic weight transport and (ii) update locking until the forward and backward passes are completed. Not only do these constraints preclude biological plausibility, but they also hinder the development of low-cost adaptive smart sensors at the edge, as they severely constrain memory accesses and entail buffering overhead. In this work, we show that the one-hot-encoded labels provided in supervised classification problems, denoted as targets, can be viewed as a proxy for the error sign. Therefore, their fixed random projections enable a layerwise feedforward training of the hidden layers, thus solving the weight transport and update locking problems while relaxing the computational and memory requirements. Based on these observations, we propose the direct random target projection (DRTP) algorithm and demonstrate that it provides a tradeoff between accuracy and computational cost that is suitable for adaptive edge computing devices.
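The mechanism lends itself to a compact sketch: below, the hidden layer is trained with a fixed random projection of the one-hot target instead of a backpropagated error, so its update needs no signal from the output layer; the layer sizes, learning rate, dummy data, and squared-error output layer are assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_HID, N_CLASSES = 8, 16, 4            # toy layer sizes
W1 = rng.normal(0, 0.1, (D_IN, D_HID))       # hidden layer, trained layerwise
W2 = rng.normal(0, 0.1, (D_HID, N_CLASSES))
B1 = rng.normal(0, 0.1, (N_CLASSES, D_HID))  # fixed random target projection
LR = 0.01

for step in range(1000):
    x = rng.normal(size=D_IN)                       # dummy input
    y = np.eye(N_CLASSES)[rng.integers(N_CLASSES)]  # dummy one-hot target
    h = np.tanh(x @ W1)
    # DRTP: the hidden layer's learning signal is a fixed random projection
    # of the target, so W1 updates without any backward pass through W2.
    delta1 = (y @ B1) * (1 - h ** 2)
    W1 += LR * np.outer(x, delta1)
    out = h @ W2
    W2 -= LR * np.outer(h, out - y)   # output layer uses the ordinary error
```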
Collapse
|
239
|
Optimising Deep Learning at the Edge for Accurate Hourly Air Quality Prediction. SENSORS 2021; 21:s21041064. [PMID: 33557203 PMCID: PMC7913936 DOI: 10.3390/s21041064] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 12/22/2022]
Abstract
Accurate air quality monitoring requires processing multi-dimensional, multi-location sensor data, which has previously been handled by centralised machine learning models that are often unsuitable for resource-constrained edge devices. In this article, we address this challenge by: (1) designing a novel hybrid deep learning model for hourly PM2.5 pollutant prediction; (2) optimising the obtained model for edge devices; and (3) examining model performance on the edge devices in terms of both accuracy and latency. The hybrid deep learning model in this work comprises a 1D Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to predict hourly PM2.5 concentration. The results show that our proposed model outperforms other deep learning models as measured by RMSE and MAE. The model was optimised for two edge devices, the Raspberry Pi 3 Model B+ (RPi3B+) and Raspberry Pi 4 Model B (RPi4B); optimisation reduced the file size to a quarter of the original, with further size reduction achieved through different post-training quantisation modes. In total, 8272 hourly samples were continuously fed to the edge devices, with the RPi4B executing the model twice as fast as the RPi3B+ in all quantisation modes. Full-integer quantisation produced the lowest execution time, with latencies of 2.19 s and 4.73 s for the RPi4B and RPi3B+, respectively.
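For illustration, a minimal sketch of this model family and the post-training quantisation step using the TensorFlow Lite converter; the layer sizes, input window shape, and random representative dataset are placeholders rather than the paper's configuration, and full-integer quantisation of LSTM layers may require additional converter settings.

```python
import numpy as np
import tensorflow as tf

# 24 past hours x 8 features -> next-hour PM2.5 (shapes are assumptions)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation="relu", input_shape=(24, 8)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(...) would run here on the historical sensor data

def representative_data():            # needed for full-integer quantisation
    for _ in range(100):
        yield [np.random.rand(1, 24, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()    # compact model for the Raspberry Pi
```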
Collapse
|
240
|
A Generalization Performance Study Using Deep Learning Networks in Embedded Systems. SENSORS 2021; 21:s21041031. [PMID: 33546252 PMCID: PMC7913276 DOI: 10.3390/s21041031] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/22/2021] [Accepted: 01/26/2021] [Indexed: 12/02/2022]
Abstract
Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems capable of making correct decisions and acting on them immediately and in situ. However, the low capacity of embedded systems greatly hinders this integration, so the ability to target a wide range of microcontrollers is a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in general-purpose embedded systems, enabling the deployment of deep learning architectures. The experiments herein show that the proposed system is competitive compared with other commercial systems.
Collapse
|
241
|
Recent Advances in Collaborative Scheduling of Computing Tasks in an Edge Computing Paradigm. SENSORS 2021; 21:s21030779. [PMID: 33498910 PMCID: PMC7865659 DOI: 10.3390/s21030779] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 01/10/2021] [Accepted: 01/12/2021] [Indexed: 01/10/2023]
Abstract
In edge computing, edge devices can offload overloaded computing tasks to an edge server, exploiting the server's advantages in computing and storage to execute tasks efficiently. However, if all devices offload all of their overloaded tasks to one edge server, the server itself becomes overloaded, resulting in high processing delays for many tasks and unexpectedly high energy consumption, while the resources of idle edge devices are wasted and resource-rich cloud centers are underutilized. It is therefore essential to explore a collaborative scheduling mechanism spanning the edge server, the cloud center, and the edge devices that accounts for task characteristics, optimization objectives, and system status, enabling efficient collaborative scheduling and precise execution of all computing tasks. This work analyzes and summarizes the scenarios of the edge computing paradigm and classifies the computing tasks within them. It then formulates the computation offloading optimization problem for an edge computing system and, based on this formulation, reviews collaborative scheduling methods for computing tasks. Finally, future research issues for advanced collaborative scheduling in the context of edge computing are indicated.
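A common minimal form of the offloading formulation such surveys build on is a binary decision per task, comparing a weighted delay-plus-energy cost for local versus edge execution; the sketch below shows that comparison with purely illustrative constants, not any specific formulation from the reviewed papers.

```python
# Execute locally or offload, whichever minimises a weighted sum of delay
# and energy; every constant below is illustrative.
def local_cost(cycles, f_local=1e9, power_w=2.0, w_t=0.5, w_e=0.5):
    t = cycles / f_local                     # local execution time (s)
    return w_t * t + w_e * power_w * t

def offload_cost(cycles, bits, f_edge=8e9, rate_bps=20e6,
                 tx_power_w=0.5, w_t=0.5, w_e=0.5):
    t_tx = bits / rate_bps                   # uplink transmission time (s)
    t_exec = cycles / f_edge                 # execution time on the server
    return w_t * (t_tx + t_exec) + w_e * tx_power_w * t_tx

task = {"cycles": 2e9, "bits": 4e6}
choice = "offload" if offload_cost(**task) < local_cost(task["cycles"]) else "local"
print(choice)
```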
Collapse
|
242
|
Optimal Consensus with Dual Abnormality Mode of Cellular IoT Based on Edge Computing. SENSORS 2021; 21:s21020671. [PMID: 33477963 PMCID: PMC7835988 DOI: 10.3390/s21020671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 01/17/2021] [Accepted: 01/18/2021] [Indexed: 11/19/2022]
Abstract
The continuous development of fifth-generation (5G) networks is the main driving force for the growth of Internet of Things (IoT) applications: 5G is expected to greatly expand IoT applications, promote the operation of cellular networks, raise new security and network challenges, and push the future of the Internet to the edge. Because the IoT can connect anything, anywhere, at any time, it can provide ubiquitous services. With the establishment and use of 5G wireless networks, the cellular IoT (CIoT) will be developed and applied, and a reliable network topology is essential for reliable CIoT applications. Reaching consensus is one of the most important issues in designing a highly reliable CIoT: even if some components in the system are abnormal, the applications in the system must still execute correctly. In this study, a consensus protocol is discussed for CIoT under a dual abnormality mode that combines dormant abnormality and malicious abnormality. The proposed protocol not only allows all normal components in the CIoT to reach consensus with the minimum number of data exchanges, but also tolerates the maximum number of dormant and malicious abnormal components, while ensuring that all normal components satisfy the constraints of consensus: Termination, Agreement, and Integrity.
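To picture the two abnormality types and the Agreement condition informally, the toy round below has dormant nodes stay silent and malicious nodes lie, while the normal majority still agrees; this one-round majority vote is not the paper's protocol, only an illustration of the failure model.

```python
from collections import Counter

def exchange(nodes, value):
    """Collect the messages normal nodes receive in one broadcast round."""
    votes = []
    for n in nodes:
        if n == "dormant":
            continue                     # a dormant node sends nothing
        votes.append("bad" if n == "malicious" else value)
    return votes

nodes = ["normal"] * 5 + ["dormant"] + ["malicious"]
votes = exchange(nodes, "commit")
decision = Counter(votes).most_common(1)[0][0]
print(decision)                          # "commit": the normal majority wins
```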
Collapse
|
243
|
Dynamic Inference Approach Based on Rules Engine in Intelligent Edge Computing for Building Environment Control. SENSORS 2021; 21:s21020630. [PMID: 33477481 PMCID: PMC7831074 DOI: 10.3390/s21020630] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 01/10/2021] [Accepted: 01/14/2021] [Indexed: 11/24/2022]
Abstract
Computation offloading enables intensive computational tasks in edge computing to be split across multiple server computing resources to overcome hardware limitations. Deep learning derives its inference capability from training on large volumes of data with sufficient computing resources, and deploying domain-specific inference approaches to edge computing brings intelligent services close to the network edge. In this paper, we propose intelligent edge computing that provides a dynamic inference approach for building environment control. The dynamic inference is based on a rules engine, deployed on the edge gateway, that selects an inference function according to the triggered rule. The edge gateway sits at the entry of the network edge and provides comprehensive functions, including device management, device proxy, client service, intelligent service, and the rules engine. These functions are provided by microservice provider modules, which make offloading domain-specific solutions to the gateway flexible, extensible, and lightweight; the intelligent services can be updated by offloading a microservice provider module together with its inference models. Using the rules engine, the gateway operates an intelligent scenario based on the deployed rule profile by requesting the inference model from the intelligent service provider. The inference models are derived by training the building user data with deep learning models on the edge server, which provides a high-performance computing resource, while the intelligent service provider runs the inference functions in the gateway on constrained hardware thanks to the microservice design. Moreover, to bridge the Internet of Things (IoT) device network to the Internet, the gateway provides device management and a device proxy that enable device access from web clients.
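The rule-triggered selection of an inference function can be pictured with a few lines of code: a rule profile maps conditions on the sensed context to named inference models. The rule names, conditions, and model outputs below are all assumptions, not the paper's rule profile.

```python
RULES = [  # rule profile deployed on the gateway (conditions are assumed)
    {"when": lambda ctx: ctx["co2_ppm"] > 1000, "infer": "ventilation_model"},
    {"when": lambda ctx: ctx["temp_c"] > 28,    "infer": "cooling_model"},
]
MODELS = {  # stand-ins for inference models served by the provider module
    "ventilation_model": lambda ctx: {"fan": "on"},
    "cooling_model":     lambda ctx: {"ac_setpoint": 24},
}

def evaluate(ctx):
    for rule in RULES:            # the first matching rule selects the model
        if rule["when"](ctx):
            return MODELS[rule["infer"]](ctx)
    return {}                     # no rule triggered: nothing to actuate

print(evaluate({"co2_ppm": 1200, "temp_c": 22}))   # -> {'fan': 'on'}
```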
Collapse
|
244
|
Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization. SENSORS 2021; 21:s21020444. [PMID: 33435143 PMCID: PMC7827625 DOI: 10.3390/s21020444] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Revised: 12/25/2020] [Accepted: 01/07/2021] [Indexed: 11/24/2022]
Abstract
With the development of deep learning technologies and edge computing, their combination can make artificial intelligence ubiquitous. Because of the constrained computation resources of edge devices, research on on-device deep learning focuses not only on model accuracy but also on model efficiency, for example inference latency. Many attempts have been made to optimize existing deep learning models for deployment on edge devices so that they meet specific application requirements while maintaining high accuracy; such work requires professional knowledge as well as extensive experimentation, which limits the customization of neural networks for varied devices and application scenarios. To reduce human intervention in designing and optimizing network structures, multi-objective neural architecture search methods have been proposed that automatically search for neural networks featuring high accuracy while satisfying certain hardware performance requirements. However, current methods commonly treat accuracy and inference latency only as performance indicators during the search and sample numerous network structures to obtain the required network; without the search objectives regulating the search direction, a large number of useless networks are generated, which greatly degrades search efficiency. Therefore, in this paper, an efficient resource-aware search method is proposed. First, a network inference consumption profiling model is established for a specific device, directly yielding the resource consumption of each operation in a network structure and the inference latency of the entire sampled network. Next, building on Bayesian search, a resource-aware Pareto Bayesian search is proposed in which accuracy and inference latency act as constraints that regulate the search direction; with a clearer search direction, overall search efficiency improves. Furthermore, a cell-based structure and lightweight operations are applied to optimize the search space, further enhancing search efficiency. The experimental results demonstrate that with our method the inference latency of the searched network structure is reduced by 94.71% without sacrificing accuracy, while search efficiency increases by 18.18%.
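Two of the ingredients above reduce to very little code: a per-operation latency lookup table profiled on the target device, summed over a sampled structure, and a Pareto filter over (error, latency) pairs. The operation names, latencies, and error values below are placeholders, and the paper's Bayesian search itself is not reproduced.

```python
OP_LATENCY_MS = {"conv3x3": 1.8, "conv1x1": 0.6, "dw_conv": 0.9, "pool": 0.2}

def predict_latency(arch):
    """Sum profiled per-operation latencies for a sampled structure."""
    return sum(OP_LATENCY_MS[op] for op in arch)

def pareto_front(candidates):
    """candidates: (error, latency, arch) triples; keep non-dominated ones."""
    return [c for c in candidates
            if not any(o[0] <= c[0] and o[1] <= c[1] and
                       (o[0], o[1]) != (c[0], c[1]) for o in candidates)]

archs = [["conv3x3", "conv3x3", "pool"], ["conv1x1", "dw_conv", "pool"]]
cands = [(err, predict_latency(a), a)              # errors are made up
         for err, a in zip([0.07, 0.08], archs)]
print(pareto_front(cands))
```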
Collapse
|
245
|
Hyperledger Fabric Blockchain for Securing the Edge Internet of Things. SENSORS 2021; 21:s21020359. [PMID: 33430274 PMCID: PMC7825674 DOI: 10.3390/s21020359] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 01/03/2021] [Accepted: 01/05/2021] [Indexed: 11/23/2022]
Abstract
Providing security and privacy to Internet of Things (IoT) networks while meeting minimum performance requirements is an open research challenge. Blockchain technology, as a distributed and decentralized ledger, is a potential solution to the limitations of current peer-to-peer IoT networks. This paper presents the development of an integrated IoT system implementing the permissioned blockchain Hyperledger Fabric (HLF) to secure edge computing devices by employing a local authentication process. In addition, the proposed model provides traceability for the data generated by the IoT devices, and it addresses the scalability challenges of IoT systems as well as the processing power and storage issues of IoT edge devices in the blockchain network. A set of built-in queries, implemented with smart-contract technology, defines the rules and conditions. The paper validates the performance of the proposed model with a practical implementation, measuring metrics such as transaction throughput and latency, resource consumption, and network usage. The results show that the proposed platform with the HLF implementation is promising for securing resource-constrained IoT devices and is scalable for deployment in various IoT scenarios.
Collapse
|
246
|
Replica selection and placement techniques on the IoT and edge computing: a deep study. WIRELESS NETWORKS 2021. [PMCID: PMC8444506 DOI: 10.1007/s11276-021-02793-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The Internet of Things (IoT) has lately been presented as a new technological transformation in which things are connected via the Internet. Numerous sensors and devices create data and constantly send vital signals over sophisticated networks that allow machine-to-machine interactions and that monitor and manage key smart-world infrastructures. Since huge amounts of data are generated, reducing data access costs is a critical issue. Edge computing has been developed as a novel paradigm for meeting IoT demands while curbing the rise in resource congestion. One of the most significant data management challenges in the IoT is selecting suitable things for replication so as to minimize response time and cost. Our goal is therefore to examine replica selection and placement techniques in IoT and edge computing. The findings reveal that the edge computing environment can significantly enhance system performance in terms of access response time, prediction accuracy, effective network use, and data availability. The results also illustrate that data provenance is necessary to raise the accuracy of the data, and they show that the most important challenges for data replication and placement techniques in IoT and edge computing are data availability and access response time.
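In the spirit of the techniques this survey covers, a replica-selection rule can be as simple as minimising a weighted combination of estimated response time and access cost across candidate replicas; the node names, metrics, and weights in the sketch below are invented for illustration.

```python
replicas = [   # candidate replicas with profiled metrics (all made up)
    {"node": "edge-a", "latency_ms": 12, "cost": 0.8},
    {"node": "edge-b", "latency_ms": 30, "cost": 0.2},
    {"node": "cloud",  "latency_ms": 90, "cost": 0.1},
]

def select(replicas, w_latency=0.7, w_cost=0.3):
    def score(r):                       # normalise latency to roughly [0, 1]
        return w_latency * r["latency_ms"] / 100 + w_cost * r["cost"]
    return min(replicas, key=score)

print(select(replicas)["node"])         # -> edge-a under these weights
```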
Collapse
|
247
|
Deepint.net: A Rapid Deployment Platform for Smart Territories. SENSORS 2021; 21:s21010236. [PMID: 33401468 PMCID: PMC7795292 DOI: 10.3390/s21010236] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 12/24/2020] [Accepted: 12/28/2020] [Indexed: 11/27/2022]
Abstract
This paper presents an efficient cyberphysical platform for the smart management of smart territories. It is efficient because it facilitates the implementation of data acquisition and data management methods, as well as data representation and dashboard configuration, and it allows for the use of any type of data source, ranging from the measurements of multi-functional IoT sensing devices to relational and non-relational databases. It is also smart because it incorporates a complete artificial intelligence suite for data analysis, including techniques for data classification, clustering, forecasting, optimization, visualization, etc., and it is compatible with the edge computing concept, allowing for the distribution of intelligence and the use of intelligent sensors. The concept of smart cities is evolving and adapting to new applications; the trend to create intelligent neighbourhoods, districts, or territories is becoming increasingly popular, as opposed to the previous approach of managing an entire megacity. In this paper, the platform is presented and its architecture and functionalities are described. Moreover, its operation has been validated in a case study managing the bike rental service of Paris, Vélib' Métropole. This platform could enable smart territories to develop adapted knowledge management systems, adapt them to new requirements, use multiple types of data, and execute efficient computational and artificial intelligence algorithms. The platform optimizes the decisions taken by human experts through explainable artificial intelligence models that obtain data from IoT sensors, databases, the Internet, etc., and its global intelligence could potentially coordinate its decision-making processes with intelligent nodes installed at the edge, which would use the most advanced data processing techniques.
Collapse
|
248
|
Learning History with Location-Based Applications: An Architecture for Points of Interest in Multiple Layers. SENSORS 2020; 21:s21010129. [PMID: 33379185 PMCID: PMC7796226 DOI: 10.3390/s21010129] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Revised: 11/25/2020] [Accepted: 12/23/2020] [Indexed: 11/30/2022]
Abstract
Location-based applications (LBAs) capture the user’s physical location via satellite navigation sensors and integrate it as part of the digital application. Because of this connection, the real-world environment needs to be accounted for in LBA design. In this work, we focused on creating a database of geographically distributed points of interest (PoIs) that is optimal for learning local history. First, we conducted a requirements elicitation study at three outdoor archaeological sites and identified issues in existing solutions. Second, we designed a multi-layered prototype solution. Third, we evaluated the solution with nine experts who had prior experience with LBAs or similar systems. We incorporated their feedback to our design to iteratively improve it. As a whole, our work contributes to the LBA design literature by proposing a solution that is optimized for the learning of local history.
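The multi-layer PoI organisation can be pictured as records carrying coordinates plus a layer tag, with queries filtering by the active layers; the field names and entries below are assumptions for illustration, not the paper's schema.

```python
pois = [  # field names and entries are illustrative
    {"name": "Hillfort",   "lat": 60.45, "lon": 22.28, "layer": "iron_age"},
    {"name": "Old Church", "lat": 60.46, "lon": 22.29, "layer": "medieval"},
    {"name": "Mill Ruins", "lat": 60.44, "lon": 22.27, "layer": "industrial"},
]

def visible(pois, active_layers):
    """Return only the PoIs whose layer is currently switched on."""
    return [p for p in pois if p["layer"] in active_layers]

print([p["name"] for p in visible(pois, {"medieval", "industrial"})])
```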
Collapse
|
249
|
Timely Reliability Analysis of Virtual Machines Considering Migration and Recovery in an Edge Server. SENSORS (BASEL, SWITZERLAND) 2020; 21:s21010093. [PMID: 33375704 PMCID: PMC7795631 DOI: 10.3390/s21010093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Revised: 12/21/2020] [Accepted: 12/23/2020] [Indexed: 06/12/2023]
Abstract
For an edge computing network, whether the end-to-end delay satisfies the delay constraint of a task is critical, especially for delay-sensitive tasks. Virtual machine (VM) migration improves the robustness of the network, but it also causes service downtime and increases the end-to-end delay. To study the influence of failure, migration, and recovery of VMs, we define three states for the VMs in an edge server and build a continuous-time Markov chain (CTMC). We then develop a matrix-geometric method and a first-passage-time method to obtain the VMs' timely reliability (VTR) and the end-to-end timely reliability (ETR). The numerical results are verified by simulation based on OMNeT++. Results show that VTR is a monotonic function of the migration rate and the number of VMs. However, in some cases, an increase in task VMs (TVMs) may conversely decrease VTR, since more TVMs also bring about more failures in a given time. Moreover, we find that there is a trade-off between TVMs and backup VMs (BVMs) when the total number of VMs is limited. Our findings may shed light on the impact of VM migration on end-to-end delay and on the design of more reliable edge computing networks for delay-sensitive applications.
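A quick Monte-Carlo sketch of a three-state VM cycle (working, migrating, recovering) illustrates the kind of quantity the CTMC analysis computes, here the probability that accumulated downtime stays under a bound within a horizon; the rates and thresholds are placeholders, and the paper itself uses a matrix-geometric method rather than simulation.

```python
import random

# Exit rates (per hour) and the cyclic state order are placeholders.
RATES = {"working": 0.02, "migrating": 2.0, "recovering": 0.5}
NEXT = {"working": "migrating", "migrating": "recovering",
        "recovering": "working"}

def task_on_time(horizon_h, max_downtime_h):
    t = downtime = 0.0
    state = "working"
    while t < horizon_h:
        dt = min(random.expovariate(RATES[state]), horizon_h - t)
        if state != "working":          # migration/recovery is downtime
            downtime += dt
        t += dt
        state = NEXT[state]
    return downtime <= max_downtime_h

trials = 10_000
p = sum(task_on_time(24, 0.5) for _ in range(trials)) / trials
print(f"estimated timely reliability over 24 h: {p:.3f}")
```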
Collapse
|
250
|
MUP: Simplifying Secure Over-The-Air Update with MQTT for Constrained IoT Devices. SENSORS 2020; 21:s21010010. [PMID: 33374965 PMCID: PMC7792629 DOI: 10.3390/s21010010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 12/15/2020] [Accepted: 12/17/2020] [Indexed: 11/16/2022]
Abstract
Message Queuing Telemetry Transport (MQTT) is one of the dominant protocols for edge- and cloud-based Internet of Things (IoT) solutions. When a security vulnerability of an IoT device becomes known, it has to be fixed as soon as possible, which requires a firmware update procedure. In this paper, we propose a secure update protocol for MQTT-connected devices that ensures the freshness of the firmware, authenticates the new firmware, and accommodates constrained devices. We show that the update protocol is easy to integrate into an MQTT-based IoT network using a semantic approach. The feasibility of our approach is demonstrated by a detailed performance analysis of our prototype implementation on an IoT device with 32 kB of RAM. In the process, we identify design issues in MQTT 5 whose resolution could improve support for constrained devices.
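The freshness and authentication ingredients can be sketched independently of the paper's exact protocol: an update server builds a firmware manifest carrying a digest, a nonce, and a message authentication code, then publishes it over MQTT. The topic layout, HMAC scheme, and key handling below are assumptions for illustration only.

```python
import hashlib, hmac, json, os

SHARED_KEY = b"device-provisioning-key"     # placeholder key material
firmware = b"\x7fELF...new-firmware-image"  # stands in for a real image

manifest = {
    "version": "1.2.0",
    "sha256": hashlib.sha256(firmware).hexdigest(),  # authenticates the image
    "nonce": os.urandom(8).hex(),                    # freshness against replay
}
payload = json.dumps(manifest, sort_keys=True).encode()
manifest["mac"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

# The manifest would be published with QoS 1 to a per-device MQTT topic such
# as "devices/<id>/update/manifest"; the constrained device verifies the MAC
# and nonce before fetching and flashing the image.
print(json.dumps(manifest, indent=2))
```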
Collapse
|