251. Li N, Han K, Spratt W, Bedell S, Ren J, Gunawan O, Ott J, Hopstaken M, Cabral C, Libsch F, Subramanian C, Shahidi G, Sadana D. Dust-Sized High-Power-Density Photovoltaic Cells on Si and SOI Substrates for Wafer-Level-Packaged Small Edge Computers. Adv Mater 2020; 32:e2004573. PMID: 33095497. DOI: 10.1002/adma.202004573.
Abstract
Advancement in microelectronics technology enables autonomous edge computing platforms the size of a dust mote (<1 mm), bringing efficient and low-cost artificial intelligence close to the end user and Internet-of-Things (IoT) applications. The key challenge for these compact high-performance edge computers is the integration of a power source that satisfies the high-power-density requirement without increasing the complexity and cost of the packaging. Here, it is shown that dust-sized III-V photovoltaic (PV) cells grown on Si and silicon-on-insulator (SOI) substrates can be integrated using a wafer-level-packaging process and achieve higher power density than all prior micro-PVs on Si and SOI substrates. This high-throughput heterogeneous integration unlocks the potential of large-scale, low-cost manufacturing of these integrated systems for IoT applications. The negative effect of crystallographic defects in the heteroepitaxial materials on PV performance diminishes at high power density. Simultaneous power delivery and data transmission to the dust mote with heteroepitaxially grown PV are also demonstrated using hand-held illumination sources.
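The power budget such a dust-sized cell must meet can be sanity-checked with a quick calculation: electrical output scales with illumination intensity, cell area, and conversion efficiency. The numbers below are illustrative assumptions, not values from the paper.

```python
def pv_power_uw(irradiance_w_per_cm2, area_cm2, efficiency):
    """Electrical output of a PV cell in microwatts."""
    return irradiance_w_per_cm2 * area_cm2 * efficiency * 1e6

# A 0.5 mm x 0.5 mm cell (2.5e-3 cm^2) under 0.1 W/cm^2 illumination
# at 20% conversion efficiency delivers 50 microwatts.
power = pv_power_uw(0.1, 2.5e-3, 0.20)
```

Even tens of microwatts can be ample for a duty-cycled millimeter-scale computer, which is why high power density matters more than absolute power here.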
Affiliations: all authors: IBM T J Watson Research Center, 1101 Kitchawan Rd, Yorktown Heights, NY 10598, USA.
252. Díaz-de-Arcaya J, Miñón R, Torre-Bastida AI, Del Ser J, Almeida A. PADL: A Modeling and Deployment Language for Advanced Analytical Services. Sensors (Basel) 2020; 20:6712. PMID: 33255294. PMCID: PMC7727685. DOI: 10.3390/s20236712.
Abstract
In the smart city context, Big Data analytics plays an important role in processing the data collected through IoT devices. The analysis of the information gathered by sensors favors the generation of specific services and systems that not only improve the quality of life of the citizens, but also optimize city resources. However, the difficulties of implementing this entire process in real scenarios are manifold, including the huge amount and heterogeneity of the devices, their geographical distribution, and the complexity of the necessary IT infrastructures. For this reason, the main contribution of this paper is the PADL description language, which has been specifically tailored to assist in the definition and operationalization phases of the machine learning life cycle. It provides annotations that serve as an abstraction layer from the underlying infrastructure and technologies, hence facilitating the work of data scientists and engineers. Due to its proficiency in the operationalization of distributed pipelines over edge, fog, and cloud layers, it is particularly useful in the complex and heterogeneous environments of smart cities. For this purpose, PADL contains functionalities for the specification of monitoring, notifications, and actuation capabilities. In addition, we provide tools that facilitate its adoption in production environments. Finally, we showcase the usefulness of the language by showing the definition of PADL-compliant analytical pipelines over two use cases in a smart city context (flood control and waste management), demonstrating that its adoption is simple and beneficial for the definition of information and process flows in such environments.
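Purely to illustrate the idea of annotating pipeline stages with a target deployment layer, a pipeline can be sketched as plain data with a validity check. The field names below are assumptions made for this sketch, not actual PADL syntax.

```python
# Hypothetical stage annotations in the spirit of a layered pipeline
# description; NOT actual PADL syntax.
VALID_LAYERS = {"edge", "fog", "cloud"}

pipeline = {
    "name": "flood-control",
    "stages": [
        {"id": "ingest",  "layer": "edge",  "monitor": True},
        {"id": "filter",  "layer": "fog",   "monitor": False},
        {"id": "predict", "layer": "cloud", "monitor": True},
    ],
}

def validate(p):
    """Every stage must declare a known deployment layer."""
    return all(s["layer"] in VALID_LAYERS for s in p["stages"])
```

A real description language adds many more concerns (resources, notifications, actuation), but the abstraction-layer idea is the same: the data scientist names the layer, and tooling resolves it to concrete infrastructure.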
Affiliations: Díaz-de-Arcaya (corresponding author), Miñón, Torre-Bastida, Del Ser: TECNALIA, Basque Research & Technology Alliance (BRTA), 48160 Derio, Spain. Del Ser: also Department of Communications Engineering, Faculty of Engineering, University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain. Almeida: DeustoTech, University of Deusto, Avenida de las Universidades 24, 48007 Bilbao, Spain.
253. Liu H, Li S, Sun W. Resource Allocation for Edge Computing without Using Cloud Center in Smart Home Environment: A Pricing Approach. Sensors (Basel) 2020; 20:6545. PMID: 33207813. PMCID: PMC7698201. DOI: 10.3390/s20226545.
Abstract
Smart homes have recently become an important part of home infrastructure. However, most smart home applications are not interconnected and remain isolated. They use a cloud center as the control platform, which increases the risk of link congestion and data security breaches. Thus, smart homes based on edge computing without a cloud center are becoming an important research area. In this paper, we assume that all applications in a smart home environment are composed of edge nodes and users. To maximize the utility of users, we place all users and edge nodes in a market and formulate a pricing resource allocation model with utility maximization. We apply the Lagrangian method to analyze the model, so that an edge node (provider in the market) allocates its resources to a user (customer in the market) based on the prices of resources and a utility related to user preferences. To obtain the optimal resource allocation, we propose a pricing-based resource allocation algorithm using a low-pass filtering scheme and confirm, through numerical examples, that the proposed algorithm achieves an optimum within reasonable convergence times.
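The pricing mechanism described (Lagrangian decomposition with an iterative price update) can be sketched for users with logarithmic utilities: a user with utility w*log(x) demands x = w/p at price p, and the edge node nudges the price by the excess demand until total demand matches capacity. This is a minimal dual-iteration sketch, not the paper's exact low-pass-filtered algorithm.

```python
def allocate(weights, capacity, gamma=0.01, iters=5000):
    """Dual (pricing) iteration: user i with utility w_i * log(x_i)
    demands x_i = w_i / p; the provider raises the price while total
    demand exceeds capacity and lowers it otherwise."""
    p = 0.5  # initial price
    for _ in range(iters):
        demand = sum(w / p for w in weights)
        p = max(1e-6, p + gamma * (demand - capacity))  # price update
    return [w / p for w in weights]
```

At the fixed point the allocations are proportional to the utility weights and exactly exhaust the capacity, which is the proportional-fairness outcome of this market model.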
Corresponding author: Shiyong Li; Tel.: +86-335-805-7025.
254. Sadique KM, Rahmani R, Johannesson P. IMSC-EIoTD: Identity Management and Secure Communication for Edge IoT Devices. Sensors (Basel) 2020; 20:6546. PMID: 33207820. PMCID: PMC7696764. DOI: 10.3390/s20226546.
Abstract
The Internet of Things (IoT) will connect several billion devices to the Internet to enhance human society and improve quality of life. A huge number of sensors, actuators, gateways, servers, and related end-user applications will be connected to the Internet. All these entities require identities to communicate with each other. The communicating devices may be mobile, and currently the main identity solution is IP-based identity management, which is not suitable for the authentication and authorization of heterogeneous IoT devices. Devices and applications sometimes need to communicate in real time to make decisions within very short time frames. Most recently proposed identity management solutions are cloud-based, and such solutions are not feasible for heterogeneous IoT devices. In this paper, we propose an edge-fog based decentralized identity management and authentication solution for IoT devices (IoTD) and edge IoT gateways (EIoTG), together with a secure communication protocol between edge IoT devices and edge IoT gateways. The proposed security protocols are verified using Scyther, a popular tool for automated verification of security protocols. The proposed model is specified in the PROMELA language, and the SPIN model checker is used to confirm the specification. The results show different message flows without any error.
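The paper's protocol is formally specified and verified; as a toy illustration of the underlying idea only (gateway-local authentication with no cloud round trip), a pre-shared-key HMAC challenge-response between device and gateway looks like this. This is a generic sketch, not the proposed IMSC-EIoTD protocol.

```python
import hmac
import hashlib

def respond(psk: bytes, challenge: bytes) -> bytes:
    """Device proves knowledge of the pre-shared key for a fresh challenge."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def verify(psk: bytes, challenge: bytes, response: bytes) -> bool:
    """Gateway recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(respond(psk, challenge), response)
```

A real scheme must also handle key distribution, identity binding, and replay protection, which is exactly what tool-assisted verification (Scyther, SPIN) is used to check.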
255. Hamdan S, Ayyash M, Almajali S. Edge-Computing Architectures for Internet of Things Applications: A Survey. Sensors (Basel) 2020; 20:6441. PMID: 33187267. DOI: 10.3390/s20226441.
Abstract
The rapid growth of Internet of Things (IoT) applications and their integration into our daily tasks have led to a large number of IoT devices and enormous volumes of IoT-generated data. Because the resources of IoT devices are limited, processing and storing IoT data on these devices is inefficient. Traditional cloud-computing resources are used to partially handle some of these resource-limitation issues; however, relying on cloud centers leads to other problems, such as latency in time-critical IoT applications. Edge-cloud-computing technology has therefore recently evolved, allowing data processing and storage at the edge of the network. This paper studies edge-computing architectures for IoT (ECAs-IoT) in depth and classifies them according to factors such as data placement, orchestration services, security, and big data. Each architecture is examined in depth and compared with the others across various features. Additionally, the ECAs-IoT are mapped onto two existing IoT layered models, which helps identify the capabilities, features, and gaps of each architecture. Moreover, the paper presents the most important limitations of existing ECAs-IoT and recommends solutions to them. Furthermore, this survey details IoT applications in the edge-computing domain. Lastly, the paper recommends four different scenarios for using ECAs-IoT in IoT applications.
256. Cecilia JM, Cano JC, Morales-García J, Llanes A, Imbernón B. Evaluation of Clustering Algorithms on GPU-Based Edge Computing Platforms. Sensors (Basel) 2020; 20:6335. PMID: 33172017. PMCID: PMC7664181. DOI: 10.3390/s20216335.
Abstract
The Internet of Things (IoT) is becoming a new socioeconomic revolution in which data and immediacy are the main ingredients. IoT generates large datasets on a daily basis, but these are currently considered "dark data", i.e., data generated but never analyzed. Efficient analysis of this data is mandatory to create intelligent applications for the next generation of IoT systems that benefit society. Artificial Intelligence (AI) techniques are well suited to identifying hidden patterns and correlations in this data deluge. In particular, clustering algorithms are of the utmost importance for performing exploratory data analysis to identify sets (a.k.a. clusters) of similar objects. Clustering algorithms are computationally heavy workloads and traditionally must be executed on high-performance computing (HPC) clusters, especially for large datasets. Execution on HPC infrastructures is an energy-hungry procedure with additional issues, such as high-latency communications and privacy. Edge computing, a paradigm that enables lightweight computation at the edge of the network, has recently been proposed to address these issues. In this paper, we provide an in-depth analysis of emergent edge computing architectures that include low-power Graphics Processing Units (GPUs) to speed up these workloads. Our analysis includes performance and power consumption figures for the latest Nvidia AGX Xavier, comparing the energy-performance ratio of these low-cost platforms with a high-performance cloud-based counterpart. Three clustering algorithms (k-means, Fuzzy Minimals (FM), and Fuzzy C-Means (FCM)) are designed to be optimally executed on edge and cloud platforms, showing a speed-up factor of up to 11× for the GPU code compared to sequential counterparts on the edge platforms, and energy savings of up to 150% between the edge computing and HPC platforms.
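Of the three algorithms evaluated, k-means is the simplest; a pure-Python reference version of Lloyd's algorithm (nothing like the tuned GPU kernels benchmarked in the paper, but useful to see what the workload computes) is:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on coordinate tuples. Initialises from the first
    k points for determinism; real code would seed randomly (k-means++)."""
    centers = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        centers = [tuple(sum(coord) / len(cl) for coord in zip(*cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```

The assignment step is embarrassingly parallel over points, which is why these workloads map so well onto low-power GPUs like the AGX Xavier.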
Affiliations: Cecilia (corresponding author), Cano: Computer Engineering Department (DISCA), Universitat Politécnica de Valencia (UPV), 46022 Valencia, Spain. Morales-García, Llanes, Imbernón: Computer Science Department, Universidad Católica de Murcia (UCAM), 30107 Murcia, Spain.
257. Chukhno O, Chukhno N, Araniti G, Campolo C, Iera A, Molinaro A. Optimal Placement of Social Digital Twins in Edge IoT Networks. Sensors (Basel) 2020; 20:6181. PMID: 33143038. DOI: 10.3390/s20216181.
Abstract
In next-generation Internet of Things (IoT) deployments, every object such as a wearable device, a smartphone, a vehicle, and even a sensor or an actuator will be provided with a digital counterpart (twin) with the aim of augmenting the physical object's capabilities and acting on its behalf when interacting with third parties. Moreover, such objects are able to interact and autonomously establish social relationships according to the Social Internet of Things (SIoT) paradigm. In such a context, the goal of this work is to provide an optimal solution for the social-aware placement of IoT digital twins (DTs) at the network edge, with the twofold aim of reducing the latency (i) between physical devices and corresponding DTs for efficient data exchange, and (ii) among DTs of friend devices to speed up the service discovery and chaining procedures across the SIoT network. To this aim, we formulate the problem as a mixed-integer linear programming model taking into account limited computing resources in the edge cloud and social relationships among IoT devices.
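At toy scale, the placement problem can be solved by exhaustive search instead of a MILP solver. The sketch below is an illustration only, not the paper's formulation (which also models inter-twin social links): it assigns each twin to an edge node, minimising device-to-twin latency under per-node capacity limits.

```python
from itertools import product

def place(latency, capacity):
    """latency[d][n] = latency if device d's twin runs on node n;
    capacity[n] = max number of twins node n can host.
    Returns (best assignment, best total latency)."""
    n_nodes = len(capacity)
    best, best_cost = None, float("inf")
    for assign in product(range(n_nodes), repeat=len(latency)):
        load = [0] * n_nodes
        for n in assign:
            load[n] += 1
        if any(l > c for l, c in zip(load, capacity)):
            continue  # violates an edge node's capacity
        cost = sum(latency[d][n] for d, n in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

Enumeration is exponential in the number of twins, which is precisely why the paper casts the real problem as a MILP and hands it to an optimiser.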
258. Dinh NT, Kim Y. An Information-Centric Semantic Data Collection Tree for Wireless Sensor Networks. Sensors (Basel) 2020; 20:6168. PMID: 33138178. PMCID: PMC7663055. DOI: 10.3390/s20216168.
Abstract
Data collection is an important application of wireless sensor networks (WSNs) and the Internet of Things (IoT). Current routing and addressing operations in WSNs are based on IP addresses, while data collection and data queries are normally information-centric. The current IP-based approach incurs significant management overhead and is inefficient for semantic data collection and queries. To address this issue, this paper proposes a semantic data collection tree (sDCT) construction scheme for wireless sensor networks. The semantic tree is rooted at the edge/sink and efficiently supports data collection tasks, queries, and configurations. We implement the sDCT in Contiki and evaluate its performance against the state-of-the-art schemes 6LoWPAN/RPL and L2RMR, using TelosB sensors under various scenarios. The results show that the sDCT achieves a significant improvement in energy efficiency and in the number of packet transmissions required for a data collection or query task compared to 6LoWPAN/RPL and L2RMR.
259. AlMajed H, AlMogren A. A Secure and Efficient ECC-Based Scheme for Edge Computing and Internet of Things. Sensors (Basel) 2020; 20:6158. PMID: 33138018. PMCID: PMC7662995. DOI: 10.3390/s20216158.
Abstract
Recent growth in the Internet of Things (IoT) has raised security concerns over the confidentiality of data exchanged between IoT devices and the edge. Many IoT systems adopt asymmetric cryptography to secure their data and communications, at the cost of sizeable computation and space requirements. Elliptic curve cryptography (ECC) is widely used for asymmetric cryptography in constrained environments due to its ability to provide a powerful encryption mechanism with small key sizes. ECC increases device performance and lowers power consumption, making it suitable for diverse applications ranging from IoT to wireless sensor network (WSN) devices. To ensure the confidentiality and security of data and communications, ECC must be implemented robustly; a special area of focus in this regard is the mapping phase. This study's objective was to propose a tested and trusted scheme that offers authenticated encryption (AE) by enhancing the mapping of plain text to an elliptic curve point, so as to resist encryption attacks such as the Chosen Plaintext Attack (CPA) and Chosen Ciphertext Attack (CCA). The proposed scheme is also evaluated and analyzed against security requirements for specific encryption attributes. Finally, the proposed scheme is compared with other schemes in terms of security characteristics and performance measurements. Our scheme is efficient enough to make it well suited to the IoT, and in particular to the Industrial IoT and new urbanization scenarios where service demands are huge.
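The mapping phase this scheme hardens embeds a message as a point on the curve. The classic Koblitz construction that such schemes build on can be sketched as follows; the curve parameters here are toy values chosen for illustration, far too small for real security, and this is not the paper's proposed scheme.

```python
# y^2 = x^3 + A*x + B over F_P, with P = 3 (mod 4) so that square roots
# cost a single modular exponentiation. Toy parameters, not secure.
P, A, B, K = 10007, 1, 6, 100

def map_to_point(m):
    """Koblitz embedding: scan x = m*K + j until x^3 + A*x + B is a square."""
    for j in range(K):
        x = m * K + j
        v = (x ** 3 + A * x + B) % P
        if pow(v, (P - 1) // 2, P) <= 1:       # Euler's criterion: 0 or QR
            return x, pow(v, (P + 1) // 4, P)  # sqrt valid since P = 3 mod 4
    raise ValueError("no curve point found for message")

def point_to_msg(point):
    """Recover the message by dropping the padding offset j."""
    return point[0] // K
```

Each candidate x is a square with probability about 1/2, so K = 100 attempts fail only with negligible probability; the paper's contribution is strengthening this phase so the overall encryption resists CPA/CCA.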
260. Fang J, Hu J, Wei J, Liu T, Wang B. An Efficient Resource Allocation Strategy for Edge-Computing Based Environmental Monitoring System. Sensors (Basel) 2020; 20:6125. PMID: 33126457. PMCID: PMC7662575. DOI: 10.3390/s20216125.
Abstract
Cloud computing and microsensor technology have greatly changed environmental monitoring, but it is difficult for cloud-based monitoring systems to meet the computation demands of smaller monitoring granularity and an increasing number of monitoring applications. As a novel computing paradigm, edge computing deals with this problem by deploying resources on the edge network. However, the particularities of environmental monitoring applications have been ignored by most previous studies. In this paper, we propose a resource allocation algorithm and a task scheduling strategy that reduce the average completion latency of environmental monitoring applications, taking into account the characteristics of environmental monitoring systems and the dependencies among tasks. Simulation results show that, compared with traditional algorithms and when emergency tasks are considered, the proposed methods decrease the average completion latency by 21.6% in the best scenario.
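The dependency-aware scheduling idea can be illustrated with a simple list scheduler that places each ready task on the earliest-free edge server. This is a generic sketch only; the paper's strategy additionally prioritises emergency tasks.

```python
def schedule(durations, deps, n_servers):
    """Greedy list scheduling: a task runs once its dependencies finish,
    on the server that frees up earliest. Returns task finish times."""
    free = [0.0] * n_servers          # when each server next becomes idle
    finish = {}                       # task id -> completion time
    done = set()
    while len(done) < len(durations):
        for t in range(len(durations)):
            if t in done or any(d not in done for d in deps.get(t, [])):
                continue              # not ready yet
            s = min(range(n_servers), key=lambda i: free[i])
            ready = max((finish[d] for d in deps.get(t, [])), default=0.0)
            finish[t] = max(free[s], ready) + durations[t]
            free[s] = finish[t]
            done.add(t)
    return finish
```

With two servers and a task that depends on two others, the dependent task starts only after the slower predecessor finishes, even if a server is already idle, which is exactly the completion-latency effect the paper's strategy optimises.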
Affiliations: Fang (corresponding author; Tel.: +86-139-1129-6256), Hu, Wei: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China. Liu: Beijing Computing Center, Beijing 100094, China. Wang: National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100096, China.
261. Janbi N, Katib I, Albeshri A, Mehmood R. Distributed Artificial Intelligence-as-a-Service (DAIaaS) for Smarter IoE and 6G Environments. Sensors (Basel) 2020; 20:5796. PMID: 33066295. PMCID: PMC7602081. DOI: 10.3390/s20205796.
Abstract
Artificial intelligence (AI) has taken us by storm, helping us to make decisions in everything we do, even in finding our "true love" and the "significant other". While 5G promises us high-speed mobile internet, 6G pledges to support ubiquitous AI services through next-generation softwarization, heterogeneity, and configurability of networks. The work on 6G is in its infancy and requires the community to conceptualize and develop its design, implementation, deployment, and use cases. Towards this end, this paper proposes a framework for Distributed AI as a Service (DAIaaS) provisioning for Internet of Everything (IoE) and 6G environments. The AI service is "distributed" because the actual training and inference computations are divided into smaller, concurrent, computations suited to the level and capacity of resources available with cloud, fog, and edge layers. Multiple DAIaaS provisioning configurations for distributed training and inference are proposed to investigate the design choices and performance bottlenecks of DAIaaS. Specifically, we have developed three case studies (e.g., smart airport) with eight scenarios (e.g., federated learning) comprising nine applications and AI delivery models (smart surveillance, etc.) and 50 distinct sensor and software modules (e.g., object tracker). The evaluation of the case studies and the DAIaaS framework is reported in terms of end-to-end delay, network usage, energy consumption, and financial savings with recommendations to achieve higher performance. DAIaaS will facilitate standardization of distributed AI provisioning, allow developers to focus on the domain-specific details without worrying about distributed training and inference, and help systemize the mass-production of technologies for smarter environments.
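One of the distributed-training configurations evaluated is federated learning, whose aggregation step reduces to averaging per-client model parameters weighted by how much data each client holds. This is a generic FedAvg sketch, not code from the paper.

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter vectors,
    weighting each client by the number of samples it trained on."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

In a DAIaaS-style deployment, edge devices would compute local updates, a fog or cloud layer would run this aggregation, and only parameters (not raw sensor data) would cross the network.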
Affiliations: Janbi, Katib, Albeshri: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia. Mehmood (corresponding author): High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
262. Monti L, Vincenzi M, Mirri S, Pau G, Salomoni P. RaveGuard: A Noise Monitoring Platform Using Low-End Microphones and Machine Learning. Sensors (Basel) 2020; 20:5583. PMID: 33003482. PMCID: PMC7582659. DOI: 10.3390/s20195583.
Abstract
Urban noise is one of the most serious and underestimated environmental problems. According to the World Health Organization, noise pollution from traffic and other human activities negatively impacts population health and quality of life. Monitoring noise usually requires the use of professional and expensive instruments, called phonometers, able to accurately measure sound pressure levels. In many cases, phonometers are human-operated; therefore, periodic fine-granularity city-wide measurements are expensive. Recent advances in the Internet of Things (IoT) offer a window of opportunity for low-cost autonomous sound pressure meters. Such devices and platforms could enable fine time-space noise measurements throughout a city. Unfortunately, low-cost sound pressure sensors are inaccurate compared with phonometers, exhibiting high variability in their measurements. In this paper, we present RaveGuard, an unmanned noise monitoring platform that exploits artificial intelligence strategies to improve the accuracy of low-cost devices. RaveGuard was initially deployed together with a professional phonometer for over two months in downtown Bologna, Italy, with the aim of collecting a large amount of precise noise pollution samples. The resulting datasets have been instrumental in designing InspectNoise, a library that can be exploited by IoT platforms to obtain similar precision without the need for expensive phonometers. In particular, we have applied supervised learning algorithms (trained with our datasets) to reduce the accuracy gap between the professional phonometer and an IoT platform equipped with low-end devices and sensors. Results show that RaveGuard, combined with the InspectNoise library, achieves a 2.24% relative error compared to professional instruments, thus enabling low-cost unmanned city-wide noise monitoring.
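In its simplest form, the calibration problem RaveGuard tackles with supervised learning is fitting a map from low-cost sensor readings to co-located reference phonometer values. The sketch below uses ordinary least squares on a single feature; the paper's models are richer, so this is only the shape of the idea.

```python
def fit_linear(readings, reference):
    """Least-squares fit: reference ~= slope * reading + intercept."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(reference) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(readings, reference))
             / sum((x - mx) ** 2 for x in readings))
    return slope, my - slope * mx

def calibrate(reading, slope, intercept):
    """Map a raw low-cost sensor reading to an estimated reference level."""
    return slope * reading + intercept
```

The two-month co-deployment with a phonometer plays the role of `reference` here: once the mapping is learned, the expensive instrument is no longer needed in the field.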
Affiliations: Monti, Mirri (corresponding author), Pau, Salomoni: Department of Computer Science and Engineering, University of Bologna, Mura Anteo Zamboni 7, 40126 Bologna, Italy. Vincenzi: Department of Informatics, Systems and Communication, University of Milan-Bicocca, 20125 Milan, Italy. Pau: also Computer Science Department, University of California, Los Angeles (UCLA), Los Angeles, CA 90095-1596, USA, and School of Applied Sciences, Macao Polytechnic Institute, Macao, China.
263. Grigorescu S, Cocias T, Trasnea B, Margheri A, Lombardi F, Aniello L. Cloud2Edge Elastic AI Framework for Prototyping and Deployment of AI Inference Engines in Autonomous Vehicles. Sensors (Basel) 2020; 20:5450. PMID: 32977409. DOI: 10.3390/s20195450.
Abstract
Self-driving cars and autonomous vehicles are revolutionizing the automotive sector, shaping the future of mobility altogether. Although the integration of novel technologies such as Artificial Intelligence (AI) and Cloud/Edge computing provides golden opportunities to improve autonomous driving applications, the whole prototyping and deployment cycle of AI components needs to be modernized accordingly. This paper proposes a novel framework for developing so-called AI Inference Engines for autonomous driving applications based on deep learning modules, where training tasks are deployed elastically over both Cloud and Edge resources, with the purpose of reducing the required network bandwidth as well as mitigating privacy issues. Based on our proposed data-driven V-Model, we introduce a simple yet elegant solution for the AI components' development cycle, where prototyping takes place in the cloud according to the Software-in-the-Loop (SiL) paradigm, while deployment and evaluation on the target ECUs (Electronic Control Units) is performed as Hardware-in-the-Loop (HiL) testing. The effectiveness of the proposed framework is demonstrated using two real-world use cases of AI inference engines for autonomous vehicles, namely environment perception and most-probable-path prediction.
264. Ullah S, Kim DH. Lightweight Driver Behavior Identification Model with Sparse Learning on In-Vehicle CAN-BUS Sensor Data. Sensors (Basel) 2020; 20:5030. PMID: 32899751. PMCID: PMC7570946. DOI: 10.3390/s20185030.
Abstract
This study focuses on driver-behavior identification and its application to finding embedded solutions in a connected car environment. We present a lightweight, end-to-end deep-learning framework for performing driver-behavior identification using in-vehicle controller area network (CAN-BUS) sensor data. The proposed method outperforms the state-of-the-art driver-behavior profiling models. Particularly, it exhibits significantly reduced computations (i.e., reduced numbers both of floating-point operations and parameters), more efficient memory usage (compact model size), and less inference time. The proposed architecture features depth-wise convolution, along with augmented recurrent neural networks (long short-term memory or gated recurrent unit), for time-series classification. The minimum time-step length (window size) required in the proposed method is significantly lower than that required by recent algorithms. We compared our results with compressed versions of existing models by applying efficient channel pruning on several layers of current models. Furthermore, our network can adapt to new classes using sparse-learning techniques, that is, by freezing relatively strong nodes at the fully connected layer for the existing classes and improving the weaker nodes by retraining them using data regarding the new classes. We successfully deploy the proposed method in a container environment using NVIDIA Docker in an embedded system (Xavier, TX2, and Nano) and comprehensively evaluate it with regard to numerous performance metrics.
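The sparse-learning adaptation described above, freezing the relatively strong nodes of the fully connected layer and retraining the weaker ones on data for new classes, can be sketched as follows. This is an illustrative sketch, not the authors' code: the node-strength metric (mean absolute incoming weight) and the 50% freeze ratio are assumptions made for illustration.

```python
def split_frozen_trainable(node_weights, freeze_ratio=0.5):
    """Partition fully-connected-layer nodes into frozen (strong) and
    trainable (weak) sets for adapting a trained network to new classes.

    node_weights: dict mapping node id -> list of incoming weights.
    Returns (frozen_ids, trainable_ids); the strongest nodes are frozen.
    """
    # Strength of a node = mean absolute incoming weight (an assumption).
    strength = {n: sum(abs(w) for w in ws) / len(ws)
                for n, ws in node_weights.items()}
    ranked = sorted(strength, key=strength.get, reverse=True)
    n_freeze = int(len(ranked) * freeze_ratio)
    return set(ranked[:n_freeze]), set(ranked[n_freeze:])
```

The trainable set would then be the only part of the layer updated when retraining on the new-class data.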
|
265
|
Abu Sufian, Anirudha Ghosh, Ali Safaa Sadiq, Florentin Smarandache. A Survey on Deep Transfer Learning to Edge Computing for Mitigating the COVID-19 Pandemic. Journal of Systems Architecture 2020; 108. [DOI: 10.1016/j.sysarc.2020.101830] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Indexed: 05/02/2023]
Abstract
Highlights: presents a systematic study of Deep Learning (DL), Deep Transfer Learning (DTL), and Edge Computing (EC) to mitigate COVID-19; surveys existing DL, DTL, and EC techniques and datasets for mitigating pandemics, with their potential and challenges; draws a precedent pipeline model of DTL over EC as a future scope to mitigate any outbreak; gives brief analyses and challenges, wherever relevant, from the perspective of COVID-19.
Global health sometimes faces pandemics, as it currently faces the COVID-19 disease. The spreading and infection factors of this disease are very high: a huge number of people in most countries were infected within six months of its first reported appearance, and it keeps spreading. The required systems are never fully ready for a pandemic, so mitigation with existing capacity becomes necessary. On the other hand, the modern era largely depends on Artificial Intelligence (AI), including Data Science, and Deep Learning (DL) is one of the current flag-bearers of these techniques. It could be used to mitigate COVID-19-like pandemics in terms of stopping the spread, diagnosing the disease, discovering drugs and vaccines, treatment, patient care, and much more. But DL requires large datasets as well as powerful computing resources, and a shortage of reliable datasets for a running pandemic is a common phenomenon. Deep Transfer Learning (DTL) would therefore be effective, as it learns from one task and can work on another. In addition, Edge Devices (ED) such as IoT devices, webcams, drones, intelligent medical equipment, and robots are very useful in a pandemic situation: they make infrastructures sophisticated and automated, which helps to cope with an outbreak. But they are equipped with low computing resources, so applying DL on them is also challenging; therefore, DTL would be effective there as well. This article studies the potential and challenges of these issues, describes the relevant technical background, reviews the related recent state of the art, and draws a pipeline of DTL over Edge Computing as a future scope to assist the mitigation of any pandemic.
|
266
|
Fernandez IG, Ahmad SA, Wada C. Inertial Sensor-Based Instrumented Cane for Real-Time Walking Cane Kinematics Estimation. Sensors (Basel) 2020; 20:E4675. [PMID: 32825029 DOI: 10.3390/s20174675] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 07/28/2020] [Revised: 08/14/2020] [Accepted: 08/15/2020] [Indexed: 11/22/2022]
Abstract
Falls are among the main causes of injuries in elderly individuals. Balance and mobility impairment are major indicators of fall risk in this group. The objective of this research was to develop a fall risk feedback system that operates in real time using an inertial sensor-based instrumented cane. Based on inertial sensor data, the proposed system estimates the kinematics (contact phase and orientation) of the cane. First, the contact phase of the cane was estimated by a convolutional neural network. Next, various algorithms for the cane orientation estimation were compared and validated using an optical motion capture system. The proposed cane contact phase prediction model achieved higher accuracy than the previous models. In the cane orientation estimation, the Madgwick filter yielded the best results overall. Finally, the proposed system was able to estimate both the contact phase and orientation in real time in a single-board computer.
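The orientation-estimation step can be illustrated with a complementary filter, a simpler stand-in for the Madgwick filter the paper found best: the integrated gyroscope is trusted over short horizons, while the accelerometer's gravity-derived pitch corrects long-term drift. The sampling period, blend factor, and axis conventions below are assumptions for illustration, not the paper's parameters.

```python
import math

def complementary_tilt(acc_samples, gyro_samples, dt=0.01, alpha=0.98):
    """Fuse gyroscope and accelerometer data into a pitch-angle estimate.

    acc_samples: list of (ax, ay, az) accelerations in g.
    gyro_samples: list of pitch angular rates in rad/s.
    Returns the pitch estimate (rad) after processing all samples.
    """
    pitch = 0.0
    for (ax, ay, az), gy in zip(acc_samples, gyro_samples):
        # Pitch implied by the gravity vector alone (noisy but drift-free).
        acc_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        # Blend: trust the integrated gyro short-term, the accelerometer
        # long-term, so gyro drift is continuously corrected.
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * acc_pitch
    return pitch
```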
|
267
|
Nguyen TT, Yeom YJ, Kim T, Park DH, Kim S. Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration. Sensors (Basel) 2020; 20:s20164621. [PMID: 32824508 PMCID: PMC7471989 DOI: 10.3390/s20164621] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Received: 07/21/2020] [Revised: 08/14/2020] [Accepted: 08/14/2020] [Indexed: 11/16/2022]
Abstract
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA's performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
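The core decision of the HPA control loop follows the formula documented by Kubernetes: desiredReplicas = ceil(currentReplicas x currentMetricValue / targetMetricValue), with no action taken while the current/target ratio is inside a tolerance band (0.1 by default). A minimal sketch of that rule (the clamping bounds stand in for the HPA's minReplicas/maxReplicas settings):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10, tolerance=0.1):
    """Replica count chosen by one iteration of the HPA control loop.

    Implements desiredReplicas = ceil(currentReplicas * ratio), where
    ratio = currentMetric / targetMetric; scaling is skipped while the
    ratio is within the tolerance band, and the result is clamped.
    """
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling event
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 2 pods at 200 m CPU against a 100 m target scale to 4 pods, while a 5% overshoot triggers nothing.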
Affiliation(s)
- Thanh-Tung Nguyen
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea; (T.-T.N.); (Y.-J.Y.)
| | - Yu-Jin Yeom
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea; (T.-T.N.); (Y.-J.Y.)
| | - Taehong Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea; (T.-T.N.); (Y.-J.Y.)
- Correspondence: (T.K.); (D.-H.P.)
| | - Dae-Heon Park
- Electronics and Telecommunications Research Institute, Daejeon 34129, Korea;
- Correspondence: (T.K.); (D.-H.P.)
| | - Sehan Kim
- Electronics and Telecommunications Research Institute, Daejeon 34129, Korea;
| |
|
268
|
Xiao Y, Liu Y, Li T. Edge Computing and Blockchain for Quick Fake News Detection in IoV. Sensors (Basel) 2020; 20:s20164360. [PMID: 32764327 PMCID: PMC7472075 DOI: 10.3390/s20164360] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 06/30/2020] [Revised: 07/26/2020] [Accepted: 07/31/2020] [Indexed: 11/25/2022]
Abstract
The dissemination of false messages in the Internet of Vehicles (IoV) has a negative impact on road safety and traffic efficiency. Therefore, it is critical to quickly detect fake news in the IoV, taking news timeliness into account. In this paper we propose a network computing framework, Quick Fake News Detection (QcFND), which exploits technologies from Software-Defined Networking (SDN), edge computing, blockchain, and Bayesian networks. QcFND consists of two tiers: the edge and the vehicles. The edge is composed of Software-Defined Road Side Units (SDRSUs), which are extended from traditional Road Side Units (RSUs) and host virtual machines such as SDN controllers and blockchain servers. The SDN controllers help to implement load balancing in the IoV. The blockchain servers accommodate the reports submitted by vehicles and calculate the probability of the presence of a traffic event, providing time-sensitive services to the passing vehicles. Specifically, we exploit a Bayesian network to infer whether to trust the received traffic reports. We test the performance of QcFND with three platforms, i.e., Veins, Hyperledger Fabric, and Netica. Extensive simulations and experiments show that QcFND achieves good performance compared with other solutions.
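The report-trust inference can be illustrated with a plain Bayesian update: if vehicle reports are treated as conditionally independent witnesses with known reliability, the posterior probability of the traffic event follows directly from Bayes' rule. The symmetric-reliability assumption below is an illustrative simplification, not QcFND's exact Bayesian network.

```python
def event_posterior(prior, reliabilities, reports):
    """Posterior probability that a traffic event is real, given vehicle
    reports, assuming conditionally independent witnesses.

    reliabilities[i] = P(vehicle i reports "event" | event occurred)
                     = P(vehicle i reports "no event" | no event)
      (symmetric reliability: an illustrative assumption).
    reports[i] is True if vehicle i reported the event.
    """
    like_event, like_no_event = 1.0, 1.0
    for r, said_yes in zip(reliabilities, reports):
        like_event *= r if said_yes else (1.0 - r)
        like_no_event *= (1.0 - r) if said_yes else r
    num = prior * like_event
    return num / (num + (1.0 - prior) * like_no_event)
```

With a 0.5 prior, a single 90%-reliable positive report yields a 0.9 posterior, and additional concurring reports push the probability higher.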
|
269
|
Wu Q, Wu J, Shen J, Yong B, Zhou Q. An Edge Based Multi-Agent Auto Communication Method for Traffic Light Control. Sensors (Basel) 2020; 20:s20154291. [PMID: 32752055 PMCID: PMC7436084 DOI: 10.3390/s20154291] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 06/30/2020] [Revised: 07/24/2020] [Accepted: 07/29/2020] [Indexed: 11/16/2022]
Abstract
With smart city infrastructures growing, the Internet of Things (IoT) has been widely used in intelligent transportation systems (ITS). Traditional adaptive traffic signal control based on reinforcement learning (RL) has expanded from one intersection to multiple intersections. In this paper, we propose a multi-agent auto communication (MAAC) algorithm, an innovative adaptive global traffic light control method based on multi-agent reinforcement learning (MARL) and an auto communication protocol in an edge computing architecture. The MAAC algorithm combines a multi-agent auto communication protocol with MARL, allowing an agent to communicate its learned strategies with others to achieve global optimization in traffic signal control. In addition, we present a practicable edge computing architecture for industrial deployment on the IoT, considering the limits of network transmission bandwidth. We demonstrate that our algorithm outperforms other methods by over 17% in experiments in a real traffic simulation environment.
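The RL building block that MAAC extends can be sketched as a tabular Q-learning agent for a single intersection. The auto communication protocol and multi-agent coordination of the paper are not reproduced here, and the action names, reward scheme, and hyperparameters are assumptions for illustration.

```python
import random

class SignalAgent:
    """Minimal tabular Q-learning agent controlling one traffic light."""

    def __init__(self, actions=("NS_green", "EW_green"),
                 alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.q = {}                      # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def choose(self, state):
        """Epsilon-greedy action selection over the current Q-table."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning temporal-difference update."""
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In a multi-agent setting, one such agent per intersection would additionally exchange learned strategies with its neighbours, which is the part MAAC's communication protocol addresses.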
Affiliation(s)
- Qiang Wu
- School of Information & Engineering, Lanzhou University, Lanzhou 730000, China; (Q.W.); (B.Y.)
| | - Jianqing Wu
- School of Computing and Information Technology, University of Wollongong, Wollongong 2522, Australia; (J.W.); (J.S.)
| | - Jun Shen
- School of Computing and Information Technology, University of Wollongong, Wollongong 2522, Australia; (J.W.); (J.S.)
| | - Binbin Yong
- School of Information & Engineering, Lanzhou University, Lanzhou 730000, China; (Q.W.); (B.Y.)
| | - Qingguo Zhou
- School of Information & Engineering, Lanzhou University, Lanzhou 730000, China; (Q.W.); (B.Y.)
- Correspondence:
| |
|
270
|
Nawaz A, Peña Queralta J, Guan J, Awais M, Gia TN, Bashir AK, Kan H, Westerlund T. Edge Computing to Secure IoT Data Ownership and Trade with the Ethereum Blockchain. Sensors (Basel) 2020; 20:s20143965. [PMID: 32708807 PMCID: PMC7412471 DOI: 10.3390/s20143965] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 05/30/2020] [Revised: 07/08/2020] [Accepted: 07/08/2020] [Indexed: 11/16/2022]
Abstract
With the increasing penetration of ubiquitous connectivity, the amount of data describing the actions of end users has been growing dramatically, both within the domain of the Internet of Things (IoT) and among other smart devices. This has led to greater user awareness of the protection of personal data. Within the IoT, there is a growing number of peer-to-peer (P2P) transactions, increasing the exposure to security vulnerabilities and the risk of cyberattacks. Blockchain technology has been explored as middleware in P2P transactions, but existing solutions have mainly focused on providing a safe environment for data trade without considering potential changes in interaction topologies. We present EdgeBoT, a proof-of-concept smart-contract-based platform for the IoT built on top of the Ethereum blockchain. With the Blockchain of Things (BoT) at the edge of the network, EdgeBoT enables a wider variety of interaction topologies between nodes in the network and external services while guaranteeing ownership of data and end users' privacy. In EdgeBoT, edge devices trade their data directly with third parties and without the need for intermediaries. This opens the door to new interaction modalities, in which data producers at the edge grant access to batches of their data to different third parties. Leveraging the immutability properties of blockchains, together with the distributed nature of smart contracts, data owners can audit and are aware of all transactions that have occurred with their data. We report initial results demonstrating the potential of EdgeBoT within the IoT. We show that integrating our solutions on top of existing IoT systems has a relatively small footprint in terms of computational resource usage, but a significant impact on the protection of data ownership and the management of data trade.
Affiliation(s)
- Anum Nawaz
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China; (A.N.); (J.G.)
- Turku Intelligent Embedded and Robotic Systems Group (TIERS), Faculty of Science and Engineering, University of Turku, FI-20014 Turku, Finland; (J.P.Q.); (T.N.G.); (T.W.)
- School of Information Science and Engineering, Fudan University, Shanghai 200433, China;
| | - Jorge Peña Queralta
- Turku Intelligent Embedded and Robotic Systems Group (TIERS), Faculty of Science and Engineering, University of Turku, FI-20014 Turku, Finland; (J.P.Q.); (T.N.G.); (T.W.)
| | - Jixin Guan
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China; (A.N.); (J.G.)
| | - Muhammad Awais
- School of Information Science and Engineering, Fudan University, Shanghai 200433, China;
| | - Tuan Nguyen Gia
- Turku Intelligent Embedded and Robotic Systems Group (TIERS), Faculty of Science and Engineering, University of Turku, FI-20014 Turku, Finland; (J.P.Q.); (T.N.G.); (T.W.)
| | - Ali Kashif Bashir
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M15 6BH, UK;
| | - Haibin Kan
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China; (A.N.); (J.G.)
- Fudan-Zhongan Joint Laboratory of Blockchain and Information Security, Shanghai Engineering Research Center of Blockchain, Shanghai 200433, China
- Correspondence:
| | - Tomi Westerlund
- Turku Intelligent Embedded and Robotic Systems Group (TIERS), Faculty of Science and Engineering, University of Turku, FI-20014 Turku, Finland; (J.P.Q.); (T.N.G.); (T.W.)
| |
|
271
|
Díaz JJV, Pozo RF, González ABR, Wilby MR, Ávila CS. Hierarchical Agglomerative Clustering of Bicycle Sharing Stations Based on Ultra-Light Edge Computing. Sensors (Basel) 2020; 20:E3550. [PMID: 32585917 DOI: 10.3390/s20123550] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 06/01/2020] [Revised: 06/17/2020] [Accepted: 06/19/2020] [Indexed: 11/17/2022]
Abstract
Bicycle sharing systems (BSSs) have established a new shared-economy mobility model. After a period of rapid growth they are evolving into a fully functional mobile sensor platform for cities. The viability of BSSs is limited by their operational costs, mainly due to rebalancing operations. Rebalancing implies transporting bicycles to and from docking stations in order to guarantee the service, and relies on clustering to group docking stations by behaviour and proximity. In this paper we propose Hierarchical Agglomerative Clustering based on an Ultra-Light Edge Computing Algorithm (HAC-ULECA). We eliminate proximity and let Hierarchical Agglomerative Clustering (HAC) focus on behaviour. Behaviour is represented by ULECA as an activity profile based on the net flow of arrivals and departures at a docking station. This drastically reduces the computing requirements, which allows ULECA to run as an edge computing functionality embedded into the physical layer of the Internet of Shared Bikes (IoSB) architecture. We have applied HAC-ULECA to real data from BiciMAD, the public BSS in Madrid (Spain). Our results, presented as dendrograms, graphs, geographical maps, and colour maps, show that HAC-ULECA is capable of separating behaviour profiles related to business and residential areas and of extracting meaningful spatio-temporal information about the BSS and the city's mobility.
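The behaviour-only clustering can be sketched as plain average-linkage agglomerative clustering over station activity profiles (one net-flow value per time slot); station proximity is deliberately ignored, as in HAC-ULECA. The Euclidean distance and average-linkage choices below are assumptions for illustration, not necessarily the paper's configuration.

```python
def hac(profiles, n_clusters):
    """Bottom-up agglomerative clustering of station activity profiles.

    profiles: dict mapping station id -> net-flow vector (one value per
    time slot). Repeatedly merges the closest pair of clusters under
    average linkage until n_clusters remain.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [[sid] for sid in profiles]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average linkage: mean pairwise distance between members.
                d = sum(dist(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Cutting the resulting hierarchy at different depths produces the dendrogram levels the paper inspects.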
|
272
|
Lee D, Moon H, Oh S, Park D. mIoT: Metamorphic IoT Platform for On-Demand Hardware Replacement in Large-Scaled IoT Applications. Sensors (Basel) 2020; 20:E3337. [PMID: 32545495 DOI: 10.3390/s20123337] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Received: 05/19/2020] [Revised: 06/09/2020] [Accepted: 06/10/2020] [Indexed: 11/17/2022]
Abstract
As the Internet of Things (IoT) becomes more pervasive in our daily lives, the number of devices that connect to IoT edges, and the amount of data generated at the edges, are rapidly increasing. On account of the bottlenecks in servers due to the increase in data, as well as security and privacy issues, the IoT paradigm has shifted from cloud computing to edge computing. Pursuant to this trend, embedded devices require complex computation capabilities. However, due to various constraints, edge devices cannot be equipped with enough hardware to process all the data, and the flexibility of operation is reduced by the limitations of fixed hardware functions relative to cloud computing. Recently, as application fields and collected data types diversify, and in particular as applications requiring complex computation, such as artificial intelligence (AI) and signal processing, are applied at the edge, flexible processing and computation capabilities based on hardware acceleration are required. In this paper, to meet these needs, we propose a new IoT platform, called the metamorphic IoT (mIoT) platform, which can provide various hardware accelerations with limited hardware platform resources through on-demand transmission and reconfiguration of the required hardware at the edges, instead of transferring sensing data to a server. The proposed platform reconfigures the edge's hardware with minimal overhead based on a probabilistic value known as callability. The mIoT consists of reconfigurable edge devices based on the RISC-V architecture and a server that manages the reconfiguration of edge devices based on callability. Through various experimental results, we confirmed that the callability-based mIoT platform can provide the hardware required by an edge device in real time. In addition, by performing various functions with small hardware, power consumption, which is a major constraint of the IoT, can be reduced.
|
273
|
Sakr F, Bellotti F, Berta R, De Gloria A. Machine Learning on Mainstream Microcontrollers. Sensors (Basel) 2020; 20:s20092638. [PMID: 32380766 PMCID: PMC7249132 DOI: 10.3390/s20092638] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Received: 03/27/2020] [Revised: 04/24/2020] [Accepted: 04/29/2020] [Indexed: 11/16/2022]
Abstract
This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices, which manages the training phase on a desktop computer and performs inferences on microcontrollers. The framework implements, in platform-independent C, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (k-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classification and two regression). Our analysis, which aims to plug a gap in the literature, shows that the target platforms allow us to achieve the same performance score as a desktop machine, with similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an NN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires the whole training set to be kept available in the inference phase, posing a significant memory demand that can be afforded only by high-end edge devices. DT performance has a larger variance across datasets. In general, several factors impact performance in different ways across datasets. This highlights the importance of a framework like ELM, which is able to train and compare different algorithms. To support the developer community, ELM is released on an open-source basis.
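The memory trade-off noted for k-NN is easy to see in a sketch: classification ranks the query against every stored training sample, so the whole training set must remain resident at inference time. A minimal Python version of the idea follows (ELM itself implements this in platform-independent C; the Euclidean metric and majority vote are the standard textbook formulation, not ELM's exact code).

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    samples (squared Euclidean distance).

    train: list of (feature_vector, label) pairs; note the entire list
    must be kept in memory, which is the microcontroller bottleneck.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    neighbours = sorted(train, key=lambda s: sq_dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```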
|
274
|
Zhai Z, Xiang K, Zhao L, Cheng B, Qian J, Wu J. IoT-RECSM-Resource-Constrained Smart Service Migration Framework for IoT Edge Computing Environment. Sensors (Basel) 2020; 20:s20082294. [PMID: 32316465 PMCID: PMC7219074 DOI: 10.3390/s20082294] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 02/20/2020] [Revised: 04/11/2020] [Accepted: 04/14/2020] [Indexed: 11/17/2022]
Abstract
The edge-based computing paradigm (ECP) has become one of the most innovative modes of processing distributed Internet of Things (IoT) sensor data. However, the edge nodes in the ECP are usually resource-constrained. When more services are executed on an edge node, the resources required by these services may exceed those of the edge node, so that it fails to maintain normal running. In order to solve this problem, this paper proposes a resource-constrained smart service migration framework for the edge computing environment in the IoT (IoT-RECSM) and a dynamic edge service migration algorithm. Based on this algorithm, the framework can dynamically migrate services from resource-critical edge nodes to resource-rich nodes. In the framework, four abstract models are presented to quantitatively evaluate the resource usage of edge nodes and the resource consumption of edge services in real time. Finally, an edge smart service migration prototype system is implemented to simulate edge service migration in an IoT environment. Based on the system, an IoT case including 10 edge nodes is simulated to evaluate the proposed approach. According to the experimental results, service migration among edge nodes not only maintains the stability of service execution on edge nodes, but also reduces the sensor data traffic between edge nodes and the cloud center.
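A migration decision of this kind can be sketched greedily: while any node's load exceeds a threshold fraction of its capacity, move its heaviest service to the node with the most free capacity. The policy, the 80% threshold, and the data layout below are illustrative assumptions, not the paper's exact algorithm or its four abstract models.

```python
def plan_migrations(nodes, services, threshold=0.8):
    """Greedy sketch of service migration away from overloaded nodes.

    nodes: dict node id -> capacity.
    services: dict service id -> (host node, resource demand).
    Returns (moves, placement): a list of (service, src, dst) migrations
    and the final service-to-node placement.
    """
    placement = {s: h for s, (h, _) in services.items()}
    demand = {s: d for s, (_, d) in services.items()}

    def load(n):
        return sum(demand[s] for s, h in placement.items() if h == n)

    moves = []
    changed = True
    while changed:
        changed = False
        for n, cap in nodes.items():
            if load(n) > threshold * cap:  # node is resource-critical
                victim = max((s for s, h in placement.items() if h == n),
                             key=lambda s: demand[s])
                dst = max(nodes, key=lambda m: nodes[m] - load(m))
                if dst != n and load(dst) + demand[victim] <= threshold * nodes[dst]:
                    placement[victim] = dst
                    moves.append((victim, n, dst))
                    changed = True
    return moves, placement
```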
Affiliation(s)
- Zhongyi Zhai
- Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, GuiLin 541004, China; (Z.Z.); (K.X.); (J.Q.)
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China;
| | - Ke Xiang
- Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, GuiLin 541004, China; (Z.Z.); (K.X.); (J.Q.)
| | - Lingzhong Zhao
- Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, GuiLin 541004, China; (Z.Z.); (K.X.); (J.Q.)
- Correspondence:
| | - Bo Cheng
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China;
| | - Junyan Qian
- Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, GuiLin 541004, China; (Z.Z.); (K.X.); (J.Q.)
| | - Jinsong Wu
- Department of Electrical Engineering, Universidad de Chile, Santiago 1025000, Chile;
| |
|
275
|
Wang E, Qu Z, Liang X, Meng X, Yang Y, Li D, Meng W. Storage Management Strategy in Mobile Phones for Photo Crowdsensing. Sensors (Basel) 2020; 20:E2199. [PMID: 32295027 DOI: 10.3390/s20082199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/13/2020] [Revised: 03/10/2020] [Accepted: 03/14/2020] [Indexed: 11/16/2022]
Abstract
In mobile crowdsensing, users jointly finish a sensing task through the sensors equipped in their intelligent terminals. In particular, photo crowdsensing based on Mobile Edge Computing (MEC) collects pictures of specific targets or events and uploads them to nearby edge servers, which leads to richer data content and more efficient data storage compared with common mobile crowdsensing; hence, it has attracted a significant amount of attention recently. However, mobile users prefer uploading photos through WiFi APs (PoIs) rather than cellular networks. Therefore, photos stored in mobile phones are exchanged among users in order to upload them quickly to the PoIs, which actually act as the edge servers. In this paper, we propose a utility-based Storage Management strategy in mobile phones for Photo Crowdsensing (SMPC), which makes a sending/deleting decision on a user's device to either maximize the photo delivery ratio (SMPC-R) or minimize the average delay (SMPC-D). The decision is made according to a photo's utility, which is calculated by measuring the impact of reproducing or deleting the photo on the above performance goals. We have run simulations based on the random-waypoint model and three real traces: roma/taxi, epfl, and geolife. The results show that, compared with other storage management strategies, SMPC-R achieves the highest delivery ratio and SMPC-D the lowest average delay.
|
276
|
Dechouniotis D, Athanasopoulos N, Leivadeas A, Mitton N, Jungers RM, Papavassiliou S. Edge Computing Resource Allocation for Dynamic Networks: The DRUID-NET Vision and Perspective. Sensors (Basel) 2020; 20:E2191. [PMID: 32294937 DOI: 10.3390/s20082191] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 03/23/2020] [Revised: 04/09/2020] [Accepted: 04/10/2020] [Indexed: 11/17/2022]
Abstract
The potential offered by the abundance of sensors, actuators, and communications in the Internet of Things (IoT) era is hindered by the limited computational capacity of local nodes. Several key challenges should be addressed to optimally and jointly exploit the network, computing, and storage resources, while guaranteeing feasibility for time-critical and mission-critical tasks. We propose the DRUID-NET framework to take on these challenges by dynamically distributing resources when the demand is rapidly varying. It includes analytic dynamical modeling of the resources, offered workload, and networking environment, incorporating phenomena typically met in wireless communications and mobile edge computing, together with new estimators of time-varying profiles. Building on this framework, we aim to develop novel resource allocation mechanisms that explicitly include service differentiation and context-awareness and are capable of guaranteeing well-defined Quality of Service (QoS) metrics. DRUID-NET goes beyond the state of the art in the design of control algorithms by incorporating resource allocation mechanisms into the decision strategy itself. To achieve these breakthroughs, we combine tools from Automata and Graph Theory, Machine Learning, Modern Control Theory, and Network Theory. DRUID-NET constitutes the first truly holistic, multidisciplinary approach that extends recent, albeit fragmented, results from all the aforementioned fields, thus bridging the gap between the efforts of different communities.
|
277
|
Lou P, Shi L, Zhang X, Xiao Z, Yan J. A Data-driven Adaptive Sampling Method Based on Edge Computing. Sensors (Basel) 2020; 20:E2174. [PMID: 32290534 DOI: 10.3390/s20082174] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 03/16/2020] [Revised: 04/01/2020] [Accepted: 04/09/2020] [Indexed: 11/16/2022]
Abstract
The rise of edge computing has promoted the development of the industrial Internet of Things (IIoT). Supported by edge computing technology, data acquisition can also satisfy more complex and complete application requirements in the industrial field. Most traditional sampling methods use a constant sampling frequency and ignore the impact of changes in the sampled objects during data acquisition. To address the sampling distortion, edge data redundancy, and energy consumption caused by a constant sensor sampling frequency in the IIoT, a data-driven adaptive sampling method based on edge computing is proposed in this paper. The method uses the latest data collected by the sensors at the edge node for linear fitting and adjusts the next sampling frequency according to the linear median jitter sum and an adaptive sampling strategy. An edge data acquisition platform is established to verify the validity of the method. According to the experimental results, the proposed method is more effective than other adaptive sampling methods. Compared with a constant sampling frequency, the proposed method can reduce edge data redundancy and energy consumption by more than 13.92% and 12.86%, respectively.
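The adjustment step can be sketched as follows: least-squares-fit a line to the latest window of samples, measure how far the signal deviates from that line (the mean absolute residual below stands in for the paper's linear median jitter sum), and shorten or lengthen the next sampling interval accordingly. The threshold and the scaling factors are illustrative assumptions.

```python
def next_interval(samples, base_interval, lo=0.5, hi=2.0, threshold=0.1):
    """Choose the next sampling interval from the latest sample window.

    Fits y = slope*x + intercept by least squares over the window, then
    uses the mean absolute residual as a jitter measure: a volatile
    signal shortens the interval (sample faster), a quiet one lengthens
    it (save energy and reduce redundant edge data).
    """
    n = len(samples)
    xs = list(range(n))
    mean_x, mean_y = sum(xs) / n, sum(samples) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    jitter = sum(abs(y - (slope * x + intercept))
                 for x, y in zip(xs, samples)) / n
    factor = lo if jitter > threshold else hi
    return base_interval * factor
```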
|
278
|
Xu X, Zeng Z, Yang S, Shao H. A Novel Blockchain Framework for Industrial IoT Edge Computing. Sensors (Basel) 2020; 20:E2061. [PMID: 32272555 DOI: 10.3390/s20072061] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Received: 02/25/2020] [Revised: 03/21/2020] [Accepted: 03/28/2020] [Indexed: 12/03/2022]
Abstract
With the rapid development of the industrial Internet of Things (IIoT), the distributed topology of the IIoT and the resource constraints of edge computing pose new challenges to traditional data storage, transmission, and security protection. The distributed trust model and shared ledger of blockchain technology suit the distributed IIoT well, making blockchain an effective approach for edge computing applications. This paper proposes a resource-constrained Layered Lightweight Blockchain Framework (LLBF) and its implementation mechanism. The framework consists of a resource-constrained layer (RCL) and a resource-extended layer (REL) blockchain used in the IIoT. We redesign the block structure and size to suit IIoT edge computing devices. A lightweight consensus algorithm and a dynamic trust weight algorithm are developed to improve blockchain throughput and to reduce the number of transactions validated in new blocks, respectively, while a high-throughput management scheme keeps the transaction load of the blockchain balanced. Finally, we conducted a range of blockchain simulations and performance experiments; the results indicate that the method performs well in IIoT edge applications.
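A block trimmed for constrained devices might look like the sketch below; the field set is an assumption for illustration, not the LLBF's actual block layout:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class LightBlock:
    """Trimmed block for constrained IIoT devices (illustrative fields only)."""
    index: int
    prev_hash: str
    timestamp: int
    tx_ids: List[str]          # transaction digests only, not full payloads

    def block_hash(self) -> str:
        # Hash a canonical JSON encoding of the block contents.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Keeping only transaction digests in the block body is one way a resource-constrained layer can shrink block size while the resource-extended layer stores full transactions.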
Collapse
|
279
|
Wang R, Liu Y, Zhang P, Li X, Kang X. Edge and Cloud Collaborative Entity Recommendation Method towards the IoT Search. Sensors (Basel) 2020; 20:E1918. [PMID: 32235548 DOI: 10.3390/s20071918] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 03/12/2020] [Accepted: 03/18/2020] [Indexed: 12/02/2022]
Abstract
The physical world contains a massive number of entities whose states change rapidly, and users urgently need real-time, intelligent access to entity information; recommendation technologies that actively provide instant and precise entity state information have therefore emerged. Existing IoT data recommendation methods ignore the characteristics of IoT data and of user search behavior, so their recommendation performance is limited. Considering the time-varying nature of IoT entity states and the characteristics of user search behavior, an edge-cloud collaborative entity recommendation method is proposed that combines the advantages of edge computing and cloud computing. First, an entity recommendation system architecture based on edge-cloud collaboration is designed. Then, an entity identification method suitable for the edge is presented, which accounts for entity feature information and performs effective entity identification with a deep clustering model, improving the timeliness and accuracy of entity state searches. Furthermore, an interest-group division method for the cloud is devised, which considers users' potential search needs and divides users into interest groups with a clustering model to enhance recommendation quality. Simulation results demonstrate that the proposed method improves the real-time performance and accuracy of entity recommendation compared with traditional methods.
Collapse
|
280
|
Xu Q, Zhang J, Togookhuu B. Support Mobile Fog Computing Test in piFogBedII. Sensors (Basel) 2020; 20:s20071900. [PMID: 32235392 PMCID: PMC7180584 DOI: 10.3390/s20071900] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 03/24/2020] [Accepted: 03/24/2020] [Indexed: 11/18/2022]
Abstract
IoT and 5G technologies are making smart devices, medical devices, cameras, and various types of sensors part of the Internet, enabling infrastructure and services such as smart homes, smart cities, smart healthcare, and smart transportation. Fog computing (edge computing) is a new research field that can accelerate analysis and decision-making for these delay-sensitive applications. It is important to test the functions and performance of applications and services before deploying them to production, yet current evaluations mostly rely on simulation tools, and the fidelity of the experimental results is a problem for most network simulators. PiFogBed is a fog computing testbed built with real devices, but it does not support testing mobile end devices and mobile fog applications. This paper proposes piFogBedII, which supports testing mobile fog applications by modifying several components of piFogBed: extending the range of supported end devices, adding a mobility and migration management strategy, and inserting a container agent to implement transparent transmission between end devices and containers. The evaluation results show that the approach is effective and that the delay introduced by the migration strategy and container agent is acceptable.
Collapse
Affiliation(s)
- Qiaozhi Xu
- College of Computer Science, Inner Mongolia University, Hohhot 010021, China;
- College of Computer Science and Technology, Inner Mongolia Normal University, Hohhot 010022, China
| | - Junxing Zhang
- College of Computer Science, Inner Mongolia University, Hohhot 010021, China;
- Correspondence:
| | - Bulganmaa Togookhuu
- School of Engineering and Technology, Mongolian University of Life Sciences, Ulaanbaatar 17024, Mongolia;
| |
Collapse
|
281
|
Fan X, Zheng H, Jiang R, Zhang J. Optimal Design of Hierarchical Cloud-Fog&Edge Computing Networks with Caching. Sensors (Basel) 2020; 20:s20061582. [PMID: 32178300 PMCID: PMC7361789 DOI: 10.3390/s20061582] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Revised: 03/07/2020] [Accepted: 03/09/2020] [Indexed: 11/16/2022]
Abstract
This paper investigates the optimal design of a hierarchical cloud-fog&edge computing (FEC) network, which consists of three tiers, i.e., the cloud tier, the fog&edge tier, and the device tier. A device in the device tier processes its task via one of three computing modes, i.e., the cache-assisted computing mode, the cloud-assisted computing mode, and the joint device-fog&edge computing mode; in these modes, the task is completed via content caching in the FEC tier, computation offloading to the cloud tier, or joint computing across the fog&edge and device tiers, respectively. For such a system, an energy minimization problem is formulated by jointly optimizing the computing mode selection, the local computing ratio, the computation frequency, and the transmit power, while guaranteeing multiple system constraints, including the task completion deadline, the achievable computation capability, and the achievable transmit power threshold. Since the problem is a mixed-integer nonlinear programming problem, which is hard to solve with standard methods, it is decomposed into three subproblems, and the optimal solution to each subproblem is derived. An efficient optimal caching, cloud, and joint computing (CCJ) algorithm is then proposed to solve the primary problem. Simulation results show that the proposed optimal design outperforms the benchmark schemes. Moreover, the smaller the achievable transmit power threshold of the device, the more energy is saved, and as the data size of the task grows, the local computing ratio decreases.
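Once the per-mode optima are derived, the outer mode-selection step reduces to picking the deadline-feasible mode with the least energy. A toy version of that step, with per-mode costs given rather than derived analytically as in the paper:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModeCost:
    name: str       # "cache", "cloud", or "joint"
    energy: float   # device-side energy for this mode (J)
    latency: float  # task completion time for this mode (s)

def choose_mode(costs: List[ModeCost], deadline: float) -> Optional[str]:
    """Pick the deadline-feasible computing mode with minimum energy."""
    feasible = [c for c in costs if c.latency <= deadline]
    if not feasible:
        return None  # no mode meets the deadline
    return min(feasible, key=lambda c: c.energy).name
```

The numbers fed to `choose_mode` would come from solving each subproblem (optimal frequency, local ratio, transmit power) first.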
Collapse
Affiliation(s)
- Xiaoqian Fan
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; (X.F.); (R.J.); (J.Z.)
| | - Haina Zheng
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; (X.F.); (R.J.); (J.Z.)
- State Key Lab of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, China
- Correspondence:
| | - Ruihong Jiang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; (X.F.); (R.J.); (J.Z.)
- State Key Lab of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, China
| | - Jinyu Zhang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; (X.F.); (R.J.); (J.Z.)
| |
Collapse
|
282
|
Silvestre-Blanes J, Sempere-Payá V, Albero-Albero T. Smart Sensor Architectures for Multimedia Sensing in IoMT. Sensors (Basel) 2020; 20:E1400. [PMID: 32143389 DOI: 10.3390/s20051400] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2020] [Revised: 02/20/2020] [Accepted: 02/29/2020] [Indexed: 11/24/2022]
Abstract
Today, a wide range of developments and paradigms require the use of embedded systems characterized by restrictions on their computing capacity, consumption, cost, and network connection. The evolution of the Internet of Things (IoT) towards the Industrial IoT (IIoT) or the Internet of Multimedia Things (IoMT), its impact within Industry 4.0, the evolution of cloud computing towards edge or fog computing (also called near-sensor computing), and the increasing use of embedded vision are current examples of this trend. One of the most common methods of reducing energy consumption is processor frequency scaling driven by a governor policy. The algorithms that define these policies are designed to respond well to the workloads found on smartphones, and no study has established how to tune them for workloads such as those expected in the scenarios above. This paper presents a method to determine the operating parameters of the dynamic governor algorithm called Interactive, which yields significant power-consumption improvements without reducing application performance. Because these improvements depend on the load the system has to support, the results are evaluated against three different loads, from higher to lower, showing improvements ranging from 62% to 26%.
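The core decision such a governor makes can be sketched as follows; the tunable name `go_hispeed_load` echoes the Linux "interactive" governor, but this policy is a bare sketch of the idea, not the kernel implementation or the paper's tuned parameters:

```python
from typing import List

def next_frequency(load: float, freqs: List[int],
                   go_hispeed_load: float = 0.85) -> int:
    """Simplified Interactive-style frequency decision.

    load: recent CPU utilization in [0, 1]; freqs: available frequencies,
    ascending (e.g., kHz).
    """
    if load >= go_hispeed_load:
        return freqs[-1]            # burst of load: jump straight to max
    target = load * freqs[-1]       # capacity needed to serve the load
    for f in freqs:                 # lowest frequency that covers it
        if f >= target:
            return f
    return freqs[-1]
```

Tuning parameters like `go_hispeed_load` per workload class, rather than assuming smartphone-style interactivity, is exactly the kind of adjustment the paper's method determines.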
Collapse
|
283
|
Pundir S, Wazid M, Singh DP, Das AK, Rodrigues JJPC, Park Y. Designing Efficient Sinkhole Attack Detection Mechanism in Edge-Based IoT Deployment. Sensors (Basel) 2020; 20:s20051300. [PMID: 32121017 PMCID: PMC7085610 DOI: 10.3390/s20051300] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2020] [Revised: 02/21/2020] [Accepted: 02/23/2020] [Indexed: 11/16/2022]
Abstract
A sinkhole attack in an edge-based Internet of Things (IoT) environment (EIoT) can devastate and ruin the functioning of the whole communication system. Sinkhole attacker nodes (SHAs) first attract normal nodes by advertising the shortest path to the destination; once normal nodes begin sending their packets along that path (i.e., via an SHA), the attacker nodes start disrupting the network's traffic flow. In the presence of SHAs, the destination (for example, a sink node, i.e., a gateway/base station) does not receive the required information, or receives only partial or modified information. This reduces network performance and degrades the efficiency and reliability of communication: throughput decreases, end-to-end delay increases, the packet delivery ratio drops, and other network performance parameters may also suffer. Hence, it is essential to provide an effective and competent scheme to mitigate this attack in an EIoT. In this paper, an intrusion detection scheme to protect the EIoT environment against sinkhole attacks, named SAD-EIoT, is proposed. In SAD-EIoT, the resource-rich edge nodes (edge servers) detect different types of sinkhole attacker nodes by exchanging messages. A practical demonstration of SAD-EIoT using the well-known NS2 simulator is provided to compute the various performance parameters, and a security analysis proves its resilience against various types of SHAs. SAD-EIoT achieves a detection rate of around 95.83% and a false positive rate of 1.03%, considerably better than other related schemes, and it is also efficient in terms of computation and communication costs. Ultimately, SAD-EIoT is well suited to applications involving critical and sensitive operations (for example, surveillance, security, and monitoring systems).
Collapse
Affiliation(s)
- Sumit Pundir
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun 248 002, India; (S.P.); (M.W.); (D.P.S.)
| | - Mohammad Wazid
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun 248 002, India; (S.P.); (M.W.); (D.P.S.)
| | - Devesh Pratap Singh
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun 248 002, India; (S.P.); (M.W.); (D.P.S.)
| | - Ashok Kumar Das
- Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad 500 032, India; or
| | - Joel J. P. C. Rodrigues
- Federal University of Piauí (UFPI), 64049-550 Teresina-Pi, Brazil;
- Instituto de Telecomunicações, 1049-001 Lisbon, Portugal
| | - Youngho Park
- School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
- Correspondence: ; Tel.: +82-53-950-5114
| |
Collapse
|
284
|
Marah BD, Jing Z, Ma T, Alsabri R, Anaadumba R, Al-Dhelaan A, Al-Dhelaan M. Smartphone Architecture for Edge-Centric IoT Analytics. Sensors (Basel) 2020; 20:s20030892. [PMID: 32046133 PMCID: PMC7039380 DOI: 10.3390/s20030892] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 02/06/2020] [Accepted: 02/06/2020] [Indexed: 11/16/2022]
Abstract
Current baseline architectures in the field of the Internet of Things (IoT) strongly recommend using edge computing in solution designs, rather than the traditional approach of relying solely on the cloud/core for analysis and data storage. This research therefore formulates an edge-centric IoT architecture for smartphones, which are very popular electronic devices capable of executing complex computational tasks at the network edge. A novel smartphone IoT architecture (SMIoT) is introduced that supports data capture and preprocessing, model (i.e., machine learning model) deployment, model evaluation, and model updating. Moreover, a novel model evaluation and updating scheme is provided that validates models in real time, ensuring a sustainable and reliable model at the network edge that automatically adjusts to changes in the IoT data subspace. Finally, the proposed architecture is tested and evaluated on an IoT use case.
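The evaluate-and-update loop can be reduced to a rolling check like the one below; the window and threshold are assumed parameters for illustration, not values from the SMIoT paper:

```python
from typing import List

def should_update(recent_errors: List[int], window: int,
                  threshold: float) -> bool:
    """Flag a model update when the rolling error rate exceeds a threshold.

    recent_errors holds 0/1 per prediction (1 = the deployed model was wrong,
    judged against delayed ground truth or a cloud-side reference).
    """
    if len(recent_errors) < window:
        return False                      # not enough evidence yet
    error_rate = sum(recent_errors[-window:]) / window
    return error_rate > threshold
```

When the check fires, the edge device would pull or retrain a model, which is what lets the deployed model track drift in the IoT data subspace.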
Collapse
Affiliation(s)
- Bockarie Daniel Marah
- School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing 210-044, China; (B.D.M.); (Z.J.); (R.A.); (R.A.)
| | - Zilong Jing
- School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing 210-044, China; (B.D.M.); (Z.J.); (R.A.); (R.A.)
| | - Tinghuai Ma
- School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing 210-044, China; (B.D.M.); (Z.J.); (R.A.); (R.A.)
- Correspondence: ; Tel.: +86-25-5873-1267
| | - Raeed Alsabri
- School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing 210-044, China; (B.D.M.); (Z.J.); (R.A.); (R.A.)
| | - Raphael Anaadumba
- School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing 210-044, China; (B.D.M.); (Z.J.); (R.A.); (R.A.)
| | - Abdullah Al-Dhelaan
- College of Computer and Information Sciences, King Saud University, Riyadh 11362, Saudi Arabia; (A.A.-D.); (M.A.-D.)
| | - Mohammed Al-Dhelaan
- College of Computer and Information Sciences, King Saud University, Riyadh 11362, Saudi Arabia; (A.A.-D.); (M.A.-D.)
| |
Collapse
|
285
|
Wang L, Qiu R. BeiDou Satellite Positioning Method Based on IoT and Edge Computing. Sensors (Basel) 2020; 20:s20030889. [PMID: 32046128 PMCID: PMC7039293 DOI: 10.3390/s20030889] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 01/31/2020] [Accepted: 02/04/2020] [Indexed: 06/10/2023]
Abstract
The BeiDou navigation satellite system (BDS) developed by China provides users with high-precision, all-weather, real-time positioning and navigation and can be widely used in many applications. However, new challenges emerge with the development of 5G communication systems and Internet of Things (IoT) technologies: the BDS must accommodate large-scale terminal scenarios while providing higher positioning precision. In this paper, a BeiDou differential positioning method based on IoT and edge computing is proposed. The computational pressure on the data center is offloaded to the edge nodes when massive positioning requests from IoT terminals need to be processed. To ensure load balancing across the edge nodes, the resource allocation of terminal positioning requests is performed with an improved genetic algorithm, thereby reducing the service delay of the entire edge network. Moreover, an unscented Kalman filter optimized for the edge node (EUKF) is used to improve the positioning precision of IoT terminals. The results demonstrate that the proposed method achieves better positioning performance and can provide real-time positioning service for large-scale IoT terminals.
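The paper balances edge-node load with an improved genetic algorithm; as a simpler baseline illustrating the same objective (keeping the most-loaded node as light as possible), a longest-processing-time greedy assignment looks like this:

```python
from typing import List

def balance_loads(request_costs: List[float], n_nodes: int) -> List[float]:
    """LPT greedy assignment of positioning requests to edge nodes.

    Not the paper's genetic algorithm: a plain heuristic that sorts
    requests by cost and always gives the next one to the least-loaded node.
    Returns the final per-node loads.
    """
    loads = [0.0] * n_nodes
    for cost in sorted(request_costs, reverse=True):
        idx = loads.index(min(loads))   # least-loaded node so far
        loads[idx] += cost
    return loads
```

A metaheuristic like the improved GA searches over assignments that such a greedy rule cannot reach, trading computation for a tighter balance and lower overall service delay.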
Collapse
Affiliation(s)
- Lina Wang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China;
- Shunde Graduate School, University of Science and Technology Beijing, Foshan 528300, China
| | - Rui Qiu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China;
| |
Collapse
|
286
|
de la Iglesia DH, Mendes AS, González GV, Jiménez-Bravo DM, de Paz Santana JF. Connected Elbow Exoskeleton System for Rehabilitation Training Based on Virtual Reality and Context-Aware. Sensors (Basel) 2020; 20:E858. [PMID: 32041156 DOI: 10.3390/s20030858] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Revised: 01/09/2020] [Accepted: 02/04/2020] [Indexed: 11/16/2022]
Abstract
Traditional physiotherapy rehabilitation systems are evolving into more advanced systems based on exoskeletons and Virtual Reality (VR) environments that enhance rehabilitation techniques and physical exercise. In addition, thanks to today's connected systems and paradigms such as the Internet of Things (IoT) and Ambient Intelligence (AmI), it is possible to design and develop advanced, effective, low-cost medical tools that patients can have in their homes. This article presents a low-cost elbow exoskeleton connected to a context-aware architecture; through a VR system, the patient can perform rehabilitation exercises interactively. Integrating virtual reality into rehabilitation exercises provides intensive, repetitive, task-oriented training that improves patient motivation and reduces the workload of medical professionals. Highlights of the system include its ability to intelligently generate new exercises, monitor the exercises users perform for progress or potential problems, and dynamically modify exercise characteristics. The platform also allows the incorporation of commercial medical sensors capable of collecting valuable information for greater accuracy in diagnosing and tracking patients' evolution. A case study with real patients has been carried out, with promising results.
Collapse
|
287
|
Xu J, Yang S, Lu W, Xu L, Yang D. Incentivizing for Truth Discovery in Edge-assisted Large-scale Mobile Crowdsensing. Sensors (Basel) 2020; 20:E805. [PMID: 32024221 DOI: 10.3390/s20030805] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Revised: 01/30/2020] [Accepted: 01/31/2020] [Indexed: 11/21/2022]
Abstract
The recent development of human-carried mobile devices has driven the growth of mobile crowdsensing systems. Most existing mobile crowdsensing systems depend on the crowdsensing service of the deep cloud. With increasing scale and complexity, there is a tendency to enhance mobile crowdsensing with the edge computing paradigm to reduce latency and computational complexity and to improve expandability and security. In this paper, we propose an integrated solution to stimulate strategic users to contribute more to truth discovery in edge-assisted mobile crowdsensing. We design an incentive mechanism consisting of a truth discovery stage and a budget-feasible reverse auction stage. In the truth discovery stage, we estimate the truth for each task in both the deep cloud and the edge cloud. In the budget-feasible reverse auction stage, we design a greedy algorithm that selects winners to maximize the quality function under the budget constraint. Through extensive simulations, we demonstrate that the proposed mechanism is computationally efficient, individually rational, truthful, budget-feasible, and constant-approximate. Moreover, it shows great superiority in terms of estimation precision and expandability.
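A common skeleton for such budget-feasible greedy winner selection is sketched below; the quality-per-cost ordering and proportional-share admission rule are typical of this mechanism family, while the paper's actual quality function and payment rule are more involved:

```python
from typing import List, Tuple

def select_winners(bids: List[Tuple[str, float, float]],
                   budget: float) -> List[str]:
    """Greedy budget-feasible winner selection (sketch).

    bids: (user_id, claimed_cost, quality). Users are considered in
    decreasing quality-per-cost order; user i is admitted while a
    proportional share of the budget covers their cost:
        cost_i <= budget * quality_i / (quality_so_far + quality_i)
    """
    order = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
    winners: List[str] = []
    total_quality = 0.0
    for uid, cost, quality in order:
        if cost <= budget * quality / (total_quality + quality):
            winners.append(uid)
            total_quality += quality
    return winners
```

The proportional-share stopping rule is what keeps such mechanisms truthful and within budget even though each user's cost is self-reported.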
Collapse
|
288
|
Kolhar M, Al-Turjman F, Alameen A, Abualhaj MM. A Three Layered Decentralized IoT Biometric Architecture for City Lockdown During COVID-19 Outbreak. IEEE Access 2020; 8:163608-163617. [PMID: 34812355 PMCID: PMC8545303 DOI: 10.1109/access.2020.3021983] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 09/01/2020] [Indexed: 05/03/2023]
Abstract
In this article, we build a prototype of a decentralized IoT-based biometric face detection framework for cities under lockdown during COVID-19 outbreaks. To help enforce restrictions on public movement, we perform face detection using a three-layered edge computing architecture, with a multi-task cascaded deep learning framework for recognizing faces. We compare our face detection approach with state-of-the-art methods on benchmark datasets such as FDDB and WIDER FACE. Furthermore, we conduct experiments on latency and face detection load under the three-layer and cloud computing architectures, which show that our proposal has an edge over a pure cloud computing architecture.
Collapse
Affiliation(s)
- Manjur Kolhar
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi Ad-Dawasir 11990, Saudi Arabia
| | - Fadi Al-Turjman
- Research Center for AI and IoT, Artificial Intelligence Department, Near East University, 99138 Mersin, Turkey
| | - Abdalla Alameen
- Department of Computer Science, Prince Sattam Bin Abdulaziz University, Wadi Ad-Dawasir 11990, Saudi Arabia
| | - Mosleh M Abualhaj
- Department of Networks and Information Security, Al-Ahliyya Amman University, Amman 19328, Jordan
| |
Collapse
|
289
|
Watanabe Y, Shoji Y. An RSSI-Based Low-Power Vehicle-Approach Detection Technique to Alert a Pedestrian. Sensors (Basel) 2019; 20:E118. [PMID: 31878128 DOI: 10.3390/s20010118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Revised: 12/11/2019] [Accepted: 12/21/2019] [Indexed: 11/16/2022]
Abstract
Information about an approaching vehicle can help pedestrians avoid traffic accidents, whereas most past studies on collision avoidance systems have focused on alerting drivers and controlling vehicles. This paper proposes a technique to detect an approaching vehicle and alert a pedestrian by observing the variation of the received signal strength indicator (RSSI) of beacons repeatedly radiated from the vehicle, called alert beacons. A linear regression algorithm is first applied to the RSSI samples, and the decision on whether a vehicle is approaching is made by a Student's t-test on the linear regression coefficient. A passive method, in which the pedestrian's device acts only as a receiver, is described first. A neighbor-discovery-based (ND-based) method, in which the pedestrian's device repeatedly broadcasts advertising beacons and a moving vehicle in the vicinity returns an alert beacon upon receiving one, is then proposed to improve detection performance and reduce the device's energy consumption. The theoretical detection error rate under Rayleigh fading is derived, revealing that the proposed ND-based method achieves a lower detection error rate than the passive method under the same delay.
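The slope-plus-t-test decision of the passive method can be sketched as below; the critical value and window size are assumptions for illustration, and the paper's decision rule under fading is more refined:

```python
import math
from typing import Sequence

def vehicle_approaching(rssi: Sequence[float], t_crit: float = 1.943) -> bool:
    """Slope t-test on RSSI samples (sketch of the passive method).

    Fits a line to the samples and tests H0: slope = 0 against slope > 0.
    t_crit is the one-sided critical t-value for n - 2 degrees of freedom
    (1.943 corresponds to alpha = 0.05 with n = 8; look up the value for
    your n).
    """
    n = len(rssi)
    xs = range(n)
    mx = (n - 1) / 2
    my = sum(rssi) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, rssi)) / sxx
    intercept = my - slope * mx
    # Residual sum of squares -> standard error of the slope estimate.
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, rssi))
    se = math.sqrt(sse / (n - 2) / sxx)
    if se == 0.0:
        return slope > 0          # perfect fit: decide by the sign alone
    return slope / se > t_crit    # approaching if slope significantly positive
```

A rising RSSI trend that clears the significance test triggers the pedestrian alert; the t-test guards against fading-induced fluctuations being mistaken for an approach.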
Collapse
|
290
|
Wazid M, Das AK, Shetty S, J. P. C. Rodrigues J, Park Y. LDAKM-EIoT: Lightweight Device Authentication and Key Management Mechanism for Edge-Based IoT Deployment. Sensors (Basel) 2019; 19:s19245539. [PMID: 31847431 PMCID: PMC6961035 DOI: 10.3390/s19245539] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Revised: 12/09/2019] [Accepted: 12/11/2019] [Indexed: 12/02/2022]
Abstract
In recent years, edge computing has emerged as a new computing paradigm that empowers several future technologies, such as 5G, vehicle-to-vehicle communications, and the Internet of Things (IoT), by providing cloud computing facilities and services to end users. However, open communication among the entities in an edge-based IoT environment makes it vulnerable to various potential attacks executed by an adversary. Device authentication is one of the prominent security techniques: it permits an IoT device to authenticate mutually with a cloud server with the help of an edge node, and if authentication succeeds, the two establish a session key for secure communication. To achieve this goal, a novel device authentication and key management mechanism for the edge-based IoT environment, called the lightweight authentication and key management scheme for the edge-based IoT environment (LDAKM-EIoT), was designed. The detailed security analysis and formal security verification conducted with the widely used "Automated Validation of Internet Security Protocols and Applications (AVISPA)" tool prove that the proposed LDAKM-EIoT is secure against several attack vectors present in the edge-based IoT infrastructure. An elaborated comparison of LDAKM-EIoT with closely related schemes provides evidence that LDAKM-EIoT is more secure with lower communication and computation costs. Finally, network performance parameters are calculated and analyzed using NS2 simulation to demonstrate the practical facets of the proposed LDAKM-EIoT.
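The final step of such a protocol, deriving a shared session key after mutual authentication, can be sketched generically as below; this is not LDAKM-EIoT's actual construction, whose message flow and verified properties are far more elaborate:

```python
import hashlib
import hmac

def derive_session_key(shared_secret: bytes, nonce_device: bytes,
                       nonce_edge: bytes) -> bytes:
    """Derive a session key from a pre-shared secret and fresh nonces.

    Generic sketch: both sides compute the same HMAC over the exchanged
    nonces, so the key is fresh per session and never sent on the wire.
    """
    return hmac.new(shared_secret, nonce_device + nonce_edge,
                    hashlib.sha256).digest()
```

Because each side contributes a fresh nonce, replaying an old session's messages yields a different (useless) key, which is one of the properties tools like AVISPA check.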
Collapse
Affiliation(s)
- Mohammad Wazid
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun 248 002, India;
| | - Ashok Kumar Das
- Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad 500 032, India
| | - Sachin Shetty
- Virginia Modeling, Analysis and Simulation Center, Center for Cybersecurity Education and Research, Department of Computational Modeling and Simulation Engineering, Old Dominion University, Suffolk, VA 23435, USA;
| | - Joel J. P. C. Rodrigues
- Federal University of Piauí (UFPI), 64049-550 Teresina-Pi, Brazil;
- Instituto de Telecomunicações, 1049-001 Lisbon, Portugal
| | - Youngho Park
- School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
- Correspondence: ; Tel.: +82-53-950-5114
| |
Collapse
|
291
|
Utrilla R, Rodriguez-Zurrunero R, Martin J, Rozas A, Araujo A. MIGOU: A Low-Power Experimental Platform with Programmable Logic Resources and Software-Defined Radio Capabilities. Sensors (Basel) 2019; 19:s19224983. [PMID: 31731745 PMCID: PMC6891619 DOI: 10.3390/s19224983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 10/29/2019] [Accepted: 11/13/2019] [Indexed: 06/10/2023]
Abstract
The increase in the number of mobile and Internet of Things (IoT) devices, along with the demands of new applications and services, represents an important challenge in terms of spectral coexistence. As a result, these devices are now expected to make an efficient and dynamic use of the spectrum, and to provide processed information instead of simple raw sensor measurements. These communication and processing requirements have direct implications on the architecture of the systems. In this work, we present MIGOU, a wireless experimental platform that has been designed to address these challenges from the perspective of resource-constrained devices, such as wireless sensor nodes or IoT end-devices. At the radio level, the platform can operate both as a software-defined radio and as a traditional highly integrated radio transceiver, which demands less node resources. For the processing tasks, it relies on a system-on-a-chip that integrates an ARM Cortex-M3 processor, and a flash-based FPGA fabric, where high-speed processing tasks can be offloaded. The power consumption of the platform has been measured in the different modes of operation. In addition, these hardware features and power measurements have been compared with those of other representative platforms. The results obtained confirm that a state-of-the-art tradeoff between hardware flexibility and energy efficiency has been achieved. These characteristics will allow for the development of appropriate solutions to current end-devices' challenges and to test them in real scenarios.
Collapse
|
292
|
Xu R, Jin W, Kim D. Microservice Security Agent Based On API Gateway in Edge Computing. Sensors (Basel) 2019; 19:E4905. [PMID: 31717617 DOI: 10.3390/s19224905] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 10/30/2019] [Accepted: 10/31/2019] [Indexed: 11/17/2022]
Abstract
Internet of Things (IoT) devices are embedded with software, electronics, and sensors, and feature connectivity with constrained resources. They require the edge computing paradigm, with modular characteristics relying on microservices, to provide an extensible and lightweight computing framework at the edge of the network. Edge computing can relieve the burden on centralized cloud computing by performing certain operations, such as data storage and task computation, at the edge of the network. Despite its benefits, edge computing raises many challenges in terms of security and privacy. Thus, services that protect privacy and secure data are essential functions in edge computing. For example, ownership and control of the end user's private information are separated, which can easily lead to data leakage, unauthorized data manipulation, and other data security concerns. The confidentiality and integrity of the data therefore cannot be guaranteed, so more secure authentication and access mechanisms are required to ensure that the microservices are exposed only to authorized users. In this paper, we propose a microservice security agent that integrates the edge computing platform with API gateway technology to present a secure authentication mechanism. The aim of this platform is to afford edge computing clients a practical application which provides user authentication and allows JSON Web Token (JWT)-based secure access to edge computing services. To integrate the edge computing platform with the API gateway, we implement a microservice security agent based on the open-source Kong in the EdgeX Foundry framework. Also, to provide an easy-to-use approach with Kong, we implement REST APIs for generating new consumers, registering services, and configuring access controls. Finally, the usability of the proposed approach is demonstrated by evaluating the round trip time (RTT). The results demonstrate the efficiency of the system and its suitability for real-world applications.
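The JWT-based access described in this abstract can be illustrated with a minimal sketch. The following is a generic HS256 issue/verify routine using only Python's standard library; it is not the Kong/EdgeX Foundry implementation from the paper, and the `edge-client` subject and secret are invented for the example.

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_jwt(payload: dict, secret: bytes) -> str:
    """Build an HS256-signed token: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False  # malformed token
    expected = _b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)
```

A gateway would additionally check registered claims such as expiry; this sketch covers only the signature step that gates access to a microservice.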
Collapse
|
293
|
Basir R, Qaisar S, Ali M, Aldwairi M, Ashraf MI, Mahmood A, Gidlund M. Fog Computing Enabling Industrial Internet of Things: State-of-the-Art and Research Challenges. Sensors (Basel) 2019; 19:s19214807. [PMID: 31694254 PMCID: PMC6864669 DOI: 10.3390/s19214807] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Revised: 10/18/2019] [Accepted: 10/23/2019] [Indexed: 11/17/2022]
Abstract
Industry is going through a transformation phase, enabling automation and data exchange in manufacturing technologies and processes; this transformation is called Industry 4.0. Industrial Internet-of-Things (IIoT) applications require real-time processing, nearby storage, ultra-low latency, reliability and high data rates, all of which can be satisfied by a fog computing architecture. With smart devices expected to grow exponentially, an optimized fog computing architecture and protocols are crucial. Therein, efficient, intelligent and decentralized solutions are required to ensure real-time connectivity, reliability and green communication. In this paper, we provide a comprehensive review of methods and techniques in fog computing. Our focus is on fog infrastructure and protocols in the context of IIoT applications. The article has two main parts: in the first half, we discuss the history of the industrial revolutions and the application areas of IIoT, followed by the key enabling technologies that act as building blocks for industrial transformation. In the second half, we focus on fog computing as a provider of solutions to critical challenges and an enabler for IIoT application domains. Finally, open research challenges are discussed to highlight fog computing aspects in different fields and technologies.
Collapse
Affiliation(s)
- Rabeea Basir
- School of Electrical Engineering and Computer Science, National University of Science and Technology, Islamabad 44000, Pakistan
| | - Saad Qaisar
- School of Electrical Engineering and Computer Science, National University of Science and Technology, Islamabad 44000, Pakistan
| | - Mudassar Ali
- School of Electrical Engineering and Computer Science, National University of Science and Technology, Islamabad 44000, Pakistan
- Department of Telecommunication Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
| | - Monther Aldwairi
- College of Technological Innovation, Zayed University, Abu Dhabi 144534, UAE
| | - Aamir Mahmood
- Department of Information Systems and Technology, Mid Sweden University, 85170 Sundsvall, Sweden
| | - Mikael Gidlund
- Department of Information Systems and Technology, Mid Sweden University, 85170 Sundsvall, Sweden
| |
Collapse
|
294
|
Girau R, Cossu R, Farina M, Pilloni V, Atzori L. Virtual User in the IoT: Definition, Technologies and Experiments. Sensors (Basel) 2019; 19:s19204489. [PMID: 31623240 PMCID: PMC6833445 DOI: 10.3390/s19204489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Revised: 09/27/2019] [Accepted: 10/14/2019] [Indexed: 11/16/2022]
Abstract
Virtualization technologies are driving major advancements in the Internet of Things (IoT) arena, as they allow for achieving a cyber-physical world where everything can be found, activated, probed, interconnected, and updated at both the virtual and the physical levels. We believe these technologies should apply to human users as well as to things, which brings us to the concept of the Virtual User (VU). The VU should represent the virtual counterpart of the IoT user, with the ultimate goals of: (i) relieving the user of the burden of the tedious processes of setting up, configuring and updating the IoT services the user is involved in; (ii) acting on behalf of the user when basic operations are required; (iii) exploiting the IoT's potential to the fullest, always taking into account the user's profile and interests. Accordingly, the VU is a complex representation of the user and acts as a proxy between the virtual objects and the IoT services and applications; to this end, it includes the following major functionalities: user profiling, authorization management, quality of experience modeling and management, social networking and context management. In this respect, the major contributions of this paper are to: provide the definition of the VU, present its major functionalities, discuss the legal issues related to its introduction, provide some implementation details, and analyze key performance aspects in terms of the capability of the VU to correctly identify the user profile and context.
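The proxy role the abstract assigns to the VU can be sketched as follows. This is purely illustrative and not the paper's implementation: the `VirtualUser` and `NotificationService` classes, the profile fields, and the configuration logic are all invented to show a VU configuring a service from the user profile so the user does not have to.

```python
class VirtualUser:
    """Virtual counterpart of an IoT user: holds a profile and acts as a
    proxy that configures services on the user's behalf."""

    def __init__(self, profile):
        self.profile = profile      # e.g. {"language": "en", "sleep_hours": (23, 7)}
        self.services = []

    def register_service(self, service):
        # Configure the service from the stored profile, sparing the
        # user the manual setup step.
        service.configure(self.profile)
        self.services.append(service)


class NotificationService:
    """A hypothetical IoT service that mutes alerts during sleep hours."""

    def __init__(self):
        self.settings = {}

    def configure(self, profile):
        start, end = profile["sleep_hours"]
        self.settings["mute_between"] = (start, end)
```

In a full VU the same registration path would also pass through authorization management and quality-of-experience modeling, per the functionalities listed above.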
Collapse
Affiliation(s)
- Roberto Girau
- DIEE, University of Cagliari and National Inter-University Consortium for Telecommunications (CNIT), Research Unit of Cagliari, 09123 Cagliari, Italy.
| | - Raimondo Cossu
- DIEE, University of Cagliari and National Inter-University Consortium for Telecommunications (CNIT), Research Unit of Cagliari, 09123 Cagliari, Italy.
| | - Massimo Farina
- DIEE, University of Cagliari and National Inter-University Consortium for Telecommunications (CNIT), Research Unit of Cagliari, 09123 Cagliari, Italy.
| | - Virginia Pilloni
- DIEE, University of Cagliari and National Inter-University Consortium for Telecommunications (CNIT), Research Unit of Cagliari, 09123 Cagliari, Italy.
| | - Luigi Atzori
- DIEE, University of Cagliari and National Inter-University Consortium for Telecommunications (CNIT), Research Unit of Cagliari, 09123 Cagliari, Italy.
| |
Collapse
|
295
|
Wang Y, Yang J, Guo X, Qu Z. Satellite Edge Computing for the Internet of Things in Aerospace. Sensors (Basel) 2019; 19:E4375. [PMID: 31658684 DOI: 10.3390/s19204375] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 10/07/2019] [Accepted: 10/09/2019] [Indexed: 11/16/2022]
Abstract
As one of the future development directions of the information industry, the Internet of Things (IoT) has been widely adopted. In order to reduce the pressure on the network caused by the long distance between the processing platform and the terminal, edge computing provides a new paradigm for IoT applications. In many scenarios, IoT devices are distributed in remote areas or extreme terrain, cannot be accessed directly through the terrestrial network, and can transmit data only via satellite. However, traditional satellites are highly customized, and on-board resources are designed for specific applications rather than universal computing. Therefore, we propose to transform the traditional satellite into a space edge computing node. It can dynamically load software in orbit, flexibly share on-board resources, and provide services coordinated with the cloud. The corresponding hardware structure and software architecture of the satellite are presented. Through modeling analysis and simulation experiments of the application scenarios, the results show that the space edge computing system takes less time and consumes less energy than a traditional satellite constellation. The quality of service mainly depends on the number of satellites, satellite performance, and the task offloading strategy.
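The paper's modeling details are not reproduced here, but the intuition behind on-board processing can be shown with a toy latency model: processing on the satellite avoids the extra feeder-link relay to a ground cloud. Every number below (link rates, relay delay, CPU speeds) is invented for illustration and does not come from the paper.

```python
def task_latency_ms(data_mb, gcycles, uplink_mbps, relay_ms, cpu_ghz):
    """Toy end-to-end task latency: uplink transmission of the input
    data, an optional relay delay toward a ground cloud, then
    computation on the chosen platform."""
    tx_ms = data_mb * 8.0 / uplink_mbps * 1000.0   # device -> satellite
    comp_ms = gcycles / cpu_ghz * 1000.0           # Gcycles on a GHz CPU
    return tx_ms + relay_ms + comp_ms


# Hypothetical scenario: a 10 MB sensing task of 1 Gcycle.
# On-board edge: no relay, modest 2 GHz processor.
edge_ms = task_latency_ms(10, 1, 10, relay_ms=0, cpu_ghz=2)
# Ground cloud: same uplink, plus an assumed 1850 ms feeder-link relay,
# but a faster 10 GHz cloud server.
cloud_ms = task_latency_ms(10, 1, 10, relay_ms=1850, cpu_ghz=10)
```

Under these invented parameters the slower on-board CPU still wins, because the relay delay dominates the cloud's computing advantage; with a small enough input or a large enough task the comparison can flip, which is why the paper treats the offloading strategy as a quality-of-service factor.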
Collapse
|
296
|
Lo SK, Liew CS, Tey KS, Mekhilef S. An Interoperable Component-Based Architecture for Data-Driven IoT System. Sensors (Basel) 2019; 19:s19204354. [PMID: 31600904 PMCID: PMC6832394 DOI: 10.3390/s19204354] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 09/30/2019] [Accepted: 09/30/2019] [Indexed: 11/17/2022]
Abstract
The advancement of the Internet of Things (IoT) as a solution in diverse application domains has nurtured an expansion in the number of devices and in data volume. Multiple platforms and protocols have been introduced, resulting in high device ubiquity and heterogeneity. However, currently available IoT architectures struggle to accommodate the diversity of IoT devices and services operating under different operating systems and protocols. In this paper, we propose a new IoT architecture that utilizes the component-based design approach to create and define loosely-coupled, standalone but interoperable service components for IoT systems. Furthermore, a data-driven feedback function is included as a key feature of the proposed architecture to enable a greater degree of system automation and to reduce the dependency on human intervention for data analysis and decision-making. The proposed architecture aims to tackle device interoperability, system reusability and the lack of data-driven functionality. Using a real-world use case on a proof-of-concept prototype, we examined the viability and usability of the proposed architecture.
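Loosely-coupled components with a data-driven feedback function can be sketched with a minimal publish/subscribe bus: components depend only on topic names, never on each other, and the feedback component closes the sensing-actuation loop automatically. The class names, topics and the temperature threshold below are assumptions for illustration, not the paper's design.

```python
class MessageBus:
    """Minimal pub/sub bus keeping components loosely coupled."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self._subs.get(topic, []):
            handler(message)


class FeedbackComponent:
    """Data-driven feedback: watches sensor readings and publishes an
    actuation command when a threshold is crossed, with no human in
    the loop."""

    def __init__(self, bus, threshold=30.0):
        self.bus, self.threshold = bus, threshold
        bus.subscribe("sensor/temperature", self.on_reading)

    def on_reading(self, reading):
        if reading > self.threshold:
            self.bus.publish("actuator/fan", "ON")
```

Because each component only speaks topics, a device using a different protocol can join by adding an adapter component that translates onto the bus, which is the interoperability property the architecture targets.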
Collapse
Affiliation(s)
- Sin Kit Lo
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia.
| | - Chee Sun Liew
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia.
| | - Kok Soon Tey
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia.
| | - Saad Mekhilef
- Power Electronics and Renewable Energy Research Laboratory (PEARL), Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia.
| |
Collapse
|
297
|
Cheng X, Zhang J, Chen B. Cyber Situation Comprehension for IoT Systems based on APT Alerts and Logs Correlation. Sensors (Basel) 2019; 19:s19184045. [PMID: 31546845 PMCID: PMC6767330 DOI: 10.3390/s19184045] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Revised: 08/28/2019] [Accepted: 09/16/2019] [Indexed: 11/21/2022]
Abstract
With the emergence of Advanced Persistent Threat (APT) attacks, many Internet of Things (IoT) systems have faced large numbers of potential threats characterized by concealment, permeability, and pertinence. However, existing methods and technologies cannot provide comprehensive and prompt recognition of latent APT attack activities in IoT systems. To address this problem, we propose an APT Alerts and Logs Correlation Method, named APTALCM, and a framework for deploying APTALCM on IoT systems, in which an edge computing architecture is used to achieve cyber situation comprehension without excessive data transmission cost. Specifically, we first present a cyber situation ontology for modeling the concepts and properties needed to formalize APT attack activities in IoT systems. Then, we introduce a cyber situation instance similarity measurement method based on the SimRank mechanism for APT alert and log correlation. Building on instance similarity, we further propose an APT alert instance correlation method to reconstruct APT attack scenarios and an APT log instance correlation method to detect log instance communities. Through the combination of these methods, APTALCM can accomplish cyber situation comprehension effectively by recognizing APT attack intentions in IoT systems. Extensive experimental results demonstrate that the two kernel modules, i.e., the Alert Instance Correlation Module (AICM) and the Log Instance Correlation Module (LICM) in APTALCM, achieve both a high true-positive rate and a low false-positive rate.
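The SimRank mechanism the abstract builds on is the textbook iteration "two objects are similar if they are referenced by similar objects": s(a,b) = C/(|I(a)||I(b)|) · Σ s(i,j) over in-neighbour pairs, with s(a,a) = 1. The sketch below is that generic iteration on a toy in-neighbour graph, not APTALCM's adapted instance-similarity measure; the node names are invented.

```python
def simrank(graph, C=0.8, iters=10):
    """Plain SimRank. `graph` maps each node to its list of
    in-neighbours; returns pairwise similarity scores."""
    nodes = list(graph)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                    continue
                ia, ib = graph[a], graph[b]
                if not ia or not ib:
                    new[(a, b)] = 0.0  # no in-neighbours -> no evidence
                    continue
                total = sum(sim[(i, j)] for i in ia for j in ib)
                new[(a, b)] = C * total / (len(ia) * len(ib))
        sim = new
    return sim
```

For example, two alert instances that both originate from the same host reach similarity C = 0.8, which is the kind of signal the alert/log correlation modules threshold on.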
Collapse
Affiliation(s)
- Xiang Cheng
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- The Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China.
| | - Jiale Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- The Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China.
| | - Bing Chen
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- The Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China.
| |
Collapse
|
298
|
Passian A, Imam N. Nanosystems, Edge Computing, and the Next Generation Computing Systems. Sensors (Basel) 2019; 19:E4048. [PMID: 31546907 PMCID: PMC6767340 DOI: 10.3390/s19184048] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 09/11/2019] [Accepted: 09/16/2019] [Indexed: 12/24/2022]
Abstract
It is widely recognized that nanoscience and nanotechnology and their subfields, such as nanophotonics, nanoelectronics, and nanomechanics, have had a tremendous impact on recent advances in sensing, imaging, and communication, with notable developments, including novel transistors and processor architectures. For example, in addition to being supremely fast, optical and photonic components and devices are capable of operating across length, power, and spectral scales spanning multiple orders of magnitude, encompassing the range from macroscopic device sizes and kW energies to atomic domains and single-photon energies. The extreme versatility of the associated electromagnetic phenomena and applications, both classical and quantum, is therefore highly appealing to the rapidly evolving computing and communication realms, where innovations in both hardware and software are necessary to meet the growing speed and memory requirements. Development of all-optical components, photonic chips, interconnects, and processors will bring the speed of light, photon coherence properties, field confinement and enhancement, information-carrying capacity, and the broad spectrum of light into high-performance computing, the Internet of Things, and industries related to cloud, fog, and, recently, edge computing. Conversely, owing to their extraordinary properties, 0D, 1D, and 2D materials are being explored as a physical basis for the next generation of logic components and processors. Carbon nanotubes, for example, have recently been used to create a new processor beyond proof of principle. These developments, in conjunction with neuromorphic and quantum computing, are envisioned to maintain the growth of computing power beyond the projected plateau for silicon technology. We survey the qualitative figures of merit of technologies of current interest for next-generation computing, with an emphasis on edge computing.
Collapse
Affiliation(s)
- Ali Passian
- Computing & Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA.
| | - Neena Imam
- Computing & Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA.
| |
Collapse
|
299
|
Leyva-Pupo I, Santoyo-González A, Cervelló-Pastor C. A Framework for the Joint Placement of Edge Service Infrastructure and User Plane Functions for 5G. Sensors (Basel) 2019; 19:E3975. [PMID: 31540093 DOI: 10.3390/s19183975] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Revised: 09/10/2019] [Accepted: 09/11/2019] [Indexed: 12/02/2022]
Abstract
Achieving the less than 1 ms end-to-end communication latency required for certain 5G services and use cases imposes severe technical challenges for the deployment of next-generation networks. To achieve such an ambitious goal, placement of the service infrastructure and User Plane Functions (UPFs) at the network edge is mandatory. However, this solution implies a substantial increase in deployment and operational costs. To solve this joint placement problem cost-effectively, this paper introduces a framework to jointly address the placement of edge nodes (ENs) and UPFs. Our framework relies on Integer Linear Programming (ILP) and heuristic solutions. The main objective is to determine the optimal number and locations of ENs and UPFs so as to minimize overall costs while satisfying the service requirements. To this aim, several parameters and factors are considered, such as capacity, latency, costs and site restrictions. The proposed solutions are evaluated on different metrics, and the results showcase over 20% cost savings for the service infrastructure deployment. Moreover, the gap between the UPF placement heuristic and the optimal solution is equal to only one UPF in the worst cases, and a computation time reduction of over 35% is achieved in all the use cases studied.
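The ILP formulation itself is not reproduced in the abstract; as a stand-in, the brute-force search below illustrates the core trade-off the framework optimizes — open the cheapest set of candidate edge sites such that every demand point is served within a latency bound. Exhaustive enumeration is only feasible at toy scale (which is why the paper uses ILP and heuristics), and all site names, costs and latencies here are invented.

```python
from itertools import combinations


def place_edge_nodes(sites, demands, latency, cost, max_latency):
    """Return (total_cost, chosen_sites): the cheapest subset of
    candidate sites covering every demand within max_latency.
    `latency` maps (demand, site) pairs to a delay value."""
    best_cost, best_subset = float("inf"), None
    for r in range(1, len(sites) + 1):
        for subset in combinations(sites, r):
            covered = all(
                any(latency[(d, s)] <= max_latency for s in subset)
                for d in demands
            )
            if covered:
                total = sum(cost[s] for s in subset)
                if total < best_cost:
                    best_cost, best_subset = total, subset
    return best_cost, best_subset
```

Tightening the latency bound forces more sites open and the cost up, mirroring the paper's observation that stricter 5G latency targets raise deployment cost unless placement is optimized.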
Collapse
|
300
|
Short M, Twiddle J. An Industrial Digitalization Platform for Condition Monitoring and Predictive Maintenance of Pumping Equipment. Sensors (Basel) 2019; 19:s19173781. [PMID: 31480438 PMCID: PMC6749217 DOI: 10.3390/s19173781] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 08/22/2019] [Accepted: 08/28/2019] [Indexed: 11/16/2022]
Abstract
This paper is concerned with the implementation and field-testing of an edge device for real-time condition monitoring and fault detection for large-scale rotating equipment in the UK water industry. The edge device implements a local digital twin, processing information from low-cost transducers mounted on the equipment in real time. Condition monitoring is achieved with sliding-mode observers employed as soft sensors to estimate critical internal pump parameters, helping to detect equipment wear before damage occurs. The paper describes the implementation of the edge system on a prototype microcontroller-based embedded platform, which supports the Modbus protocol; IP/GSM communication gateways provide remote connectivity to the network core, allowing further detailed analytics for predictive maintenance to take place. The paper first describes validation testing of the edge device using Hardware-in-the-Loop techniques, followed by trials on large-scale pumping equipment in the field. The paper concludes that the proposed system potentially delivers a flexible and low-cost industrial digitalization platform for condition monitoring and predictive maintenance applications in the water industry.
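The paper's observers estimate internal pump parameters; as a hedged stand-in, here is the classic first-order sliding-mode observer on a generic plant x' = -a·x + u. The discontinuous injection k·sign(y - x̂) drives the estimation error to zero in finite time, which is what makes these observers usable as soft sensors. The plant, gains and step sizes are illustrative only, not the paper's pump model.

```python
def _sign(v):
    return (v > 0) - (v < 0)


def sliding_mode_observer(a=1.0, k=2.0, u=0.5, dt=0.001, steps=5000):
    """Euler simulation of plant x' = -a*x + u and the observer
    xhat' = -a*xhat + u + k*sign(y - xhat); returns (x, xhat)."""
    x, xhat = 1.0, 0.0          # true state starts away from the estimate
    for _ in range(steps):
        y = x                    # measured output (full-state measurement here)
        x += dt * (-a * x + u)                              # plant
        xhat += dt * (-a * xhat + u + k * _sign(y - xhat))  # observer
    return x, xhat
```

After the error reaches the sliding surface, the estimate chatters within a band of roughly k·dt around the true state; in a real soft sensor this residual, or the equivalent injection signal, is what gets monitored for wear signatures.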
Collapse
Affiliation(s)
- Michael Short
- School of Science, Engineering and Design, Teesside University, Middlesbrough TS1 3BA, UK.
| | - John Twiddle
- Scottish & Southern Energy Ltd., Knottingley, West Yorkshire WF11 8SQ, UK
| |
Collapse
|