1. Alasmary H. ScalableDigitalHealth (SDH): An IoT-Based Scalable Framework for Remote Patient Monitoring. Sensors (Basel) 2024; 24:1346. [PMID: 38400504] [PMCID: PMC10893503] [DOI: 10.3390/s24041346]
Abstract
Addressing the increasing demand for remote patient monitoring, especially among the elderly and mobility-impaired, this study proposes the "ScalableDigitalHealth" (SDH) framework. The framework integrates smart digital health solutions with latency-aware edge computing autoscaling, providing a novel approach to remote patient monitoring. By leveraging IoT technology and application autoscaling, SDH enables the real-time tracking of critical health parameters, such as ECG, body temperature, blood pressure, and oxygen saturation. These vital metrics are transmitted in real time to AWS cloud storage through a layered networking architecture. The contributions are twofold: (1) establishing real-time remote patient monitoring and (2) developing a scalable architecture that features latency-aware horizontal pod autoscaling for containerized healthcare applications. The design combines a scalable IoT-based architecture with a novel microservice autoscaling strategy for edge computing, driven by dynamic latency thresholds and enhanced by the integration of custom metrics. This work ensures heightened accessibility, cost-efficiency, and rapid responsiveness to patient needs. By dynamically adjusting the number of pods based on latency, the system optimizes responsiveness, particularly for edge computing's proximity-based processing. This fusion of technologies not only advances remote healthcare delivery but also improves Kubernetes performance, preventing unresponsiveness during periods of high usage.
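The latency-aware pod autoscaling described in this abstract can be pictured as a control loop that derives a replica count from an observed latency metric, in the spirit of the Kubernetes HPA formula (desired = ceil(current × metric / target)). This is only an illustrative sketch, not the authors' implementation; the latency values and the clamping bounds are assumptions.

```python
# Sketch of a latency-driven replica calculation, modeled on the
# Kubernetes HPA formula: desired = ceil(current * metric / target).
import math

def desired_replicas(current_pods: int, observed_latency_ms: float,
                     target_latency_ms: float,
                     min_pods: int = 1, max_pods: int = 10) -> int:
    """Scale the pod count in proportion to how far the observed
    latency deviates from the target, clamped to [min_pods, max_pods]."""
    desired = math.ceil(current_pods * observed_latency_ms / target_latency_ms)
    return max(min_pods, min(max_pods, desired))

# Latency at twice the target doubles the pods; at half, it halves them.
print(desired_replicas(2, 400, 200))  # scale out
print(desired_replicas(4, 100, 200))  # scale in
```

In a real cluster the observed latency would arrive as a custom metric (e.g. via the custom metrics API) rather than as a function argument.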
Affiliation(s)
- Hisham Alasmary
- Department of Computer Science, College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia
2. Urblik L, Kajati E, Papcun P, Zolotova I. A Modular Framework for Data Processing at the Edge: Design and Implementation. Sensors (Basel) 2023; 23:7662. [PMID: 37688118] [PMCID: PMC10490771] [DOI: 10.3390/s23177662]
Abstract
There is a rapid increase in the number of edge devices in IoT solutions, generating vast amounts of data that need to be processed and analyzed efficiently. Traditional cloud-based architectures can face latency, bandwidth, and privacy challenges when dealing with this flood of data. There is currently no unified approach to creating edge computing solutions. This work addresses the problem by exploring containerization for data processing solutions at the network's edge. One current approach involves creating a specialized application compatible with the device used; another uses containerization for deployment and monitoring. The heterogeneity of edge environments would greatly benefit from a universal modular platform. Our proposed edge computing framework implements a streaming extract, transform, and load (ETL) pipeline for data processing and analysis, using ZeroMQ as the communication backbone and containerization for scalable deployment. The results demonstrate the effectiveness of the proposed framework, making it suitable for time-sensitive IoT applications.
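The streaming extract-transform-load idea above can be sketched as chained stages, each consuming from one queue and producing to the next. In-memory queues stand in here for the ZeroMQ PUSH/PULL sockets the paper uses; the sensor values and the Fahrenheit-to-Celsius transform are invented for illustration.

```python
# Streaming ETL pipeline sketch: each stage consumes from an inbound
# queue and produces to an outbound one, mirroring how containerized
# stages would be chained over ZeroMQ PUSH/PULL sockets.
from queue import Queue

def extract(raw_readings, out_q: Queue) -> None:
    for r in raw_readings:           # e.g. readings from an edge sensor
        out_q.put(r)
    out_q.put(None)                  # end-of-stream marker

def transform(in_q: Queue, out_q: Queue) -> None:
    while (item := in_q.get()) is not None:
        out_q.put({"celsius": (item - 32) * 5 / 9})  # sample transform
    out_q.put(None)

def load(in_q: Queue) -> list:
    sink = []                        # stand-in for a database or cloud sink
    while (item := in_q.get()) is not None:
        sink.append(item)
    return sink

q1, q2 = Queue(), Queue()
extract([32, 212], q1)
transform(q1, q2)
print(load(q2))  # [{'celsius': 0.0}, {'celsius': 100.0}]
```

Because each stage only touches its two queues, any stage can be moved into its own container and the queues replaced by sockets without changing the stage logic.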
Affiliation(s)
- Lubomir Urblik
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Kosice, 042 00 Kosice, Slovakia
- Iveta Zolotova
- Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Kosice, 042 00 Kosice, Slovakia
3. Botez R, Pasca AG, Sferle AT, Ivanciu IA, Dobrota V. Efficient Network Slicing with SDN and Heuristic Algorithm for Low Latency Services in 5G/B5G Networks. Sensors (Basel) 2023; 23:6053. [PMID: 37447902] [DOI: 10.3390/s23136053]
Abstract
This paper presents a novel approach to network slicing in 5G backhaul networks, targeting services with low or very low latency requirements. We propose a modified A* algorithm that incorporates network quality-of-service parameters into a composite metric. Using a precalculated heuristic function and a real-time monitoring strategy for congestion management, the algorithm is more efficient than Dijkstra's algorithm. We integrate it into an SDN module called a path computation element, which computes the optimal path for the network slices. Experimental results show that the proposed algorithm significantly reduces processing time compared to Dijkstra's algorithm, particularly in complex topologies, with an order-of-magnitude improvement. The algorithm successfully adjusts paths in real time to meet low-latency requirements, preventing packet delay from exceeding the established threshold. End-to-end measurements using the Speedtest client validate the algorithm's performance in differentiating traffic with and without delay requirements. These results demonstrate the efficacy of our approach in achieving ultra-reliable low-latency communication (URLLC) in 5G backhaul networks.
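The core idea of the abstract above, A* search over a network graph whose edge weights fold several QoS parameters into one composite metric, guided by a precalculated heuristic, can be sketched as follows. The topology, the delay/loss numbers, the weighting factor `alpha`, and the per-node heuristic bounds are all invented for illustration; only the algorithmic shape follows the paper.

```python
# A* path search sketch over a network graph whose edge weight is a
# composite QoS metric (here: delay_ms + alpha * loss_pct), with a
# precalculated admissible lower bound per node as the heuristic.
import heapq

def composite(delay_ms: float, loss_pct: float, alpha: float = 10.0) -> float:
    return delay_ms + alpha * loss_pct

def a_star(graph: dict, h: dict, start: str, goal: str):
    """graph: node -> list of (neighbor, weight); h: admissible heuristic.
    Returns (cost, path) or None if goal is unreachable."""
    frontier = [(h[start], 0.0, start, [start])]
    best = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue                      # stale entry, already expanded cheaper
        best[node] = g
        for nbr, w in graph.get(node, []):
            heapq.heappush(frontier, (g + w + h[nbr], g + w, nbr, path + [nbr]))
    return None

g = {"A": [("B", composite(5, 0.1)), ("C", composite(2, 0.0))],
     "B": [("D", composite(1, 0.0))],
     "C": [("D", composite(9, 0.5))]}
h = {"A": 3, "B": 1, "C": 2, "D": 0}      # precalculated lower bounds
print(a_star(g, h, "A", "D"))
```

With a good precalculated heuristic, A* expands far fewer nodes than Dijkstra's algorithm (which is the special case h ≡ 0), which is where the paper's speedup comes from.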
Affiliation(s)
- Robert Botez
- Communications Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Andres-Gabriel Pasca
- Communications Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Alin-Tudor Sferle
- Communications Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Virgil Dobrota
- Communications Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
4. Čilić I, Krivić P, Podnar Žarko I, Kušek M. Performance Evaluation of Container Orchestration Tools in Edge Computing Environments. Sensors (Basel) 2023; 23:4008. [PMID: 37112349] [PMCID: PMC10143384] [DOI: 10.3390/s23084008]
Abstract
Edge computing is a viable approach to improve service delivery and performance parameters by extending the cloud with resources placed closer to a given service environment. Numerous research papers in the literature have already identified the key benefits of this architectural approach. However, most results are based on simulations performed in closed network environments. This paper aims to analyze the existing implementations of processing environments containing edge resources, taking into account the targeted quality of service (QoS) parameters and the utilized orchestration platforms. Based on this analysis, the most popular edge orchestration platforms are evaluated in terms of their workflow that allows the inclusion of remote devices in the processing environment and their ability to adapt the logic of the scheduling algorithms to improve the targeted QoS attributes. The experimental results compare the performance of the platforms and show the current state of their readiness for edge computing in real network and execution environments. These findings suggest that Kubernetes and its distributions have the potential to provide effective scheduling across the resources on the network's edge. However, some challenges still have to be addressed to completely adapt these tools for such a dynamic and distributed execution environment as edge computing implies.
5. Camacho C, Boratyn GM, Joukov V, Vera Alvarez R, Madden TL. ElasticBLAST: accelerating sequence search via cloud computing. BMC Bioinformatics 2023; 24:117. [PMID: 36967390] [PMCID: PMC10040096] [DOI: 10.1186/s12859-023-05245-9]
Abstract
BACKGROUND: Biomedical researchers use alignments produced by BLAST (Basic Local Alignment Search Tool) to categorize their query sequences. Producing such alignments is an essential bioinformatics task that is well suited for the cloud. The cloud can perform many calculations quickly as well as store and access large volumes of data. Bioinformaticians can also use it to collaborate with other researchers, sharing their results, datasets and even their pipelines on a common platform. RESULTS: We present ElasticBLAST, a cloud-native application that performs BLAST alignments in the cloud. ElasticBLAST can handle anywhere from a few to many thousands of queries and run the searches on thousands of virtual CPUs (if desired), deleting resources when it is done. It uses cloud-native tools for orchestration and can request discounted instances, lowering cloud costs for users. It is supported on Amazon Web Services and Google Cloud Platform. It can search BLAST databases that are user provided or from the National Center for Biotechnology Information. CONCLUSION: We show that ElasticBLAST is a useful application that can efficiently perform BLAST searches for the user in the cloud, demonstrating this with two examples. At the same time, it hides much of the complexity of working in the cloud, lowering the threshold for moving work to the cloud.
Affiliation(s)
- Christiam Camacho
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
- Grzegorz M. Boratyn
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
- Victor Joukov
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
- Roberto Vera Alvarez
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
- Thomas L. Madden
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
6. Wen S, Han R, Qiu K, Ma X, Li Z, Deng H, Liu CH. K8sSim: A Simulation Tool for Kubernetes Schedulers and Its Applications in Scheduling Algorithm Optimization. Micromachines (Basel) 2023; 14:651. [PMID: 36985058] [PMCID: PMC10058403] [DOI: 10.3390/mi14030651]
Abstract
In recent years, Kubernetes (K8s) has become a dominant resource management and scheduling system in the cloud. In practical scenarios, short-running cloud workloads are usually scheduled through different scheduling algorithms provided by Kubernetes. For example, artificial intelligence (AI) workloads are scheduled through different Volcano scheduling algorithms, such as GANG_MRP, GANG_LRP, and GANG_BRA. One key challenge is that the selection of scheduling algorithms has considerable impacts on job performance results. However, it takes a prohibitively long time to select the optimal algorithm because applying one algorithm in one single job may take a few minutes to complete. This poses the urgent requirement of a simulator that can quickly evaluate the performance impacts of different algorithms, while also considering scheduling-related factors, such as cluster resources, job structures and scheduler configurations. In this paper, we design and implement a Kubernetes simulator called K8sSim, which incorporates typical Kubernetes and Volcano scheduling algorithms for both generic and AI workloads, and provides an accurate simulation of their scheduling process in real clusters. We use real cluster traces from Alibaba to evaluate the effectiveness of K8sSim, and the evaluation results show that (i) compared to the real cluster, K8sSim can accurately evaluate the performance of different scheduling algorithms with similar CloseRate (a novel metric we define to intuitively show the simulation accuracy), and (ii) it can also quickly obtain the scheduling results of different scheduling algorithms by accelerating the scheduling time by an average of 38.56×.
7. Vaño R, Lacalle I, Sowiński P, S-Julián R, Palau CE. Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions. Sensors (Basel) 2023; 23:2215. [PMID: 36850813] [PMCID: PMC9967903] [DOI: 10.3390/s23042215]
Abstract
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. Challenges of containerization, operational models and the scarcity of established tools make a thorough review indispensable. The authors therefore describe the practical methods and tools found in the literature as well as in current community-led development projects, and thoroughly lay out the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major recent efforts have focused on adapting these technologies to the edge. Such initiatives have addressed either slimming down container engines and developing specifically tailored operating systems, or developing smaller Kubernetes distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules, together with the joint orchestration of these heterogeneous workloads, appear to be the topics to watch in the short to medium term.
Affiliation(s)
- Rafael Vaño
- Communications Department, Universitat Politècnica de València, 46022 Valencia, Spain
- Ignacio Lacalle
- Communications Department, Universitat Politècnica de València, 46022 Valencia, Spain
- Piotr Sowiński
- Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland
- Warsaw University of Technology, pl. Politechniki 1, 00-661 Warsaw, Poland
- Raúl S-Julián
- Communications Department, Universitat Politècnica de València, 46022 Valencia, Spain
- Carlos E. Palau
- Communications Department, Universitat Politècnica de València, 46022 Valencia, Spain
8. Kim SH, Kim T. Local Scheduling in KubeEdge-Based Edge Computing Environment. Sensors (Basel) 2023; 23:1522. [PMID: 36772562] [PMCID: PMC9921110] [DOI: 10.3390/s23031522]
Abstract
KubeEdge is an open-source platform that orchestrates containerized Internet of Things (IoT) application services in IoT edge computing environments. Based on Kubernetes, it supports heterogeneous IoT device protocols on edge nodes and provides various functions necessary to build edge computing infrastructure, such as network management between cloud and edge nodes. However, the resulting cloud-based systems are subject to several limitations. In this study, we evaluated the performance of KubeEdge in terms of the computational resource distribution and delay between edge nodes. We found that forwarding traffic between edge nodes degrades the throughput of clusters and causes service delay in edge computing environments. Based on these results, we proposed a local scheduling scheme that handles user traffic locally at each edge node. The performance evaluation results revealed that local scheduling outperforms the existing load-balancing algorithm in the edge computing environment.
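The local scheduling idea above, serving traffic on the edge node where it arrives and forwarding only when that node is saturated, can be sketched as a small dispatch rule. The node names, capacities, and the least-loaded fallback policy are assumptions for illustration, not the paper's exact scheme.

```python
# Local-first dispatch sketch: serve a request on the edge node where
# it arrives if that node has spare capacity, and only forward it to
# another node otherwise, avoiding the inter-node forwarding that
# degrades throughput in the paper's measurements.

def dispatch(request_node: str, load: dict, capacity: dict) -> str:
    """Return the node that should serve a request arriving at request_node."""
    if load[request_node] < capacity[request_node]:
        load[request_node] += 1
        return request_node                       # no forwarding traffic
    # Fall back to the least-utilized node (pays forwarding delay).
    target = min(load, key=lambda n: load[n] / capacity[n])
    load[target] += 1
    return target

load = {"edge-1": 0, "edge-2": 3}
cap = {"edge-1": 2, "edge-2": 4}
print(dispatch("edge-1", load, cap))  # served locally
print(dispatch("edge-1", load, cap))  # served locally
print(dispatch("edge-1", load, cap))  # local node full: forwarded
```

Contrast this with plain round-robin load balancing, which forwards a fixed share of every node's traffic regardless of local capacity.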
9. Prasad A, Mofjeld C, Peng Y. A Joint Model Provisioning and Request Dispatch Solution for Low-Latency Inference Services on Edge. Sensors (Basel) 2021; 21:6594. [PMID: 34640914] [PMCID: PMC8513104] [DOI: 10.3390/s21196594]
Abstract
With the advancement of machine learning, a growing number of mobile users rely on machine learning inference for making time-sensitive and safety-critical decisions. Therefore, the demand for high-quality and low-latency inference services at the network edge has become the key to modern intelligent society. This paper proposes a novel solution that jointly provisions machine learning models and dispatches inference requests to reduce inference latency on edge nodes. Existing solutions either direct inference requests to the nearest edge node to save network latency or balance edge nodes’ workload by reducing queuing and computing time. The proposed solution provisions each edge node with the optimal number and type of inference instances under a holistic consideration of networking, computing, and memory resources. Mobile users can thus be directed to utilize inference services on the edge nodes that offer minimal serving latency. The proposed solution has been implemented using TensorFlow Serving and Kubernetes on an edge cluster. Through simulation and testbed experiments under various system settings, the evaluation results showed that the joint strategy could consistently achieve lower latency than simply searching for the best edge node to serve inference requests.
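The dispatch half of the joint strategy above can be sketched as choosing the edge node with the lowest total serving latency (network delay plus queuing and compute time) rather than simply the nearest one. All the delay figures and the simple queue model below are invented; they only illustrate why "nearest node" and "best node" can differ.

```python
# Sketch of latency-aware request dispatch: estimate each edge node's
# total serving latency as network delay plus expected queuing and
# compute time, and pick the minimum.

def serving_latency(net_ms: float, queued: int, per_req_ms: float) -> float:
    # Requests already queued must drain before ours is computed.
    return net_ms + (queued + 1) * per_req_ms

def pick_node(nodes: dict) -> str:
    """nodes: name -> (network_ms, queued_requests, per_request_ms)."""
    return min(nodes, key=lambda n: serving_latency(*nodes[n]))

nodes = {
    "near-but-busy": (2.0, 10, 8.0),   # close, but a long queue
    "far-but-idle":  (15.0, 0, 8.0),   # farther, but an empty queue
}
print(pick_node(nodes))  # "far-but-idle": 23 ms beats 90 ms
```

The provisioning half of the paper then decides how many inference instances of which model each node should host so that these serving latencies stay low in the first place.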
10. Caminero AC, Muñoz-Mansilla R. Quality of Service Provision in Fog Computing: Network-Aware Scheduling of Containers. Sensors (Basel) 2021; 21:3978. [PMID: 34207675] [DOI: 10.3390/s21123978]
Abstract
State-of-the-art scenarios, such as Internet of Things (IoT) and Smart Cities, have recently arisen. They involve the processing of huge data sets under strict time requirements, rendering the use of cloud resources unfeasible. For this reason, Fog computing has been proposed as a solution; however, there remains a need for intelligent allocation decisions, in order to make it a fully usable solution in such contexts. In this paper, a network-aware scheduling algorithm is presented, which aims to select the fog node most suitable for the execution of an application within a given deadline. This decision is made taking the status of the network into account. This scheduling algorithm was implemented as an extension to the Kubernetes default scheduler, and compared with existing proposals in the literature. The comparison shows that our proposal is the only one that can execute all the submitted jobs within their deadlines (i.e., no job is rejected or executed exceeding its deadline) with certain configurations in some of the scenarios tested, thus obtaining an optimal solution in such scenarios.
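The network-aware, deadline-driven node selection described above can be sketched as: estimate each fog node's completion time (input transfer over the current network plus compute time), keep only nodes that meet the deadline, and pick the fastest, rejecting the job if none qualify. The bandwidths, compute times, and job sizes are invented; only the decision shape follows the abstract.

```python
# Network-aware scheduling sketch: select a fog node whose estimated
# completion time (transfer time plus compute time) fits the deadline.

def completion_ms(input_mb: float, bandwidth_mbps: float, compute_ms: float) -> float:
    transfer_ms = input_mb * 8 / bandwidth_mbps * 1000   # megabits over Mb/s
    return transfer_ms + compute_ms

def schedule(job, nodes: dict):
    """job: (input_mb, deadline_ms); nodes: name -> (bandwidth_mbps, compute_ms).
    Returns the feasible node finishing earliest, or None (job rejected)."""
    input_mb, deadline_ms = job
    feasible = {name: completion_ms(input_mb, bw, comp)
                for name, (bw, comp) in nodes.items()
                if completion_ms(input_mb, bw, comp) <= deadline_ms}
    return min(feasible, key=feasible.get) if feasible else None

nodes = {"fog-a": (100.0, 400.0), "fog-b": (10.0, 100.0)}
print(schedule((5.0, 1000.0), nodes))  # fog-a: 400 ms transfer + 400 ms compute
print(schedule((5.0, 100.0), nodes))   # no node meets the deadline
```

In the paper this logic runs inside an extension of the Kubernetes default scheduler, with the bandwidth estimates taken from live network monitoring.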
11. Botez R, Costa-Requena J, Ivanciu IA, Strautiu V, Dobrota V. SDN-Based Network Slicing Mechanism for a Scalable 4G/5G Core Network: A Kubernetes Approach. Sensors (Basel) 2021; 21:3773. [PMID: 34072301] [DOI: 10.3390/s21113773]
Abstract
Managing the large volumes of IoT and M2M traffic requires evaluating the scalability and reliability of all components in the end-to-end system. This includes connectivity, mobile network functions, and the applications or services receiving and processing the data from end devices. Firstly, this paper discusses the design of a containerized IoT and M2M application and the mechanisms for delivering automated scalability and high availability when deploying it in: (1) the edge using balenaCloud; (2) the Amazon Web Services cloud with EC2 instances; and (3) the dedicated Amazon Web Services IoT service. The experiments showed that there are no significant differences between edge and cloud deployments regarding resource consumption. Secondly, the solutions for scaling the 4G/5G network functions and the mobile backhaul that provide connectivity between devices and IoT/M2M applications are analyzed. In this case, the scalability and high availability of the 4G/5G components are provided by Kubernetes. The experiments showed that our proposed scaling algorithm for network slicing managed with SDN guarantees the necessary radio and network resources for end-to-end high availability.
12. Mosciatti S, Lange C, Blomer J. Increasing the Execution Speed of Containerized Analysis Workflows Using an Image Snapshotter in Combination With CVMFS. Front Big Data 2021; 4:673163. [PMID: 34046587] [PMCID: PMC8144464] [DOI: 10.3389/fdata.2021.673163]
Abstract
The past years have seen a revolution in the way scientific workloads are executed, thanks to the wide adoption of software containers. These containers run largely isolated from the host system, ensuring that the development and execution environments are the same everywhere. This enables full reproducibility of the workloads and therefore also of the associated scientific analyses. However, as the research software used becomes increasingly complex, software images easily grow to sizes of multiple gigabytes. Downloading the full image onto every compute node on which the containers are executed becomes impractical. In this paper, we describe a novel way of distributing software images on the Kubernetes platform, with which the container can start before the entire image contents become available locally (so-called "lazy pulling"). Each file required for the execution is fetched individually and subsequently cached on demand using the CernVM file system (CVMFS), enabling the execution of very large software images on potentially thousands of Kubernetes nodes with very little overhead. We present several performance benchmarks using typical high-energy physics analysis workloads.
Affiliation(s)
- Clemens Lange
- CERN, Experimental Physics Department, Geneva, Switzerland
- Jakob Blomer
- CERN, Experimental Physics Department, Geneva, Switzerland
13. Huang W, Zhou J, Zhang D. On-the-Fly Fusion of Remotely-Sensed Big Data Using an Elastic Computing Paradigm with a Containerized Spark Engine on Kubernetes. Sensors (Basel) 2021; 21:2971. [PMID: 33922709] [PMCID: PMC8122984] [DOI: 10.3390/s21092971]
Abstract
Remotely-sensed satellite image fusion is indispensable for generating long-term, gap-free Earth observation data. While cloud computing (CC) provides the big picture for remote sensing big data (RSBD), the fundamental question of efficiently fusing RSBD on CC platforms has not yet been settled. To this end, we propose a lightweight cloud-native framework for the elastic processing of RSBD. With the scaling mechanisms provided by both the Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) layers of CC, the Spark-on-Kubernetes operator model running in the framework can enhance the efficiency of Spark-based algorithms without being hampered by bottlenecks such as task latency caused by an unbalanced workload, and can ease the burden of tuning performance parameters for parallel algorithms. Internally, we propose a task scheduling mechanism (TSM) that dynamically changes the Spark executor pods' affinities to the computing hosts. The TSM learns the workload of a computing host from the ratio between the numbers of completed and failed tasks on it, and dispatches Spark executor pods to newer and less-overwhelmed computing hosts. To illustrate the advantage, we implement a parallel enhanced spatial and temporal adaptive reflectance fusion model (PESTARFM) to enable the efficient fusion of big RS images with a Spark aggregation function. We construct an OpenStack cloud computing environment to test the usability of the framework. According to the experiments, the TSM can improve the performance of the PESTARFM by about 11.7% using only PaaS scaling. When using both IaaS and PaaS scaling, the maximum performance gain with the TSM exceeds 13.6%. The fusion of such big Sentinel and PlanetScope images requires less than 4 min in the experimental environment.
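The TSM intuition above, learning host health from the ratio of completed to failed tasks and steering executor pods toward newer, less-overwhelmed hosts, can be sketched as a simple scoring rule. The host statistics are invented and the real TSM manipulates Kubernetes pod affinities rather than returning a name.

```python
# Sketch of the task scheduling mechanism (TSM) intuition: score each
# computing host by its ratio of completed to attempted tasks and
# prefer healthier hosts when placing the next Spark executor pod.

def success_ratio(completed: int, failed: int) -> float:
    attempted = completed + failed
    # A brand-new host has no history; treat it optimistically so that
    # newly added nodes attract work, as the paper intends.
    return completed / attempted if attempted else 1.0

def pick_host(stats: dict) -> str:
    """stats: host -> (completed_tasks, failed_tasks)."""
    return max(stats, key=lambda h: success_ratio(*stats[h]))

stats = {
    "host-1": (90, 10),   # overloaded: tasks have started to fail
    "host-2": (50, 0),    # healthy
    "host-3": (0, 0),     # newly added node, no history yet
}
print(pick_host(stats))
```

In a live cluster this score would feed a node-affinity preference on the executor pod spec, so the Kubernetes scheduler does the actual placement.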
14. Poniszewska-Marańda A, Czechowska E. Kubernetes Cluster for Automating Software Production Environment. Sensors (Basel) 2021; 21:1910. [PMID: 33803329] [PMCID: PMC7967216] [DOI: 10.3390/s21051910]
Abstract
Microservices, Continuous Integration and Delivery, Docker, DevOps, Infrastructure as Code: these are the current trends and buzzwords of the technological world of 2020. A popular tool that can facilitate the deployment and maintenance of microservices is Kubernetes, a platform for running containerized applications such as microservices. Two main questions were important to us: how to deploy Kubernetes itself, and how to ensure that the deployment fulfils the needs of a production environment. Our research concentrates on the analysis and evaluation of a Kubernetes cluster as a software production environment. First, however, it is necessary to determine and evaluate the requirements of a production environment. The paper presents the determination and analysis of such requirements and their evaluation in the case of a Kubernetes cluster. Next, the paper compares two methods of deploying a Kubernetes cluster: kops and eksctl. Both methods target the AWS cloud, which was chosen mainly because of its wide popularity and the range of services it provides. Besides the two chosen methods of deployment, many more exist, including the DIY method and deploying on premises.
15. Augustyn DR, Wyciślik Ł, Mrozek D. Perspectives of using Cloud computing in integrative analysis of multi-omics data. Brief Funct Genomics 2021; 20:198-206. [PMID: 33676373] [DOI: 10.1093/bfgp/elab007]
Abstract
Integrative analysis of multi-omics data is usually computationally demanding. It frequently requires building complex, multi-step analysis pipelines, applying dedicated techniques for data processing and combining several data sources. These efforts lead to a better understanding of life processes, current health state or the effects of therapeutic activities. However, many omics data analysis solutions focus only on a selected problem, disease, type of data or organism. Moreover, they are implemented for general-purpose scientific computational platforms that most often do not natively scale the calculations. These features are not conducive to advances in understanding genotype-phenotype relationships. Fortunately, with new technological paradigms, including Cloud computing, virtualization and containerization, these functionalities can be orchestrated for easy scaling and for building independent analysis pipelines for omics data. Solutions can therefore be re-used for purposes for which they were not primarily designed. This paper shows the perspectives of using advances in Cloud computing and the containerization approach for such a purpose. We first review how the Cloud computing model is utilized in multi-omics data analysis and show the weak points of the adopted solutions. Then, we introduce containerization concepts, which allow both scaling and linking of functional services designed for various purposes. Finally, using the Bioconductor software package as an example, we present a verified concept model of a universal solution that shows the potential for performing integrative analysis of multiple omics data sources.
Affiliation(s)
- Dariusz R Augustyn
- Silesian University of Technology, Department of Applied Informatics, Gliwice 44-100, Poland
- Łukasz Wyciślik
- Silesian University of Technology, Department of Applied Informatics, Gliwice 44-100, Poland
- Dariusz Mrozek
- Silesian University of Technology, Department of Applied Informatics, Gliwice 44-100, Poland
| |
Collapse
|
16
|
Nguyen ND, Kim T. Balanced Leader Distribution Algorithm in Kubernetes Clusters. Sensors (Basel) 2021; 21:869. [PMID: 33525452 PMCID: PMC7865615 DOI: 10.3390/s21030869] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 12/21/2020] [Revised: 01/23/2021] [Accepted: 01/25/2021] [Indexed: 11/29/2022]
Abstract
Container-based virtualization is becoming the de facto way to build and deploy applications because of its simplicity and convenience. Kubernetes is a well-known open-source project that provides an orchestration platform for containerized applications. An application in Kubernetes can run multiple replicas to achieve high scalability and availability. Stateless applications have no need for persistent storage, whereas stateful applications require persistent storage for each replica and usually demand strong consistency of data among replicas. To achieve this, an application often relies on a leader, which is responsible for maintaining consistency and coordinating tasks among replicas. By design, the leader therefore often carries a heavy load. In a Kubernetes cluster, concentrating the leaders of multiple applications on a single node can create a bottleneck in the system. In this paper, we propose a leader election algorithm that overcomes this bottleneck by evenly distributing leaders across the nodes of the cluster. We also conduct experiments to demonstrate the correctness and effectiveness of our leader election algorithm compared with the default algorithm in Kubernetes.
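The balancing idea in this abstract can be sketched in a few lines; this is not the paper's exact algorithm, only an assumed greedy variant: when a new application elects a leader, prefer the node that currently hosts the fewest leaders.

```python
def elect_leader(leader_count_per_node):
    """Pick the node with the fewest leaders; ties broken by node name."""
    return min(sorted(leader_count_per_node),
               key=lambda node: leader_count_per_node[node])

# Hypothetical cluster state: node-a already hosts 3 leaders.
cluster = {"node-a": 3, "node-b": 1, "node-c": 1}
chosen = elect_leader(cluster)   # "node-b" (fewest leaders, first by name)
cluster[chosen] += 1             # record the newly placed leader
```

Repeating this greedy choice for each new application keeps the per-node leader counts within one of each other, which is the "even distribution" property the paper targets, instead of letting Kubernetes' default election concentrate leaders on one node.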
|
17
|
Nguyen TT, Yeom YJ, Kim T, Park DH, Kim S. Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration. Sensors (Basel) 2020; 20:4621. [PMID: 32824508 PMCID: PMC7471989 DOI: 10.3390/s20164621] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Received: 07/21/2020] [Revised: 08/14/2020] [Accepted: 08/14/2020] [Indexed: 11/16/2022]
Abstract
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA's performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
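The HPA behavior this abstract investigates is driven by one core rule, documented in the Kubernetes HPA reference: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. A direct transcription:

```python
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric):
    """Kubernetes HPA core rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
scale_out = desired_replicas(4, 90, 60)   # -> 6
# 3 pods averaging 20% CPU against a 60% target -> scale in to 1 pod.
scale_in = desired_replicas(3, 20, 60)    # -> 1
```

The same rule applies whether the metric comes from Kubernetes Resource Metrics or from Prometheus Custom Metrics; what differs, as the paper discusses, is how fresh and how fine-grained the `current_metric` value is.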
Affiliation(s)
- Thanh-Tung Nguyen, School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea
- Yu-Jin Yeom, School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea
- Taehong Kim, School of Information and Communication Engineering, Chungbuk National University, Cheongju, Chungbuk 28644, Korea (corresponding author)
- Dae-Heon Park, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea (corresponding author)
- Sehan Kim, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
|
18
|
Németh B, Sonkoly B. Advanced Computation Capacity Modeling for Delay-Constrained Placement of IoT Services. Sensors (Basel) 2020; 20:3830. [PMID: 32660037 DOI: 10.3390/s20143830] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/22/2020] [Accepted: 07/06/2020] [Indexed: 11/27/2022]
Abstract
A vast range of sensors gather data about our environment, industries and homes. The great value hidden in these data can only be exploited if they are integrated with relevant services for analysis and use. A core concept of the Internet of Things (IoT) targets this business opportunity through various applications. Virtualized, software-controlled 5G networks are expected to achieve the scale and dynamicity of communication networks that the IoT requires. As the computation and communication infrastructure rapidly evolves, the corresponding substrate models of service placement algorithms lag behind, failing to appropriately describe resource abstraction and dynamic features. Our paper extends existing IoT service placement algorithms so that they can keep up with the latest infrastructure evolution while maintaining their existing attributes, such as end-to-end delay constraints and a cost minimization objective. We complement our recent work on 5G service placement algorithms with a theoretical foundation for resource abstraction, elasticity and delay constraints. We propose efficient solutions for aggregating computation resource capacities and for predicting the behavior of dynamic Kubernetes infrastructure within a delay-constrained service embedding framework. Our results are supported by mathematical theorems whose proofs are presented in detail.
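The delay-constrained placement problem the paper addresses can be sketched in its simplest form: among nodes whose estimated delay meets the service's constraint and whose aggregated capacity covers the demand, choose the cheapest. All node fields and values here are illustrative assumptions, not the paper's model.

```python
def place_service(nodes, demand_cpu, max_delay_ms):
    """Return the cheapest feasible node name, or None if the delay
    constraint cannot be met on this substrate."""
    feasible = [n for n in nodes
                if n["delay_ms"] <= max_delay_ms and n["free_cpu"] >= demand_cpu]
    if not feasible:
        return None
    return min(feasible, key=lambda n: n["cost"])["name"]

# Hypothetical substrate: two edge nodes and one distant cloud node.
nodes = [
    {"name": "edge-1",  "delay_ms": 5,  "free_cpu": 2,  "cost": 10},
    {"name": "edge-2",  "delay_ms": 8,  "free_cpu": 8,  "cost": 7},
    {"name": "cloud-1", "delay_ms": 40, "free_cpu": 64, "cost": 2},
]
```

With a tight 20 ms constraint the cheap cloud node is filtered out and the service lands on `edge-2`; relaxing the constraint lets the cost objective win and the placement moves to `cloud-1`. The paper's contribution is making the `free_cpu`-style capacity figure and the delay estimate trustworthy for a dynamic Kubernetes substrate, which this sketch simply takes as given.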
|
19
|
Santos J, Wauters T, Volckaert B, De Turck F. Resource Provisioning in Fog Computing: From Theory to Practice. Sensors (Basel) 2019; 19:2238. [PMID: 31091838 PMCID: PMC6567354 DOI: 10.3390/s19102238] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Received: 04/15/2019] [Revised: 05/10/2019] [Accepted: 05/12/2019] [Indexed: 11/30/2022]
Abstract
The Internet of Things (IoT) and Smart Cities continue to expand at an enormous rate. Centralized Cloud architectures cannot sustain the requirements imposed by IoT services: enormous traffic demands and low-latency constraints are among the strictest, making pure cloud solutions impractical. Fog Computing has been introduced to address this trend; however, only theoretical foundations have been established, and acceptance of its concepts is still in its early stages. Intelligent allocation decisions would provide proper resource provisioning in Fog environments. In this article, a Fog architecture based on Kubernetes, an open-source container orchestration platform, is proposed to solve this challenge. Additionally, a network-aware scheduling approach for container-based applications in Smart City deployments has been implemented as an extension to the default scheduling mechanism available in Kubernetes. Last but not least, an optimization formulation for the IoT service placement problem has been validated as a container-based application in Kubernetes, showing the full applicability of theoretical approaches in practical service deployments. Evaluations have been performed to compare the proposed approaches with the standard Kubernetes scheduling feature. Results show that the proposed approaches achieve a 70% reduction in network latency compared with the default scheduling mechanism.
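The intuition behind the network-aware extension above can be sketched as a node-scoring function: instead of ranking candidate nodes only on free resources, as the default Kubernetes scheduler does, the score also weights the measured round-trip time to the service's data source. The weights, field names and values below are assumptions for illustration, not the paper's actual scoring model.

```python
def score(node, rtt_weight=0.7, cpu_weight=0.3):
    """Lower is better: combine normalized RTT with current CPU load."""
    return rtt_weight * node["rtt_ms"] / 100 + cpu_weight * node["cpu_load"]

def schedule(nodes):
    """Bind the pod to the best-scoring (lowest-score) candidate node."""
    return min(nodes, key=score)["name"]

# Hypothetical candidates: a nearby fog gateway vs. a lightly loaded
# but distant data-center node.
nodes = [
    {"name": "fog-gw",  "rtt_ms": 4,  "cpu_load": 0.6},
    {"name": "dc-node", "rtt_ms": 60, "cpu_load": 0.1},
]
```

A resource-only scheduler would pick `dc-node` (lowest CPU load), while the latency-weighted score favors `fog-gw`; that gap is exactly where the paper's reported 70% latency reduction comes from.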
Affiliation(s)
- José Santos, Department of Information Technology, Ghent University-imec, IDLab, Technologiepark-Zwijnaarde 126, 9052 Gent, Belgium
- Tim Wauters, Department of Information Technology, Ghent University-imec, IDLab, Technologiepark-Zwijnaarde 126, 9052 Gent, Belgium
- Bruno Volckaert, Department of Information Technology, Ghent University-imec, IDLab, Technologiepark-Zwijnaarde 126, 9052 Gent, Belgium
- Filip De Turck, Department of Information Technology, Ghent University-imec, IDLab, Technologiepark-Zwijnaarde 126, 9052 Gent, Belgium
|