1
Xiao L, Luo K, Liu J, Foroughi A. A hybrid deep approach to recognizing student activity and monitoring health physique based on accelerometer data from smartphones. Sci Rep 2024; 14:14006. [PMID: 38890409] [PMCID: PMC11189493] [DOI: 10.1038/s41598-024-63934-8]
Abstract
Smartphone sensors have gained considerable traction in Human Activity Recognition (HAR), drawing attention for their diverse applications. Monitoring accelerometer data holds promise for understanding students' physical activity and fostering healthier lifestyles: it can track exercise routines, sedentary behavior, and overall fitness levels, potentially encouraging better habits, preempting health issues, and bolstering students' well-being. Traditionally, HAR analyzed signals linked to physical activities using handcrafted features. In recent years, however, deep learning has been integrated into HAR tasks, leveraging digital physiological signals from smartwatches and learning features automatically from raw sensory data. The Long Short-Term Memory (LSTM) network stands out as a potent algorithm for analyzing physiological signals, promising improved accuracy and scalability in automated signal analysis. In this article, we propose a feature analysis framework for recognizing student activity and monitoring health based on smartphone accelerometer data through an edge computing platform. Our objective is to boost HAR performance by accounting for the dynamic nature of human behavior. However, a standard LSTM's number of hidden units and initial learning rate are preset from prior knowledge, which can leave the network in a suboptimal state. To counter this, we employ a Bidirectional LSTM (BiLSTM), which processes sequences in both directions, and use Bayesian optimization to fine-tune the BiLSTM architecture. Through fivefold cross-validation on the training and testing datasets, our model achieves a classification accuracy of 97.5% on the test set. Moreover, edge computing offers real-time processing, reduced latency, enhanced privacy, bandwidth efficiency, offline capability, energy efficiency, personalization, and scalability.
Extensive experimental results validate that our proposed approach surpasses state-of-the-art methodologies in recognizing human activities and monitoring health based on smartphone accelerometer data.
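The framework above classifies fixed-length windows of raw accelerometer samples. As a minimal sketch of the usual preprocessing step, not code from the article, the window length (128 samples) and 50% overlap below are illustrative assumptions:

```python
def segment(samples, win=128, step=64):
    """Split a stream of (x, y, z) accelerometer samples into
    fixed-length, 50%-overlapping windows for a sequence model."""
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, step)]

# toy stream: 512 tri-axial samples at rest (gravity on the z-axis)
stream = [(0.0, 0.0, 9.8)] * 512
windows = segment(stream)
print(len(windows), len(windows[0]))  # 7 windows of 128 samples each
```

Each window would then be fed to the BiLSTM as one training sequence.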
Affiliation(s)
- Lei Xiao
- Chengdu Technological University, Chengdu, 610000, China
- Graduate School of Business Faculty, SEGi University, 47810, Petaling Jaya, Malaysia
- Kangrong Luo
- Chengdu Technological University, Chengdu, 610000, China
- Juntong Liu
- Chengdu University of Information Technology, Chengdu, 610000, China
- Andia Foroughi
- Department of Biomedical Engineering, Central Tehran Branch, Islamic Azad University, Tehran, Iran
2
Wang K, Ghafurian M, Chumachenko D, Cao S, Butt ZA, Salim S, Abhari S, Morita PP. Application of artificial intelligence in active assisted living for aging population in real-world setting with commercial devices - A scoping review. Comput Biol Med 2024; 173:108340. [PMID: 38555702] [DOI: 10.1016/j.compbiomed.2024.108340]
Abstract
BACKGROUND The aging population is steadily increasing, posing new challenges and opportunities for healthcare systems worldwide. Technological advancements, particularly commercially available Active Assisted Living devices, offer a promising alternative. These readily accessible products, ranging from smartwatches to home automation systems, are often equipped with Artificial Intelligence capabilities that can monitor health metrics, predict adverse events, and facilitate a safer living environment. However, no review has explored how Artificial Intelligence has been integrated into commercially available Active Assisted Living technologies, or how these devices monitor health metrics and provide healthcare solutions in real-world environments for healthy aging. This review fills that knowledge gap and identifies key issues that need to be addressed in future studies. OBJECTIVE The aim of this review is to outline current understanding, identify potential research opportunities, and highlight research gaps in published studies on the use of Artificial Intelligence in commercially available Active Assisted Living technologies that assist older individuals aging at home. METHODS A comprehensive search was conducted in six databases (PubMed, CINAHL, IEEE Xplore, Scopus, ACM Digital Library, and Web of Science) to identify relevant studies published over the past decade (2013 to 2024). Our methodology adhered to the PRISMA extension for scoping reviews to ensure rigor and transparency throughout the review process. After applying predefined inclusion and exclusion criteria to 825 retrieved articles, 64 papers were included for analysis and synthesis. RESULTS Several trends emerged from our analysis of the 64 selected papers. A majority of the work (39/64, 61%) was published after 2020. Geographically, most studies originated from East Asia and North America (36/64, 56%). The primary application of Artificial Intelligence in the reviewed literature was activity recognition (34/64, 53%), followed by daily monitoring (10/64, 16%). Methodologically, tree-based and neural-network-based approaches were the most prevalent algorithms (32/64, 50% and 31/64, 48%, respectively). A notable proportion of studies (32/64, 50%) were carried out in specially designed smart-home testbeds that simulate real-world conditions. Moreover, ambient technology was a common thread (49/64, 77%), with occupancy-related data (such as motion and electrical-appliance usage logs) and environmental sensors (indicators like temperature and humidity) the most frequently used. CONCLUSION Our results suggest that Artificial Intelligence has been increasingly deployed in real-world Active Assisted Living contexts over the past decade, offering a variety of applications aimed at healthy aging and independent living for older adults. A wide range of smart-home indicators were leveraged for comprehensive data analysis, exploring and enhancing the potential and effectiveness of solutions. However, our review identified multiple research gaps that need further investigation. First, most research has been conducted in controlled testbed environments, leaving a lack of real-world applications that could validate the technologies' efficacy and scalability. Second, there is a noticeable absence of research leveraging cloud technology, an essential tool for large-scale deployment and standardized data collection and management. Future work should prioritize these areas to maximize the potential benefits of Artificial Intelligence in Active Assisted Living settings.
Affiliation(s)
- Kang Wang
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Moojan Ghafurian
- Department of Systems Design Engineering, University of Waterloo, ON, Canada
- Dmytro Chumachenko
- National Aerospace University "Kharkiv Aviation Institute", Kharkiv, Ukraine
- Shi Cao
- Department of Systems Design Engineering, University of Waterloo, ON, Canada
- Zahid A Butt
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Shahan Salim
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Shahabeddin Abhari
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
- Plinio P Morita
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada; Department of Systems Design Engineering, University of Waterloo, ON, Canada; Centre for Digital Therapeutics, Techna Institute, University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
3
Eren C, Karamzadeh S, Kartal M. Radar human breathing dataset for applications of ambient assisted living and search and rescue operations. Data Brief 2023; 51:109757. [PMID: 38053604] [PMCID: PMC10694063] [DOI: 10.1016/j.dib.2023.109757]
Abstract
This dataset consists of signatures of human vital signs recorded by ultra-wideband (UWB) radar and lidar sensors. The data acquisition scenes cover different human postures (supine/lateral/facedown), radar antenna angles towards the subject, a set of distances, and operational radar characteristics (bandwidth selection/mean power). Raw lidar and radar data files and processed data files are provided separately in the data repository; the lidar sensor serves as the reference sensor. There are 432 data records, with eight trials per scene. A homogeneous wooden table was used to mimic clutter while forming the dataset. The dataset therefore supports applications in search and rescue operations, sleep monitoring, and ambient assisted living (AAL).
Affiliation(s)
- Cansu Eren
- Satellite Communication and Remote Sensing, Department of Communication Systems, Informatics Institute, Istanbul Technical University, Istanbul, Türkiye
- Saeid Karamzadeh
- Millimeter Wave Technologies, Intelligent Wireless System, Silicon Austria Labs (SAL), 4040 Linz, Austria
- Electrical and Electronics Engineering Department, Faculty of Engineering and Natural Sciences, Bahçeşehir University, 34349 Istanbul, Türkiye
- Mesut Kartal
- Department of Electronics and Communication Engineering, Istanbul Technical University, Istanbul, Türkiye
4
Diraco G, Rescio G, Siciliano P, Leone A. Review on Human Action Recognition in Smart Living: Sensing Technology, Multimodality, Real-Time Processing, Interoperability, and Resource-Constrained Processing. Sensors (Basel) 2023; 23:5281. [PMID: 37300008] [DOI: 10.3390/s23115281]
Abstract
Smart living, a concept that has gained increasing attention in recent years, revolves around integrating advanced technologies in homes and cities to enhance citizens' quality of life. Sensing and human action recognition are crucial aspects of this concept. Smart living applications span various domains, such as energy consumption, healthcare, transportation, and education, which greatly benefit from effective human action recognition. This field, originating from computer vision, seeks to recognize human actions and activities using not only visual data but many other sensor modalities. This paper comprehensively reviews the literature on human action recognition in smart living environments, synthesizing the main contributions, challenges, and future research directions. The review selects five key domains, i.e., Sensing Technology, Multimodality, Real-time Processing, Interoperability, and Resource-Constrained Processing, as they encompass the critical aspects required for successfully deploying human action recognition in smart living and highlight the essential role that sensing and recognition play in developing and implementing smart living solutions. This paper serves as a valuable resource for researchers and practitioners seeking to further explore and advance the field of human action recognition in smart living.
Affiliation(s)
- Giovanni Diraco
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Gabriele Rescio
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Pietro Siciliano
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Alessandro Leone
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
5
Tang J, Fan K, Xie W, Zeng L, Han F, Huang G, Wang T, Liu A, Zhang S. A Semi-supervised Sensing Rate Learning based CMAB scheme to combat COVID-19 by trustful data collection in the crowd. Comput Commun 2023; 206:85-100. [PMID: 37197296] [PMCID: PMC10171893] [DOI: 10.1016/j.comcom.2023.04.030]
Abstract
The recruitment of trustworthy, high-quality workers is an important research issue in Mobile Crowd Sensing (MCS). Previous studies either assume that worker quality is known in advance, or that the platform learns it as soon as it receives the collected data. In reality, to reduce costs and thus maximize revenue, many strategic workers do not perform their sensing tasks honestly and report fake data to the platform, which is called a false data attack, and it is very hard for the platform to evaluate the authenticity of the received data. In this paper, an incentive mechanism named Semi-supervision based Combinatorial Multi-Armed Bandit reverse Auction (SCMABA) is proposed to solve the recruitment problem of multiple unknown and strategic workers in MCS. First, we model worker recruitment as a multi-armed bandit reverse auction and design a UCB-based algorithm to separate exploration from exploitation, treating the Sensing Rates (SRs) of recruited workers as the bandit's gain. Next, a Semi-supervised Sensing Rate Learning (SSRL) approach is proposed to obtain workers' SRs quickly and accurately; it consists of two phases, supervision and self-supervision. Last, SCMABA organically combines the SR-acquisition mechanism with the multi-armed bandit reverse auction: supervised SR learning is used during exploration, and self-supervised learning during exploitation. We theoretically prove that SCMABA achieves truthfulness and individual rationality, and we demonstrate its outstanding performance through in-depth simulations on real-world data traces.
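The UCB-based exploration/exploitation idea can be sketched with a generic UCB1 loop that treats each worker's sensing rate as the reward of a bandit arm. The worker pool, true sensing rates, and round count below are invented for illustration, and the sketch omits SCMABA's auction (bid) component and semi-supervised SR learning:

```python
import math
import random

def recruit(true_srs, rounds, rng):
    """UCB1-style worker selection: the empirical Sensing Rate (SR)
    plays the role of the arm's reward, as in the bandit view above."""
    k = len(true_srs)
    n = [0] * k       # times each worker was recruited
    mean = [0.0] * k  # empirical SR estimates
    picks = [0] * k
    for t in range(1, rounds + 1):
        if t <= k:                      # recruit each worker once first
            w = t - 1
        else:                           # empirical mean + exploration bonus
            w = max(range(k),
                    key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        # simulate whether the worker reports honest (useful) data this round
        reward = 1.0 if rng.random() < true_srs[w] else 0.0
        n[w] += 1
        mean[w] += (reward - mean[w]) / n[w]  # incremental mean update
        picks[w] += 1
    return picks

# three hypothetical workers; the first is the most reliable
picks = recruit([0.9, 0.5, 0.3], rounds=2000, rng=random.Random(0))
```

After enough rounds, the most reliable worker dominates the recruitment counts while the others are still sampled occasionally, which is the exploration/exploitation separation the abstract describes.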
Affiliation(s)
- Jianheng Tang
- School of Computer Science and Engineering, Central South University, Changsha, 410083, China
- Kejia Fan
- School of Computer Science and Engineering, Central South University, Changsha, 410083, China
- Wenxuan Xie
- School of Computer Science and Engineering, Central South University, Changsha, 410083, China
- Luomin Zeng
- School of Civil Engineering, Central South University, Changsha, 410083, China
- Feijiang Han
- School of Computer Science and Engineering, Central South University, Changsha, 410083, China
- Guosheng Huang
- School of Computer Science and Engineering, Hunan First Normal University, Changsha, 410205, China
- Tian Wang
- Department of Artificial Intelligence and Future Networks, Beijing Normal University & UIC, Zhuhai, Guangdong, China
- Anfeng Liu
- School of Computer Science and Engineering, Central South University, Changsha, 410083, China
- Shaobo Zhang
- School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
6
Bhola G, Vishwakarma DK. A review of vision-based indoor HAR: state-of-the-art, challenges, and future prospects. Multimed Tools Appl 2023:1-41. [PMID: 37362688] [PMCID: PMC10173923] [DOI: 10.1007/s11042-023-15443-5]
Abstract
With the advent of technology, we have grown more comfortable using gadgets, cameras, and similar devices, and Artificial Intelligence has become an integral part of most of the tasks we perform throughout the day. In such a scenario, cameras and vision-based sensors offer a way out of many real-time problems and challenges. One major application of these vision-based systems is indoor Human Activity Recognition (HAR), which serves a variety of scenarios ranging from smart homes, elderly care, assisted living, and human behavior pattern analysis to abnormal activity recognition such as falling, slipping, and domestic violence. The real-time impact of HAR has made indoor activity recognition an area heavily explored by industry, which aims to attract users with products across multiple domains. Considering these aspects of HAR, this work presents a detailed survey of indoor HAR. We highlight recent methodologies and their performance in the field of indoor activity recognition, and discuss the challenges, a detailed study of approaches with real-world applications of indoor HAR, the datasets available for indoor activity, and their technical details. We propose a taxonomy for indoor HAR and highlight the state of the art and future prospects, noting the research gaps and the shortcomings of recent surveys with respect to our work.
Affiliation(s)
- Geetanjali Bhola
- Biometric Research Laboratory, Department of Information Technology, Delhi Technological University, Bawana Road, Delhi, 110042, India
- Dinesh Kumar Vishwakarma
- Biometric Research Laboratory, Department of Information Technology, Delhi Technological University, Bawana Road, Delhi, 110042, India
7
Multimodal sensor fusion in the latent representation space. Sci Rep 2023; 13:2005. [PMID: 36737463] [PMCID: PMC9898225] [DOI: 10.1038/s41598-022-24754-w]
Abstract
A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process: in the first stage, a multimodal generative model is constructed from unlabelled training data; in the second, the generative model serves as a reconstruction prior and the search manifold for sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate its effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
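The second-stage search can be illustrated with a toy problem: a fixed linear map stands in for the learned generative decoder composed with a subsampling operator, and plain gradient descent finds the latent code whose decoded output best explains the observations. All matrices and values below are illustrative, not from the paper:

```python
def recover_latent(A, y, dim, lr=0.1, iters=500):
    """Search the latent space for the code z whose decoded, subsampled
    output best explains observation y: minimize ||A z - y||^2.
    A is the subsampling operator composed with a (toy linear) decoder."""
    z = [0.0] * dim
    for _ in range(iters):
        # residual r = A z - y
        r = [sum(A[i][j] * z[j] for j in range(dim)) - y[i]
             for i in range(len(y))]
        # gradient step: z -= lr * 2 * A^T r
        for j in range(dim):
            z[j] -= lr * 2 * sum(A[i][j] * r[i] for i in range(len(y)))
    return z

# toy 4-dim decoder subsampled down to 2 observed components
A = [[1.0, 0.0],   # observed row 0 of the decoder
     [1.0, 1.0]]   # observed row 2 of the decoder
y = [1.0, -1.0]    # observations generated by the true code z* = (1, -2)
z = recover_latent(A, y, dim=2)
```

With a deep generative model the same search runs over the decoder's latent space (typically by backpropagating through the decoder) rather than over this hand-written linear map.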
8
Brishtel I, Krauss S, Chamseddine M, Rambach JR, Stricker D. Driving Activity Recognition Using UWB Radar and Deep Neural Networks. Sensors (Basel) 2023; 23:818. [PMID: 36679616] [PMCID: PMC9862485] [DOI: 10.3390/s23020818]
Abstract
In-car activity monitoring is a key enabler of various automotive safety functions. Existing approaches are largely based on vision systems; radar, however, can provide a low-cost, privacy-preserving alternative. To this day, such radar-based systems are not widely researched. In our work, we introduce a novel approach that uses the Doppler signal of an ultra-wideband (UWB) radar as the input to deep neural networks for the classification of driving activities. In contrast to previous work in the domain, we focus on generalization to unseen persons and make a new radar driving activity dataset (RaDA) available to the scientific community to encourage the comparison and benchmarking of future methods.
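The Doppler input described above is typically obtained by transforming the radar's slow-time signal (one sample per frame) into the frequency domain. The stdlib-only sketch below simulates the slow-time phase rotation of a single moving target and locates its Doppler peak with a DFT; the carrier frequency, frame rate, and target speed are illustrative assumptions, not parameters from the paper:

```python
import cmath
import math

C = 3e8  # speed of light, m/s

def doppler_peak(frames, prf):
    """Magnitude DFT across slow time; returns the frequency (Hz)
    of the strongest Doppler bin (positive frequencies only)."""
    n = len(frames)
    mags = [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(frames)))
            for k in range(n // 2)]
    return mags.index(max(mags)) * prf / n

# simulate a target moving at 0.5 m/s seen by a 7.29 GHz carrier
fc, prf, n = 7.29e9, 200.0, 64       # carrier (Hz), frame rate (Hz), frames
fd = 2 * 0.5 * fc / C                # expected Doppler shift, ~24.3 Hz
frames = [cmath.exp(2j * math.pi * fd * i / prf) for i in range(n)]
est = doppler_peak(frames, prf)
```

The estimate lands on the DFT bin nearest the true Doppler shift; in a real pipeline a spectrogram of such slices over time forms the network's input.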
Affiliation(s)
- Iuliia Brishtel
- Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Department of Computer Science, RPTU, Erwin-Schrödinger-Str. 57, 67663 Kaiserslautern, Germany
- Stephan Krauss
- Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Mahdi Chamseddine
- Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Jason Raphael Rambach
- Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Didier Stricker
- Department of Augmented Vision, German Research Center for Artificial Intelligence, Trippstadter Str. 122, 67663 Kaiserslautern, Germany
- Department of Computer Science, RPTU, Erwin-Schrödinger-Str. 57, 67663 Kaiserslautern, Germany
9
Islam MS, Jannat MKA, Hossain MN, Kim WS, Lee SW, Yang SH. STC-NLSTMNet: An Improved Human Activity Recognition Method Using Convolutional Neural Network with NLSTM from WiFi CSI. Sensors (Basel) 2022; 23:356. [PMID: 36616954] [PMCID: PMC9823549] [DOI: 10.3390/s23010356]
Abstract
Human activity recognition (HAR) has emerged as a significant area of research due to its numerous possible applications, including ambient assisted living, healthcare, abnormal behaviour detection, etc. Recently, HAR using WiFi channel state information (CSI) has become a predominant and unique approach in indoor environments compared to others (i.e., sensor and vision) due to its privacy-preserving qualities, eliminating the need to carry additional devices and offering the flexibility to capture motion in both line-of-sight (LOS) and non-line-of-sight (NLOS) settings. Existing deep learning (DL)-based HAR approaches usually extract either temporal or spatial features and lack adequate means to integrate and utilize the two simultaneously, making it challenging to recognize different activities accurately. Motivated by this, we propose a novel DL-based model named spatio-temporal convolution with nested long short-term memory (STC-NLSTMNet), able to extract spatial and temporal features concurrently and automatically recognize human activity with very high accuracy. The proposed STC-NLSTMNet model mainly comprises depthwise separable convolution (DS-Conv) blocks, a feature attention module (FAM), and NLSTM. The DS-Conv blocks extract spatial features from the CSI signal, and the FAM draws attention to the most essential of these features. The resulting robust features are fed into the NLSTM to explore the hidden intrinsic temporal features in the CSI signals. The proposed STC-NLSTMNet model is evaluated using two publicly available datasets: Multi-environment and StanWiFi. The experimental results revealed that the STC-NLSTMNet model achieved activity recognition accuracies of 98.20% and 99.88% on the Multi-environment and StanWiFi datasets, respectively.
Its activity recognition performance is also compared with other existing approaches, and our proposed STC-NLSTMNet model significantly improves activity recognition accuracy, by 4% and 1.88%, respectively, compared to the best existing method.
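The parameter savings that motivate depthwise separable convolutions can be verified with simple arithmetic: a standard k x k convolution needs k²·C_in·C_out weights, whereas a depthwise filter per input channel followed by a 1 x 1 pointwise mix needs only k²·C_in + C_in·C_out. The channel counts below are illustrative, not taken from STC-NLSTMNet:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    """Depthwise separable: one k x k filter per input channel,
    then a 1 x 1 pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)    # 3*3*64*128 = 73728 weights
ds = ds_conv_params(3, 64, 128)  # 576 + 8192  = 8768 weights
```

For this 3 x 3 layer the separable form uses roughly 8x fewer weights, which is why DS-Conv blocks keep models like the one above compact.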
Affiliation(s)
- Md Shafiqul Islam
- Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Mir Kanon Ara Jannat
- Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Mohammad Nahid Hossain
- Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
- Woo-Su Kim
- Graduate School of Knowledge-Based Technology and Energy, Tech University of Korea, Siheung 15073, Republic of Korea
- Soo-Wook Lee
- Kwangwoon Academy, Kwangwoon University, Seoul 01897, Republic of Korea
- Sung-Hyun Yang
- Department of Electronics Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
10
Bocus MJ, Piechocki R. A comprehensive ultra-wideband dataset for non-cooperative contextual sensing. Sci Data 2022; 9:650. [PMID: 36273010] [PMCID: PMC9587989] [DOI: 10.1038/s41597-022-01776-7]
Abstract
Nowadays, an increasing amount of attention is being devoted to passive, non-intrusive sensing methods. Prime examples are healthcare applications, where on-body sensors are not always an option, and applications that require the detection and tracking of unauthorized (non-cooperative) targets within a given environment. In this paper we therefore present a dataset consisting of measurements obtained from Radio-Frequency (RF) devices. The dataset consists of Ultra-Wideband (UWB) data in the form of Channel Impulse Responses (CIR), acquired with Commercial Off-the-Shelf (COTS) UWB equipment. Approximately 1.6 hours of annotated measurements are provided, collected in a residential environment. The dataset can be used to passively track a target's location in an indoor environment, and also to advance UWB-based Human Activity Recognition (HAR), since three basic human activities were recorded: sitting, standing, and walking. We anticipate that such datasets may be used to develop novel algorithms and methodologies for healthcare, smart home, and security applications.
Affiliation(s)
- Mohammud J Bocus
- School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
- Robert Piechocki
- School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK