1. Yang Q, Wang X, Cao X, Liu S, Xie F, Li Y. Multi-classification of national fitness test grades based on statistical analysis and machine learning. PLoS One 2023;18:e0295674. PMID: 38134133; PMCID: PMC10745189; DOI: 10.1371/journal.pone.0295674.
Abstract
Physical fitness is a key element of a healthy life, and being overweight or lacking physical exercise leads to health problems. Therefore, assessing an individual's physical health status from a non-medical, cost-effective perspective is essential. This paper aimed to evaluate national physical health status through national physical examination data, selecting 12 indicators to divide physical health status into four levels: excellent, good, pass, and fail. The existing challenge lies in the fact that most literature on physical fitness assessment focuses mainly on two major groups, sports athletes and school students. Unfortunately, no reasonable index system has been constructed, and the existing evaluation methods have limitations and cannot be applied to other groups. This paper builds a reasonable health indicator system based on national physical examination data, breaks group restrictions, studies national groups, and uses machine learning models to provide helpful health suggestions for citizens measuring their physical status. We analyzed the significance of the selected indicators through nonparametric tests and exploratory statistical analysis. We used seven machine learning models to obtain the best multi-classification model for the physical fitness test level. Comprehensive research showed that the MLP has the best classification effect, with macro-precision reaching 74.4% and micro-precision reaching 72.8%. Furthermore, the recall rates are also above 70%, and the Hamming loss is the smallest, at 0.272. The practical implications of these findings are significant: individuals can use the classification model to understand their physical fitness level and status, exercise appropriately according to the measurement indicators, and adjust their lifestyle, which is an important aspect of health management.
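The macro- vs. micro-averaged precision and the Hamming loss reported in this abstract are easy to confuse, so a minimal sketch of how they are computed follows, on a toy four-level prediction (illustrative data only, not the paper's). Note that for single-label multi-class output, micro-precision equals accuracy, so micro-precision and Hamming loss should sum to 1 — as the reported 72.8% and 0.272 indeed do.

```python
LEVELS = ["excellent", "good", "pass", "fail"]

def precision_scores(y_true, y_pred, labels):
    """Per-class precision, plus macro (unweighted mean over classes)
    and micro (global TP / total predictions) averages."""
    per_class = {}
    tp_total = 0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        pp = sum(1 for p in y_pred if p == c)  # predicted positives for c
        per_class[c] = tp / pp if pp else 0.0
        tp_total += tp
    macro = sum(per_class.values()) / len(labels)
    micro = tp_total / len(y_pred)  # single-label case: equals accuracy
    return per_class, macro, micro

def hamming_loss(y_true, y_pred):
    """Fraction of misclassified samples (single-label multi-class)."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy predictions for illustration only.
y_true = ["excellent", "good", "good", "pass", "fail", "pass"]
y_pred = ["excellent", "good", "pass", "pass", "fail", "fail"]
per_class, macro_p, micro_p = precision_scores(y_true, y_pred, LEVELS)
```

On this toy data macro-precision is 0.75 while micro-precision is 4/6, and micro-precision plus Hamming loss is exactly 1 — the same consistency check the paper's 72.8% and 0.272 pass.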
Affiliation(s)
- Qian Yang, Xueli Wang, Xianbing Cao, Shuai Liu, Feng Xie, Yumei Li: School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
2. Baskaran P, Adams JA. Multi-dimensional task recognition for human-robot teaming: literature review. Front Robot AI 2023;10:1123374. PMID: 37609665; PMCID: PMC10440956; DOI: 10.3389/frobt.2023.1123374.
Abstract
Human-robot teams collaborating to achieve tasks under various conditions, especially in unstructured, dynamic environments, will require robots to adapt autonomously to a human teammate's state. An important element of such adaptation is the robot's ability to infer the human teammate's tasks. Environmentally embedded sensors (e.g., motion capture and cameras) are infeasible in such environments for task recognition, but wearable sensors are a viable alternative. Human-robot teams will perform a wide variety of composite and atomic tasks involving multiple activity components (i.e., gross motor, fine-grained motor, tactile, visual, cognitive, speech, and auditory) that may occur concurrently. A robot's ability to recognize the human's composite, concurrent tasks is a key requirement for realizing successful teaming. Over a hundred task recognition algorithms across multiple activity components are evaluated based on six criteria: sensitivity, suitability, generalizability, composite factor, concurrency, and anomaly awareness. The majority of the reviewed task recognition algorithms are not viable for human-robot teams in unstructured, dynamic environments, as they only detect tasks from a subset of activity components, incorporate non-wearable sensors, and rarely detect composite, concurrent tasks across multiple activity components.
Affiliation(s)
- Prakash Baskaran: Collaborative Robotics and Intelligent Systems Institute, Oregon State University, Corvallis, OR, United States
3. Biró A, Szilágyi SM, Szilágyi L, Martín-Martín J, Cuesta-Vargas AI. Machine Learning on Prediction of Relative Physical Activity Intensity Using Medical Radar Sensor and 3D Accelerometer. Sensors (Basel) 2023;23:3595. PMID: 37050655; PMCID: PMC10099263; DOI: 10.3390/s23073595.
Abstract
BACKGROUND: One of the most critical topics in sports safety today is the reduction of injury risks through controlled fatigue using non-invasive athlete monitoring. Due to the risk of injuries, it is prohibited to use accelerometer-based smart trackers, activity measurement bracelets, and smart watches for recording health parameters during performance sports activities. This study analyzes the feasibility of combining medical radar sensor and tri-axial acceleration sensor data to predict physical activity key performance indexes in performance sports by using machine learning (ML). The novelty of this method is that it uses a 24 GHz Doppler radar sensor to detect vital signs such as heartbeat and breathing without touching the person and to predict the intensity of physical activity, combined with the acceleration data from 3D accelerometers. METHODS: This study is based on data collected from professional athletes and freely available datasets created for research purposes. A combination of sensor data was used: a medical radar sensor with contactless remote sensing to measure heart rate (HR), and 3D acceleration to measure the velocity of the activity. Various advanced ML methods and models were employed on top of the sensors to analyze the vital parameters and predict the health activity key performance indexes from three-axial acceleration, heart rate data, age, and activity-level variances. RESULTS: The ML models recognized the physical activity intensity and estimated the energy expenditure at a realistic level. Leave-one-out (LOO) cross-validation (CV) as well as out-of-sample testing (OST) were used to evaluate the accuracy of activity intensity prediction. Energy expenditure prediction with three-axial accelerometer sensors using linear regression reached 97-99% accuracy on selected sports (cycling, running, and soccer). The ML-based RPE results using medical radar sensors on a time-series heart rate (HR) dataset varied between 90 and 96% accuracy. The expected level of accuracy was examined with different models; the average accuracy for all models (RPE and METs) and setups was higher than 90%. CONCLUSIONS: The ML models that classify the rating of perceived exertion and the metabolic equivalent of tasks perform consistently.
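Leave-one-out cross-validation with a linear energy-expenditure model, as described in this abstract, can be sketched as follows. The acceleration/energy data here are synthetic and the linear relationship is invented for illustration; only the LOO procedure itself is the point.

```python
import numpy as np

def loo_linear_regression(X, y):
    """Leave-one-out CV for ordinary least squares: hold out each sample,
    fit on the remaining ones, and predict the held-out target."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        # Design matrix with an intercept column.
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = coef[0] + coef[1] * X[i]
    return preds

# Hypothetical toy data: energy expenditure roughly linear in mean acceleration.
rng = np.random.default_rng(0)
accel = rng.uniform(0.5, 3.0, size=20)                 # mean 3-axis acceleration
energy = 4.0 * accel + 1.0 + rng.normal(0, 0.05, 20)   # kcal/min, small noise
pred = loo_linear_regression(accel, energy)
r2 = 1 - np.sum((energy - pred) ** 2) / np.sum((energy - energy.mean()) ** 2)
```

Each prediction is made by a model that never saw the held-out sample, which is what makes LOO a fair (if expensive) accuracy estimate on small athlete datasets.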
Affiliation(s)
- Attila Biró: Department of Physiotherapy, University of Malaga, 29071 Malaga, Spain; Department of Electrical Engineering and Information Technology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, Str. Nicolae Iorga, Nr. 1, 540088 Targu Mures, Romania; Biomedical Research Institute of Malaga (IBIMA), 29590 Malaga, Spain
- Sándor Miklós Szilágyi: Department of Electrical Engineering and Information Technology, George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540088 Targu Mures, Romania
- László Szilágyi: Computational Intelligence Research Group, Sapientia Hungarian University of Transylvania, 540485 Targu Mures, Romania; Physiological Controls Research Center, Óbuda University, 1034 Budapest, Hungary
- Jaime Martín-Martín: Biomedical Research Institute of Malaga (IBIMA), 29590 Malaga, Spain; Legal and Forensic Medicine Area, Department of Human Anatomy, Legal Medicine and History of Science, Faculty of Medicine, University of Malaga, 29071 Malaga, Spain
- Antonio Ignacio Cuesta-Vargas: Department of Physiotherapy, University of Malaga, 29071 Malaga, Spain; Biomedical Research Institute of Malaga (IBIMA), 29590 Malaga, Spain; Faculty of Health Science, School of Clinical Science, Queensland University of Technology, Brisbane 4000, Australia
4. Empirical Comparison between Deep and Classical Classifiers for Speaker Verification in Emotional Talking Environments. Information 2022;13:456. DOI: 10.3390/info13100456.
Abstract
Speech signals carry various bits of information relevant to the speaker, such as age, gender, accent, language, health, and emotions. Emotions are conveyed through modulations of facial and vocal expressions. This paper conducts an empirical comparison of performance between the classical classifiers, i.e., Gaussian Mixture Model (GMM), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANN), and the deep learning classifiers, i.e., Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU), in addition to the i-vector approach, for a text-independent speaker verification task in neutral and emotional talking environments. The deep models undergo hyperparameter tuning using grid search optimization. The models are trained and tested using a private Arabic Emirati speech database, the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and the public Crowd-Sourced Emotional Multimodal Actors (CREMA) database. Evaluation was carried out through Equal Error Rate (EER) along with Area Under the Curve (AUC) scores. Experimental results illustrate that deep architectures do not necessarily outperform classical classifiers: the GMM model yields the lowest EER values and the best AUC scores across all datasets among the classical classifiers, and the i-vector model surpasses all the fine-tuned deep models (CNN, LSTM, and GRU) on both evaluation metrics in neutral as well as emotional speech. Moreover, the GMM outperforms the i-vector on the Emirati and RAVDESS databases.
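The Equal Error Rate used for evaluation above is the operating point where the false-acceptance rate (impostor scores accepted) equals the false-rejection rate (genuine scores rejected). A minimal threshold-scan implementation on hypothetical verification scores:

```python
def equal_error_rate(genuine, impostor):
    """Scan thresholds over all observed scores and return the rate at the
    point where false-acceptance and false-rejection rates are closest."""
    best = None
    for thr in sorted(set(genuine) | set(impostor)):
        far = sum(s >= thr for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < thr for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Hypothetical similarity scores for illustration only.
genuine = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
impostor = [0.2, 0.3, 0.4, 0.65, 0.1, 0.25]
eer = equal_error_rate(genuine, impostor)
```

Production systems usually interpolate the DET curve for a smoother estimate, but the crossing-point idea is the same.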
5.
Abstract
It is undeniable that mobile devices have become an inseparable part of humans' daily routines due to the persistent growth of high-quality sensor devices, powerful computational resources, and massive storage capacity. Similarly, the fast development of Internet of Things technology has motivated wide research into and application of sensors, such as in human activity recognition systems. As a result, substantial existing work has utilized wearable sensors to identify human activities with a variety of techniques. In this paper, a hybrid deep learning model that amalgamates a one-dimensional Convolutional Neural Network with a bidirectional long short-term memory (1D-CNN-BiLSTM) model is proposed for wearable sensor-based human activity recognition. The one-dimensional Convolutional Neural Network transforms the prominent information in the sensor time-series data into high-level representative features. Thereafter, the bidirectional long short-term memory encodes the long-range dependencies in the features through gating mechanisms. The performance evaluation reveals that the proposed 1D-CNN-BiLSTM outshines the existing methods with a recognition rate of 95.48% on the UCI-HAR dataset, 94.17% on the Motion Sense dataset, and 100% on the Single Accelerometer dataset.
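Before a 1D-CNN-BiLSTM of the kind described above can consume raw wearable data, the sensor stream is typically segmented into fixed-length, overlapping windows. A minimal sketch of that input shaping; the window length, step, and sampling rate are arbitrary choices here, not taken from the paper:

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a (T, channels) sensor stream into overlapping fixed-length
    windows of shape (n_windows, window, channels) — the usual input tensor
    for a 1D-CNN / BiLSTM stack."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# Hypothetical 3-axis accelerometer stream: 500 samples (e.g., 10 s at 50 Hz).
stream = np.random.default_rng(1).normal(size=(500, 3))
windows = sliding_windows(stream, window=128, step=64)  # 50% overlap
```

Each window would then be labeled with the activity occurring in it and fed to the network; 50% overlap is a common compromise between data volume and redundancy.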
6. Li X. Design and Implementation of Human Motion Recognition Information Processing System Based on LSTM Recurrent Neural Network Algorithm. Comput Intell Neurosci 2021;2021:3669204. PMID: 34326865; PMCID: PMC8277512; DOI: 10.1155/2021/3669204.
Abstract
With the comprehensive development of national fitness, people of all ages in China have joined the ranks of fitness. To increase the understanding of human movement, many studies have designed software or hardware to analyze human movement states. However, the recognition efficiency of existing systems and platforms is not high and their reconstruction ability is poor, so a recognition information processing system based on an LSTM recurrent neural network under deep learning is proposed to collect and recognize human motion data. The system realizes the collection, processing, recognition, storage, and display of human motion data by constructing a three-layer human motion recognition information processing system, and introduces the LSTM recurrent neural network to optimize the recognition efficiency of the system, simplify the recognition process, and reduce the data loss caused by dimension reduction. Finally, we use a known dataset to train the model and analyze the performance and application effect of the system on actual motion states. The final results show that the performance of the LSTM recurrent neural network is better than that of traditional algorithms, with accuracy reaching 0.980, and the confusion matrix results show that the system's recognition of human motion can score up to 85 points. The tests show that the system can recognize and process human movement data well, which has great application significance for future physical education and daily physical exercise.
Affiliation(s)
- Xue Li: Department of Physical Education, Xi'an International Studies University, Xi'an 710128, Shaanxi, China
7. Suto J. The effect of hyperparameter search on artificial neural network in human activity recognition. Open Computer Science 2021. DOI: 10.1515/comp-2020-0227.
Abstract
In the last decade, many researchers have applied shallow and deep networks for human activity recognition (HAR). Currently, the trending research line in HAR is applying deep learning to extract features and classify activities from raw data. However, we observed that the authors of previous studies have not performed an efficient hyperparameter search on their artificial neural network (shallow or deep) classifiers. Therefore, in this article, we demonstrate the effect of random and Bayesian parameter search on a shallow neural network using five HAR databases. The results of this work show that a shallow neural network with correct parameter optimization can achieve similar or even better recognition accuracy than the previous best deep classifier(s) on all databases. In addition, we draw conclusions about the advantages and disadvantages of the two hyperparameter search techniques according to the results.
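Random hyperparameter search of the kind compared in this abstract can be sketched as follows. The `evaluate` function below is a stand-in for an actual train/validate run (its peak is chosen arbitrarily), and the search space is hypothetical:

```python
import random

def evaluate(hidden_units, learning_rate):
    """Stand-in for training and validating a shallow network; a real run
    would return validation accuracy. The optimum here is invented."""
    return 1.0 - abs(hidden_units - 96) / 256 - abs(learning_rate - 0.01)

def random_search(n_trials, seed=42):
    """Sample configurations independently and keep the best-scoring one."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cfg = {
            "hidden_units": rng.choice([16, 32, 64, 96, 128, 256]),
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform sampling
        }
        trials.append((evaluate(**cfg), cfg))
    return max(trials, key=lambda t: t[0]), trials

(best_score, best_cfg), trials = random_search(n_trials=30)
```

Bayesian search differs in the sampling step: instead of drawing configurations independently, it fits a surrogate model to past trials and proposes the next configuration where expected improvement is highest.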
Affiliation(s)
- Jozsef Suto: Department of IT Systems and Networks, Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
8. Oh ST, Ga DH, Lim JH. Mobile Deep Learning System That Calculates UVI Using Illuminance Value of User's Location. Sensors (Basel) 2021;21:1227. PMID: 33572393; PMCID: PMC7916185; DOI: 10.3390/s21041227.
Abstract
Ultraviolet rays are closely related to human health and, recently, optimum exposure to UV rays has been recommended, placing growing importance on correct UV information. However, many countries provide UV information services only at a local level, which makes it impossible for individuals to acquire accurate, user-based UV information unless they operate UV measurement devices with expertise in interpreting the measurement results. Users are thus limited in measuring UV information at their respective locations, and research on how to utilize mobile devices such as smartphones to overcome this limitation is also lacking. This paper proposes a mobile deep learning system that calculates the UV index (UVI) based on the illuminance values at the user's location obtained with mobile devices. The proposed method analyzed the correlation between illuminance and UVI based on a natural light database collected through actual measurements, and the deep learning model's dataset was extracted from it. After selection of the input variables to calculate the correct UVI, a TensorFlow-based deep learning model with the optimal numbers of layers and nodes was designed, implemented, and trained on the dataset. After the model was converted to a mobile deep learning model to operate in the mobile environment, it was loaded onto the mobile device. The proposed method enables providing UV information at the user's location through a mobile device equipped with illuminance sensors, even in environments without UVI measuring equipment. Comparison of the experiment results with a reference device (spectrometer) showed that the proposed method can provide UV information with an accuracy of 90-95% in summer as well as in winter.
Affiliation(s)
- Seung-Taek Oh: Smart Natural Space Research Center, Kongju National University, Cheonan 31080, Korea
- Deog-Hyeon Ga: Department of Computer Science & Engineering, Kongju National University, Cheonan 31080, Korea
- Jae-Hyun Lim: Department of Computer Science & Engineering and Department of Urban Systems Engineering, Kongju National University, Cheonan 31080, Korea (Correspondence; Tel.: +82-10-8864-6195)
9. Filippou V, Redmond AC, Bennion J, Backhouse MR, Wong D. Capturing accelerometer outputs in healthy volunteers under normal and simulated-pathological conditions using ML classifiers. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:4604-4607. PMID: 33019019; DOI: 10.1109/embc44109.2020.9176201.
Abstract
Wearable devices offer a possible solution for acquiring objective measurements of physical activity. Most current algorithms are derived using data from healthy volunteers. It is unclear whether such algorithms are suitable in specific clinical scenarios, such as when an individual has altered gait. We hypothesized that algorithms trained on a healthy population would produce less accurate results when tested on individuals with altered gait. We further hypothesized that algorithms trained on simulated-pathological gait would prove better at classifying abnormal activity. We studied healthy volunteers to assess whether activity classification accuracy differed between healthy and simulated-pathological conditions. Healthy participants (n=30) were recruited from the University of Leeds to perform nine predefined activities under healthy and simulated-pathological conditions. Activities were captured using a wrist-worn MOX accelerometer (Maastricht Instruments, NL). Data were analyzed based on the Activity Recognition Chain process. We trained Neural Network, Random Forest, k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), and Naive Bayes models to classify activity. Algorithms were trained four times: once with 'healthy' data and once with 'simulated-pathological' data, for each of activity-type and activity-task classification. In activity-type instances, the SVM provided the best results: accuracy was 98.4% when the algorithm was trained and then tested with unseen data from the same group of healthy individuals, but dropped to 52.8% when tested on simulated-pathological data. When the model was retrained with simulated-pathological data, prediction accuracy for the corresponding test set was 96.7%. Algorithms developed on healthy data are less accurate for pathological conditions. When evaluating pathological conditions, classifier algorithms developed using data from a target sub-population can restore accuracy to above 95%.
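The accuracy drop reported above when a healthy-trained model meets simulated-pathological gait is a classic distribution-shift effect. The sketch below reproduces the phenomenon with a nearest-centroid classifier on synthetic two-dimensional features; the feature dimensions, class means, and shift magnitude are invented for illustration:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Class centroids of the training features."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each sample to the label of its closest centroid."""
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[d.argmin(axis=0)]

rng = np.random.default_rng(7)
# Synthetic features for two activities, "healthy" cohort.
X_healthy = np.concatenate([rng.normal(0.0, 0.5, (100, 2)),
                            rng.normal(3.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
# Simulated-pathological cohort: same activities, shifted feature means.
X_path = X_healthy + np.array([1.8, 1.8])

model = nearest_centroid_fit(X_healthy, y)
acc_healthy = (nearest_centroid_predict(model, X_healthy) == y).mean()
acc_path = (nearest_centroid_predict(model, X_path) == y).mean()
```

The shifted cohort pushes one class across the learned decision boundary, so accuracy collapses exactly as in the study; retraining on the shifted data would recover it.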
10. Lopes JF, Ludwig L, Barbin DF, Grossmann MVE, Barbon S. Computer Vision Classification of Barley Flour Based on Spatial Pyramid Partition Ensemble. Sensors (Basel) 2019;19:2953. PMID: 31277468; PMCID: PMC6650935; DOI: 10.3390/s19132953.
Abstract
Imaging sensors are largely employed in the food processing industry for quality control. Flour from malting barley varieties is a valuable ingredient in the food industry, but its use is restricted due to quality aspects such as color variations and the presence of husk fragments. On the other hand, naked varieties present superior quality with better visual appearance and nutritional composition for human consumption. Computer Vision Systems (CVS) can provide an automatic and precise classification of samples, but identification of grain and flour characteristics requires more specialized methods. In this paper, we propose a CVS combined with the Spatial Pyramid Partition ensemble (SPPe) technique to distinguish between naked and malting types of twenty-two flour varieties using image features and machine learning. SPPe leverages the analysis of patterns from different spatial regions, providing more reliable classification. Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), J48 decision tree, and Random Forest (RF) were compared for sample classification. Machine learning algorithms embedded in the CVS were induced based on 55 image features. The results ranged from 75.00% (k-NN) to 100.00% (J48) accuracy, showing that sample assessment by CVS with SPPe was highly accurate, representing a potential technique for automatic barley flour classification.
Affiliation(s)
- Leniza Ludwig: Department of Food Sciences, Londrina State University (UEL), Londrina 86057-970, Brazil
- Sylvio Barbon: Department of Computer Science, Londrina State University (UEL), Londrina 86057-970, Brazil
11. Baldominos A, Cervantes A, Saez Y, Isasi P. A Comparison of Machine Learning and Deep Learning Techniques for Activity Recognition using Mobile Devices. Sensors (Basel) 2019;19:521. PMID: 30691177; PMCID: PMC6386875; DOI: 10.3390/s19030521.
Abstract
We have compared the performance of different machine learning techniques for human activity recognition. Experiments were made using a benchmark dataset where each subject wore a device in the pocket and another on the wrist. The dataset comprises thirteen activities, including physical activities, common postures, working activities, and leisure activities. We apply a methodology known as the activity recognition chain, a sequence of steps involving preprocessing, segmentation, feature extraction, and classification for traditional machine learning methods; we also tested convolutional deep learning networks that operate on raw data instead of computed features. Results show that the combination of two sensors does not necessarily result in improved accuracy. We determined that the best results are obtained by the extremely randomized trees approach, operating on precomputed features and on data obtained from the wrist sensor. Deep learning architectures did not produce competitive results with the tested architecture.
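Extremely randomized trees operating on precomputed features, the best-performing configuration above, can be sketched with scikit-learn. The synthetic features below (random per-class Gaussians) merely stand in for the benchmark's per-window statistics:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
# Synthetic precomputed features (e.g., per-window mean/std of wrist
# acceleration) for three stand-in activity classes.
n, n_classes = 300, 3
y = np.repeat(np.arange(n_classes), n // n_classes)
X = rng.normal(loc=y[:, None] * 2.0, scale=0.6, size=(n, 4))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
# Extra-Trees: like Random Forest, but split thresholds are also randomized.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The extra randomization of split thresholds reduces variance further than Random Forest at the cost of slightly more biased individual trees, which often pays off on noisy wearable features.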
Affiliation(s)
- Alejandro Baldominos, Alejandro Cervantes, Yago Saez, Pedro Isasi: Department of Computer Science, University Carlos III of Madrid, 28911 Leganés, Madrid, Spain
12. De Falco I, De Pietro G, Sannino G. Evaluation of artificial intelligence techniques for the classification of different activities of daily living and falls. Neural Comput Appl 2019. DOI: 10.1007/s00521-018-03973-1.
13. Iss2Image: A Novel Signal-Encoding Technique for CNN-Based Human Activity Recognition. Sensors (Basel) 2018;18:3910. PMID: 30428600; PMCID: PMC6263516; DOI: 10.3390/s18113910.
Abstract
The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, the features are chosen by humans, which requires the user to have expert knowledge or to do a large amount of empirical study. Newly developed deep learning technology can automatically extract and select features. Among the various deep learning methods, convolutional neural networks (CNNs) have the advantages of local dependency and scale invariance and are suitable for temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, namely Iss2Image (Inertial sensor signal to Image), a novel encoding technique for transforming an inertial sensor signal into an image with minimum distortion and a CNN model for image-based activity classification. Iss2Image converts real number values from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in three different dimensions. We experimentally evaluated our method using several well-known datasets and our own dataset collected from a smartphone and smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
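The core Iss2Image idea, mapping the X, Y, and Z axes of an inertial window onto the R, G, and B channels of an image, can be approximated as follows. This is a loose sketch using per-axis min-max scaling and a square reshape, not the paper's exact encoding:

```python
import numpy as np

def sensor_to_rgb(window):
    """Map a (T, 3) inertial window's X/Y/Z axes onto the R/G/B channels
    of a square (side, side, 3) uint8 image, min-max scaled per axis."""
    lo = window.min(axis=0)
    hi = window.max(axis=0)
    scaled = (window - lo) / np.where(hi > lo, hi - lo, 1.0)
    side = int(np.sqrt(len(window)))
    img = (scaled[: side * side] * 255).round().astype(np.uint8)
    return img.reshape(side, side, 3)

# Hypothetical accelerometer window of 1024 samples -> a 32x32 RGB image.
window = np.random.default_rng(3).normal(size=(1024, 3))
image = sensor_to_rgb(window)
```

Once encoded this way, each window becomes an ordinary image, so a standard 2D CNN pipeline can classify activities without hand-crafted features.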
14. Zhao S, Li W, Cao J. A User-Adaptive Algorithm for Activity Recognition Based on K-Means Clustering, Local Outlier Factor, and Multivariate Gaussian Distribution. Sensors (Basel) 2018;18:1850. PMID: 29882788; PMCID: PMC6022149; DOI: 10.3390/s18061850.
Abstract
Mobile activity recognition is significant to the development of human-centric pervasive applications including elderly care, personalized recommendations, etc. Nevertheless, the distribution of inertial sensor data can be influenced to a great extent by varying users. This means that the performance of an activity recognition classifier trained by one user’s dataset will degenerate when transferred to others. In this study, we focus on building a personalized classifier to detect four categories of human activities: light intensity activity, moderate intensity activity, vigorous intensity activity, and fall. In order to solve the problem caused by different distributions of inertial sensor signals, a user-adaptive algorithm based on K-Means clustering, local outlier factor (LOF), and multivariate Gaussian distribution (MGD) is proposed. To automatically cluster and annotate a specific user’s activity data, an improved K-Means algorithm with a novel initialization method is designed. By quantifying the samples’ informative degree in a labeled individual dataset, the most profitable samples can be selected for activity recognition model adaption. Through experiments, we conclude that our proposed models can adapt to new users with good recognition performance.
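A K-Means pipeline with a non-random seeding step, in the spirit of the improved initialization described above, might look like the following. The farthest-point heuristic used here is a common seeding choice and not necessarily the paper's novel method; the two-cluster data are synthetic:

```python
import numpy as np

def farthest_point_init(X, k, rng):
    """Pick the first center at random, then each subsequent center as the
    point farthest from all chosen centers (a common seeding heuristic)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd iterations on top of the farthest-point seeding."""
    rng = np.random.default_rng(seed)
    centers = farthest_point_init(X, k, rng)
    for _ in range(iters):
        labels = np.argmin(
            [np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(5)
# Two synthetic intensity clusters standing in for activity features.
X = np.concatenate([rng.normal(0, 0.3, (50, 2)), rng.normal(4, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2)
```

In the paper's setting, the cluster assignments would then be annotated with activity categories, outliers filtered with LOF, and the retained samples used to adapt the per-user model.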
Affiliation(s)
- Shizhen Zhao, Wenfeng Li, Jingjing Cao: School of Logistics Engineering, Wuhan University of Technology, Wuhan 430070, China