1
He R, Chen L, Chu P, Gao P, Wang J. Recent advances in nonenzymatic electrochemical biosensors for sports biomarkers: focusing on antibodies, aptamers and molecularly imprinted polymers. Anal Methods 2024. [PMID: 39212159] [DOI: 10.1039/d4ay01002g]
Abstract
Nonenzymatic electrochemical biosensors, renowned for their high sensitivity, multi-target analysis capability, and miniaturized integration, align well with the demands of sports science for wearable biosensors that are non-invasive, multi-index, continuous, and user-friendly. In the past three years, novel strategies targeting specific responses to sports biomarkers have opened new avenues for applications in sports science. However, these advancements also pose challenges in achieving adequate sensitivity and specificity for online analysis of complex sweat bio-samples. Our article focuses on three key nonenzymatic electrochemical biosensing strategies: antigen-antibody reactions, nucleic acid aptamer recognition, and molecular imprinting capture. We examine strategies for enhancing specificity and sensitivity in sports-science applications, including shortening signal-transduction paths with built-in signal probes, increasing the number of reaction sites through larger surface areas and the introduction of nanostructures, multi-target analysis, and microfluidic techniques.
Affiliation(s)
- Rui He
- Physical Education Department, Wuhan University, No. 299 Bayi Road, Wuchang District, Wuhan City, Hubei Province, People's Republic of China
- Long Chen
- School of Physical Education and Equestrian, Wuhan Business University, No. 816 Dongfeng Avenue, Wuhan Economic and Technological Development Zone, Hubei Province, People's Republic of China
- Pengfei Chu
- School of Sports Science and Physical Education, China University of Geosciences, Wuhan 430074, People's Republic of China.
- Pengcheng Gao
- Faculty of Materials Science and Chemistry, China University of Geosciences, Wuhan 430074, People's Republic of China.
- Junjie Wang
- School of Sports Science and Physical Education, China University of Geosciences, Wuhan 430074, People's Republic of China.
2
Khan AR, Manzoor HU, Ayaz F, Imran MA, Zoha A. A Privacy and Energy-Aware Federated Framework for Human Activity Recognition. Sensors (Basel) 2023; 23:9339. [PMID: 38067712] [PMCID: PMC10708886] [DOI: 10.3390/s23239339]
Abstract
Human activity recognition (HAR) using wearable sensors enables continuous monitoring for healthcare applications. However, the conventional centralised training of deep learning models on sensor data poses challenges related to privacy, communication costs, and on-device efficiency. This paper proposes a federated learning framework integrating spiking neural networks (SNNs) with long short-term memory (LSTM) networks for energy-efficient and privacy-preserving HAR. The hybrid spiking-LSTM (S-LSTM) model combines the event-driven efficiency of SNNs with the sequence-modelling capability of LSTMs. The model is trained using surrogate gradient learning and backpropagation through time, enabling fully supervised end-to-end learning. Extensive evaluations on two public datasets demonstrate that the proposed approach outperforms LSTM, CNN, and S-CNN models in accuracy and energy efficiency. For instance, the proposed S-LSTM achieved accuracies of 97.36% and 89.69% for indoor and outdoor scenarios, respectively. The results also showed a significant improvement in energy efficiency of 32.30% compared to a plain LSTM. Additionally, we highlight the significance of personalisation in HAR, where fine-tuning with local data enhances model accuracy by up to 9% for individual users.
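The event-driven efficiency the abstract above attributes to the spiking component comes from neurons that emit sparse binary spikes rather than dense activations. A minimal sketch of one leaky integrate-and-fire (LIF) timestep, not the authors' implementation and with illustrative constants:

```python
import numpy as np

def lif_step(v, x, tau=0.9, v_th=1.0):
    """One timestep of a leaky integrate-and-fire (LIF) layer.

    v: membrane potentials, x: input currents. Neurons whose potential
    crosses v_th emit a binary spike and are reset, so downstream layers
    only receive sparse, event-driven activity.
    """
    v = tau * v + x                      # leaky integration
    spikes = (v >= v_th).astype(float)   # binary, event-driven output
    v = v * (1.0 - spikes)               # reset neurons that fired
    return v, spikes

# Feed a toy input train through a single LIF neuron.
v = np.zeros(1)
spike_train = []
for x in [0.4, 0.4, 0.4, 0.0, 0.9, 0.9]:
    v, s = lif_step(v, np.array([x]))
    spike_train.append(int(s[0]))
```

The hard threshold has zero gradient almost everywhere, which is why training such a layer end-to-end requires the surrogate gradients mentioned in the abstract.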
Affiliation(s)
- Ahsan Raza Khan
- James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK; (A.R.K.); (H.U.M.); (F.A.); (M.A.I.)
- Habib Ullah Manzoor
- James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK; (A.R.K.); (H.U.M.); (F.A.); (M.A.I.)
- FSD-Campus, University of Engineering and Technology, Lahore 38000, Pakistan
- Fahad Ayaz
- James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK; (A.R.K.); (H.U.M.); (F.A.); (M.A.I.)
- Muhammad Ali Imran
- James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK; (A.R.K.); (H.U.M.); (F.A.); (M.A.I.)
- Ahmed Zoha
- James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK; (A.R.K.); (H.U.M.); (F.A.); (M.A.I.)
3
Khan YA, Imaduddin S, Singh YP, Wajid M, Usman M, Abbas M. Artificial Intelligence Based Approach for Classification of Human Activities Using MEMS Sensors Data. Sensors (Basel) 2023; 23:1275. [PMID: 36772315] [PMCID: PMC9919731] [DOI: 10.3390/s23031275]
Abstract
The integration of Micro-Electro-Mechanical Systems (MEMS) sensor technology in smartphones has greatly improved the capability for Human Activity Recognition (HAR). By utilizing Machine Learning (ML) techniques and data from these sensors, various human motion activities can be classified. This study performed experiments and compiled a large dataset of nine daily activities, including Laying Down, Stationary, Walking, Brisk Walking, Running, Stairs-Up, Stairs-Down, Squatting, and Cycling. Several ML models, such as Decision Tree Classifier, Random Forest Classifier, K Neighbors Classifier, Multinomial Logistic Regression, Gaussian Naive Bayes, and Support Vector Machine, were trained on sensor data collected from the accelerometers, gyroscopes, and magnetometers embedded in smartphones and wearable devices. The highest test accuracy of 95% was achieved using the random forest algorithm. Additionally, a custom-built Bidirectional Long Short-Term Memory (Bi-LSTM) model, a type of Recurrent Neural Network (RNN), was proposed and yielded an improved test accuracy of 98.1%. This approach differs from the traditional algorithm-based human activity detection used in current wearable technologies, resulting in improved accuracy.
Affiliation(s)
- Yusuf Ahmed Khan
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
- Syed Imaduddin
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
- Yash Pratap Singh
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
- Mohd Wajid
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
- Mohammed Usman
- Department of Electrical Engineering, King Khalid University, Abha 61411, Saudi Arabia
- Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
- Electronics and Communication Department, College of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
4
Chan HL, Ouyang Y, Chen RS, Lai YH, Kuo CC, Liao GS, Hsu WY, Chang YJ. Deep Neural Network for the Detections of Fall and Physical Activities Using Foot Pressures and Inertial Sensing. Sensors (Basel) 2023; 23:495. [PMID: 36617087] [PMCID: PMC9824659] [DOI: 10.3390/s23010495]
Abstract
Fall detection and physical activity (PA) classification are important health-maintenance issues for the elderly and for people with mobility dysfunctions. A literature review showed that most studies on fall detection and PA classification address these issues individually, and many rely on inertial sensing from the trunk and upper extremities. Although shoes are common footwear in daily off-bed activities, few of these studies focus on shoe-based measurements. In this paper, we propose a novel footwear approach to detect falls and classify various types of PA based on a hybrid of a convolutional neural network and a recurrent neural network. The footwear-based detections using deep-learning technology proved efficient on data collected from 32 participants, each performing simulated falls and various types of PA: fall detection had a higher F1-score with inertial measures than with foot pressures; detection of dynamic PAs (jump, jog, walks) had higher F1-scores with inertial measures, whereas detection of static PAs (sit, stand) had higher F1-scores with foot pressures; and the combination of foot pressures and inertial measures was the most efficient in detecting falls and both static and dynamic PAs.
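The comparisons above are all stated in terms of the F1-score, which balances precision and recall; this matters for fall detection, where falls are rare and plain accuracy would reward a detector that never fires. A small self-contained helper showing how that metric is computed:

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 = harmonic mean of precision and recall, the metric
    used to compare pressure-based and inertial-based detectors."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy windows: 1 = fall, 0 = normal activity.
score = f1_score([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
```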
Affiliation(s)
- Hsiao-Lung Chan
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Department of Biomedical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Yuan Ouyang
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Rou-Shayn Chen
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- Yen-Hung Lai
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Cheng-Chung Kuo
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Guo-Sheng Liao
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Wen-Yen Hsu
- Department of Electrical Engineering, Chang Gung University, Taoyuan 333, Taiwan
- Ya-Ju Chang
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou, Taoyuan 333, Taiwan
- School of Physical Therapy and Graduate Institute of Rehabilitation Science, College of Medicine, and Health Aging Research Center, Chang Gung University, Taoyuan 333, Taiwan
5
New machine learning approaches for real-life human activity recognition using smartphone sensor-based data. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110260]
6
Ige AO, Mohd Noor MH. A lightweight deep learning with feature weighting for activity recognition. Comput Intell 2022. [DOI: 10.1111/coin.12565]
7
Arshad MH, Bilal M, Gani A. Human Activity Recognition: Review, Taxonomy and Open Challenges. Sensors (Basel) 2022; 22:6463. [PMID: 36080922] [PMCID: PMC9460866] [DOI: 10.3390/s22176463]
Abstract
Nowadays, Human Activity Recognition (HAR) is being widely used in a variety of domains, and vision- and sensor-based data enable cutting-edge technologies to detect, recognize, and monitor human activities. Several reviews and surveys on HAR have already been published, but the constantly growing literature means the status of the field needs periodic updating. Hence, this review aims to provide insights into the state of the HAR literature published since 2018. The ninety-five articles reviewed in this study are classified to highlight application areas, data sources, techniques, and open research challenges in HAR. The majority of existing research appears to have concentrated on daily living activities, followed by individual and group-based user activities. However, there is little literature on detecting real-time activities such as suspicious activity, surveillance, and healthcare. A major portion of existing studies has used Closed-Circuit Television (CCTV) video and mobile sensor data. Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Support Vector Machines (SVM) are the most prominent techniques utilized for HAR in the literature reviewed. Lastly, the limitations and open challenges that need to be addressed are discussed.
Affiliation(s)
- Muhammad Haseeb Arshad
- Department of Computer Science, National University of Computer and Emerging Sciences, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan
- Muhammad Bilal
- Department of Software Engineering, National University of Computer and Emerging Sciences, Chiniot-Faisalabad Campus, Chiniot 35400, Pakistan
- Abdullah Gani
- Faculty of Computing and Informatics, University Malaysia Sabah, Kota Kinabalu 88400, Sabah, Malaysia
8
Abstract
It is undeniable that mobile devices have become an inseparable part of humans' daily routines, owing to the persistent growth of high-quality sensor devices, powerful computational resources, and massive storage capacity. Similarly, the rapid development of Internet of Things technology has motivated research into, and wide application of, sensors, such as in human activity recognition systems. This has produced a substantial body of work that uses wearable sensors to identify human activities with a variety of techniques. In this paper, a hybrid deep learning model that combines a one-dimensional Convolutional Neural Network with a bidirectional long short-term memory network (1D-CNN-BiLSTM) is proposed for wearable sensor-based human activity recognition. The one-dimensional Convolutional Neural Network transforms the prominent information in the sensor time-series data into high-level representative features. The bidirectional long short-term memory network then encodes the long-range dependencies in the features through gating mechanisms. The performance evaluation shows that the proposed 1D-CNN-BiLSTM outperforms existing methods, with recognition rates of 95.48% on the UCI-HAR dataset, 94.17% on the Motion Sense dataset, and 100% on the Single Accelerometer dataset.
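The 1D-CNN stage described above slides small kernels along the sensor time series to turn raw samples into local features before the BiLSTM models their long-range order. A minimal numpy sketch of that convolution stage (single channel, valid padding, ReLU), purely illustrative rather than the paper's architecture:

```python
import numpy as np

def conv1d(x, kernels, bias=None):
    """Valid-mode 1-D convolution of a single-channel time series with a
    bank of kernels, followed by ReLU -- the feature-extraction step a
    1D-CNN applies before a recurrent layer."""
    k = kernels.shape[1]
    out_len = len(x) - k + 1
    out = np.zeros((kernels.shape[0], out_len))
    for i in range(out_len):
        out[:, i] = kernels @ x[i:i + k]   # dot each kernel with the window
    if bias is not None:
        out += bias[:, None]
    return np.maximum(out, 0.0)            # ReLU

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0])   # toy sensor samples
edges = np.array([[1.0, -1.0]])                 # one difference-detecting kernel
feat = conv1d(x, edges)
```

In the full model many such kernels are learned, and their output sequence (here `feat`) is what the BiLSTM consumes in both directions.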
9
10
Ariza-Colpas PP, Vicario E, Oviedo-Carrascal AI, Butt Aziz S, Piñeres-Melo MA, Quintero-Linero A, Patara F. Human Activity Recognition Data Analysis: History, Evolutions, and New Trends. Sensors (Basel) 2022; 22:3401. [PMID: 35591091] [PMCID: PMC9103712] [DOI: 10.3390/s22093401]
Abstract
The Ambient Assisted Living (AAL) research area focuses on generating innovative technology, products, and services that provide assistance, medical care, and rehabilitation to older adults, extending the time these people can live independently, whether they suffer from neurodegenerative diseases or some disability. This important area is responsible for the development of Activity Recognition Systems (ARS), a valuable tool for identifying the type of activity carried out by older adults so as to provide assistance that allows them to carry out their daily activities with complete normality. This article reviews the literature and the evolution of the different techniques for processing this type of data, covering supervised, unsupervised, ensemble, deep, reinforcement, and transfer learning as well as metaheuristic approaches applied to this sector of health science, and reports the metrics of recent experiments for researchers in this area of knowledge. As a result, it can be identified that models based on reinforcement or transfer learning constitute a good line of work for the processing and analysis of human activity recognition.
Affiliation(s)
- Paola Patricia Ariza-Colpas
- Department of Computer Science and Electronics, Universidad de la Costa CUC, Barranquilla 080002, Colombia
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia;
- Enrico Vicario
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy; (E.V.); (F.P.)
- Ana Isabel Oviedo-Carrascal
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia;
- Shariq Butt Aziz
- Department of Computer Science and IT, University of Lahore, Lahore 44000, Pakistan;
- Fulvio Patara
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy; (E.V.); (F.P.)
11
Wearable Sensors for Activity Recognition in Ultimate Frisbee Using Convolutional Neural Networks and Transfer Learning. Sensors (Basel) 2022; 22:2560. [PMID: 35408174] [PMCID: PMC9002797] [DOI: 10.3390/s22072560]
Abstract
In human activity recognition (HAR), activities are automatically recognized and classified from a continuous stream of input sensor data. Although the scientific community has developed multiple approaches for various sports in recent years, marginal sports are rarely considered. These approaches cannot be applied directly to marginal sports, where available data are sparse and costly to acquire. Thus, we recorded and annotated inertial measurement unit (IMU) data containing different types of Ultimate Frisbee throws to investigate whether Convolutional Neural Networks (CNNs) and transfer learning can address this problem. The relevant actions were automatically detected and classified using a CNN. The proposed pipeline reaches an accuracy of 66.6%, distinguishing between nine fine-grained classes; for the classification of the three basic throwing techniques, it achieves an accuracy of 89.9%. Furthermore, the results were compared to a transfer-learning-based approach using a beach volleyball dataset as the source. Although transfer learning did not improve the classification accuracy, it significantly reduced the training time. Finally, the effect of transfer learning on a reduced dataset, i.e., without data augmentation, is analyzed. With the same number of training subjects, using the pre-trained weights improves the generalization capabilities of the network, i.e., it increases the accuracy and F1 score. This shows that transfer learning can be beneficial, especially when dealing with small datasets, as in marginal sports, and can therefore improve activity tracking in marginal sports.
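The transfer-learning idea above amounts to reusing feature-extraction weights learned on the source sport and retraining only a small classification head on the sparse target data. A toy numpy sketch of that pattern, with a hypothetical frozen layer `W_src` and synthetic data standing in for the real beach-volleyball-to-frisbee transfer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor from a source task;
# in transfer learning its weights are copied over and kept frozen.
W_src = rng.normal(size=(8, 4))

def features(x):
    return np.tanh(x @ W_src)   # frozen layer, never updated

def train_head(X, y, n_classes=2, lr=0.5, epochs=200):
    """Retrain only the small softmax head on the target-task data."""
    W = np.zeros((W_src.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        h = features(X)
        logits = h @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * (h.T @ (p - Y)) / len(y)   # cross-entropy gradient step
    return W

X = rng.normal(size=(40, 8))               # toy target-task windows
y = (features(X)[:, 0] > 0).astype(int)    # toy labels tied to the features
W_head = train_head(X, y)
acc = ((features(X) @ W_head).argmax(axis=1) == y).mean()
```

Because only the tiny head is optimized, training is fast even with few subjects, which mirrors the reduced-training-time finding in the abstract.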
12
Li Y, Wang L. Human Activity Recognition Based on Residual Network and BiLSTM. Sensors (Basel) 2022; 22:635. [PMID: 35062604] [PMCID: PMC8778132] [DOI: 10.3390/s22020635]
Abstract
Due to the wide application of human activity recognition (HAR) in sports and health, a large number of HAR models based on deep learning have been proposed. However, many existing models ignore the effective extraction of the spatial and temporal features of human activity data. This paper proposes a deep learning model based on a residual block and bi-directional LSTM (BiLSTM). The model first extracts spatial features from the multidimensional signals of MEMS inertial sensors automatically using the residual block, and then obtains the forward and backward dependencies of the feature sequence using the BiLSTM. Finally, the obtained features are fed into a Softmax layer to complete the human activity recognition. The optimal parameters of the model are obtained by experiment. A homemade dataset containing six common human activities (sitting, standing, walking, running, going upstairs, and going downstairs) is developed. The proposed model is evaluated on our dataset and two public datasets, WISDM and PAMAP2. The experimental results show that the proposed model achieves accuracies of 96.95%, 97.32%, and 97.15% on our dataset, WISDM, and PAMAP2, respectively. Compared with some existing models, the proposed model has better performance and fewer parameters.
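The residual block mentioned above adds the block's input back onto its transformed output, giving gradients an identity shortcut around the convolutions. A minimal forward-pass sketch with dense layers standing in for the convolutions (illustrative, not the paper's layer shapes):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(F(x) + x): two transforms plus a skip connection.
    The identity shortcut lets the signal (and gradients) bypass F."""
    return relu(W2 @ relu(W1 @ x) + x)

x = np.array([1.0, -2.0, 3.0])
# With zero-initialised weights F(x) = 0, so the block reduces to
# relu(x): the skip path alone carries the signal through.
out = residual_block(x, np.zeros((3, 3)), np.zeros((3, 3)))
```

This degenerate case is exactly why residual stacks train well: even before the weights learn anything useful, information still flows end to end.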
Affiliation(s)
- Yong Li
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China;
- Luping Wang
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, China
13
Human Behavior Recognition Model Based on Feature and Classifier Selection. Sensors (Basel) 2021; 21:7791. [PMID: 34883795] [PMCID: PMC8659462] [DOI: 10.3390/s21237791]
Abstract
With the rapid development of the computer and sensor fields, inertial sensor data have been widely used in human activity recognition. Most relevant studies divide human activities into basic actions and transitional actions, in which basic actions are classified from unified features while transitional actions usually rely on context information to determine the category. Because no single existing method realizes human activity recognition well, this paper proposes a human activity classification and recognition model based on smartphone inertial sensor data. The model fully considers the feature differences between actions of different properties: it uses a fixed sliding window to segment the inertial sensor data of human activities with different attributes, then extracts features and performs recognition with different classifiers. The experimental results show that dynamic and transitional actions obtained the best recognition performance on support vector machines, while static actions obtained better classification results on ensemble classifiers. As for feature selection, frequency-domain features used for dynamic actions achieved a high recognition rate, up to 99.35%, while time-domain features used for static and transitional actions achieved recognition rates of 98.40% and 91.98%, respectively.
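The core design above routes each sensor window to a classifier suited to the action's property (static vs. dynamic/transitional). A toy sketch of such a dispatcher, using window variance as the routing cue; the threshold and the stub classifiers are illustrative assumptions, not values from the paper:

```python
import numpy as np

def route_window(window, static_clf, dynamic_clf, var_threshold=0.05):
    """Dispatch a sensor window to the classifier suited to its action
    property: low-variance windows (static postures) go to one model,
    high-variance windows (dynamic actions) to another."""
    clf = static_clf if window.var() < var_threshold else dynamic_clf
    return clf(window)

# Stub classifiers standing in for the ensemble / SVM models.
static_clf = lambda w: "static"
dynamic_clf = lambda w: "dynamic"

still = np.full(50, 9.81) + 0.001 * np.arange(50)       # near-constant gravity
moving = 9.81 + np.sin(np.linspace(0, 6 * np.pi, 50))   # oscillating signal
labels = [route_window(still, static_clf, dynamic_clf),
          route_window(moving, static_clf, dynamic_clf)]
```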
14
IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies. Signals 2021. [DOI: 10.3390/signals2040043]
Abstract
Assistive technologies (ATs) often have a high dimensionality of possible movements (e.g., an assistive robot with several degrees of freedom, or a computer), but users have to control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching allows the user to input several different commands with low-dimensionality sensors by recognizing not only their output but also their sequential pattern through time, similarly to Morse code. In this paper, the algorithm is applied to the recognition of hand gestures, inputted using an inertial measurement unit worn by the user. An SVM-based algorithm, designed to be robust with small training sets (e.g., five examples per class), is developed to recognize gestures in real time. Finally, the interface is applied to control a computer's mouse and keyboard. The interface was compared against (and combined with) the head-movement-based AssystMouse software. The hand gesture interface showed encouraging results for this application but could also be used with other body parts (e.g., head and feet) and could control various ATs (e.g., an assistive robotic arm or a prosthesis).
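The Morse-code analogy above is the key idea: the identity of each recognised gesture and its position in the sequence jointly select a command, so a single low-dimensional sensor can express many commands. A minimal sketch with a hypothetical command table (the gesture names and mappings are illustrative, not from the paper):

```python
def match_sequence(observed, commands):
    """Map a sequence of recognised gestures to a command: both which
    gestures occurred and their order carry the meaning, in the spirit
    of Morse code."""
    return commands.get(tuple(observed), "unknown")

# Hypothetical command table for controlling a computer mouse.
commands = {
    ("flick",): "left_click",
    ("flick", "flick"): "double_click",
    ("roll", "flick"): "right_click",
}

action = match_sequence(["roll", "flick"], commands)
```

In the full interface, each element of `observed` would come from the SVM gesture recogniser; here the classifier output is assumed.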
15
Ganser A, Hollaus B, Stabinger S. Classification of Tennis Shots with a Neural Network Approach. Sensors (Basel) 2021; 21:5703. [PMID: 34502593] [PMCID: PMC8433919] [DOI: 10.3390/s21175703]
Abstract
Data analysis plays an increasingly valuable role in sports. The better the analysed data, the more precisely training methods can be chosen. Several solutions already exist for this purpose in the tennis industry; however, none of them combine data generation with a wristband and classification with a deep convolutional neural network (CNN). In this article, we demonstrate the development of a reliable shot-detection trigger and a deep neural network that classifies tennis shots into three and five shot types. We generated a dataset for the training of neural networks with the help of a sensor wristband that recorded 11 signals, including those of an inertial measurement unit (IMU). The final dataset included 5682 labelled shots from 16 players aged 13 to 70 years, predominantly at an amateur level. Two state-of-the-art architectures for time series classification (TSC) are compared, namely a fully convolutional network (FCN) and a residual network (ResNet). Recent advances in the field of machine learning, such as the Mish activation function and the Ranger optimizer, are utilized. Training with the rather inhomogeneous dataset led to an F1 score of 96% in classification of the main shots and 94% for the expanded set. Consequently, the study yields a solid base for more complex tennis analysis tools, such as the indication of success rates per shot type.
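A shot-detection trigger like the one described above typically flags moments where the motion signal crosses a threshold, then suppresses re-triggers for a short refractory period so one swing is not counted twice. A toy sketch; the threshold and refractory values are illustrative, not those of the wristband in the study:

```python
import numpy as np

def detect_shots(acc_magnitude, threshold=3.0, refractory=10):
    """Return sample indices where a candidate shot starts: magnitude
    exceeds `threshold` and at least `refractory` samples have passed
    since the previous trigger."""
    hits, last = [], -refractory
    for i, a in enumerate(acc_magnitude):
        if a > threshold and i - last >= refractory:
            hits.append(i)
            last = i
    return hits

mag = np.ones(60)
mag[[12, 14, 40]] = 5.0   # two spikes from one swing, plus a later swing
events = detect_shots(mag)
```

Each trigger index would then anchor the window of IMU samples handed to the FCN/ResNet classifier.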
Affiliation(s)
- Andreas Ganser
- Department of Mechatronics, MCI, Maximilianstraße 2, 6020 Innsbruck, Austria;
- Bernhard Hollaus
- Department of Mechatronics, MCI, Maximilianstraße 2, 6020 Innsbruck, Austria;
- Correspondence: Tel.: +43-(0)-512-2070-3934
16
Russell B, McDaid A, Toscano W, Hume P. Predicting Fatigue in Long Duration Mountain Events with a Single Sensor and Deep Learning Model. Sensors (Basel) 2021; 21:5442. [PMID: 34450884] [PMCID: PMC8399921] [DOI: 10.3390/s21165442]
Abstract
AIM: To determine whether an AI model and a single sensor measuring acceleration and ECG could model cognitive and physical fatigue during a self-paced trail run.
METHODS: A field-based protocol of continuous fatigue, repeated hourly, induced physical (~45 min) and cognitive (~10 min) fatigue in one healthy participant. The physical load was a 3.8 km trail run with 200 m of vertical gain, with acceleration and electrocardiogram (ECG) data collected using a single sensor. The cognitive load was a Multi Attribute Test Battery (MATB); a separate assessment battery included the Finger Tap Test (FTT), Stroop, Trail Making A and B, Spatial Memory, the Paced Visual Serial Addition Test (PVSAT), and a vertical jump. A fatigue prediction model was implemented using a Convolutional Neural Network (CNN).
RESULTS: When the fatigue test battery results were compared for sensitivity to the protocol load, FTT right hand (R2 0.71) and jump height (R2 0.78) were the most sensitive, while the other tests were less sensitive (R2 values: Stroop 0.49, Trail Making A 0.29, Trail Making B 0.05, PVSAT 0.03, spatial memory 0.003). The best prediction results were achieved with a rolling average of 200 predictions (102.4 s) during set activity types: mean absolute error for 'walk up' (MAE200 12.5%) and range of absolute error for 'run down' (RAE200 16.7%).
CONCLUSIONS: Cognitive and physical fatigue could be measured with a single wearable sensor during a practical field protocol, incorporating contextual factors in conjunction with a neural network model. This research has practical application to fatigue research in the field.
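The best results above come from smoothing the raw per-window model outputs with a 200-prediction rolling average. A small sketch of that post-processing step (the alternating toy predictions are illustrative, not the study's data):

```python
import numpy as np

def rolling_mean(preds, n=200):
    """Smooth a stream of per-window fatigue predictions with an n-point
    rolling average; the study reports its best results with n = 200
    predictions, i.e. 102.4 s of data."""
    c = np.cumsum(np.insert(np.asarray(preds, dtype=float), 0, 0.0))
    return (c[n:] - c[:-n]) / n   # windowed sums via cumulative-sum trick

noisy = np.tile([0.2, 0.4], 300)     # 600 alternating raw predictions
smooth = rolling_mean(noisy, n=200)  # jitter averages out to 0.3
```

Averaging trades responsiveness for stability: single-window errors cancel, which is why the rolled metrics (MAE200, RAE200) are the ones reported.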
Affiliation(s)
- Brian Russell
- Sports Performance Research Institute, Auckland University of Technology, Auckland 0632, New Zealand;
- National Aeronautics and Space Administration, Ames Research Center, Moffett Field, CA 94043, USA;
- Andrew McDaid
- Department of Mechanical Engineering, University of Auckland, Auckland 1142, New Zealand;
- William Toscano
- National Aeronautics and Space Administration, Ames Research Center, Moffett Field, CA 94043, USA;
- Patria Hume
- Sports Performance Research Institute, Auckland University of Technology, Auckland 0632, New Zealand;
17
Liu J. Convolutional Neural Network-Based Human Movement Recognition Algorithm in Sports Analysis. Front Psychol 2021; 12:663359. [PMID: 34248758] [PMCID: PMC8267374] [DOI: 10.3389/fpsyg.2021.663359]
Abstract
To analyse the sports psychology of athletes and identify their psychology from their movements, a human action recognition (HAR) algorithm is designed in this study. First, a HAR model based on a convolutional neural network (CNN) is established to classify the current action state by analysing the action information in the collected videos. Second, the psychology of basketball players displaying fake actions during the offensive and defensive process is investigated in combination with related sports psychology theories. The psychology of the athletes is then analysed from the collected videos in order to predict their next response action. Experimental results show that combining grayscale and red-green-blue (RGB) images reduces image loss and effectively improves the recognition accuracy of the model. The optimised convolutional three-dimensional network (C3D) HAR model designed in this study achieves a recognition accuracy of 80% with an image loss of 5.6, and its time complexity is reduced by 33%. Therefore, the proposed optimised C3D can effectively recognise human actions, and the results of this study can provide a reference for research on the image recognition of human actions in sports.
Affiliation(s)
- Jiatian Liu
- College of Strength and Conditioning, Beijing Sport University, Beijing, China
18
Mei Q, Li M. Research on sports aided teaching and training decision system oriented to deep convolutional neural network. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-219033]
Abstract
Aiming at the construction of a decision-making system for sports-assisted teaching and training, this article first presents a deep convolutional neural network model for sports-assisted teaching and training decisions. Subsequently, to meet athletes' needs for assisted physical exercise, a squat-training robot is built using a self-developed modular flexible-cable drive unit, and its control system is designed to assist athletes with squat training. First, the mechanics of the human squat are analysed and the overall structure of the robot is determined. Second, the robot's force servo control strategy is designed, comprising a flexible-cable traction-force planning link, a lateral-force compensation link, and a passive force controller for a single flexible cable. To verify the training effect of the robot, a single-flexible-cable force control experiment and a human-robot squat training experiment were carried out. In the single-flexible-cable force control experiment, excess force was suppressed by more than 50%. In the squat experiment under a 200 N load, the standard deviation of the system loading force was 7.52 N and the dynamic accuracy was above 90.2%. The experimental results show that the robot has a reasonable configuration, a small footprint, a stable control system, and high loading accuracy, and can assist squat training in physical education.
Affiliation(s)
- Qinyu Mei
- School of Football, Chengdu Sport University, Chengdu, Sichuan, China
- Ming Li
- School of Wushu, Chengdu Sport University, Chengdu, Sichuan, China
19
Li H, Zhang C, Bo J, Ding Z. Deep learning techniques-based perfection of multi-sensor fusion oriented human-robot interaction system for identification of dense organisms. Cognitive Computation and Systems 2021. [DOI: 10.1049/ccs2.12010]
Affiliation(s)
- Haiju Li
- Institute of Automation and Electronics Engineering, Qingdao University of Science and Technology, Qingdao, China
- Chuntang Zhang
- Institute of Automation and Electronics Engineering, Qingdao University of Science and Technology, Qingdao, China
- Jingwen Bo
- Institute of Automation and Electronics Engineering, Qingdao University of Science and Technology, Qingdao, China
20
Maincer D, Mansour M, Hamache A, Boudjedir C, Bounabi M. Switched time delay control based on artificial neural network for fault detection and compensation in robot manipulators. SN Applied Sciences 2021. [DOI: 10.1007/s42452-021-04376-z]
Abstract
This work proposes a switched time delay control scheme based on neural networks for robots subjected to sensor faults. In this scheme, a multilayer perceptron (MLP) artificial neural network (ANN) is introduced to reproduce the behavior of the robot in the fault-free case. This reproduction characteristic of the MLP allows instant detection of any significant sensor fault. To compensate for the effects of these faults on the robot's behavior, a time delay control (TDC) procedure is presented. The proposed controller is composed of two control laws: the first applies a small gain to the fault-free robot, while the second applies a high gain to the robot subjected to faults. The control law applied to the system is selected based on the ANN detection results, switching from the first law to the second when a significant fault is detected. Simulations performed on a SCARA arm manipulator illustrate the feasibility and effectiveness of the proposed controller. The results demonstrate that the model-free nature of the proposed controller makes it highly suitable for industrial applications.
21
Hair Fescue and Sheep Sorrel Identification Using Deep Learning in Wild Blueberry Production. Remote Sensing 2021. [DOI: 10.3390/rs13050943]
Abstract
Deep learning convolutional neural networks (CNNs) are an emerging technology that provides an opportunity to increase agricultural efficiency through remote sensing and automatic inference of field conditions. This paper examined the novel use of CNNs to identify two weeds, hair fescue and sheep sorrel, in images of wild blueberry fields. Commercial herbicide sprayers provide a uniform application of agrochemicals to manage patches of these weeds. Three object-detection and three image-classification CNNs were trained to identify hair fescue and sheep sorrel using images from 58 wild blueberry fields. The CNNs were trained on 1280 × 720 images and tested at four different internal resolutions. The CNNs were then retrained with progressively smaller training datasets, ranging from 3780 to 472 images, to determine the effect of dataset size on accuracy. YOLOv3-Tiny was the best object-detection CNN, detecting at least one target weed per image with F1-scores of 0.97 for hair fescue and 0.90 for sheep sorrel at 1280 × 736 resolution. Darknet Reference was the most accurate image-classification CNN, classifying images containing hair fescue and sheep sorrel with F1-scores of 0.96 and 0.95, respectively, at 1280 × 736. MobileNetV2 achieved comparable results at the lowest resolution, 864 × 480, with F1-scores of 0.95 for both weeds. Training dataset size had minimal effect on accuracy for all CNNs except Darknet Reference. This technology could be used in a smart sprayer to control target-specific spray applications, reducing herbicide use. Future work will involve testing the CNNs on a smart sprayer and developing an application to provide growers with field-specific information. Using CNNs to improve agricultural efficiency would create major cost savings for wild blueberry producers.
22
Compressing Deep Networks by Neuron Agglomerative Clustering. Sensors 2020; 20:6033. [PMID: 33114078 PMCID: PMC7660330 DOI: 10.3390/s20216033]
Abstract
In recent years, deep learning models have achieved remarkable successes in various applications, such as pattern recognition, computer vision, and signal processing. However, high-performance deep architectures often come with a large storage footprint and long computation times, which make it difficult to fully exploit many deep neural networks (DNNs), especially in scenarios where computing resources are limited. In this paper, to tackle this problem, we introduce a method for compressing the structure and parameters of DNNs based on neuron agglomerative clustering (NAC). Specifically, we utilize the agglomerative clustering algorithm to find similar neurons; these similar neurons, together with the connections linked to them, are then agglomerated. Using NAC, the number of parameters and the storage space of DNNs are greatly reduced, without the support of an extra library or hardware. Extensive experiments demonstrate that NAC is very effective for the neuron agglomeration of both fully connected and convolutional layers, the common building blocks of DNNs, delivering similar or even higher network accuracy. On the benchmark CIFAR-10 and CIFAR-100 datasets, after using NAC to compress the parameters of the original VGGNet by 92.96% and 81.10%, respectively, the resulting compact networks still outperform the original ones.
23
Li J, Wang J, Peng H, Zhang L, Hu Y, Su H. Neural fuzzy approximation enhanced autonomous tracking control of the wheel-legged robot under uncertain physical interaction. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.091]
24
Adaptive Robust Force Position Control for Flexible Active Prosthetic Knee Using Gait Trajectory. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10082755]
Abstract
Active prosthetic knees (APKs) have been widely used in recent decades. However, making them natural and controllable remains challenging because: (1) most existing APKs, which use rigid actuators, have difficulty achieving natural walking; and (2) traditional finite-state impedance control makes it difficult to adjust parameters for different motions and users. In this paper, a flexible APK with a compact variable stiffness actuator (VSA) is designed to obtain more flexible bionic characteristics. The VSA joint is implemented by two motors of different sizes, which connect the knee angle and the joint stiffness. Considering the complexity of prosthetic lower-limb control due to unknown APK dynamics, as well as the strong coupling between biological and prosthetic joints, an adaptive robust force/position control method is designed to generate a desired gait trajectory for the prosthesis. It can operate without an explicit model of the system dynamics or multiple tuning parameters for different gaits. The proposed model-free scheme utilizes the time-delay estimation technique, sliding mode control, and a fuzzy neural network to realize finite-time convergence and gait trajectory tracking. A virtual prototype of the APK was established in ADAMS as a testing platform and compared with two traditional time-delay control schemes. Demonstrations show that the proposed method has superior tracking characteristics and stronger robustness under uncertain disturbances, with trajectory errors within ±0.5 degrees. The VSA joint can reduce energy consumption by adjusting stiffness appropriately. Furthermore, the feasibility of this method was verified in a human–machine hybrid control model.
25
Neira-Rodado D, Nugent C, Cleland I, Velasquez J, Viloria A. Evaluating the Impact of a Two-Stage Multivariate Data Cleansing Approach to Improve to the Performance of Machine Learning Classifiers: A Case Study in Human Activity Recognition. Sensors (Basel) 2020; 20:1858. [PMID: 32230844 PMCID: PMC7180455 DOI: 10.3390/s20071858]
Abstract
Human activity recognition (HAR) is a popular field of study. The outcomes of projects in this area have the potential to impact the quality of life of people with conditions such as dementia. HAR focuses primarily on applying machine learning classifiers to data from low-level sensors such as accelerometers. The performance of these classifiers can be improved through an adequate training process. To that end, multivariate outlier detection was used to improve the quality of the data in the training set and, subsequently, the performance of the classifier. The impact of the technique was evaluated with KNN and random forest (RF) classifiers. In the case of KNN, the performance of the classifier improved from 55.9% to 63.59%.
Affiliation(s)
- Dionicio Neira-Rodado
- Department of Industrial Agroindustrial and Operations Management GIAO, Universidad de la Costa, Barranquilla 080002, Colombia; (J.V.); (A.V.)
- Chris Nugent
- School of Computing, Ulster University, Shore Road, Newtownabbey, County Antrim BT37 0QB, Northern Ireland, UK; (C.N.); (I.C.)
- Ian Cleland
- School of Computing, Ulster University, Shore Road, Newtownabbey, County Antrim BT37 0QB, Northern Ireland, UK; (C.N.); (I.C.)
- Javier Velasquez
- Department of Industrial Agroindustrial and Operations Management GIAO, Universidad de la Costa, Barranquilla 080002, Colombia; (J.V.); (A.V.)
- Amelec Viloria
- Department of Industrial Agroindustrial and Operations Management GIAO, Universidad de la Costa, Barranquilla 080002, Colombia; (J.V.); (A.V.)
26
CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks. Sensors 2020; 20:1495. [PMID: 32182829 PMCID: PMC7085644 DOI: 10.3390/s20051495]
Abstract
Mobile devices such as sensors are used to connect to the Internet and provide services to users. Web services are vulnerable to automated attacks, which can restrict mobile devices from accessing websites. To prevent such automated attacks, CAPTCHAs are widely used as a security solution. However, when a high level of distortion is applied to a CAPTCHA to make it resistant to automated attacks, it also becomes difficult for humans to recognize. In this work, we propose a method for generating a CAPTCHA image that resists recognition by machines while remaining recognizable to humans. The method utilizes style transfer to create a new image, called a style-plugged-CAPTCHA image, by incorporating the styles of other images while keeping the content of the original CAPTCHA. In our experiments, we used the TensorFlow machine learning library and six CAPTCHA datasets in use on actual websites. The experimental results show that the proposed scheme reduces the recognition rate of the DeCAPTCHA system to 3.5% and 3.2% using one and two style images, respectively, while maintaining recognizability by humans.
27
Su H, Ovur SE, Zhou X, Qi W, Ferrigno G, De Momi E. Depth vision guided hand gesture recognition using electromyographic signals. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1713886]
Affiliation(s)
- Hang Su
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
- Salih Ertug Ovur
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
- Xuanyi Zhou
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
- State Key Laboratory of High Performance Complicated, Central South University, Changsha, People's Republic of China
- Wen Qi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
- Giancarlo Ferrigno
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
- Elena De Momi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milano, Italy
28
Qi W, Aliverti A. A Multimodal Wearable System for Continuous and Real-Time Breathing Pattern Monitoring During Daily Activity. IEEE J Biomed Health Inform 2020; 24:2199-2207. [PMID: 31902783 DOI: 10.1109/jbhi.2019.2963048]
Abstract
OBJECTIVE: This study aims to understand breathing patterns during daily activities by developing a wearable respiratory and activity monitoring (WRAM) system. METHODS: A novel multimodal fusion architecture is proposed to calculate respiratory and exercise parameters while simultaneously identifying human actions. A hybrid hierarchical classification (HHC) algorithm combining deep learning and threshold-based methods is presented to distinguish 15 complex activities with enhanced accuracy and fast computation. A series of signal processing algorithms are integrated to calculate breathing and motion indices. The designed wireless communication structure manages the interactions among chest bands, mobile devices, and the data processing center. RESULTS: The advantage of the proposed HHC method is demonstrated by comparing its average accuracy (97.22%) and prediction time (0.0094 s) with machine learning and deep learning approaches. Nine breathing patterns during 15 activities were analyzed using data from 12 subjects. With 12 hours of naturalistic data collected from one participant, the WRAM system reports breathing and exercise performance within the identified motions. A demonstration shows the ability of the WRAM system to monitor multiple users' breathing and exercise status in real time. CONCLUSION: The present system demonstrates the usefulness of the framework for breathing-pattern monitoring during daily activities, which may potentially be used in healthcare. SIGNIFICANCE: The proposed multimodal WRAM system offers new insights into breathing function during exercise and presents a novel approach for precision medicine and health-state monitoring.
29
Zhuang Z, Xue Y. Sport-Related Human Activity Detection and Recognition Using a Smartwatch. Sensors 2019; 19:5001. [PMID: 31744127 PMCID: PMC6891622 DOI: 10.3390/s19225001]
Abstract
As an active research field, sport-related activity monitoring plays an important role in people's lives and health. It is often treated as a human activity recognition task in which a fixed-length sliding window is used to segment long-term activity signals. However, activities with complex motion states and non-periodicity can be monitored more effectively if the algorithm can accurately detect the duration of each meaningful motion state, an ability the sliding-window approach lacks. In this study, we focused on two types of activities for sport-related activity monitoring, which we regard as a human activity detection and recognition task. For non-periodic activities, we propose an interval-based detection and recognition method that accurately determines the duration of each target motion state by generating candidate intervals. For weakly periodic activities, we propose a classification-based periodic matching method that uses periodic matching to segment the motion states. Experimental results show that the proposed methods performed better than the sliding-window method.