1
Arulkumar A, Babu P. Human hand gesture recognition using fast Fourier transform with coot optimization based on deep neural network. Network (Bristol, England) 2024; 35:488-519. PMID: 39169674; DOI: 10.1080/0954898x.2024.2389231.
Abstract
Hand motion detection is particularly important for managing prosthetic movement for individuals with amputated limbs. Existing algorithms are complex, time-consuming, and struggle to achieve high accuracy. A deep neural network (DNN) is therefore proposed to recognize human hand movements. First, the raw EMG signal is captured and pre-processed with high-pass and low-pass Butterworth filters to remove noise. The pre-processed EMG signal is then segmented with a sliding window, which addresses the overlapping problem. Features are extracted from the segmented signal using the fast Fourier transform, and an optimal subset of features is selected with the coot optimization algorithm. The selected features are fed to a deep neural network classifier that recognizes the hand movements. Simulation analysis shows that the proposed method obtains 95% accuracy, 0.05% error, 94% precision, and 92% specificity, and attains better performance than existing approaches. This prediction model can help control the movement of amputees with impaired hand motion and improve their standard of living.
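The front end of the pipeline this abstract describes (sliding-window segmentation followed by FFT feature extraction) can be sketched in a few lines of Python with NumPy; the window length, step, and number of pooled spectral bins below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a 1-D signal into overlapping windows (sliding-window scheme)."""
    n = 1 + max(0, (len(signal) - win_len) // step)
    return np.stack([signal[i * step : i * step + win_len] for i in range(n)])

def fft_features(window, n_bins=8):
    """Magnitude spectrum of one window, pooled into a few coarse frequency bins."""
    mag = np.abs(np.fft.rfft(window))
    bins = np.array_split(mag, n_bins)
    return np.array([b.mean() for b in bins])

# Toy "EMG" signal: 1 s at 1 kHz with a dominant 50 Hz component plus noise.
rng = np.random.default_rng(0)
t = np.arange(1000) / 1000.0
emg = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(1000)

windows = sliding_windows(emg, win_len=200, step=100)   # 50% overlap
features = np.stack([fft_features(w) for w in windows])
print(windows.shape, features.shape)  # (9, 200) (9, 8)
```

The resulting feature matrix is what a feature selector (here, coot optimization) and a classifier would consume.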
Affiliation(s)
- Arumugam Arulkumar: Department of Electrical and Electronics Engineering, Nehru Institute of Engineering and Technology, Coimbatore, India
- Palanisamy Babu: Department of Electronics and Communication Engineering, K.S. Rangasamy College of Technology, Tiruchengode, India
2
Andersson R, Bermejo-García J, Agujetas R, Cronhjort M, Chilo J. Smartphone IMU Sensors for Human Identification through Hip Joint Angle Analysis. Sensors (Basel) 2024; 24:4769. PMID: 39123816; PMCID: PMC11314747; DOI: 10.3390/s24154769.
Abstract
Gait monitoring using hip joint angles offers a promising approach for person identification, leveraging the capabilities of smartphone inertial measurement units (IMUs). This study investigates the use of smartphone IMUs to extract hip joint angles for distinguishing individuals based on their gait patterns. The data were collected from 10 healthy subjects (8 males, 2 females) walking on a treadmill at 4 km/h for 10 min. A sensor fusion technique that combined accelerometer, gyroscope, and magnetometer data was used to derive meaningful hip joint angles. We employed various machine learning algorithms within the WEKA environment to classify subjects based on their hip joint pattern and achieved a classification accuracy of 88.9%. Our findings demonstrate the feasibility of using hip joint angles for person identification, providing a baseline for future research in gait analysis for biometric applications. This work underscores the potential of smartphone-based gait analysis in personal identification systems.
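As a rough illustration of the sensor-fusion step, here is a minimal two-signal complementary filter in plain Python; the study fuses accelerometer, gyroscope, and magnetometer data, whereas the `alpha` weight and update rule below are textbook assumptions rather than the authors' algorithm:

```python
def complementary_filter(acc_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer-derived angles with integrated gyroscope rates.

    alpha weights the gyro integration (smooth but drifting) against the
    accelerometer angle (noisy but drift-free)."""
    angle = acc_angles[0]
    fused = [angle]
    for acc_a, rate in zip(acc_angles[1:], gyro_rates[1:]):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_a
        fused.append(angle)
    return fused

# With a steady accelerometer reading and zero gyro rate, the estimate
# settles toward the accelerometer angle.
fused = complementary_filter([0.0] + [10.0] * 200, [0.0] * 201, dt=0.01)
print(round(fused[-1], 2))
```

A fused angle stream like this is what the hip-angle features for the WEKA classifiers would be derived from.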
Affiliation(s)
- Rabé Andersson: Department of Electrical Engineering, Mathematics and Science, University of Gävle, 801 76 Gävle, Sweden
- Javier Bermejo-García: Departamento de Ingeniería Mecánica, Energética y de los Materiales, Escuela de Ingenierías Industriales, Universidad de Extremadura, 06006 Badajoz, Spain
- Rafael Agujetas: Departamento de Ingeniería Mecánica, Energética y de los Materiales, Escuela de Ingenierías Industriales, Universidad de Extremadura, 06006 Badajoz, Spain
- Mikael Cronhjort: Department of Electrical Engineering, Mathematics and Science, University of Gävle, 801 76 Gävle, Sweden
- José Chilo: Department of Electrical Engineering, Mathematics and Science, University of Gävle, 801 76 Gävle, Sweden
3
Ahmed N, Numan MOA, Kabir R, Islam MR, Watanobe Y. A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique. Sensors (Basel) 2024; 24:4343. PMID: 39001122; PMCID: PMC11244405; DOI: 10.3390/s24134343.
Abstract
Human Activity Recognition (HAR) and Ambient Assisted Living (AAL) are integral components of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise the privacy of the elderly, a fundamental right of every human. However, it is challenging to extract discriminative features from 1D multi-sensor data. This research therefore focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, serve as input signals for different daily activities and provide rich information under time-frequency analysis. This time-series information is mapped into spectral images known as scalograms, derived from the continuous wavelet transform. Deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
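A scalogram of the kind described can be sketched with a hand-rolled Morlet continuous wavelet transform in NumPy; the scale grid, wavelet support, and center frequency `w0` below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """Continuous wavelet transform of x with a Morlet mother wavelet.

    Returns |CWT| as a (len(scales), len(x)) image, i.e. a scalogram."""
    n = len(x)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Discretize the scaled Morlet wavelet on a support of ~10 scales.
        m = int(min(10 * s, n))
        t = (np.arange(m) - (m - 1) / 2.0) / s
        wavelet = np.exp(1j * w0 * t) * np.exp(-t**2 / 2.0) / np.sqrt(s)
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

# A 25 Hz tone sampled at 1 kHz should light up at the matching scale
# (for Morlet, center frequency ~ w0 * fs / (2 * pi * s)).
fs = 1000.0
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 25.0 * t)
scales = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
scal = morlet_scalogram(sig, scales)
print(scal.shape)  # (5, 1024)
```

Images like `scal` (after resizing and colormapping) are what a CNN backbone such as ResNet-101 would consume.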
Affiliation(s)
- Nadeem Ahmed: Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Md Obaydullah Al Numan: Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
- Raihan Kabir: Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
- Md Rashedul Islam: Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Yutaka Watanobe: Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
4
Kan R, Qiu H, Liu X, Zhang P, Wang Y, Huang M, Wang M. Indoor Human Action Recognition Based on Dual Kinect V2 and Improved Ensemble Learning Method. Sensors (Basel) 2023; 23:8921. PMID: 37960620; PMCID: PMC10647458; DOI: 10.3390/s23218921.
Abstract
Indoor human action recognition, essential across various applications, faces significant challenges such as orientation constraints and identification limitations, particularly in systems that rely on non-contact devices; self-occlusions and non-line-of-sight (NLOS) situations are prominent examples. To address these challenges, this paper presents a novel system utilizing dual Kinect V2 sensors, enhanced by an advanced Transmission Control Protocol (TCP) scheme and ensemble learning techniques tailored to self-occlusion and NLOS situations. Our main contributions are as follows: (1) a data-adaptive adjustment mechanism, anchored on localization outcomes, to mitigate self-occlusion in dynamic orientations; (2) the adoption of ensemble learning techniques, including a Chirp acoustic signal identification method based on an optimized fuzzy c-means-AdaBoost algorithm, to improve positioning accuracy in NLOS contexts; and (3) a combination of the Random Forest model and the bat algorithm, providing innovative action identification strategies for intricate scenarios. We conduct extensive experiments, and our results show that the proposed system improves human action recognition precision by a substantial 30.25%, surpassing the benchmarks set by current state-of-the-art works.
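One building block named above, fuzzy c-means, can be sketched in NumPy as follows; this is the plain algorithm, not the authors' optimized fuzzy c-means-AdaBoost variant, and the toy data are invented:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft memberships U (n x c) and centers V (c x d).

    m > 1 is the fuzzifier; m = 2 gives the classic 1/d^2 membership rule."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]                 # update centers
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))                     # update memberships
        U /= U.sum(axis=1, keepdims=True)
    return U, V

# Two well-separated 2-D blobs should yield near-crisp memberships.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
U, V = fuzzy_c_means(X, c=2)
print(U.shape, V.shape)  # (40, 2) (2, 2)
```

In the paper's setting, memberships like `U` would feed the boosting stage rather than being thresholded directly.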
Affiliation(s)
- Ruixiang Kan: School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China
- Hongbing Qiu: School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China; Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Xin Liu: College of Information Science and Engineering, Guilin University of Technology, Guilin 541004, China
- Peng Zhang: State Grid Qianshan City Electric Power Supply Company, Qianshan 246300, China
- Yan Wang: Northwest Survey and Planning Institute of the National Forestry and Grassland Administration, Xi’an 710048, China
- Mengxiang Huang: College of Information Science and Engineering, Guilin University of Technology, Guilin 541004, China
- Mei Wang: College of Information Science and Engineering, Guilin University of Technology, Guilin 541004, China
5
Yu S, Zhan H, Lian X, Low SS, Xu Y, Li J, Zhang Y, Sun X, Liu J. A Smartphone-Based sEMG Signal Analysis System for Human Action Recognition. Biosensors 2023; 13:805. PMID: 37622891; PMCID: PMC10452551; DOI: 10.3390/bios13080805.
Abstract
In lower-limb rehabilitation, human action recognition (HAR) technology can be introduced to analyze the surface electromyography (sEMG) signals generated by movements, providing an objective and accurate evaluation of the patient's actions. To balance the long cycle required for rehabilitation against the inconvenience of wearing sEMG devices, a portable sEMG signal acquisition device was developed that can be used in daily scenarios. Additionally, a mobile application was developed to meet the demand for real-time monitoring and analysis of sEMG signals. This application can monitor data in real time and offers plotting, filtering, storage, and action capture and recognition functions. To build the dataset required for the recognition model, six lower-limb rehabilitation motions were defined (kick, toe off, heel off, toe off and heel up, step back and kick, and full gait). The sEMG segments and action labels were combined to train a convolutional neural network (CNN), achieving high-precision recognition of human lower-limb actions (with a maximum accuracy of 97.96% and recognition accuracy above 97% for all actions). The results show that the smartphone-based sEMG analysis system proposed in this paper can provide reliable information for the clinical evaluation of lower-limb rehabilitation.
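An "action capture" step in an app like this is often implemented as RMS-threshold onset detection; the sketch below is a generic baseline under assumed window length, threshold factor `k`, and rest period, not the authors' implementation:

```python
import numpy as np

def capture_actions(emg, fs, win=0.05, k=3.0, rest=0.5):
    """Detect active sEMG segments: windowed RMS crossing k x resting RMS.

    The first `rest` seconds are assumed to be rest and set the baseline."""
    w = int(win * fs)
    n = len(emg) // w
    rms = np.sqrt((emg[: n * w].reshape(n, w) ** 2).mean(axis=1))
    baseline = rms[: int(rest / win)].mean()
    active = rms > k * baseline
    # Convert the boolean mask into (start, end) sample spans.
    spans, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * w
        elif not a and start is not None:
            spans.append((start, i * w))
            start = None
    if start is not None:
        spans.append((start, n * w))
    return spans

# 2 s of low-level noise with a 0.5 s burst between samples 1000 and 1500.
fs = 1000
rng = np.random.default_rng(0)
emg = 0.01 * rng.standard_normal(2000)
emg[1000:1500] += np.sin(2 * np.pi * 80 * np.arange(500) / fs)
print(capture_actions(emg, fs))  # [(1000, 1500)]
```

Each detected span would then be cropped and passed to the CNN for classification.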
Affiliation(s)
- Shixin Yu, Hang Zhan, Xingwang Lian, Yifei Xu, Jiangyong Li, Yan Zhang, Xiaojun Sun, and Jingjing Liu: College of Automation Engineering, Northeast Electric Power University, Jilin 132012, China
- Sze Shin Low: Research Centre of Life Science and HealthCare, China Beacons Institute, University of Nottingham Ningbo China, 199 Taikang East Road, Ningbo 315100, China
6
de Souza P, Silva D, de Andrade I, Dias J, Lima JP, Teichrieb V, Quintino JP, da Silva FQB, Santos ALM. A Study on the Influence of Sensors in Frequency and Time Domains on Context Recognition. Sensors (Basel) 2023; 23:5756. PMID: 37420921; DOI: 10.3390/s23125756.
Abstract
Adaptive AI for context and activity recognition remains a relatively unexplored field due to difficulty in collecting sufficient information to develop supervised models. Additionally, building a dataset for human context activities "in the wild" demands time and human resources, which explains the lack of public datasets available. Some of the available datasets for activity recognition were collected using wearable sensors, since they are less invasive than images and precisely capture a user's movements in time series. However, frequency series contain more information about sensors' signals. In this paper, we investigate the use of feature engineering to improve the performance of a Deep Learning model. Thus, we propose using Fast Fourier Transform algorithms to extract features from frequency series instead of time series. We evaluated our approach on the ExtraSensory and WISDM datasets. The results show that using Fast Fourier Transform algorithms to extract features performed better than using statistics measures to extract features from temporal series. Additionally, we examined the impact of individual sensors on identifying specific labels and proved that incorporating more sensors enhances the model's effectiveness. On the ExtraSensory dataset, the use of frequency features outperformed that of time-domain features by 8.9 p.p., 0.2 p.p., 39.5 p.p., and 0.4 p.p. in Standing, Sitting, Lying Down, and Walking activities, respectively, and on the WISDM dataset, the model performance improved by 1.7 p.p., just by using feature engineering.
Affiliation(s)
- Pedro de Souza, Diógenes Silva, Isabella de Andrade, Júlia Dias, Veronica Teichrieb, Fabio Q. B. da Silva, and Andre L. M. Santos: Centro de Informática, Universidade Federal de Pernambuco, Recife 50740-560, PE, Brazil
- João Paulo Lima: Centro de Informática, Universidade Federal de Pernambuco, Recife 50740-560, PE, Brazil; Visual Computing Lab, Departamento de Computação, Universidade Federal Rural de Pernambuco, Recife 52171-900, PE, Brazil
- Jonysberg P. Quintino: Projeto CIn-UFPE Samsung, Centro de Informática, Av. Jorn. Anibal Fernandes, s/n, Recife 50740-560, PE, Brazil
7
Akter M, Ansary S, Khan MAM, Kim D. Human Activity Recognition Using Attention-Mechanism-Based Deep Learning Feature Combination. Sensors (Basel) 2023; 23:5715. PMID: 37420881; DOI: 10.3390/s23125715.
Abstract
Human activity recognition (HAR) performs a vital function in various fields, including healthcare, rehabilitation, elder care, and monitoring. Researchers are using mobile sensor data (i.e., accelerometer, gyroscope) by adapting various machine learning (ML) or deep learning (DL) networks. The advent of DL has enabled automatic high-level feature extraction, which has been effectively leveraged to optimize the performance of HAR systems. In addition, the application of deep-learning techniques has demonstrated success in sensor-based HAR across diverse domains. In this study, a novel methodology for HAR was introduced, which utilizes convolutional neural networks (CNNs). The proposed approach combines features from multiple convolutional stages to generate a more comprehensive feature representation, and an attention mechanism was incorporated to extract more refined features, further enhancing the accuracy of the model. The novelty of this study lies in the integration of feature combinations from multiple stages as well as in proposing a generalized model structure with CBAM modules. This leads to a more informative and effective feature extraction technique by feeding the model with more information in every block operation. This research used spectrograms of the raw signals instead of extracting hand-crafted features through intricate signal processing techniques. The developed model has been assessed on three datasets, including KU-HAR, UCI-HAR, and WISDM datasets. The experimental findings showed that the classification accuracies of the suggested technique on the KU-HAR, UCI-HAR, and WISDM datasets were 96.86%, 93.48%, and 93.89%, respectively. The other evaluation criteria also demonstrate that the proposed methodology is comprehensive and competent compared to previous works.
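The channel half of a CBAM-style attention block can be sketched in NumPy; the reduction ratio and random weights below are illustrative assumptions (in a trained model, `W1` and `W2` would be learned):

```python
import numpy as np

def channel_attention(X, W1, W2):
    """CBAM-style channel gate for a (C, H, W) feature map.

    A shared two-layer MLP is applied to average- and max-pooled channel
    descriptors; their sum is squashed to per-channel weights that rescale X."""
    avg = X.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = X.max(axis=(1, 2))     # (C,) max-pooled descriptor

    def mlp(v):
        return W2 @ np.maximum(W1 @ v, 0.0)   # ReLU hidden layer

    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))   # sigmoid
    return X * gate[:, None, None], gate

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
X = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((C // 2, C))   # reduction ratio 2 (assumed)
W2 = rng.standard_normal((C, C // 2))
Y, gate = channel_attention(X, W1, W2)
print(Y.shape, gate.shape)  # (8, 4, 4) (8,)
```

CBAM additionally applies a spatial gate after the channel gate; only the channel half is shown here.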
Affiliation(s)
- Morsheda Akter: Department of Electronics Engineering, Dong-A University, Busan 49315, Republic of Korea
- Shafew Ansary: Department of Electronics Engineering, Dong-A University, Busan 49315, Republic of Korea
- Md Al-Masrur Khan: Department of ICT Integrated Ocean Smart and Cities Engineering, Dong-A University, Busan 49315, Republic of Korea
- Dongwan Kim: Department of Electronics Engineering, Dong-A University, Busan 49315, Republic of Korea
8
Human Motion Pattern Recognition and Feature Extraction: An Approach Using Multi-Information Fusion. Micromachines (Basel) 2022; 13:1205. PMID: 36014127; PMCID: PMC9416603; DOI: 10.3390/mi13081205.
Abstract
An exoskeleton is a kind of intelligent wearable device combining bioelectronics and biomechanics. To assist the human body effectively, an exoskeleton needs to recognize the wearer's movement pattern in real time so that it can make the corresponding movements at the right moment. However, fully identifying human motion patterns is difficult for an exoskeleton, mainly because of incomplete acquisition of lower-limb motion information, poor feature extraction ability, and complicated processing steps. With these considerations in mind, this paper analyzes the motion mechanisms of the human lower limbs and introduces a set of wearable bioelectronic devices based on an electromyography (EMG) sensor and an inertial measurement unit (IMU), which together capture biological and kinematic information of the lower limb. A dual-stream convolutional neural network (CNN)-ReliefF method is then presented to extract features from the fused sensor data; these features are input into four different classifiers to obtain the recognition accuracy of human motion patterns. Compared with a single sensor (EMG or IMU), a single-stream CNN, or manually designed feature extraction methods, the dual-stream CNN-ReliefF approach shows better performance in terms of feature visualization and recognition accuracy. The method was used to extract features from the EMG and IMU data of six subjects, and the motion pattern recognition accuracy of each subject under all four classifiers is above 97%, with the highest average recognition accuracy reaching 99.12%. It can be concluded that the wearable bioelectronic devices and dual-stream CNN-ReliefF feature extraction method proposed in this paper enhance an exoskeleton's ability to capture human movement patterns and thus to provide optimal assistance at the appropriate time, offering a novel approach for improving the human-machine interaction of exoskeletons.
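The ReliefF idea referenced above can be illustrated with a basic two-class Relief weighting in NumPy; this is the textbook algorithm, not the paper's dual-stream CNN-ReliefF pipeline, and the synthetic data are invented:

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Basic (two-class) Relief feature weighting, the idea behind ReliefF.

    Each sampled instance pulls weights toward features that separate it
    from its nearest miss and agree with its nearest hit."""
    rng = np.random.default_rng(seed)
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # scale to [0, 1]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf                      # never pick the instance itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(d[same])]
        miss = np.where(diff)[0][np.argmin(d[diff])]
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Feature 0 separates the classes; feature 1 is uniform noise.
rng = np.random.default_rng(1)
n = 50
X = np.vstack([
    np.column_stack([rng.normal(0.0, 0.05, n), rng.random(n)]),
    np.column_stack([rng.normal(1.0, 0.05, n), rng.random(n)]),
])
y = np.array([0] * n + [1] * n)
w = relief_weights(X, y, n_iter=200)
print(np.round(w, 2))
```

The informative feature receives a large positive weight, which is how ReliefF ranks CNN features before classification.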
9
Machine Learning Strategies for Low-Cost Insole-Based Prediction of Center of Gravity during Gait in Healthy Males. Sensors (Basel) 2022; 22:3499. PMID: 35591188; PMCID: PMC9100257; DOI: 10.3390/s22093499.
Abstract
Whole-body center of gravity (CG) movements in relation to the center of pressure (COP) offer insights into the balance control strategies of the human body. Existing CG measurement methods use expensive equipment fixed in a laboratory environment and are not intended for continuous monitoring. The development of wireless sensing technology makes it possible to extend such measurements into daily life. The insole system is a wearable device that can evaluate human balance ability by measuring the pressure distribution on the ground. In this study, a novel protocol (data preparation and model training) for estimating the 3-axis CG trajectory from vertical plantar pressures was proposed and its performance evaluated. Input and target data were obtained through gait experiments conducted on 15 adult and 15 elderly males using a self-made insole prototype and an optical motion capture system. One gait cycle was divided into four semantic phases; features specified for each phase were extracted, and the CG trajectory was predicted using a bidirectional long short-term memory (Bi-LSTM) network. The performance of the proposed CG prediction model was evaluated by comparison with four prediction models without gait phase segmentation, using the CG trajectory calculated with the optoelectronic system as the gold standard. The relative root mean square errors of the proposed model on the anterior/posterior, medial/lateral, and proximal/distal axes were 2.12%, 12.97%, and 12.47%, respectively, the best prediction performance among the compared models. Biomechanical analysis of the two healthy male groups showed a statistically significant difference between their CG trajectories under the proposed model, with larger CG sway on the medial/lateral axis and a CG fall on the proximal/distal axis in the older group. The protocol proposed in this study is a first step toward gait analysis in daily life and is expected to serve as a key element for clinical applications.
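A relative RMSE of the kind reported above is usually computed as RMSE normalized by the reference signal's range; that normalization convention is an assumption here, since papers differ on it:

```python
import numpy as np

def relative_rmse(pred, ref):
    """RMSE normalized by the reference range, reported as a percentage."""
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return 100.0 * rmse / (ref.max() - ref.min())

# A constant 0.02 offset on a unit-amplitude sinusoid (range ~2) gives ~1%.
t = np.linspace(0.0, 1.0, 200)
ref = np.sin(2 * np.pi * t)        # stand-in for one CG trajectory axis
pred = ref + 0.02
print(round(relative_rmse(pred, ref), 2))  # 1.0
```

Applied per axis, this yields numbers directly comparable to the 2.12%, 12.97%, and 12.47% reported.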
10
Gupta N, Gupta SK, Pathak RK, Jain V, Rashidi P, Suri JS. Human activity recognition in artificial intelligence framework: a narrative review. Artif Intell Rev 2022; 55:4755-4808. PMID: 35068651; PMCID: PMC8763438; DOI: 10.1007/s10462-021-10116-x.
Abstract
Human activity recognition (HAR) has multifaceted applications due to the widespread use of acquisition devices, such as smartphones and video cameras, that can capture human activity data. While electronic devices and their applications are steadily growing, advances in artificial intelligence (AI) have revolutionized the ability to extract deeply hidden information for accurate detection and interpretation. This yields a better understanding of the three pillars of HAR under one roof: the rapidly growing acquisition devices, AI, and applications. Many review articles have been published on the general characteristics of HAR, a few have compared all the HAR devices at the same time, and few have explored the impact of evolving AI architectures. In our proposed review, a detailed narration of the three pillars of HAR is presented, covering the period from 2011 to 2021. Further, the review presents recommendations for an improved HAR design and for its reliability and stability. Five major findings were: (1) HAR constitutes three major pillars: devices, AI, and applications; (2) HAR has dominated the healthcare industry; (3) hybrid AI models are in their infancy and need considerable work to provide stable and reliable designs, with solid predictions, high accuracy, generalization, and fulfillment of application objectives without bias; (4) little work was observed on abnormality detection during actions; and (5) almost no work has been done on forecasting actions. We conclude that: (a) the HAR industry will evolve in terms of the three pillars of electronic devices, applications, and the type of AI; and (b) AI will provide a powerful impetus to the HAR industry in the future. Supplementary Information: The online version contains supplementary material available at 10.1007/s10462-021-10116-x.
Affiliation(s)
- Neha Gupta: CSE Department, Bennett University, Greater Noida, UP, India; Bharati Vidyapeeth’s College of Engineering, Paschim Vihar, New Delhi, India
- Vanita Jain: Bharati Vidyapeeth’s College of Engineering, Paschim Vihar, New Delhi, India
- Parisa Rashidi: Intelligent Health Laboratory, Department of Biomedical Engineering, University of Florida, Gainesville, USA
- Jasjit S. Suri: Stroke Diagnostic and Monitoring Division, AtheroPoint, Roseville, CA 95661, USA; Global Biomedical Technologies, Inc., Roseville, CA, USA
11
SensorHub: Multimodal Sensing in Real-Life Enables Home-Based Studies. Sensors (Basel) 2022; 22:408. PMID: 35009950; PMCID: PMC8749618; DOI: 10.3390/s22010408.
Abstract
Observational studies are an important tool for determining whether the findings from controlled experiments can be transferred into scenarios that are closer to subjects’ real-life circumstances. A rigorous approach to observational studies involves collecting data from different sensors to comprehensively capture the situation of the subject. However, this leads to technical difficulties especially if the sensors are from different manufacturers, as multiple data collection tools have to run simultaneously. We present SensorHub, a system that can collect data from various wearable devices from different manufacturers, such as inertial measurement units, portable electrocardiographs, portable electroencephalographs, portable photoplethysmographs, and sensors for electrodermal activity. Additionally, our tool offers the possibility to include ecological momentary assessments (EMAs) in studies. Hence, SensorHub enables multimodal sensor data collection under real-world conditions and allows direct user feedback to be collected through questionnaires, enabling studies at home. In a first study with 11 participants, we successfully used SensorHub to record multiple signals with different devices and collected additional information with the help of EMAs. In addition, we evaluated SensorHub’s technical capabilities in several trials with up to 21 participants recording simultaneously using multiple sensors with sampling frequencies as high as 1000 Hz. We could show that although there is a theoretical limitation to the transmissible data rate, in practice this limitation is not an issue and data loss is rare. We conclude that with modern communication protocols and with the increasingly powerful smartphones and wearables, a system like our SensorHub establishes an interoperability framework to adequately combine consumer-grade sensing hardware which enables observational studies in real life.
12
Chen J, Sun Y, Sun S. Muscle Synergy of Lower Limb Motion in Subjects with and without Knee Pathology. Diagnostics (Basel) 2021; 11:1318. PMID: 34441253; PMCID: PMC8392845; DOI: 10.3390/diagnostics11081318.
Abstract
Surface electromyography (sEMG) has great potential in investigating the neuromuscular mechanism for knee pathology. However, due to the complex nature of neural control in lower limb motions and the divergences in subjects’ health and habits, it is difficult to directly use the raw sEMG signals to establish a robust sEMG analysis system. To solve this, muscle synergy analysis based on non-negative matrix factorization (NMF) of sEMG is carried out in this manuscript. The similarities of muscle synergy of subjects with and without knee pathology performing three different lower limb motions are calculated. Based on that, we have designed a classification method for motion recognition and knee pathology diagnosis. First, raw sEMG segments are preprocessed and then decomposed to muscle synergy matrices by NMF. Then, a two-stage feature selection method is executed to reduce the dimension of feature sets extracted from aforementioned matrices. Finally, the random forest classifier is adopted to identify motions or diagnose knee pathology. The study was conducted on an open dataset of 11 healthy subjects and 11 patients. Results show that the NMF-based sEMG classifier can achieve good performance in lower limb motion recognition, and is also an attractive solution for clinical application of knee pathology diagnosis.
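Muscle synergy extraction by NMF is commonly implemented with multiplicative updates; the sketch below uses synthetic data and an assumed synergy count `k`, and is not the authors' exact procedure:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF: V (m x n, non-negative) ~ W (m x k) @ H (k x n).

    For sEMG, rows of V are muscles and columns are time samples; W holds
    the muscle synergies and H their activation coefficients."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update synergies
    return W, H

# Synthetic "sEMG envelope": 4 muscles driven by 2 underlying synergies.
rng = np.random.default_rng(1)
W_true = rng.random((4, 2))
H_true = rng.random((2, 300))
V = W_true @ H_true
W, H = nmf(V, k=2)
print(W.shape, H.shape)  # (4, 2) (2, 300)
```

Synergy matrices like `W` are what the paper compares across subjects and feeds, after feature selection, to the random forest classifier.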
Affiliation(s)
- Jingcheng Chen: Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China; University of Science and Technology of China, Hefei 230026, China
- Yining Sun: Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China; University of Science and Technology of China, Hefei 230026, China
- Shaoming Sun (corresponding author): Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China; University of Science and Technology of China, Hefei 230026, China