1. Yuhai O, Choi A, Cho Y, Kim H, Mun JH. Deep-Learning-Based Recovery of Missing Optical Marker Trajectories in 3D Motion Capture Systems. Bioengineering (Basel) 2024;11:560. PMID: 38927796; PMCID: PMC11200691; DOI: 10.3390/bioengineering11060560. Received 04/24/2024; Revised 05/17/2024; Accepted 05/30/2024. Open Access.
Abstract
Motion capture (MoCap) technology, essential for biomechanics and motion analysis, faces data loss from occlusions and technical failures. Traditional recovery methods, based either on inter-marker relationships or on treating each marker independently, have limitations. This study introduces a U-net-inspired bi-directional long short-term memory (U-Bi-LSTM) autoencoder for recovering missing MoCap data across multi-camera setups. Leveraging multi-camera and triangulated 3D data, the method employs a U-shaped deep learning structure with an adaptive Huber regression layer, which improves robustness to outliers and minimizes reconstruction error, proving particularly beneficial for long-term data loss. The approach surpasses both the traditional piecewise cubic spline and a state-of-the-art sparse low-rank method, with statistically significant reductions in reconstruction error across various gap lengths and gap counts. Beyond advancing the technical capabilities of MoCap systems, this work enriches the analytical tools available for biomechanical research, offering new possibilities for enhancing athletic performance, optimizing rehabilitation protocols, and developing personalized treatment plans based on precise biomechanical data.
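The piecewise cubic spline baseline that the U-Bi-LSTM is compared against can be sketched in a few lines. This is a generic illustration using SciPy's `CubicSpline` on an invented toy trajectory, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps_spline(t, traj):
    """Fill NaN gaps in one marker coordinate by piecewise cubic spline.

    t    : (n,) frame times
    traj : (n,) coordinate values, with NaN marking missing frames
    """
    ok = ~np.isnan(traj)
    spline = CubicSpline(t[ok], traj[ok])
    out = traj.copy()
    out[~ok] = spline(t[~ok])
    return out

# toy example: a smooth marker trajectory with a 20-frame gap
t = np.linspace(0, 2, 200)
x = np.sin(2 * np.pi * t)
x_missing = x.copy()
x_missing[90:110] = np.nan
x_filled = fill_gaps_spline(t, x_missing)
```

Spline interpolation works well for short gaps in smooth motion like this toy sine, which is precisely why the paper's comparison focuses on where it breaks down: long gaps and complex dynamics.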
Affiliation(s)
- Oleksandr Yuhai
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Ahnryul Choi
- Department of Biomedical Engineering, College of Medical Convergence, Catholic Kwandong University, Gangneung 25601, Republic of Korea
- Yubin Cho
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunggun Kim
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joung Hwan Mun
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
2. Vuong TH, Doan T, Takasu A. Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors. Sensors (Basel) 2023;23:9721. PMID: 38139567; PMCID: PMC10747357; DOI: 10.3390/s23249721. Received 11/02/2023; Revised 12/02/2023; Accepted 12/05/2023.
Abstract
Recent advances in wearable systems have made inertial sensors, such as accelerometers and gyroscopes, compact, lightweight, multimodal, low-cost, and highly accurate. Wearable inertial sensor-based multimodal human activity recognition (HAR) methods use the rich data from embedded multimodal sensors to infer human activities. However, existing HAR approaches either rely on domain knowledge or fail to capture the time-frequency dependencies of multimodal sensor signals. In this paper, we propose deep wavelet convolutional neural networks (DWCNN), designed to learn features in the time-frequency domain and improve accuracy for multimodal HAR. DWCNN combines the continuous wavelet transform (CWT) with an enhanced deep convolutional neural network (DCNN) to capture the time-frequency dependencies of sensing signals, strengthening the feature representation for wearable inertial sensor-based HAR tasks. Within the CWT, we further propose an algorithm to estimate the wavelet scale parameter, which improves the quality of the computed time-frequency representation. The CWT output then feeds the proposed DCNN, which consists of residual blocks that extract features from each modality and attention blocks that fuse the multimodal features. Extensive experiments on five benchmark HAR datasets (WISDM, UCI-HAR, Heterogeneous, PAMAP2, and UniMiB SHAR) demonstrate the superior performance of the proposed model over existing competitors.
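As a rough illustration of the CWT front end only (not the authors' scale-estimation algorithm or network), a Morlet-based transform can be computed by direct convolution at each scale. The wavelet parameter `w`, the scales, and the test signal below are all invented for the sketch:

```python
import numpy as np

def morlet(x, w=5.0):
    # complex Morlet wavelet (normalization constants omitted for brevity)
    return np.exp(1j * w * x) * np.exp(-x**2 / 2)

def cwt_magnitude(signal, scales, w=5.0):
    """|CWT| of a 1D signal via direct convolution at each scale (in samples)."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        half = int(min(5 * s, (n - 1) // 2))   # truncate the wavelet support
        k = np.arange(-half, half + 1)
        psi = morlet(k / s, w) / np.sqrt(s)    # L2-style normalization
        out[i] = np.abs(np.convolve(signal, psi, mode="same"))
    return out

# toy check: a 5 Hz sine at fs = 100 Hz should peak near scale w*fs/(2*pi*f)
fs, f = 100, 5
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * f * t)
scales = np.arange(2, 40)
mag = cwt_magnitude(sig, scales)
best = scales[np.argmax(mag.mean(axis=1))]
f_est = w_fs = 5.0 * fs / (2 * np.pi * best)  # scale -> frequency for w = 5
```

Stacking such scalograms per sensor channel yields the image-like time-frequency input that a CNN back end can consume.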
Affiliation(s)
- Thi Hong Vuong
- Department of Informatics, National Institute of Informatics, Tokyo 101-0003, Japan
- Tung Doan
- Department of Computer Engineering, School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi 11615, Vietnam
- Atsuhiro Takasu
- Department of Informatics, National Institute of Informatics, Tokyo 101-0003, Japan
3. Hellmers S, Krey E, Gashi A, Koschate J, Schmidt L, Stuckenschneider T, Hein A, Zieschang T. Comparison of machine learning approaches for near-fall-detection with motion sensors. Front Digit Health 2023;5:1223845. PMID: 37564882; PMCID: PMC10410450; DOI: 10.3389/fdgth.2023.1223845. Received 05/16/2023; Accepted 07/06/2023. Open Access.
Abstract
Introduction: Falls are among the most common causes of emergency hospital visits in older people. Early recognition of an increased fall risk, which can be indicated by the occurrence of near-falls, is important for initiating interventions.
Methods: In a study with 87 subjects, we simulated near-fall events on a perturbation treadmill and recorded them with inertial measurement units (IMUs) at seven different body positions. We investigated several machine learning models for near-fall detection, including support vector machines, AdaBoost, convolutional neural networks, and bidirectional long short-term memory networks, and additionally analyzed the influence of sensor position on the classification results.
Results: A DeepConvLSTM achieved the best results, with an F1 score of 0.954 (precision 0.969, recall 0.942) at the "left wrist" sensor position.
Discussion: Since these results were obtained in the laboratory, the next step is to evaluate the suitability of the classifiers in the field.
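A typical first step for IMU-based classifiers like those compared above is to cut the sensor stream into overlapping windows and compute per-window features. The sketch below is a generic illustration with invented window sizes and dummy data, not code from the study:

```python
import numpy as np

def sliding_windows(x, win, step):
    """Split an (n_samples, n_channels) IMU stream into overlapping windows."""
    starts = range(0, len(x) - win + 1, step)
    return np.stack([x[s:s + win] for s in starts])

def window_features(w):
    """Per-channel mean/std plus signal magnitude area for one window."""
    sma = np.mean(np.sum(np.abs(w), axis=1))
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [sma]])

# toy stream: 10 s of 6-axis IMU data (accelerometer + gyroscope) at 100 Hz
rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 6))
wins = sliding_windows(stream, win=128, step=64)        # 50% overlap
feats = np.stack([window_features(w) for w in wins])    # one row per window
```

Classical models (SVM, AdaBoost) consume the feature rows, while deep models such as a DeepConvLSTM would instead take the raw windows in `wins` directly.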
Affiliation(s)
- Sandra Hellmers
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Elias Krey
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Arber Gashi
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Jessica Koschate
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Laura Schmidt
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Tim Stuckenschneider
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Andreas Hein
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Tania Zieschang
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
4. Mohammad Z, Anwary AR, Mridha MF, Shovon MSH, Vassallo M. An Enhanced Ensemble Deep Neural Network Approach for Elderly Fall Detection System Based on Wearable Sensors. Sensors (Basel) 2023;23:4774. PMID: 37430686; DOI: 10.3390/s23104774. Received 03/21/2023; Revised 04/27/2023; Accepted 05/12/2023.
Abstract
Fatal injuries and hospitalizations caused by accidental falls are significant problems among the elderly. Detecting falls in real time is challenging, as many falls occur within a short period. An automated monitoring system that can predict falls before they happen, provide safeguards during the fall, and issue remote notifications afterwards is essential to improving the level of care for the elderly. This study proposed a concept for a wearable monitoring framework that anticipates falls during their onset and descent, activates a safety mechanism to minimize fall-related injuries, and issues a remote notification after the body impacts the ground. The concept was demonstrated through offline analysis of existing data using an ensemble deep neural network architecture that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN); the study did not involve the implementation of hardware or other elements beyond the developed algorithm. The CNN provides robust feature extraction from accelerometer and gyroscope data, while the RNN models the temporal dynamics of the falling process. A distinct class-based ensemble architecture was developed, in which each ensemble model identifies one specific class. Evaluated on the annotated SisFall dataset, the approach achieved mean accuracies of 95%, 96%, and 98% for Non-Fall, Pre-Fall, and Fall events, respectively, outperforming state-of-the-art fall detection methods. Such a wearable monitoring system could help prevent injuries and improve the quality of life of elderly individuals.
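The class-based ensemble idea (one binary detector per class, combined at prediction time) can be sketched generically. The detectors below are stand-in sigmoid functions over an invented feature vector, not the paper's CNN-RNN models:

```python
import numpy as np

# stand-in binary scorers, one per class; each maps a feature vector to a
# probability that its own class (Non-Fall / Pre-Fall / Fall) is present
detectors = {
    "non_fall": lambda x: 1.0 / (1.0 + np.exp(-x[0])),
    "pre_fall": lambda x: 1.0 / (1.0 + np.exp(-x[1])),
    "fall":     lambda x: 1.0 / (1.0 + np.exp(-x[2])),
}

def ensemble_predict(x):
    """Pick the class whose dedicated detector scores highest."""
    scores = {name: det(x) for name, det in detectors.items()}
    return max(scores, key=scores.get)

sample = np.array([-1.0, 0.2, 2.5])   # toy feature vector
label = ensemble_predict(sample)       # -> "fall" for this toy input
```

One design appeal of per-class detectors is that each model can be tuned (thresholds, class weights) for its own class's error costs, which matters when a missed Fall is far more costly than a missed Non-Fall.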
Affiliation(s)
- Zabir Mohammad
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Arif Reza Anwary
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
- Muhammad Firoz Mridha
- Department of Computer Science, American International University-Bangladesh (AIUB), Dhaka 1229, Bangladesh
- Md Sakib Hossain Shovon
- Department of Computer Science, American International University-Bangladesh (AIUB), Dhaka 1229, Bangladesh
5. Jung S, de l’Escalopier N, Oudre L, Truong C, Dorveaux E, Gorintin L, Ricard D. A Machine Learning Pipeline for Gait Analysis in a Semi Free-Living Environment. Sensors (Basel) 2023;23:4000. PMID: 37112339; PMCID: PMC10145775; DOI: 10.3390/s23084000. Received 03/08/2023; Revised 04/03/2023; Accepted 04/13/2023.
Abstract
This paper presents a novel approach to creating a graphical summary of a subject's activity during a protocol in a semi free-living environment. With this visualization, human behavior, in particular locomotion, can be condensed into an easy-to-read, user-friendly output. Because time series collected while monitoring patients in semi free-living environments are often long and complex, our contribution relies on a pipeline of signal processing methods and machine learning algorithms. Once learned, the graphical representation sums up all activities present in the data and can quickly be applied to newly acquired time series. In a nutshell, raw data from inertial measurement units are first segmented into homogeneous regimes with an adaptive change-point detection procedure, each segment is automatically labeled, features are extracted from each regime, and lastly a score is computed from these features. The final visual summary is constructed from the activity scores and their comparison to healthy models. This graphical output is a detailed, adaptive, and structured visualization that helps the reader better understand the salient events in a complex gait protocol.
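The segmentation step rests on change-point detection. The paper's procedure is adaptive and more sophisticated (libraries such as `ruptures` implement this family of methods), but the core least-squares idea, finding the split that minimizes within-segment squared error, can be sketched for a single change point on an invented signal:

```python
import numpy as np

def one_changepoint(x):
    """Return the split index minimizing total within-segment squared error."""
    def cost(seg):
        return float(np.sum((seg - seg.mean()) ** 2))
    candidates = range(2, len(x) - 1)
    return min(candidates, key=lambda k: cost(x[:k]) + cost(x[k:]))

# toy regime change: a low-level signal (e.g. standing) followed by a
# higher-level one (e.g. walking), with the true break at index 50
rng = np.random.default_rng(1)
sig = np.r_[rng.normal(0.0, 0.1, 50), rng.normal(1.0, 0.1, 50)]
k = one_changepoint(sig)   # expected near index 50
```

Multiple change points follow by applying this search recursively to each segment (binary segmentation) or by dynamic programming with a penalty on the number of breaks.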
Affiliation(s)
- Sylvain Jung
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430 Villetaneuse, France
- AbilyCare, 130 Rue de Lourmel, F-75015 Paris, France
- ENGIE Lab CRIGEN, F-93249 Stains, France
- Nicolas de l’Escalopier
- Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-75006 Paris, France
- Service de Neurologie, Service de Santé des Armées, HIA Percy, F-92190 Clamart, France
- Laurent Oudre
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Charles Truong
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Eric Dorveaux
- AbilyCare, 130 Rue de Lourmel, F-75015 Paris, France
- Louis Gorintin
- Novakamp, 10-12 Avenue du Bosquet, F-95560 Baillet en France, France
- Damien Ricard
- Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-75006 Paris, France
- Service de Neurologie, Service de Santé des Armées, HIA Percy, F-92190 Clamart, France
- Ecole du Val-de-Grâce, Service de Santé des Armées, F-75005 Paris, France
6. Cardenas JD, Gutierrez CA, Aguilar-Ponce R. Deep Learning Multi-Class Approach for Human Fall Detection Based on Doppler Signatures. Int J Environ Res Public Health 2023;20:1123. PMID: 36673883; PMCID: PMC9858740; DOI: 10.3390/ijerph20021123. Received 11/16/2022; Revised 12/30/2022; Accepted 01/04/2023.
Abstract
Falling events are a global health concern with short- and long-term physical and psychological implications, especially for the elderly population. This work aims to monitor human activity in an indoor environment and recognize falling events without requiring users to carry a device or sensor on their bodies. A sensing platform based on the transmission of a continuous-wave (CW) radio-frequency (RF) probe signal was developed using general-purpose equipment. The CW probe signal is similar to the pilot subcarriers transmitted by commercial off-the-shelf WiFi devices; as a result, our methodology can easily be integrated into a joint radio sensing and communication scheme. Sensing is carried out by analyzing the changes in phase, amplitude, and frequency that the probe signal undergoes when it is reflected or scattered by static and moving bodies. These features are commonly extracted from the channel state information (CSI) of WiFi signals; however, CSI relies on complex data acquisition and channel estimation processes. Doppler radars have also been used to monitor human activity, but, while effective, a radar-based fall detection system requires dedicated hardware. In this paper, we follow an alternative method that characterizes falling events on the basis of the Doppler signatures imprinted on the CW probe signal by a falling person. A multi-class deep learning framework was conceived to differentiate falling events from other activities performed in indoor environments. Two neural network models were implemented: one based on a long short-term memory (LSTM) network and one on a convolutional neural network (CNN). A series of experiments comprising 11 subjects was conducted to collect empirical data and test the system's performance. Falls were detected with an accuracy of 92.1% for the LSTM model and 92.1% for the CNN. The results demonstrate the viability of human fall detection based on a radio sensing system such as the one described in this paper.
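The Doppler signature can be illustrated with an idealized simulation: a body moving radially at speed v shifts a reflected CW carrier by f_d = 2·v·f_c/c, and the shift appears as a tone in the baseband echo. All numbers below (carrier, speed, sampling rate) are invented for the sketch:

```python
import numpy as np

c_light = 3e8        # propagation speed, m/s
fc = 2.4e9           # assumed WiFi-band carrier frequency, Hz
v = 1.5              # assumed radial speed of the moving body, m/s
fd = 2 * v * fc / c_light   # expected Doppler shift: 24 Hz here

# idealized baseband echo: a pure tone at the Doppler frequency
fs = 500
t = np.arange(0, 2, 1 / fs)
rx = np.exp(2j * np.pi * fd * t)

# estimate the shift as the FFT peak of the received probe signal
spectrum = np.abs(np.fft.fft(rx))
freqs = np.fft.fftfreq(len(rx), 1 / fs)
fd_est = freqs[np.argmax(spectrum)]
```

A fall produces a rapid, characteristic sweep of this shift rather than a constant tone, which is why the paper feeds time-frequency representations of the echo to LSTM and CNN classifiers instead of reading off a single frequency.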