1. Milone D, Longo F, Merlino G, De Marchis C, Risitano G, D’Agati L. MocapMe: DeepLabCut-Enhanced Neural Network for Enhanced Markerless Stability in Sit-to-Stand Motion Capture. Sensors (Basel). 2024;24:3022. PMID: 38793876; PMCID: PMC11125421; DOI: 10.3390/s24103022.
Abstract
This study examined the efficacy of an optimized DeepLabCut (DLC) model for motion capture, focusing on the sit-to-stand (STS) movement, which is crucial for assessing functional capacity in elderly and postoperative patients. The optimized DLC model was trained on 'filtered' keypoint estimates from the widely used OpenPose (OP) model, and its performance was compared against standalone OP in terms of computational effectiveness, motion-tracking precision, and stability of data capture. Using smartphone-captured videos and specifically curated datasets, the methodological approach comprised data preparation, keypoint annotation, and extensive model training. The findings demonstrate the superiority of the optimized DLC model in several respects: it was more computationally efficient, with shorter processing times, and it tracked motion more precisely and consistently, owing to the stability gained from the careful selection of OP training data. Such precision is vital for building accurate biomechanical models for clinical interventions. The optimized DLC model also maintained higher average confidence levels across datasets, indicating more reliable detection than standalone OP. These findings are clinically relevant: the model's efficiency and stable point estimation make it a valuable tool for rehabilitation monitoring and patient assessment, potentially streamlining clinical workflows. Future research directions include integrating the model with virtual reality environments to enhance patient engagement and leveraging its improved data quality for predictive analytics in healthcare. Overall, the optimized DLC model emerges as a transformative tool for biomechanical analysis and physical rehabilitation, promising to improve both the quality of patient care and the efficiency of healthcare delivery.
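The 'filtered' OP estimates at the heart of this approach can be illustrated with a minimal numpy sketch: keep only frames whose keypoints are confidently detected and temporally stable before using them as pseudo-labels for DLC training. The function name, thresholds, and filtering criteria below are illustrative assumptions, not the paper's published pipeline.

```python
import numpy as np

def filter_openpose_labels(keypoints, conf_thresh=0.6, jump_thresh=30.0):
    """Keep frames whose OpenPose keypoints look reliable enough to serve as
    pseudo-labels for training a DeepLabCut-style model.

    keypoints -- array of shape (n_frames, n_joints, 3): (x, y, confidence).
    Returns the indices of the retained frames.
    """
    # every joint in the frame must clear the confidence threshold
    conf_ok = (keypoints[:, :, 2] >= conf_thresh).all(axis=1)
    # frame-to-frame displacement of each joint, in pixels
    disp = np.linalg.norm(np.diff(keypoints[:, :, :2], axis=0), axis=2)
    # the first frame has no predecessor, so treat it as stable
    stable = np.concatenate([[True], (disp <= jump_thresh).all(axis=1)])
    return np.flatnonzero(conf_ok & stable)
```

Frames that fail either test are simply excluded from the training set, which is one plausible way to obtain the stability gains the abstract attributes to data selection.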
Affiliation(s)
- Dario Milone
- Department of Engineering (DI), University of Messina, Contrada di Dio, 98166 Messina, Italy; (F.L.); (G.M.); (C.D.M.); (G.R.); (L.D.)
2. Ishida T, Ino T, Yamakawa Y, Wada N, Koshino Y, Samukawa M, Kasahara S, Tohyama H. Estimation of Vertical Ground Reaction Force during Single-leg Landing Using Two-dimensional Video Images and Pose Estimation Artificial Intelligence. Phys Ther Res. 2024;27:35-41. PMID: 38690532; PMCID: PMC11057390; DOI: 10.1298/ptr.e10276.
Abstract
OBJECTIVE Assessment of the vertical ground reaction force (VGRF) during landing tasks is crucial for physical therapy in sports. The purpose of this study was to determine whether the VGRF during a single-leg landing can be estimated from a two-dimensional (2D) video image and pose estimation artificial intelligence (AI). METHODS Eighteen healthy male participants (age: 23.0 ± 1.6 years) performed a single-leg landing task from a 30-cm height. The VGRF was measured using a force plate and estimated from center of mass (COM) position data obtained with 2D video and pose estimation AI (2D-AI) and with three-dimensional optical motion capture (3D-Mocap). Measured and estimated peak VGRFs were compared using paired t-tests and Pearson's correlation coefficients, and the absolute errors of the peak VGRF were compared between the two estimations. RESULTS No significant differences in peak VGRF were found between the force-plate measurement and the 2D-AI or 3D-Mocap estimates (force plate: 3.37 ± 0.42 body weight [BW], 2D-AI: 3.32 ± 0.42 BW, 3D-Mocap: 3.50 ± 0.42 BW), and the absolute error of the peak VGRF did not differ significantly between the two estimations (2D-AI: 0.20 ± 0.16 BW, 3D-Mocap: 0.13 ± 0.09 BW, P = 0.163). The measured peak VGRF was significantly correlated with the peak estimated by 2D-AI (R = 0.835, P < 0.001). CONCLUSION These results indicate that peak VGRF estimation from 2D video images with pose estimation AI is useful for the clinical assessment of single-leg landing.
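The COM-based estimation rests on Newton's second law: the net vertical force equals body mass times gravitational-plus-COM acceleration, so VGRF expressed in body weights is 1 + z″/g. A minimal numpy sketch of that relation (illustrative, not the study's code; in practice the COM trajectory would be low-pass filtered before double differentiation):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def vgrf_from_com(com_z, fps):
    """Estimate vertical GRF in body weights from the vertical COM trajectory.

    Newton's second law gives F = m * (g + z''), hence F / BW = 1 + z'' / g.
    com_z -- vertical COM position in metres, sampled at fps Hz.
    """
    dt = 1.0 / fps
    vel = np.gradient(com_z, dt)   # first numerical derivative
    acc = np.gradient(vel, dt)     # second numerical derivative
    return 1.0 + acc / G
```

A stationary COM yields exactly 1 BW, and a free-falling COM yields roughly 0 BW, which is a quick sanity check on the sign conventions.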
Affiliation(s)
- Tomoya Ishida
- Faculty of Health Sciences, Hokkaido University, Japan
- Takumi Ino
- Faculty of Health Sciences, Hokkaido University of Science, Japan
- Naofumi Wada
- Faculty of Engineering, Hokkaido University of Science, Japan
- Yuta Koshino
- Faculty of Health Sciences, Hokkaido University, Japan
- Mina Samukawa
- Faculty of Health Sciences, Hokkaido University, Japan
3. Yang J, Park K. Improving Gait Analysis Techniques with Markerless Pose Estimation Based on Smartphone Location. Bioengineering (Basel). 2024;11:141. PMID: 38391625; PMCID: PMC10886083; DOI: 10.3390/bioengineering11020141.
Abstract
Marker-based 3D motion capture systems, widely used for gait analysis, are accurate but costly and poorly accessible. Markerless pose estimation has emerged as a convenient, cost-effective alternative, yet achieving optimal accuracy remains challenging. Given the limited research on how camera location and orientation affect data collection accuracy, this study used five smartphones to investigate the effect of camera placement on gait assessment accuracy. The aims were to quantify the differences in accuracy between marker-based systems and pose estimation and to assess the impact of camera location and orientation on pose estimation accuracy. The results showed that the differences in joint angles between pose estimation and the marker-based system were below 5°, an acceptable level for gait analysis, and a strong correlation between the two datasets supported the effectiveness of pose estimation for gait analysis. In addition, hip and knee angles were measured most accurately from the front diagonal of the subject, and the ankle angle from the lateral side. These findings highlight the importance of careful camera placement for reliable pose-estimation-based gait analysis and serve as a concise reference for future efforts to improve its quantitative accuracy.
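The joint angles being compared here are planar angles computed from three keypoints. A minimal sketch of that computation (illustrative names; the study's own processing pipeline is not published in the abstract):

```python
import numpy as np

def joint_angle(a, b, c):
    """Planar angle (degrees) at keypoint b, formed by segments b->a and b->c.

    For a sagittal-plane knee angle, a, b and c would be the hip, knee and
    ankle pixel coordinates from the pose estimator.
    """
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
```

The clip guards against floating-point values marginally outside [-1, 1], which would otherwise make arccos return NaN.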
Affiliation(s)
- Junhyuk Yang
- Department of Mechatronics Engineering, Incheon National University, Incheon 22012, Republic of Korea
- Kiwon Park
- Department of Mechatronics Engineering, Incheon National University, Incheon 22012, Republic of Korea
4. Ino T, Samukawa M, Ishida T, Wada N, Koshino Y, Kasahara S, Tohyama H. Validity of AI-Based Gait Analysis for Simultaneous Measurement of Bilateral Lower Limb Kinematics Using a Single Video Camera. Sensors (Basel). 2023;23:9799. PMID: 38139644; PMCID: PMC10747245; DOI: 10.3390/s23249799.
Abstract
Accuracy validation of gait analysis using pose estimation with artificial intelligence (AI) remains inadequate, particularly for objective assessments of absolute error and of the similarity of waveform patterns. This study aimed to establish objective measures of absolute error and waveform pattern similarity for gait analysis using pose estimation AI (OpenPose), and additionally investigated the feasibility of simultaneously measuring both lower limbs with a single camera placed on one side. Motion analysis data obtained by applying pose estimation AI to video footage were compared against synchronized three-dimensional motion analysis data, using the mean absolute error (MAE) for absolute error and the coefficient of multiple correlation (CMC) for waveform pattern similarity. The MAE ranged from 2.3 to 3.1° on the camera side and from 3.1 to 4.1° on the opposite side, with slightly higher accuracy on the camera side. The CMC ranged from 0.936 to 0.994 on the camera side and from 0.890 to 0.988 on the opposite side, indicating "very good to excellent" waveform similarity. Single-camera gait analysis was thus sufficiently precise on both sides for clinical evaluation, with slightly superior accuracy on the camera side.
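The two outcome measures can be sketched in a few lines of numpy. The CMC below follows one common within-day form (after Kadaba et al.); the study's exact formulation may differ, so treat this as an illustrative assumption rather than the authors' code.

```python
import numpy as np

def mae(y_ref, y_est):
    """Mean absolute error between two angle waveforms, in the same units."""
    return float(np.mean(np.abs(np.asarray(y_ref) - np.asarray(y_est))))

def cmc(waveforms):
    """Coefficient of multiple correlation across G time-normalised waveforms.

    waveforms -- array of shape (G, T). One common within-day form
    (after Kadaba et al.); the study's exact formulation may differ.
    """
    y = np.asarray(waveforms, float)
    g, t = y.shape
    frame_mean = y.mean(axis=0)          # mean curve over the G waveforms
    within = ((y - frame_mean) ** 2).sum() / (t * (g - 1))
    total = ((y - y.mean()) ** 2).sum() / (g * t - 1)
    return float(np.sqrt(1.0 - within / total))
```

Identical waveforms give CMC = 1, and small offsets between otherwise matching curves only slightly reduce it, which matches the "very good to excellent" interpretation in the abstract.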
Affiliation(s)
- Takumi Ino
- Graduate School of Health Sciences, Hokkaido University, Sapporo 0600812, Japan;
- Department of Physical Therapy, Faculty of Health Sciences, Hokkaido University of Science, Sapporo 0068585, Japan
- Mina Samukawa
- Faculty of Health Sciences, Hokkaido University, Sapporo 0600812, Japan
- Tomoya Ishida
- Faculty of Health Sciences, Hokkaido University, Sapporo 0600812, Japan
- Naofumi Wada
- Department of Information and Computer Science, Faculty of Engineering, Hokkaido University of Science, Sapporo 0068585, Japan;
- Yuta Koshino
- Faculty of Health Sciences, Hokkaido University, Sapporo 0600812, Japan
- Satoshi Kasahara
- Faculty of Health Sciences, Hokkaido University, Sapporo 0600812, Japan
- Harukazu Tohyama
- Faculty of Health Sciences, Hokkaido University, Sapporo 0600812, Japan
5. Wade L, Needham L, Evans M, McGuigan P, Colyer S, Cosker D, Bilzon J. Examination of 2D frontal and sagittal markerless motion capture: Implications for markerless applications. PLoS One. 2023;18:e0293917. PMID: 37943887; PMCID: PMC10635560; DOI: 10.1371/journal.pone.0293917.
Abstract
This study examined whether occluded joint locations, obtained from 2D markerless motion capture (single camera view), produced 2D joint angles with reduced agreement compared with visible joints, and whether 2D frontal plane joint angles were usable for practical applications. Fifteen healthy participants performed over-ground walking while recorded by fifteen marker-based cameras and two machine vision cameras (frontal and sagittal planes). Repeated-measures Bland-Altman analysis showed that the markerless standard deviation of bias and limits of agreement for the occluded-side hip and knee joint angles in the sagittal plane were double those of the camera-side (visible) hip and knee. Camera-side sagittal plane knee and hip angles were near or within previously observed marker-based error values. Although frontal plane limits of agreement accounted for 35-46% of the total range of motion at the hip and knee, the Bland-Altman bias and limits of agreement (bias -4.6 to 1.6°; limits of agreement ±3.7 to 4.2°) were similar to previously reported marker-based error values. This was not true for the ankle, where the limits of agreement (±12°) remained too high for practical applications. These results add to the previous literature by highlighting shortcomings of current pose estimation algorithms and labelled datasets, and the paper concludes by reviewing methods for creating anatomically accurate markerless training data from marker-based motion capture data.
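The bias and limits of agreement reported above come from Bland-Altman analysis. A minimal sketch of the simple (one observation per subject) form — the paper's repeated-measures variant additionally accounts for multiple strides per participant:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods.

    Simple one-observation-per-subject form; a repeated-measures analysis
    would further account for within-subject correlation across strides.
    """
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = d.mean()
    sd = d.std(ddof=1)   # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Here `method_a` might be the markerless joint angle and `method_b` the marker-based reference at matched gait events.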
Affiliation(s)
- Logan Wade
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- Laurie Needham
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- Murray Evans
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- Polly McGuigan
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- Steffi Colyer
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- Darren Cosker
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
- James Bilzon
- Centre for the Analysis of Motion, Entertainment Research and Applications, University of Bath, Bath, United Kingdom
6. Martínez-Zarzuela M, González-Alonso J, Antón-Rodríguez M, Díaz-Pernas FJ, Müller H, Simón-Martínez C. Multimodal video and IMU kinematic dataset on daily life activities using affordable devices. Sci Data. 2023;10:648. PMID: 37737210; PMCID: PMC10516922; DOI: 10.1038/s41597-023-02554-9.
Abstract
Human activity recognition and clinical biomechanics are challenging problems in physical telerehabilitation medicine, yet most publicly available datasets on human body movements cannot be used to study both problems in an out-of-the-lab movement acquisition setting. The objective of the VIDIMU dataset is to pave the way towards affordable patient gross motor tracking solutions for recognition of daily life activities and kinematic analysis. The dataset comprises 13 activities registered using a commodity camera and five inertial sensors. Video recordings were acquired in 54 subjects, of whom 16 also had simultaneous inertial sensor recordings. The novelty of the dataset lies in: (i) the clinical relevance of the chosen movements, (ii) the combined use of affordable video and custom sensors, and (iii) the use of state-of-the-art tools for multimodal processing of 3D body pose tracking and of motion reconstruction in a musculoskeletal model from inertial data. The validation confirms that a minimally disturbing acquisition protocol, performed under real-life conditions, can provide a comprehensive picture of human joint angles during daily life activities.
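Working with a multimodal dataset like this usually requires putting the video and IMU streams on a common time base, since cameras and inertial sensors sample at different rates. A minimal sketch of that step (function and argument names are illustrative, not from the dataset's tooling):

```python
import numpy as np

def resample_imu_to_frames(imu_times, imu_channel, frame_times):
    """Linearly interpolate one IMU channel onto video frame timestamps so
    the two modalities share a common time base (names are illustrative)."""
    return np.interp(frame_times, imu_times, imu_channel)
```

Each IMU channel (e.g. one gyroscope axis) would be resampled this way before joint analysis with the per-frame pose estimates.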
Affiliation(s)
- Henning Müller
- University of Applied Sciences and Arts Western Switzerland (HES-SO) Valais-Wallis, Sierre, Switzerland
- Medical faculty, University of Geneva, Geneva, Switzerland
- Cristina Simón-Martínez
- University of Applied Sciences and Arts Western Switzerland (HES-SO) Valais-Wallis, Sierre, Switzerland
7. Reliability and validity of OpenPose for measuring hip-knee-ankle angle in patients with knee osteoarthritis. Sci Rep. 2023;13:3297. PMID: 36841842; PMCID: PMC9968277; DOI: 10.1038/s41598-023-30352-1.
Abstract
We aimed to assess the reliability and validity of OpenPose, a pose estimation algorithm, for measuring the hip-knee-ankle (HKA) angle in patients with knee osteoarthritis, by comparison with radiography. In this prospective study, we analysed 60 knees (30 patients) with knee osteoarthritis. We measured the HKA angle using OpenPose and radiography before or after total knee arthroplasty and assessed the test-retest reliability of each method with the intraclass correlation coefficient (1,1). We evaluated the ability to estimate radiographic measurement values from OpenPose values using linear regression analysis, and used intraclass correlation coefficients (2,1) and Bland-Altman analyses to evaluate the agreement and error between OpenPose and radiographic measurements. OpenPose had excellent test-retest reliability (ICC(1,1) = 1.000) and excellent agreement with radiography (ICC(2,1) = 0.915), with regression analysis indicating a large correlation (R² = 0.865). OpenPose also had a 1.1° fixed error and no systematic error relative to radiography. This is the first study to validate OpenPose for estimating the HKA angle in patients with knee osteoarthritis. OpenPose is a reliable and valid tool for this measurement; because it is non-invasive and simple, it may be useful for assessing changes in HKA angle and monitoring the progression and post-operative course of knee osteoarthritis. Furthermore, the tool can be used not only in clinics and hospitals but also at home and in training gyms, so its use could potentially be expanded to self-assessment and self-monitoring.
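The test-retest statistic reported here, ICC(1,1), comes from a one-way random-effects ANOVA. A minimal numpy sketch of that computation (illustrative; the study presumably used a statistics package):

```python
import numpy as np

def icc_1_1(x):
    """Test-retest reliability ICC(1,1), one-way random-effects model.

    x -- array of shape (n_subjects, k_repeats), one row per knee/subject.
    ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW).
    """
    x = np.asarray(x, float)
    n, k = x.shape
    subj_means = x.mean(axis=1)
    # between-subjects and within-subjects mean squares
    msb = k * ((subj_means - x.mean()) ** 2).sum() / (n - 1)
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly reproducible repeats give ICC = 1, consistent with the 1.000 the study reports for OpenPose.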
8. Fujita K, Hiyama T, Wada K, Aihara T, Matsumura Y, Hamatsuka T, Yoshinaka Y, Kimura M, Kuzuya M. Machine learning-based muscle mass estimation using gait parameters in community-dwelling older adults: A cross-sectional study. Arch Gerontol Geriatr. 2022;103:104793. PMID: 35987032; DOI: 10.1016/j.archger.2022.104793.
Abstract
BACKGROUND Loss of skeletal muscle mass is associated with adverse outcomes in older adults, including metabolic disease, loss of independence, and mortality. Simple, safe, and reliable tools for assessing skeletal muscle mass are therefore needed. Recent studies have reported that the risk of geriatric conditions can be estimated by analysing older adults' gait; however, no study has assessed the association between gait parameters and skeletal muscle loss in older adults. Here, we applied a machine learning approach to gait parameters derived from three-dimensional skeletal models to identify low skeletal muscle mass in older adults, and we identified the gait parameters most important for its detection. METHODS Sixty-six community-dwelling older adults were recruited. Thirty-two gait parameters were derived from a three-dimensional skeletal model during a 10-m comfortable-pace walk. After skeletal muscle mass measurement with a bioimpedance analyser, low muscle mass was defined in accordance with the Asian Working Group for Sarcopenia guideline. An eXtreme Gradient Boosting (XGBoost) model was applied to discriminate between low and high skeletal muscle mass. RESULTS Eleven subjects had low muscle mass. The c-statistic, sensitivity, specificity, and precision of the final model were 0.7, 59.5%, 81.4%, and 70.5%, respectively. The three most influential gait parameters were, in order of effect size, stride length, hip dynamic range of motion, and trunk rotation variability. CONCLUSION Machine learning-based gait analysis is a useful approach for detecting low skeletal muscle mass in community-dwelling older adults.
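The four performance figures reported (c-statistic, sensitivity, specificity, precision) can all be computed from model scores and true labels. A pure-numpy sketch, not the study's code; the AUC uses the Mann-Whitney rank formulation and assumes no tied scores:

```python
import numpy as np

def screening_metrics(y_true, score, threshold=0.5):
    """Sensitivity, specificity, precision and c-statistic (AUC) for a
    binary screening model; pure-numpy sketch, not the study's code."""
    y = np.asarray(y_true, bool)
    s = np.asarray(score, float)
    pred = s >= threshold
    tp = np.sum(pred & y)
    fn = np.sum(~pred & y)
    tn = np.sum(~pred & ~y)
    fp = np.sum(pred & ~y)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    # c-statistic via the Mann-Whitney rank formulation (assumes no ties)
    ranks = np.empty(len(s))
    ranks[np.argsort(s)] = np.arange(1, len(s) + 1)
    n_pos, n_neg = y.sum(), (~y).sum()
    auc = (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return sens, spec, prec, auc
```

With scores from any classifier (XGBoost in the study), these four numbers summarize screening performance at a chosen decision threshold, while the AUC is threshold-free.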
Affiliation(s)
- Kosuke Fujita
- Department of Community Healthcare and Geriatrics, Graduate School of Medicine, Nagoya University, Nagoya, Japan; Department of Prevention and Care Science, Center for Development of Advanced Medicine for Dementia, National Center for Geriatrics and Gerontology, Obu, Japan.
- Takahiro Hiyama
- Technology Division, Panasonic Holdings Corporation, Kadoma, Japan
- Kengo Wada
- Electric Works Company, Panasonic Corporation, Kadoma, Japan
- Takahiro Aihara
- Electric Works Company, Panasonic Corporation, Kadoma, Japan
- Yasuko Yoshinaka
- Department of Bioenvironment, Kyoto University of Advanced Science, Kameoka, Japan
- Misaka Kimura
- Department of Bioenvironment, Kyoto University of Advanced Science, Kameoka, Japan; Doshisha Women's College of Liberal Arts, Graduate School of Nursing, Kyotanabe, Japan
- Masafumi Kuzuya
- Department of Community Healthcare and Geriatrics, Graduate School of Medicine, Nagoya University, Nagoya, Japan
9. Washabaugh EP, Shanmugam TA, Ranganathan R, Krishnan C. Comparing the accuracy of open-source pose estimation methods for measuring gait kinematics. Gait Posture. 2022;97:188-195. PMID: 35988434; DOI: 10.1016/j.gaitpost.2022.08.008.
Abstract
BACKGROUND Open-source pose estimation is rapidly reducing the costs associated with motion capture, as machine learning partially eliminates the need for specialized cameras and equipment. This technology could be particularly valuable for clinical gait analysis, which is often performed qualitatively because of the prohibitive cost and setup of conventional, marker-based motion capture. RESEARCH QUESTION How do open-source pose estimation software packages compare in their ability to measure kinematics and spatiotemporal gait parameters for gait analysis? METHODS This analysis used an existing dataset containing video and synchronous motion capture data from 32 able-bodied participants during walking. Sagittal plane videos were analyzed with pre-trained algorithms from four open-source pose estimation methods (OpenPose, TensorFlow MoveNet Lightning, TensorFlow MoveNet Thunder, and DeepLabCut) to extract keypoints (i.e., landmarks) and calculate hip and knee kinematics and spatiotemporal gait parameters. The absolute error of each markerless method was computed against conventional marker-based optical motion capture, and errors were compared between methods using statistical parametric mapping. RESULTS The pose estimation methods differed in their ability to measure kinematics. OpenPose and TensorFlow MoveNet Thunder were the most accurate for hip kinematics, averaging 3.7 ± 1.3° and 4.6 ± 1.8° (mean ± SD) of error over the gait cycle, respectively. OpenPose was the most accurate for knee kinematics, averaging 5.1 ± 2.5° of error over the gait cycle. MoveNet Thunder and OpenPose had the lowest errors for spatiotemporal gait parameters and were not statistically different from one another. SIGNIFICANCE These results indicate that OpenPose significantly outperforms the other platforms for pose estimation of healthy gait kinematics and spatiotemporal gait parameters and could serve as an alternative to conventional motion capture systems in clinical and research settings when marker-based systems are unavailable.
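Reporting errors "over the gait cycle" implies that each stride is first time-normalized to a common length (conventionally 101 points, 0-100% of the cycle) before subtracting the marker-based reference. A minimal sketch of that step (illustrative, not the study's pipeline):

```python
import numpy as np

def time_normalise(cycle, n_points=101):
    """Resample one gait cycle to n_points samples (0-100% of the cycle)."""
    cycle = np.asarray(cycle, float)
    old = np.linspace(0.0, 1.0, len(cycle))
    new = np.linspace(0.0, 1.0, n_points)
    return np.interp(new, old, cycle)

def cycle_mae(ref_cycle, est_cycle):
    """Mean absolute error between two gait cycles after time-normalisation,
    so cycles with different frame counts can be compared point by point."""
    diff = time_normalise(ref_cycle) - time_normalise(est_cycle)
    return float(np.mean(np.abs(diff)))
```

The same normalized curves are also what statistical parametric mapping operates on when comparing errors across methods at each point of the cycle.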
Affiliation(s)
- Edward P Washabaugh
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA; Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, MI, USA
- Rajiv Ranganathan
- Department of Kinesiology, Michigan State University, East Lansing, MI, USA
- Chandramouli Krishnan
- Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, MI, USA; Michigan Robotics Institute, University of Michigan, Ann Arbor, MI, USA; Department of Physical Therapy, College of Health Sciences, University of Michigan-Flint, Flint, MI, USA.
10. A Lightweight Pose Sensing Scheme for Contactless Abnormal Gait Behavior Measurement. Sensors (Basel). 2022;22:4070. PMID: 35684689; PMCID: PMC9185243; DOI: 10.3390/s22114070.
Abstract
Recognition of abnormal gait behavior is important for motion assessment and disease diagnosis. Currently, abnormal gait behavior is recognized primarily from pressure and inertial data obtained with wearable sensors; however, data drift and the difficulty patients have wearing such sensors have impeded their application. Here, we propose a contactless abnormal-gait recognition method that captures human pose data using a monocular camera. A lightweight OpenPose (OP) model, built with depthwise separable convolutions, recognizes joint points and extracts their coordinates during walking in real time. To reduce the errors in walking data extracted in the 2D plane, a 3D reconstruction is performed, and a total of 11 types of abnormal gait features are extracted. Finally, the XGBoost algorithm is used for feature screening. The experimental results show that the Random Forest (RF) algorithm combined with the 3D features delivers the highest precision (92.13%) for abnormal gait behavior recognition. The proposed scheme overcomes the data drift of inertial sensors and the sensor-wearing challenges of elderly users while reducing the hardware requirements for model deployment. With its real-time and contactless capabilities, the scheme is expected to find wide application in abnormal gait measurement.
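The weight savings behind the "lightweight" OP model come from replacing standard convolutions with depthwise separable ones, which factorize a k × k convolution into a per-channel spatial filter plus a 1 × 1 pointwise mix. The parameter arithmetic can be sketched directly (biases ignored; layer sizes below are illustrative, not the paper's architecture):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights after factorising into a depthwise k x k convolution (one
    filter per input channel) followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out
```

For a 3 × 3 layer with 32 input and 64 output channels, the factorization shrinks 18,432 weights to 2,336, roughly an 8× reduction, which is what lowers the hardware requirements for deployment.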