1. Lavikainen J, Vartiainen P, Stenroth L, Karjalainen PA, Korhonen RK, Liukkonen MK, Mononen ME. Gait data from 51 healthy participants with motion capture, inertial measurement units, and computer vision. Data Brief 2024;56:110841. PMID: 39257685; PMCID: PMC11385067; DOI: 10.1016/j.dib.2024.110841.
Abstract
We present a dataset comprising motion capture, inertial measurement unit data, and sagittal-plane video data from walking at three different instructed speeds (slow, comfortable, fast). The dataset contains 51 healthy participants with approximately 60 walking trials from each participant. Each walking trial contains data from motion capture, inertial measurement units, and computer vision. Motion capture data comprises ground reaction forces and moments from floor-embedded force plates and the 3D trajectories of subject-worn motion capture markers. Inertial measurement unit data comprises 3D accelerometer readings and 3D orientations from the lower limbs and pelvis. Computer vision data comprises 2D keypoint trajectories detected using the OpenPose human pose estimation algorithm from sagittal-plane video of the walking trial. Additionally, the dataset contains participant demographic and anthropometric information such as mass, height, sex, age, lower limb dimensions, and knee intercondylar distance measured from magnetic resonance images. The dataset can be used in musculoskeletal modelling and simulation to calculate kinematics and kinetics of motion and to compare data between motion capture, inertial measurement, and video capture.
Affiliation(s)
- Jere Lavikainen
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
- Diagnostic Imaging Centre, Kuopio University Hospital, Wellbeing Services County of North Savo, Puijonlaaksontie 2, 70210 Kuopio, Finland
- Paavo Vartiainen
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
- Lauri Stenroth
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
- Pasi A Karjalainen
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
- Rami K Korhonen
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
- Mimmi K Liukkonen
- Diagnostic Imaging Centre, Kuopio University Hospital, Wellbeing Services County of North Savo, Puijonlaaksontie 2, 70210 Kuopio, Finland
- Mika E Mononen
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, Yliopistonranta 8 (Melania building), 70211 Kuopio, Finland
2. Goto G, Ariga K, Tanaka N, Oda K, Haro H, Ohba T. Clinical Significance of Pose Estimation Methods Compared with Radiographic Parameters in Adolescent Patients with Idiopathic Scoliosis. Spine Surg Relat Res 2024;8:485-493. PMID: 39399450; PMCID: PMC11464822; DOI: 10.22603/ssrr.2023-0269.
Abstract
Introduction Human pose estimation, a computer vision technique that identifies body parts and constructs human body representations from images and videos, has recently demonstrated high performance through deep learning. However, its potential application in clinical photography remains underexplored. This study aimed to establish photographic parameters for patients with adolescent idiopathic scoliosis (AIS) using pose estimation and to determine correlations between these photographic parameters and corresponding radiographic measures. Methods We conducted a study involving 42 patients with AIS who had undergone spinal correction surgery and conservative treatment. Preoperative photographs were captured using an iPhone 13 Pro mounted on a tripod positioned at the head of an X-ray tube. From the outputs of pose estimation, we derived five photographic parameters and subsequently conducted a statistical analysis to assess their correlations with relevant conventional radiographic parameters. Results In the sagittal plane, we identified significant correlations between photographic and radiographic parameters measuring trunk tilt angles. In the coronal plane, significant correlations were found between photographic parameters measuring shoulder height and trunk tilt and corresponding radiographic measurements. Conclusions The results suggest that pose estimation, achievable with common mobile devices, offers potential for AIS screening, early detection, and continuous posture monitoring, effectively mitigating the need for X-ray radiation exposure. Level of Evidence: 3.
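The photographic parameters above (shoulder height difference, trunk tilt) are straightforward to derive from 2D pose-estimation keypoints. A minimal sketch follows; the function names and the image-coordinate convention (y increasing downward) are assumptions for illustration, not the authors' implementation:

```python
import math

def shoulder_height_diff(l_shoulder, r_shoulder):
    """Vertical offset between left and right shoulder keypoints,
    in pixels (image y grows downward)."""
    return l_shoulder[1] - r_shoulder[1]

def trunk_tilt_deg(mid_shoulder, mid_hip):
    """Angle of the trunk line (mid-shoulder to mid-hip) from vertical, in degrees.
    Positive values indicate a lean toward larger x (image right)."""
    dx = mid_shoulder[0] - mid_hip[0]
    dy = mid_hip[1] - mid_shoulder[1]  # flip sign so "up" is positive
    return math.degrees(math.atan2(dx, dy))

# Example: trunk leaning toward image right by 45 degrees
print(trunk_tilt_deg((110.0, 100.0), (100.0, 110.0)))  # ~45.0
```

Converting such pixel measures to angles needs no camera calibration, which is one reason angular parameters correlate more readily with radiographic measures than raw pixel distances do.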
Affiliation(s)
- Go Goto
- Department of Orthopaedic Surgery, University of Yamanashi, Yamanashi, Japan
- National Hospital Organization Kofu National Hospital, Yamanashi, Japan
- Nobuki Tanaka
- Department of Orthopaedic Surgery, University of Yamanashi, Yamanashi, Japan
- Kotaro Oda
- Department of Orthopaedic Surgery, University of Yamanashi, Yamanashi, Japan
- Hirotaka Haro
- Department of Orthopaedic Surgery, University of Yamanashi, Yamanashi, Japan
- Tetsuro Ohba
- Department of Orthopaedic Surgery, University of Yamanashi, Yamanashi, Japan
3. Halvorsen K, Peng W, Olsson F, Åberg AC. Two-step deep-learning identification of heel keypoints from video-recorded gait. Med Biol Eng Comput 2024. PMID: 39292381; DOI: 10.1007/s11517-024-03189-7.
Abstract
Accurate and fast extraction of step parameters from video recordings of gait allows for richer information to be obtained from clinical tests such as Timed Up and Go. Current deep-learning methods are promising, but lack the accuracy required for many clinical use cases. Extracting step parameters will often depend on extracted landmarks (keypoints) on the feet. We hypothesize that such keypoints can be determined with an accuracy relevant for clinical practice from video recordings by combining an existing general-purpose pose estimation method (OpenPose) with custom convolutional neural networks (convnets) specifically trained to identify keypoints on the heel. The combined method finds keypoints on the posterior and lateral aspects of the heel in side-view and frontal-view images, from which step length and step width can be determined for calibrated cameras. Six different candidate convnets were evaluated, combining three different standard architectures as networks for feature extraction (backbones) with two different networks for predicting keypoints on the heel (head networks). Using transfer learning, the backbone networks were pre-trained on the ImageNet dataset, and the combined networks (backbone + head) were fine-tuned on data from 184 trials of older, unimpaired adults. The data were recorded at three different locations and consisted of 193 k side-view images and 110 k frontal-view images. We evaluated the six different models using the absolute distance on the floor between predicted keypoints and manually labelled keypoints. For the best-performing convnet, the median error was 0.55 cm and the 75th percentile was below 1.26 cm using data from the side-view camera. The predictions are overall accurate, but show some outliers. The results indicate potential for future clinical use by automating a key step in marker-less gait parameter extraction.
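Once heel keypoints are available from a calibrated side-view camera, step length and the keypoint error metric used above reduce to simple floor-plane geometry. A minimal sketch, assuming a fixed pixel-to-centimetre scale on the floor plane (the paper's actual calibration pipeline is not reproduced here, and the names are illustrative):

```python
import math

def step_length_cm(heel_back, heel_front, cm_per_px):
    """Step length: horizontal floor distance between the trailing and leading
    heel keypoints at a step, for a side-view camera with known scale."""
    return abs(heel_front[0] - heel_back[0]) * cm_per_px

def keypoint_error_cm(pred, label, cm_per_px):
    """Absolute floor distance between a predicted and a manually labelled
    keypoint, the evaluation metric described above."""
    return math.dist(pred, label) * cm_per_px

print(step_length_cm((240.0, 600.0), (480.0, 600.0), 0.25))  # 60.0
```

A single scale factor only holds when motion stays in a plane parallel to the image; frontal-view step width would need a full floor-plane homography instead.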
Affiliation(s)
- Wei Peng
- Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
- Anna Cristina Åberg
- School of Health and Welfare, Dalarna University, Falun, Sweden
- Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden
4. Mobbs A, Kahn M, Williams G, Mentiplay BF, Pua YH, Clark RA. Machine learning for automating subjective clinical assessment of gait impairment in people with acquired brain injury - a comparison of an image extraction and classification system to expert scoring. J Neuroeng Rehabil 2024;21:124. PMID: 39039594; PMCID: PMC11264460; DOI: 10.1186/s12984-024-01406-w.
Abstract
BACKGROUND Walking impairment is a common disability post acquired brain injury (ABI), with visually evident arm movement abnormality identified as negatively impacting a multitude of psychological factors. The International Classification of Functioning, Disability and Health (ICF) qualifiers scale has been used to subjectively assess arm movement abnormality, showing strong intra-rater and test-retest reliability, however, only moderate inter-rater reliability. This impacts clinical utility, limiting its use as a measurement tool. To both automate the analysis and overcome these errors, the primary aim of this study was to evaluate the ability of a novel two-level machine learning model to assess arm movement abnormality during walking in people with ABI. METHODS Frontal plane gait videos were used to train four networks with 50%, 75%, 90%, and 100% of participants (ABI: n = 42, healthy controls: n = 34) to automatically identify anatomical landmarks using DeepLabCut™ and calculate two-dimensional kinematic joint angles. Assessment scores from three experienced neurorehabilitation clinicians were used with these joint angles to train random forest networks with nested cross-validation to predict assessor scores for all videos. Agreement between unseen participant (i.e. test group participants that were not used to train the model) predictions and each individual assessor's scores was compared using quadratic weighted kappa. One-sample t-tests (to determine over/underprediction against clinician ratings) and one-way ANOVA (to determine differences between networks) were applied to the four networks. RESULTS The machine learning predictions have similar agreement to experienced human assessors, with no statistically significant (p < 0.05) difference for any match contingency. There was no statistically significant difference between the predictions from the four networks (F = 0.119; p = 0.949). The four networks did, however, under-predict scores with small effect sizes (p range = 0.007 to 0.040; Cohen's d range = 0.156 to 0.217). CONCLUSIONS This study demonstrated that machine learning can perform similarly to experienced clinicians when subjectively assessing arm movement abnormality in people with ABI. The relatively small sample size may have resulted in under-prediction of some scores, albeit with small effect sizes. Studies with larger sample sizes that objectively and automatically assess dynamic movement in both local and telerehabilitation assessments, for example using smartphones and edge-based machine learning, to reduce measurement error and healthcare access inequality are needed.
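Quadratic weighted kappa, the agreement statistic used above for ordinal assessor scores, penalizes disagreements by the squared distance between categories. A minimal self-contained sketch (not the authors' code; category range and variable names are assumptions):

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Quadratic weighted kappa between two lists of integer ratings
    in [0, n_categories). 1 = perfect agreement, 0 = chance-level."""
    n = len(rater_a)
    # Observed joint distribution of the two raters' scores
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal distributions, used for the chance-expected agreement
    hist_a = [rater_a.count(i) / n for i in range(n_categories)]
    hist_b = [rater_b.count(i) / n for i in range(n_categories)]
    num = den = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            w = (i - j) ** 2 / (n_categories - 1) ** 2  # quadratic penalty
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j]
    return 1.0 - num / den

print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0
```

The quadratic weighting is what makes the statistic appropriate for ordinal scales such as the ICF qualifiers: predicting a 1 when the clinician scored 3 is penalized four times more than predicting a 2.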
Affiliation(s)
- Ashleigh Mobbs
- School of Health, University of the Sunshine Coast, Sippy Downs, QLD, Australia
- Michelle Kahn
- Department of Physiotherapy, Epworth Healthcare, Richmond, VIC, Australia
- Gavin Williams
- Department of Physiotherapy, Epworth Healthcare, Richmond, VIC, Australia
- School of Health Sciences, University of Melbourne, Parkville, VIC, Australia
- Benjamin F Mentiplay
- School of Allied Health, Human Services and Sport, La Trobe University, Bundoora, VIC, Australia
- Yong-Hao Pua
- Department of Physiotherapy, Singapore General Hospital, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
- Ross A Clark
- School of Health, University of the Sunshine Coast, Sippy Downs, QLD, Australia
5. Milone D, Longo F, Merlino G, De Marchis C, Risitano G, D'Agati L. MocapMe: DeepLabCut-Enhanced Neural Network for Enhanced Markerless Stability in Sit-to-Stand Motion Capture. Sensors (Basel) 2024;24:3022. PMID: 38793876; PMCID: PMC11125421; DOI: 10.3390/s24103022.
Abstract
This study examined the efficacy of an optimized DeepLabCut (DLC) model in motion capture, with a particular focus on the sit-to-stand (STS) movement, which is crucial for assessing the functional capacity in elderly and postoperative patients. This research uniquely compared the performance of this optimized DLC model, which was trained using 'filtered' estimates from the widely used OpenPose (OP) model, thereby emphasizing computational effectiveness, motion-tracking precision, and enhanced stability in data capture. Utilizing a combination of smartphone-captured videos and specifically curated datasets, our methodological approach included data preparation, keypoint annotation, and extensive model training, with an emphasis on the flow of the optimized model. The findings demonstrate the superiority of the optimized DLC model in various aspects. It exhibited not only higher computational efficiency, with reduced processing times, but also greater precision and consistency in motion tracking thanks to the stability brought about by the meticulous selection of the OP data. This precision is vital for developing accurate biomechanical models for clinical interventions. Moreover, this study revealed that the optimized DLC maintained higher average confidence levels across datasets, indicating more reliable and accurate detection capabilities compared with standalone OP. The clinical relevance of these findings is profound. The optimized DLC model's efficiency and enhanced point estimation stability make it an invaluable tool in rehabilitation monitoring and patient assessments, potentially streamlining clinical workflows. This study suggests future research directions, including integrating the optimized DLC model with virtual reality environments for enhanced patient engagement and leveraging its improved data quality for predictive analytics in healthcare. Overall, the optimized DLC model emerged as a transformative tool for biomechanical analysis and physical rehabilitation, promising to enhance the quality of patient care and healthcare delivery efficiency.
Affiliation(s)
- Dario Milone
- Department of Engineering (DI), University of Messina, Contrada di Dio, 98166 Messina, Italy; (F.L.); (G.M.); (C.D.M.); (G.R.); (L.D.)
6. Hulleck AA, AlShehhi A, El Rich M, Khan R, Katmah R, Mohseni M, Arjmand N, Khalaf K. BlazePose-Seq2Seq: Leveraging Regular RGB Cameras for Robust Gait Assessment. IEEE Trans Neural Syst Rehabil Eng 2024;32:1715-1724. PMID: 38648155; DOI: 10.1109/tnsre.2024.3391908.
Abstract
Evaluation of human gait through smartphone-based pose estimation algorithms provides an attractive alternative to costly lab-bound instrumented assessment and offers a paradigm shift with real-time gait capture for clinical assessment. Smartphone-compatible systems such as OpenPose and BlazePose have demonstrated potential for virtual motion assessment but still lack the accuracy and repeatability standards required for clinical viability. Seq2seq architecture offers an alternative solution to conventional deep learning techniques for predicting joint kinematics during gait. This study introduces a novel enhancement to the low-powered BlazePose algorithm by incorporating a Seq2seq autoencoder deep learning model. To ensure data accuracy and reliability, synchronized motion capture involving an RGB camera and ten Vicon cameras was employed across three distinct self-selected walking speeds. This investigation presents a groundbreaking avenue for remote gait assessment, harnessing the potential of Seq2seq architectures inspired by natural language processing (NLP) to enhance pose estimation accuracy. When BlazePose alone was compared with BlazePose combined with a 1D-convolution Long Short-Term Memory network (1D-LSTM), a Gated Recurrent Unit (GRU), and a Long Short-Term Memory (LSTM) network, the average mean absolute error at the left ankle joint angle decreased from 13.4° to 5.3° for fast gait, from 16.3° to 7.5° for normal gait, and from 15.5° to 7.5° for slow gait. The strategic utilization of synchronized data and rigorous testing methodologies further bolsters the robustness and credibility of these findings.
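The joint-angle errors reported above come down to two operations: computing a sagittal joint angle from three 2D keypoints and averaging the absolute error against a reference. A minimal sketch of both, with illustrative names (not the paper's code or the BlazePose API):

```python
import math

def joint_angle_deg(a, b, c):
    """Included angle at keypoint b, formed by segments b->a and b->c,
    in degrees. E.g. knee angle from (hip, knee, ankle) keypoints."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))  # clamp for safety

def mean_abs_error(pred, ref):
    """Mean absolute error between predicted and reference angle series."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

# Straight leg: hip, knee, ankle collinear -> 180 degrees at the knee
print(joint_angle_deg((0.0, 0.0), (0.0, 1.0), (0.0, 2.0)))  # ~180.0
```

The Seq2seq stage in the paper then learns to map sequences of such noisy 2D angle estimates onto motion-capture-quality trajectories, which is where the reported error reduction comes from.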
7. Stenum J, Hsu MM, Pantelyat AY, Roemmich RT. Clinical gait analysis using video-based pose estimation: Multiple perspectives, clinical populations, and measuring change. PLOS Digit Health 2024;3:e0000467. PMID: 38530801; DOI: 10.1371/journal.pdig.0000467.
Abstract
Gait dysfunction is common in many clinical populations and often has a profound and deleterious impact on independence and quality of life. Gait analysis is a foundational component of rehabilitation because it is critical to identify and understand the specific deficits that should be targeted prior to the initiation of treatment. Unfortunately, current state-of-the-art approaches to gait analysis (e.g., marker-based motion capture systems, instrumented gait mats) are largely inaccessible due to prohibitive costs of time, money, and effort required to perform the assessments. Here, we demonstrate the ability to perform quantitative gait analyses in multiple clinical populations using only simple videos recorded using low-cost devices (tablets). We report four primary advances: 1) a novel, versatile workflow that leverages an open-source human pose estimation algorithm (OpenPose) to perform gait analyses using videos recorded from multiple different perspectives (e.g., frontal, sagittal), 2) validation of this workflow in three different populations of participants (adults without gait impairment, persons post-stroke, and persons with Parkinson's disease) via comparison to ground-truth three-dimensional motion capture, 3) demonstration of the ability to capture clinically relevant, condition-specific gait parameters, and 4) tracking of within-participant changes in gait, as is required to measure progress in rehabilitation and recovery. Importantly, our workflow has been made freely available and does not require prior gait analysis expertise. The ability to perform quantitative gait analyses in nearly any setting using only low-cost devices and computer vision offers significant potential for dramatic improvement in the accessibility of clinical gait analysis across different patient populations.
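A core step in any such video-based workflow is detecting gait events from keypoint trajectories. One common heuristic for sagittal video (an illustrative sketch only; the authors' released workflow has its own event-detection logic) treats local maxima of the horizontal inter-ankle distance as candidate heel strikes:

```python
def heel_strike_frames(left_ankle_x, right_ankle_x):
    """Candidate heel-strike frames: local maxima of the horizontal
    distance between the two ankle keypoints across video frames."""
    d = [abs(l - r) for l, r in zip(left_ankle_x, right_ankle_x)]
    return [i for i in range(1, len(d) - 1) if d[i - 1] < d[i] >= d[i + 1]]

# Toy trajectory: the ankles separate, cross, and separate again
left = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2]
right = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(heel_strike_frames(left, right))  # [3, 9]
```

From the detected event frames, step time follows directly from the frame rate, and step length from the calibrated distance between the ankle keypoints at each event.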
Affiliation(s)
- Jan Stenum
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States of America
- Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Melody M Hsu
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States of America
- Department of Neuroscience, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Alexander Y Pantelyat
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Ryan T Roemmich
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States of America
- Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
8. John K, Stenum J, Chiang CC, French MA, Kim C, Manor J, Statton MA, Cherry-Allen KM, Roemmich RT. Accuracy of Video-Based Gait Analysis Using Pose Estimation During Treadmill Walking Versus Overground Walking in Persons After Stroke. Phys Ther 2024;104:pzad121. PMID: 37682075; DOI: 10.1093/ptj/pzad121.
Abstract
OBJECTIVE Video-based pose estimation is an emerging technology that shows significant promise for improving clinical gait analysis by enabling quantitative movement analysis at little cost in money, time, and effort. The objective of this study is to determine the accuracy of pose estimation-based gait analysis when video recordings are constrained to 3 common clinical or in-home settings (ie, frontal and sagittal views of overground walking and sagittal views of treadmill walking). METHODS Simultaneous video and motion capture recordings were collected from 30 persons after stroke during overground and treadmill walking. Spatiotemporal and kinematic gait parameters were calculated from videos using an open-source human pose estimation algorithm and from motion capture data using traditional gait analysis. Repeated-measures analyses of variance were then used to assess the accuracy of the pose estimation-based gait analysis across the different settings, and the authors examined Pearson and intraclass correlations with ground-truth motion capture data. RESULTS Sagittal videos of overground and treadmill walking led to more accurate measurements of spatiotemporal gait parameters versus frontal videos of overground walking. Sagittal videos of overground walking resulted in the strongest correlations between video-based and motion capture measurements of lower extremity joint kinematics. Video-based measurements of hip and knee kinematics showed stronger correlations with motion capture versus ankle kinematics for both overground and treadmill walking. CONCLUSION Video-based gait analysis using pose estimation provides accurate measurements of step length, step time, and hip and knee kinematics during overground and treadmill walking in persons after stroke. Generally, sagittal videos of overground gait provide the most accurate results. IMPACT Many clinicians lack access to expensive gait analysis tools that can help identify patient-specific gait deviations and guide therapy decisions. These findings show that video-based methods that require only common household devices provide accurate measurements of a variety of gait parameters in persons after stroke and could make quantitative gait analysis significantly more accessible.
Affiliation(s)
- Kristen John
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, USA
- Zucker School of Medicine, Hofstra University, Hempstead, New York, USA
- Jan Stenum
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, USA
- Cheng-Chuan Chiang
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Margaret A French
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Christopher Kim
- Drexel University College of Medicine, Philadelphia, Pennsylvania, USA
- John Manor
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Matthew A Statton
- MedStar National Rehabilitation Hospital, Washington, District of Columbia, USA
- Kendra M Cherry-Allen
- Department of Physical Therapy Education, Western University of Health Sciences, Lebanon, Oregon, USA
- Ryan T Roemmich
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, USA
9. Kim S, Kim HS, Yoo J. Sarcopenia classification model for musculoskeletal patients using smart insole and artificial intelligence gait analysis. J Cachexia Sarcopenia Muscle 2023;14:2793-2803. PMID: 37884824; PMCID: PMC10751435; DOI: 10.1002/jcsm.13356.
Abstract
BACKGROUND The relationship between physical function, musculoskeletal disorders and sarcopenia is intricate. Current physical function tests, such as the gait speed test and the chair stand test, have limitations in eliminating subjective influences. To overcome this, smart devices utilizing inertial measurement unit sensors and artificial intelligence (AI)-based methods are being developed. METHODS We employed cutting-edge technologies, including the smart insole device and pose estimation based on AI, along with three classification models: random forest (RF), support vector machine and artificial neural network, to classify control and sarcopenia groups. Patient data of 83 individuals were divided into train and test sets, with approximately 67% allocated for training. Classification models were implemented using RStudio, considering individual and combined variables obtained through pose estimation and smart insole measurements. RESULTS Performance evaluation of the classification models utilized accuracy, precision, recall and F1-score indicators. Using only pose estimation variables, accuracy ranged from 0.92 to 0.96, with F1-scores of 0.94-0.97. Key variables identified by the RF model were 'Hip_dif', 'Ankle_dif' and 'Hipankle_dif'. Combining variables from both methods increased accuracy to 0.80-1.00, with F1-scores of 0.73-1.00. CONCLUSIONS In our study, a classification model that integrates smart insole and pose estimation technology was assessed. The RF model showed impressive results, particularly for the hip and ankle variables. The growth of advanced measurement technologies suggests a promising avenue for identifying and utilizing additional digital biomarkers in the management of various disorders. The convergence of AI technologies with diagnostic and treatment approaches promises enhanced interventions in conditions like sarcopenia.
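The accuracy, precision, recall, and F1 indicators reported above follow directly from the binary confusion matrix. A minimal sketch (the study used RStudio; this Python version with illustrative names is for exposition only):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary classification task,
    e.g. sarcopenia (1) vs control (0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# One missed sarcopenia case: accuracy 0.75, precision 1.0, recall 0.5
print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```

Reporting F1 alongside accuracy matters here because clinical samples are often imbalanced; a classifier that labels everyone "control" can score high accuracy while its recall, and hence F1, collapses.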
Affiliation(s)
- Shinjune Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Hyeon Su Kim
- Department of Biomedical Research Institute, Inha University Hospital, Incheon, South Korea
- Jun-Il Yoo
- Department of Orthopaedic Surgery, Inha University Hospital, Incheon, South Korea