1
Garmendia-Orbegozo A, Anton MA, Nuñez-Gonzalez JD. Reduction of Vision-Based Models for Fall Detection. Sensors (Basel). 2024;24:7256. [PMID: 39599033] [PMCID: PMC11598268] [DOI: 10.3390/s24227256]
Abstract
Because of the harm that falls cause to humans, early detection is essential to avoid further damage. Many applications use technologies such as wearable sensors, environmental sensors, or cameras to acquire accurate information from individuals, but these often demand high computational resources, delaying the response of the entire system. The complexity of the models used to process the input data and detect these activities makes them nearly impossible to run on resource-constrained devices, which are precisely the devices that could offer an immediate response while avoiding unnecessary communication between sensors and centralized computing centers. In this work, we chose to reduce the models that detect falls from image data. We processed image sequences as video frames, using data from two open-source datasets, and applied the Sparse Low Rank Method to reduce certain layers of the Convolutional Neural Networks that form the backbone of the models. Additionally, we replaced a convolutional block with Long Short-Term Memory to account for the most recent updates in these data sequences. The results showed that performance was maintained reasonably well while the parameter size of the resulting models was significantly reduced.
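The abstract does not spell out the Sparse Low Rank Method itself. Purely as an illustration of the general idea behind low-rank layer compression (not the paper's actual procedure; every name below is hypothetical), a truncated-SVD factorization of a flattened layer weight matrix might look like:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate a weight matrix W (out_dim x in_dim) with two factors.

    Replacing W by A @ B cuts the parameter count from out_dim*in_dim
    to rank*(out_dim + in_dim) when rank is small.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (out_dim, rank), singular values folded in
    B = Vt[:rank, :]             # (rank, in_dim)
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))          # stand-in for a flattened conv kernel
A, B = low_rank_factorize(W, rank=32)    # 131k params -> ~24.6k
```

The approximation error shrinks as the retained rank grows, so the rank is the knob trading accuracy against model size.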
Affiliation(s)
- Asier Garmendia-Orbegozo
- Fundación Tecnalia Research & Innovation, Basque Research and Technology Alliance (BRTA), 20009 San Sebastian, Spain
- Miguel Angel Anton
- Fundación Tecnalia Research & Innovation, Basque Research and Technology Alliance (BRTA), 20009 San Sebastian, Spain
2
Huang X, Xue Y, Ren S, Wang F. Sensor-Based Wearable Systems for Monitoring Human Motion and Posture: A Review. Sensors (Basel). 2023;23:9047. [PMID: 38005436] [PMCID: PMC10675437] [DOI: 10.3390/s23229047]
Abstract
In recent years, marked progress has been made in wearable technology for human motion and posture recognition in areas such as assisted training, medical health, and VR/AR. This paper systematically reviews the state of wearable sensing systems for human motion capture and posture recognition from three aspects: monitoring indicators, sensors, and system design. In particular, it summarizes the monitoring indicators most closely tied to human posture changes, such as the trunk, joints, and limbs, and analyzes in detail the types, numbers, locations, and installation methods of the sensors in different monitoring systems, along with their advantages and disadvantages. It concludes that future research in this area will emphasize monitoring accuracy, data security, wearing comfort, and durability. This review provides a reference for the future development of wearable sensing systems for human motion capture.
Affiliation(s)
- Xinxin Huang
- Guangdong Modern Apparel Technology & Engineering Center, Guangdong University of Technology, Guangzhou 510075, China
- Xiayi Lixing Research Institute of Textiles and Apparel, Shangqiu 476499, China
- Yunan Xue
- Guangdong Modern Apparel Technology & Engineering Center, Guangdong University of Technology, Guangzhou 510075, China
- Shuyun Ren
- Guangdong Modern Apparel Technology & Engineering Center, Guangdong University of Technology, Guangzhou 510075, China
- Fei Wang
- School of Textile Materials and Engineering, Wuyi University, Jiangmen 529020, China
3
Fang Z, Woodford S, Senanayake D, Ackland D. Conversion of Upper-Limb Inertial Measurement Unit Data to Joint Angles: A Systematic Review. Sensors (Basel). 2023;23:6535. [PMID: 37514829] [PMCID: PMC10386307] [DOI: 10.3390/s23146535]
Abstract
Inertial measurement units (IMUs) have become the mainstay of human motion evaluation outside the laboratory; however, quantification of 3-dimensional upper-limb motion using IMUs remains challenging. The objective of this systematic review is twofold: first, to evaluate the computational methods used to convert IMU data to joint angles in the upper limb, including the scapulothoracic, humerothoracic, glenohumeral, and elbow joints; and second, to quantify the accuracy of these approaches when compared with optoelectronic motion analysis. Fifty-two studies were included. Maximum joint motion measurement accuracy from IMUs was achieved using Euler angle decomposition and Kalman-based filters, resulting in differences between IMU and optoelectronic motion analysis of 4° across all degrees of freedom of humerothoracic movement. Higher accuracy has been achieved at the elbow joint with functional joint axis calibration tasks and the use of kinematic constraints on gyroscope data, resulting in RMS errors between IMU and optoelectronic motion for flexion-extension as low as 2°. For the glenohumeral joint, 3D joint motion has been described with RMS errors of 6° and higher. In contrast, scapulothoracic joint motion tracking yielded RMS errors in excess of 10° in the protraction-retraction and anterior-posterior tilt directions. The findings of this study demonstrate high-quality 3D humerothoracic and elbow joint motion measurement capability using IMUs and underscore the challenges of skin motion artifacts in scapulothoracic and glenohumeral joint motion analysis. Future studies ought to implement functional joint axis calibrations and IMU-based scapula locators to address skin motion artifacts at the scapula, and explore the use of artificial neural networks and data-driven approaches to directly convert IMU data to joint angles.
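As a loose illustration of the Euler angle decomposition step mentioned in the abstract (the Kalman-style filtering that produces each IMU's orientation is assumed to have already happened), the relative humerothoracic orientation could be decomposed roughly as follows. The function name, the scalar-last quaternion convention, and the intrinsic Y-X-Y sequence are assumptions for the sketch, not details taken from the review:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def humerothoracic_angles(q_thorax, q_humerus, seq="YXY"):
    """Decompose the humerus-relative-to-thorax orientation into Euler angles.

    q_thorax, q_humerus: unit quaternions (x, y, z, w) output by each IMU's
    orientation filter, assumed already aligned to anatomical segment frames.
    """
    r_rel = R.from_quat(q_thorax).inv() * R.from_quat(q_humerus)
    return r_rel.as_euler(seq, degrees=True)

# Example: humerus rotated 90 degrees about the thorax x-axis
angles = humerothoracic_angles(
    R.identity().as_quat(),
    R.from_euler("x", 90, degrees=True).as_quat(),
)
```

The choice of decomposition sequence matters clinically; proper Euler sequences such as Y-X-Y are one common convention for shoulder elevation, but any sequence supported by the decomposition can be substituted via `seq`.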
Affiliation(s)
- Zhou Fang
- Department of Biomedical Engineering, The University of Melbourne, Melbourne 3052, Australia
- Sarah Woodford
- Department of Biomedical Engineering, The University of Melbourne, Melbourne 3052, Australia
- Damith Senanayake
- Department of Biomedical Engineering, The University of Melbourne, Melbourne 3052, Australia
- Department of Mechanical Engineering, The University of Melbourne, Melbourne 3052, Australia
- David Ackland
- Department of Biomedical Engineering, The University of Melbourne, Melbourne 3052, Australia
4
Longo UG, De Salvatore S, Carnevale A, Tecce SM, Bandini B, Lalli A, Schena E, Denaro V. Optical Motion Capture Systems for 3D Kinematic Analysis in Patients with Shoulder Disorders. Int J Environ Res Public Health. 2022;19:12033. [PMID: 36231336] [PMCID: PMC9566555] [DOI: 10.3390/ijerph191912033]
Abstract
Shoulder dysfunctions represent the third most frequent musculoskeletal disorder. However, monitoring shoulder movement is particularly challenging due to the complexity of the joint's kinematics. 3D kinematic analysis with optical motion capture systems (OMCs) makes it possible to overcome the shortcomings of clinical tests and obtain objective data on the characteristics and quality of movement. This systematic review aims to summarize current knowledge about the use of OMCs for 3D shoulder kinematic analysis in patients with musculoskeletal shoulder disorders and its clinical relevance. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to improve the reporting of the review. Studies employing OMCs for 3D kinematic analysis in patients with musculoskeletal shoulder disorders were retrieved, and eleven articles were considered eligible. OMCs can be considered a powerful tool in orthopedic clinical research. The high costs and organizational complexity of the experimental setups are likely outweighed by the impact of these systems on guiding clinical practice and patient follow-up. However, additional high-quality studies on the use of OMCs in clinical practice are required, with standardized protocols and methodologies to make comparing clinical trials easier.
Affiliation(s)
- Umile Giuseppe Longo
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Sergio De Salvatore
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Arianna Carnevale
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Laboratory of Measurement and Biomedical Instrumentation, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Salvatore Maria Tecce
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Benedetta Bandini
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Alberto Lalli
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
- Emiliano Schena
- Laboratory of Measurement and Biomedical Instrumentation, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Vincenzo Denaro
- Research Unit of Orthopaedic and Trauma Surgery, Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Roma, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Roma, Italy
5
Planning Collision-Free Robot Motions in a Human–Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking. Electronics. 2022. [DOI: 10.3390/electronics11152407]
Abstract
The paper describes a method for planning collision-free motions of an industrial manipulator that shares its workspace with human operators during a human–robot collaborative application with strict safety requirements. The proposed workflow exploits the advantages of mixed reality to insert real entities into a virtual scene, wherein the robot control command is computed and validated by simulating robot motions without risk to the human. The proposed motion planner relies on a sensor-fusion algorithm that improves the 3D perception of the humans inside the robot workspace. This algorithm merges the estimates of the poses of the human bones, reconstructed by a pointcloud-based skeleton tracking algorithm, with the orientation data acquired from wearable inertial measurement units (IMUs) assumed to be fixed to the human bones. The algorithm provides a final reconstruction of the position and orientation of the human bones that can be used to include the human in the virtual simulation of the robotic workcell. A dynamic motion-planning algorithm can then run within this mixed-reality environment, computing a collision-free joint velocity command for the real robot.
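The abstract's fusion scheme ultimately yields one pose per bone, combining the camera-based tracker's position estimate with the IMU's orientation estimate. A deliberately simplified sketch of that final combination step (all names hypothetical; the paper's actual merging algorithm is more involved) might be:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_bone_pose(p_skeleton, q_imu):
    """Assemble a bone pose from the two complementary sources:
    position from the point-cloud skeleton tracker, orientation from the
    IMU assumed fixed to the same bone. Both are expected in the same
    world frame (extrinsic calibration is assumed already done).

    Returns a 4x4 homogeneous transform usable in the simulated workcell.
    """
    T = np.eye(4)
    T[:3, :3] = R.from_quat(q_imu).as_matrix()  # IMU supplies orientation
    T[:3, 3] = np.asarray(p_skeleton)           # camera supplies position
    return T

# Example: forearm at (0.5, 0.0, 1.2) m with no rotation relative to world
T = fuse_bone_pose([0.5, 0.0, 1.2], R.identity().as_quat())
```

Splitting responsibilities this way plays to each sensor's strength: depth cameras localize joints well but estimate limb orientation poorly, while IMUs track orientation well but drift in position.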