1
Espitia-Mora LA, Vélez-Guerrero MA, Callejas-Cuervo M. Development of a Low-Cost Markerless Optical Motion Capture System for Gait Analysis and Anthropometric Parameter Quantification. Sensors (Basel) 2024; 24:3371. [PMID: 38894161] [PMCID: PMC11174744] [DOI: 10.3390/s24113371]
Abstract
Technological advancements have expanded the range of methods for capturing human body motion, including solutions involving inertial sensors (IMUs) and optical alternatives. However, the rising complexity and costs associated with commercial solutions have prompted the exploration of more cost-effective alternatives. This paper presents a markerless optical motion capture system using a RealSense depth camera and intelligent computer vision algorithms. It facilitates precise posture assessment, the real-time calculation of joint angles, and acquisition of subject-specific anthropometric data for gait analysis. The proposed system stands out for its simplicity and affordability in comparison to complex commercial solutions. The gathered data are stored in comma-separated value (CSV) files, simplifying subsequent analysis and data mining. Preliminary tests, conducted in controlled laboratory environments and employing a commercial MEMS-IMU system as a reference, revealed a maximum relative error of 7.6% in anthropometric measurements, with a maximum absolute error of 4.67 cm at average height. Stride length measurements showed a maximum relative error of 11.2%. Static joint angle tests had a maximum average error of 10.2%, while dynamic joint angle tests showed a maximum average error of 9.06%. The proposed optical system offers sufficient accuracy for potential application in areas such as rehabilitation, sports analysis, and entertainment.
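The real-time joint-angle calculation this abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; the keypoint coordinates, segment choice, and function name are assumptions for illustration only.

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by 3D points a-b-c,
    e.g. hip-knee-ankle keypoints for the knee angle."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    norm_v = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos_ang = max(-1.0, min(1.0, dot / (norm_u * norm_v)))
    return math.degrees(math.acos(cos_ang))

# A fully extended leg reads ~180 degrees; a right-angle bend reads 90.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)
print(joint_angle(hip, knee, ankle))  # 180.0
```

Per-frame angles computed this way could then be appended as rows of a CSV file, matching the storage format the abstract mentions.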
Affiliation(s)
- Mauro Callejas-Cuervo
- Software Research Group, Universidad Pedagógica y Tecnológica de Colombia, Tunja 150002, Colombia; (L.A.E.-M.); (M.A.V.-G.)
2
Wang C, Pei Z, Fan Y, Qiu S, Tang Z. Review of Vision-Based Environmental Perception for Lower-Limb Exoskeleton Robots. Biomimetics (Basel) 2024; 9:254. [PMID: 38667265] [PMCID: PMC11048416] [DOI: 10.3390/biomimetics9040254]
Abstract
The exoskeleton robot is a wearable electromechanical device inspired by animal exoskeletons. It combines technologies such as sensing, control, information, and mobile computing, enhancing human physical abilities and assisting in rehabilitation training. In recent years, with the development of visual sensors and deep learning, the environmental perception of exoskeletons has drawn widespread attention in the industry. Environmental perception can provide exoskeletons with a certain level of autonomous perception and decision-making ability, enhance their stability and safety in complex environments, and improve the human-machine-environment interaction loop. This paper provides a review of environmental perception and its related technologies of lower-limb exoskeleton robots. First, we briefly introduce the visual sensors and control system. Second, we analyze and summarize the key technologies of environmental perception, including related datasets, detection of critical terrains, and environment-oriented adaptive gait planning. Finally, we analyze the current factors limiting the development of exoskeleton environmental perception and propose future directions.
Affiliation(s)
- Zhiyong Tang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; (C.W.); (Z.P.); (Y.F.); (S.Q.)
3
Zhao S, Yu Z, Wang Z, Liu H, Zhou Z, Ruan L, Wang Q. A Learning-Free Method for Locomotion Mode Prediction by Terrain Reconstruction and Visual-Inertial Odometry. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3895-3905. [PMID: 37782585] [DOI: 10.1109/tnsre.2023.3321077]
Abstract
This research introduces a novel, highly precise, and learning-free approach to locomotion mode prediction, a technique with potential for broad applications in the field of lower-limb wearable robotics. This study represents the pioneering effort to amalgamate 3D reconstruction and Visual-Inertial Odometry (VIO) into a locomotion mode prediction method, which yields robust prediction performance across diverse subjects and terrains, and resilience against various factors including camera view, walking direction, step size, and disturbances from moving obstacles, without the need for parameter adjustments. The proposed Depth-enhanced Visual-Inertial Odometry (D-VIO) has been meticulously designed to operate within the computational constraints of wearable configurations while demonstrating resilience against unpredictable human movements and sparse features. Evidence of its effectiveness, both in terms of accuracy and operational time consumption, is substantiated through tests conducted using an open-source dataset and closed-loop evaluations. Comprehensive experiments were undertaken to validate its prediction accuracy across various test conditions such as subjects, scenarios, sensor mounting positions, camera views, step sizes, walking directions, and disturbances from moving obstacles. A comprehensive prediction accuracy rate of 99.00% confirms the efficacy, generality, and robustness of the proposed method.
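The learning-free flavor of this approach can be illustrated with a purely geometric rule: classify the upcoming locomotion mode from a reconstructed terrain elevation profile ahead of the user. The profile samples, thresholds, and mode labels below are illustrative assumptions, not the paper's D-VIO pipeline.

```python
def predict_mode(heights, dx=0.05, step_thresh=0.10, slope_thresh=0.15):
    """Classify the upcoming locomotion mode from a terrain elevation
    profile (metres, one sample every dx metres ahead of the user).
    Thresholds are illustrative, not taken from the paper."""
    diffs = [heights[i + 1] - heights[i] for i in range(len(heights) - 1)]
    # Discrete jumps of roughly riser height suggest stairs.
    if any(d > step_thresh for d in diffs):
        return "stair_ascent"
    if any(d < -step_thresh for d in diffs):
        return "stair_descent"
    # Otherwise decide by overall slope of the profile.
    slope = (heights[-1] - heights[0]) / (dx * (len(heights) - 1))
    if slope > slope_thresh:
        return "ramp_ascent"
    if slope < -slope_thresh:
        return "ramp_descent"
    return "level_walk"

print(predict_mode([0.0, 0.0, 0.17, 0.17, 0.34]))  # stair_ascent
```

A rule like this needs no training data, which is the sense in which such methods are "learning-free"; the paper's contribution lies in producing a reliable terrain reconstruction from which such geometry can be read.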
4
Cheng S, Laubscher CA, Gregg RD. Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent. Proc IEEE/RSJ Int Conf Intell Robots Syst (IROS) 2023; 2023:2108-2115. [PMID: 38130335] [PMCID: PMC10732262] [DOI: 10.1109/iros55552.2023.10341457]
Abstract
One of the primary benefits of emerging powered prosthetic legs is their ability to facilitate step-over-step stair ascent by providing positive mechanical work. Existing control methods typically have distinct steady-state activity modes for walking and stair ascent, where activity transitions involve discretely switching between controllers and often must be initiated with a particular leg. However, these discrete transitions do not necessarily replicate able-bodied joint biomechanics, which have been shown to continuously adjust over a transition stride. This paper presents a phase-based kinematic controller for a powered knee-ankle prosthesis that enables continuous, biomimetic transitions between walking and stair ascent. The controller tracks joint angles from a data-driven kinematic model that continuously interpolates between the steady-state kinematic models, and it allows both the prosthetic and intact leg to lead the transitions. Results from experiments with two transfemoral amputee participants indicate that knee and ankle kinematics smoothly transition between walking and stair ascent, with comparable or lower root mean square errors compared to variations from able-bodied data.
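The continuous blending this abstract describes, a kinematic model that interpolates between steady-state walking and stair-ascent models over a transition stride, can be sketched as follows. The trajectory shapes and blend parameter are invented for illustration and are not the paper's data-driven models.

```python
import math

def walk_knee(phase):
    """Toy steady-state walking knee trajectory (degrees); illustrative only."""
    return 30.0 * math.sin(2 * math.pi * phase) + 20.0

def stair_knee(phase):
    """Toy steady-state stair-ascent knee trajectory (degrees)."""
    return 50.0 * math.sin(2 * math.pi * phase) + 40.0

def transition_knee(phase, alpha):
    """Continuously blend the two steady-state models.
    alpha = 0 -> pure walking, alpha = 1 -> pure stair ascent."""
    return (1.0 - alpha) * walk_knee(phase) + alpha * stair_knee(phase)

# Halfway through a walk-to-stair transition stride:
print(transition_knee(0.25, 0.5))  # 70.0
```

Because the blended trajectory is a continuous function of both phase and the transition parameter, the controller never has to switch discretely between mode-specific trajectories.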
Affiliation(s)
- Shihao Cheng
- Department of Robotics, University of Michigan, Ann Arbor, MI, 48109 USA
- Curt A Laubscher
- Department of Robotics, University of Michigan, Ann Arbor, MI, 48109 USA
- Robert D Gregg
- Department of Robotics, University of Michigan, Ann Arbor, MI, 48109 USA
5
Islam MR, Haque MR, Imtiaz MH, Shen X, Sazonov E. Vision-Based Recognition of Human Motion Intent during Staircase Approaching. Sensors (Basel) 2023; 23:5355. [PMID: 37300082] [DOI: 10.3390/s23115355]
Abstract
Walking in real-world environments involves constant decision-making, e.g., when approaching a staircase, an individual decides whether to engage (climbing the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging the egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and gradient boost (GB) classifier was developed to recognize the individual's intention of engaging or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least two steps before the potential mode transition, which is expected to provide ample time for the controller mode transition in an assistive robot in real-world use.
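A much-simplified stand-in for the detection-plus-classification pipeline above: instead of a trained AdaBoost/gradient-boosting classifier, a hand-written rule on hypothetical staircase bounding-box features (growth and horizontal centering across frames) illustrates the kind of evidence such a classifier would learn from. Feature names and thresholds are assumptions, not the paper's.

```python
def engage_intent(boxes, frame_width=640, grow_thresh=1.3, center_frac=0.25):
    """Toy intent rule: given staircase bounding boxes (x_center_px, width_px)
    from the first and last frames of an approach, flag 'engage' when the
    detection grows and stays near the image centre."""
    (x0, w0), (x1, w1) = boxes[0], boxes[-1]
    growing = w1 >= grow_thresh * w0          # stairs loom larger when approached
    centred = abs(x1 - frame_width / 2) <= center_frac * frame_width
    return growing and centred

approach = [(320, 100), (315, 140)]   # box widens, stays centred -> engage
veer_off = [(320, 100), (600, 110)]   # box drifts to the edge -> avoid
print(engage_intent(approach), engage_intent(veer_off))  # True False
```

A boosted ensemble replaces such hand-set thresholds with ones learned from labeled approach sequences, which is what gives the reported 97.69% reliability.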
Affiliation(s)
- Md Rafi Islam
- Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Md Rejwanul Haque
- Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Masudul H Imtiaz
- Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY 13699, USA
- Xiangrong Shen
- Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Edward Sazonov
- Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
6
Manz S, Seifert D, Altenburg B, Schmalz T, Dosen S, Gonzalez-Vargas J. Using embedded prosthesis sensors for clinical gait analyses in people with lower limb amputation: A feasibility study. Clin Biomech (Bristol, Avon) 2023; 106:105988. [PMID: 37230008] [DOI: 10.1016/j.clinbiomech.2023.105988]
Abstract
BACKGROUND Biomechanical gait analyses are typically performed in laboratory settings, and are associated with limitations due to space, marker placement, and tasks that are not representative of the real-world usage of lower limb prostheses. Therefore, the purpose of this study was to investigate the possibility of accurately measuring gait parameters using embedded sensors in a microprocessor-controlled knee joint. METHODS Ten participants were recruited for this study and equipped with a Genium X3 prosthetic knee joint. They performed level walking, stair/ramp descent, and ascent. During these tasks, kinematics and kinetics (sagittal knee and thigh segment angle, and knee moment) were recorded using an optical motion capture system and force plates (gold standard), as well as the prosthesis-embedded sensors. Root mean square errors, relative errors, correlation coefficients, and discrete outcome variables of clinical relevance were calculated and compared between the gold standard and the embedded sensors. FINDINGS The average root mean square errors were found to be 0.6°, 5.3°, and 0.08 Nm/kg for the knee angle, thigh angle, and knee moment, respectively. The average relative errors were 0.75% for the knee angle, 11.67% for the thigh angle, and 9.66% for the knee moment. The discrete outcome variables showed small but significant differences between the two measurement systems for a number of tasks (higher differences only at the thigh). INTERPRETATION The findings highlight the potential of prosthesis-embedded sensors to accurately measure gait parameters across a wide range of tasks. This paves the way for assessing prosthesis performance in realistic environments outside the lab.
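The error metrics reported above (root mean square error and relative error between the prosthesis-embedded sensors and the optical gold standard) can be computed as in this sketch. The signals are synthetic, and normalizing the relative error by the reference signal's range is an assumption; the paper may normalize differently.

```python
import math

def rmse(reference, measured):
    """Root mean square error between two equal-length signals."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured))
                     / len(reference))

def relative_error(reference, measured):
    """RMSE expressed as a percentage of the reference signal's range
    (one plausible normalization; others exist)."""
    span = max(reference) - min(reference)
    return 100.0 * rmse(reference, measured) / span

gold = [0.0, 10.0, 20.0, 10.0, 0.0]      # e.g. knee angle from mocap (deg)
embed = [0.5, 10.5, 19.5, 10.5, -0.5]    # same cycle from embedded sensors
print(rmse(gold, embed), relative_error(gold, embed))  # 0.5 2.5
```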
Affiliation(s)
- Sabina Manz
- Ottobock SE & Co. KGaA, Duderstadt, Germany; Department of Health Science and Technology, Aalborg University, Aalborg, Denmark.
- Strahinja Dosen
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
7
Domínguez-Ruiz A, López-Caudana EO, Lugo-González E, Espinosa-García FJ, Ambrocio-Delgado R, García UD, López-Gutiérrez R, Alfaro-Ponce M, Ponce P. Low limb prostheses and complex human prosthetic interaction: A systematic literature review. Front Robot AI 2023; 10:1032748. [PMID: 36860557] [PMCID: PMC9968924] [DOI: 10.3389/frobt.2023.1032748]
Abstract
A few years ago, powered prostheses triggered new technological advances in diverse areas such as mobility, comfort, and design, which have been essential to improving the quality of life of individuals with lower limb disability. The human body is a complex system involving mental and physical health, implying a dependent relationship between its organs and lifestyle. The elements used in the design of these prostheses are critical and relate to the lower limb amputation level, user morphology, and human-prosthetic interaction. Hence, several technologies have been employed to meet the end user's needs, for example, advanced materials, control systems, electronics, energy management, signal processing, and artificial intelligence. This paper presents a systematic literature review of such technologies, with an analysis of the most significant papers, to identify the latest advances, challenges, and opportunities in developing lower limb prostheses. Powered prostheses for walking on different terrains were illustrated and examined, considering the kind of movement the device should perform together with its electronics, automatic control, and energy efficiency. Results show a lack of a specific, generalised structure for new developments to follow, gaps in energy management, and the need for smoother patient interaction. Additionally, this paper introduces the term Human Prosthetic Interaction (HPI), since no previous research has integrated this form of interaction into the communication between the artificial limb and the end-user. The main goal of this paper is to provide, based on the evidence found, a set of steps and components to be followed by new researchers and experts seeking to advance knowledge in this field.
Affiliation(s)
- Adan Domínguez-Ruiz
- Institute for the Future of Education, Tecnologico de Monterrey, Mexico City, México
- Esther Lugo-González
- Instituto de Electrónica y Mecatrónica, Universidad Tecnológica de la Mixteca, Huajuapan de León, Oaxaca, México
- Rocío Ambrocio-Delgado
- División de Estudios de Posgrado, Universidad Tecnológica de la Mixteca, Huajuapan de León, Oaxaca, México
- Ulises D. García
- CONACYT-CINVESTAV, Av. Instituto Politécnico Nacional 2508, col. San Pedro Zacatenco, Ciudad de México, México
- Ricardo López-Gutiérrez
- CONACYT-CINVESTAV, Av. Instituto Politécnico Nacional 2508, col. San Pedro Zacatenco, Ciudad de México, México
- Mariel Alfaro-Ponce
- Institute of Advanced Materials for Sustainable Manufacturing, Tecnologico de Monterrey, Mexico City, México
- Pedro Ponce
- Institute of Advanced Materials for Sustainable Manufacturing, Tecnologico de Monterrey, Mexico City, México
8
Murray R, Mendez J, Gabert L, Fey NP, Liu H, Lenzi T. Ambulation Mode Classification of Individuals with Transfemoral Amputation through A-Mode Sonomyography and Convolutional Neural Networks. Sensors (Basel) 2022; 22:9350. [PMID: 36502055] [PMCID: PMC9736589] [DOI: 10.3390/s22239350]
Abstract
Many people struggle with mobility impairments due to lower limb amputations. To participate in society, they need to be able to walk on a wide variety of terrains, such as stairs, ramps, and level ground. Current lower limb powered prostheses require different control strategies for varying ambulation modes, and use data from mechanical sensors within the prosthesis to determine which ambulation mode the user is in. However, it can be challenging to distinguish between ambulation modes. Efforts have been made to improve classification accuracy by adding electromyography information, but this requires a large number of sensors, has a low signal-to-noise ratio, and cannot distinguish between superficial and deep muscle activations. An alternative sensing modality, A-mode ultrasound, can detect and distinguish between changes in superficial and deep muscles. It has also shown promising results in upper limb gesture classification. Despite these advantages, A-mode ultrasound has yet to be employed for lower limb activity classification. Here we show that A-mode ultrasound can classify ambulation mode with comparable, and in some cases superior, accuracy to mechanical sensing. In this study, seven transfemoral amputee subjects walked on an ambulation circuit while wearing A-mode ultrasound transducers, IMU sensors, and their passive prosthesis. The circuit consisted of sitting, standing, level-ground walking, ramp ascent, ramp descent, stair ascent, and stair descent, and a spatial-temporal convolutional network was trained to continuously classify these seven activities. Offline continuous classification with A-mode ultrasound alone was able to achieve an accuracy of 91.8±3.4%, compared with 93.8±3.0% when using kinematic data alone. Combining kinematic and ultrasound data produced 95.8±2.3% accuracy. This suggests that A-mode ultrasound provides additional useful information about the user's gait beyond what is provided by mechanical sensors, and that it may be able to improve ambulation mode classification. By incorporating these sensors into powered prostheses, users may enjoy higher reliability for their prostheses, and more seamless transitions between ambulation modes.
Affiliation(s)
- Rosemarie Murray
- Department of Mechanical Engineering, and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Joel Mendez
- Department of Mechanical Engineering, and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Lukas Gabert
- Department of Mechanical Engineering, and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
- Nicholas P. Fey
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Honghai Liu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Shenzhen 518055, China
- School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
- Tommaso Lenzi
- Department of Mechanical Engineering, and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
9
Cheng S, Bolívar-Nieto E, Welker CG, Gregg RD. Modeling the Transitional Kinematics Between Variable-Incline Walking and Stair Climbing. IEEE Trans Med Robot Bionics 2022; 4:840-851. [PMID: 35991942] [PMCID: PMC9386740] [DOI: 10.1109/tmrb.2022.3185405]
Abstract
Although emerging powered prostheses can enable people with lower-limb amputation to walk and climb stairs over different task conditions (e.g., speeds and inclines), the control architecture typically uses a finite-state machine to switch between activity-specific controllers. Because these controllers focus on steady-state locomotion, powered prostheses abruptly switch between controllers during gait transitions rather than continuously adjusting leg biomechanics in synchrony with the users. This paper introduces a new framework for powered prosthesis control by modeling the lower-limb joint kinematics over a continuum of variable-incline walking and stair climbing, including steady-state and transitional gaits. Steady-state models for walking and stair climbing represent joint kinematics as continuous functions of gait phase, forward speed, and incline. Transition models interpolate kinematics as convex combinations of the two steady-state models, with an additional term to account for kinematics that fall outside their convex hull. The coefficients of this convex combination denote the similarity of the transitional kinematics to each steady-state mode, providing insight into how able-bodied individuals continuously transition between ambulation modes. Cross-validation demonstrates that the model predictions of untrained kinematics have errors within the range of physiological variability for all joints. Simulation results demonstrate the model's robustness to incline estimation and mode classification errors.
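The convex-combination idea above can be illustrated by fitting the blend coefficient for a single joint trajectory. This toy version omits the paper's residual term and its phase/speed/incline dependence; the trajectory samples are invented.

```python
def convex_coeffs(q_obs, q_walk, q_stair):
    """Fit alpha in q_obs ≈ alpha*q_walk + (1-alpha)*q_stair by 1-D least
    squares, clamped to [0, 1] so the blend remains a convex combination.
    Toy version of the transition model (no residual term)."""
    num = sum((qo - qs) * (qw - qs)
              for qo, qw, qs in zip(q_obs, q_walk, q_stair))
    den = sum((qw - qs) ** 2 for qw, qs in zip(q_walk, q_stair))
    alpha = max(0.0, min(1.0, num / den))
    return alpha, 1.0 - alpha

q_walk = [10.0, 30.0, 50.0]    # steady-state knee samples over the stride
q_stair = [20.0, 60.0, 90.0]
q_obs = [0.7 * a + 0.3 * b for a, b in zip(q_walk, q_stair)]  # transition stride
alpha, beta = convex_coeffs(q_obs, q_walk, q_stair)
print(round(alpha, 3), round(beta, 3))  # 0.7 0.3
```

The recovered coefficients quantify how similar a transition stride is to each steady-state mode, which is the interpretation the abstract highlights.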
Affiliation(s)
- Shihao Cheng
- Department of Mechanical Engineering and the Robotics Institute, University of Michigan, Ann Arbor, MI, 48109 USA
- Edgar Bolívar-Nieto
- Department of Electrical and Computer Engineering and the Robotics Institute, University of Michigan, Ann Arbor, MI, 48109 USA
- Cara Gonzalez Welker
- Department of Electrical and Computer Engineering and the Robotics Institute, University of Michigan, Ann Arbor, MI, 48109 USA
- Robert D Gregg
- Department of Electrical and Computer Engineering and the Robotics Institute, University of Michigan, Ann Arbor, MI, 48109 USA
10
Liu J, Tian Y, Geng G, Wang H, Song D, Li K, Zhou M, Cao X. UMA-Net: an unsupervised representation learning network for 3D point cloud classification. J Opt Soc Am A 2022; 39:1085-1094. [PMID: 36215539] [DOI: 10.1364/josaa.456153]
Abstract
The success of deep neural networks usually relies on massive amounts of manually labeled data, which is both expensive and difficult to obtain in many real-world datasets. In this paper, a novel unsupervised representation learning network, UMA-Net, is proposed for downstream 3D object classification. First, the multi-scale shell-based encoder is proposed, which is able to extract the local features from different scales in a simple yet effective manner. Second, an improved angular loss is presented to provide a good metric for measuring the similarity between local features and global representations. Subsequently, the self-reconstruction loss is introduced to ensure the global representations do not deviate from the input data. Additionally, the output point clouds are generated by the proposed cross-dim-based decoder. Finally, a linear classifier is trained using the global representations obtained from the pre-trained model. Furthermore, the performance of this model is evaluated on ModelNet40 and applied to the real-world 3D Terracotta Warriors fragments dataset. Experimental results demonstrate that our model achieves comparable performance and narrows the gap between unsupervised and supervised learning approaches in downstream object classification tasks. Moreover, it is the first attempt to apply unsupervised representation learning to 3D Terracotta Warriors fragments. We hope this success can provide a new avenue for the virtual protection of cultural relics.
11
Chen C, Zhang K, Leng Y, Chen X, Fu C. Unsupervised Sim-to-Real Adaptation for Environmental Recognition in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1350-1360. [PMID: 35584064] [DOI: 10.1109/tnsre.2022.3176410]
Abstract
Powered lower-limb prostheses with vision sensors are expected to restore amputees' mobility in various environments through supervised learning-based environmental recognition. Due to the sim-to-real gap arising from unstructured real-world terrains and the perspective and performance limitations of vision sensors, simulated data alone cannot meet the requirements of supervised learning. To mitigate this gap, this paper presents an unsupervised sim-to-real adaptation method to accurately classify five common real-world terrains (level ground, stair ascent, stair descent, ramp ascent, and ramp descent) and assist amputees' terrain-adaptive locomotion. In this study, augmented simulated environments are generated from a virtual camera perspective to better simulate the real world. Then, unsupervised domain adaptation is incorporated: the proposed adaptation network, consisting of a feature extractor and two classifiers, is trained on simulated data and unlabeled real-world data to minimize the domain shift between the source domain (simulation) and the target domain (real world). To interpret the classification mechanism visually, essential features of different terrains extracted by the network are visualized. The classification results in walking experiments indicate that the average accuracy across eight subjects reaches (98.06% ± 0.71%) and (95.91% ± 1.09%) in indoor and outdoor environments respectively, which is close to the result of supervised learning using both types of labeled data (98.37% and 97.05%). The promising results demonstrate that the proposed method is expected to realize accurate real-world environmental classification and successful sim-to-real transfer.
12
Li M, Zhong B, Lobaton E, Huang H. Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1103-1112. [PMID: 35442889] [DOI: 10.1109/tnsre.2022.3168796]
Abstract
Predicting the user's intended locomotion mode is critical for wearable robot control to assist the user's seamless transitions when walking on changing terrains. Although machine vision has recently proven to be a promising tool in identifying upcoming terrains in the travel path, existing approaches are limited to environment perception rather than human intent recognition that is essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses the human gaze (representing user intent) and machine vision (capturing environmental information) for accurate prediction of the user's locomotion mode. The system possesses multimodal visual information and recognizes user's locomotion intent in a complex scene, where multiple terrains are present. Additionally, based on the dynamic time warping algorithm, a fusion strategy was developed to align temporal predictions from individual modalities while producing flexible decisions on the timing of locomotion mode transition for wearable robot control. System performance was validated using experimental data collected from five participants, showing high accuracy (over 96% in average) of intent recognition and reliable decision-making on locomotion transition with adjustable lead time. The promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition of lower limb wearable robots.
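The dynamic time warping algorithm underlying the fusion strategy above can be sketched in its textbook form. This is the generic algorithm for aligning two temporally shifted sequences, not the authors' specific gaze-vision alignment code.

```python
def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two 1-D sequences:
    the minimum cumulative absolute difference over all monotone alignments."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A time-shifted copy aligns almost perfectly despite the lag,
# which is why DTW suits aligning gaze- and vision-based predictions.
print(dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]))  # 0.0
```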
13
Ma T, Zhu J, Zhang K, Xiao W, Liu H, Leng Y, Yu H, Fu C. Gait Phase Subdivision and Leg Stiffness Estimation during Stair Climbing. IEEE Trans Neural Syst Rehabil Eng 2022; 30:860-868. [PMID: 35349445] [DOI: 10.1109/tnsre.2022.3163130]
Abstract
Leg stiffness is considered a prevalent parameter used in data analysis of leg locomotion during different gaits, such as walking, running, and hopping. Quantification of the change in support leg stiffness during stair ascent and descent will enhance our understanding of complex stair climbing gait dynamics. The purpose of this study is to investigate a methodology to estimate leg stiffness during stair climbing and subdivide the stair climbing gait cycle. Leg stiffness was determined as the ratio of changes in ground reaction force in the direction of the support leg Fl (leg force) to the respective changes in length Ll during the entire stance phase. Eight subjects ascended and descended an instrumented staircase at different cadences. In this study, the changes of leg force and length (force-length curve) are described as the leg stiffness curve, the slope of which represents the normalized stiffness during stair climbing. The stair ascent and descent gait cycles were subdivided based on the negative and positive work fluctuations of the center-of-mass (CoM) work rate curve and the characteristics of leg stiffness. We found that the leg stiffness curve consists of several segments in which the force-length relationship was similarly linear and the stiffness value was relatively constant; the phase divided by the leg stiffness curve corresponds to the phase divided by the CoM work rate curve. The results of this study may guide biomimetic control strategies for a wearable lower-extremity robot for the entire stance phase during stair climbing.
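The stiffness definition above, the ratio of leg-force change to leg-length change over an approximately linear segment of stance, amounts to fitting a slope to the force-length curve. The sketch below does this with a least-squares fit on synthetic samples; the numbers are invented for illustration.

```python
def leg_stiffness(compressions, forces):
    """Least-squares slope of leg-axis force (N) versus leg compression (m)
    over one approximately linear segment of stance, i.e. the leg stiffness
    (N/m) for that segment."""
    n = len(compressions)
    mean_c = sum(compressions) / n
    mean_f = sum(forces) / n
    num = sum((c - mean_c) * (f - mean_f)
              for c, f in zip(compressions, forces))
    den = sum((c - mean_c) ** 2 for c in compressions)
    return num / den

# Leg shortens by 2 cm while leg-axis force rises by 300 N -> ~15 kN/m.
compressions = [0.00, 0.01, 0.02]
forces = [700.0, 850.0, 1000.0]
print(round(leg_stiffness(compressions, forces)))  # 15000
```

Fitting separate slopes to the segments where the force-length relationship is linear, rather than one slope for the whole stance, mirrors the paper's subdivision of the stiffness curve.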
14
Sharma A, Rombokas E. Improving IMU-based prediction of lower limb kinematics in natural environments using egocentric optical flow. IEEE Trans Neural Syst Rehabil Eng 2022; 30:699-708. [PMID: 35245198] [DOI: 10.1109/tnsre.2022.3156884]
Abstract
We seek to predict knee and ankle motion using wearable sensors. These predictions could serve as target trajectories for a lower limb prosthesis. In this manuscript, we investigate the use of egocentric vision for improving performance over kinematic wearable motion capture. We present an out-of-the-lab dataset of 23 healthy subjects navigating public classrooms, a large atrium, and stairs for a total of almost 12 hours of recording. The prediction task is difficult because the movements include avoiding obstacles, other people, idiosyncratic movements such as traversing doors, and individual choices in selecting the future path. We demonstrate that using vision improves the quality of the predicted knee and ankle trajectories, especially in congested spaces and when the visual environment provides information that does not appear simply in the movements of the body. Overall, including vision results in 7.9% and 7.0% improvement in root mean squared error of knee and ankle angle predictions respectively. The improvement in Pearson Correlation Coefficient for knee and ankle predictions is 1.5% and 12.3% respectively. We discuss particular moments where vision greatly improved, or failed to improve, the prediction performance. We also find that the benefits of vision can be enhanced with more data. Lastly, we discuss challenges of continuous estimation of gait in natural, out-of-the-lab datasets.
15
Woodward R, Simon A, Seyforth E, Hargrove L. Real-Time Adaptation of an Artificial Neural Network for Transfemoral Amputees Using a Powered Prosthesis. IEEE Trans Biomed Eng 2022; 69:1202-1211. [PMID: 34652995] [PMCID: PMC8988236] [DOI: 10.1109/tbme.2021.3120616]
Abstract
OBJECTIVE We evaluated a two-step method to improve control accuracy for a powered prosthetic leg using machine learning and adaptation, while reducing subject training time. METHODS First, information from three transfemoral amputees was grouped together, to create a baseline control system that was subsequently tested using data from a fourth subject (user-independent classification). Second, online adaptation was investigated, whereby the fourth subject's data were used to improve the baseline control system in real-time. Results were compared for user-independent classification and for user-dependent classification (data collected from and tested in the same subject), with and without adaptation. RESULTS The combination of a user-independent classifier with real-time adaptation based on a unique individual's data produced a classification error rate as low as 1.61% [0.15 standard error of the mean (SEM)] without requiring collection of initial training data from that individual. Training/testing using a subject's own data (user-dependent classification), combined with adaptation, resulted in a classification error rate of 0.9% [0.12 SEM], which was not significantly different (P > 0.05) but required, on average, an additional 7.52 hours [0.92 SEM] of training sessions. CONCLUSION AND SIGNIFICANCE We found that the combination of a user-independent dataset with adaptation resulted in error rates that were not significantly different from using a user-dependent dataset. Furthermore, this method eliminated the need for individual training sessions, saving many hours of data collection time.
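The two-step scheme described above, a user-independent classifier trained on pooled data from other subjects, then adapted online with the new user's incoming data, can be illustrated with a toy online logistic-regression model. This is a hypothetical NumPy sketch on synthetic "stride features", not the authors' neural network; the feature dimensions, offsets, and learning rates are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(w, X, y, lr):
    """One mini-batch gradient step of logistic regression."""
    p = sigmoid(X @ w)
    return w - lr * X.T @ (p - y) / len(y)

def with_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

rng = np.random.default_rng(0)

# Pooled strides from "other" subjects: the class boundary sits at x0 = 0.
X_pool = with_bias(rng.normal(size=(500, 3)))
y_pool = (X_pool[:, 0] > 0).astype(float)

# New subject: a subject-specific sensor offset shifts the boundary to x0 = 1.5.
X_new = rng.normal(size=(500, 3))
X_new[:, 0] += 1.5
y_new = (X_new[:, 0] > 1.5).astype(float)
X_new = with_bias(X_new)

# User-independent baseline classifier, trained only on pooled data.
w = np.zeros(4)
for _ in range(200):
    w = sgd_step(w, X_pool, y_pool, lr=0.1)
base_acc = np.mean((sigmoid(X_new[250:] @ w) > 0.5) == y_new[250:])

# Online adaptation: keep updating the same model on the new subject's strides.
for _ in range(3):                      # a few passes over the incoming stream
    for i in range(0, 250, 25):         # mini-batches of 25 strides
        w = sgd_step(w, X_new[i:i + 25], y_new[i:i + 25], lr=0.5)
adapt_acc = np.mean((sigmoid(X_new[250:] @ w) > 0.5) == y_new[250:])
```

Held-out accuracy on the new subject improves after adaptation, mirroring the paper's finding that a pooled baseline plus online updates can approach user-dependent performance.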
16
Laschowski B, McNally W, Wong A, McPhee J. Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks. Front Neurorobot 2022; 15:730965. [PMID: 35185507] [PMCID: PMC8855111] [DOI: 10.3389/fnbot.2021.730965]
Abstract
Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, thereby allowing for more accurate and robust high-level control decisions. In this study, we first reviewed the development of our “ExoNet” database—the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labeling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for image classification and automatic feature engineering, including: EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Finally, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called “NetScore,” which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference with mobile computing devices). Our comparative analyses showed that the EfficientNetB0 network achieves the highest test accuracy; VGG16 the fastest inference time; and MobileNetV2 the best NetScore, which can inform the optimal architecture design or selection depending on the desired performance. Overall, this study provides a large-scale benchmark and reference for next-generation environment classification systems for robotic leg prostheses and exoskeletons.
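The "NetScore" metric mentioned above balances accuracy against model size and compute. A commonly cited form of the metric is Ω = 20·log10(a^α / (p^β · m^γ)) with defaults α = 2, β = γ = 0.5, where a is top-1 accuracy in percent, p the parameter count in millions, and m the multiply-accumulate operations in billions; treat the exact coefficients in this sketch as assumptions rather than the paper's specification.

```python
import math

def netscore(accuracy_pct, params_millions, macs_billions,
             alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore-style metric: rewards accuracy, penalizes parameter count
    and multiply-accumulate operations. Coefficients follow commonly cited
    defaults and should be treated as tunable."""
    return 20.0 * math.log10(
        accuracy_pct ** alpha
        / (params_millions ** beta * macs_billions ** gamma)
    )
```

Under this form, a network with the same accuracy but fewer parameters or MACs scores strictly higher, which is how a compact model like MobileNetV2 can win on NetScore without having the highest accuracy.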
Affiliation(s)
- Brokoslaw Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- *Correspondence: Brokoslaw Laschowski
- William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
17
Ma T, Wang Y, Chen X, Chen C, Hou Z, Yu H, Fu C. A Piecewise Monotonic Smooth Phase Variable for Speed-Adaptation Control of Powered Knee-Ankle Prostheses. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3182536]
Affiliation(s)
- Teng Ma
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Yuxuan Wang
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Xinxing Chen
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Chuheng Chen
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Zhimin Hou
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- Haoyong Yu
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- Chenglong Fu
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
18
Mouchoux J, Bravo-Cabrera MA, Dosen S, Schilling AF, Markovic M. Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses. Front Neurorobot 2021; 15:768619. [PMID: 34975446] [PMCID: PMC8718752] [DOI: 10.3389/fnbot.2021.768619]
Abstract
Semi-autonomous (SA) control of upper-limb prostheses can improve the performance and decrease the cognitive burden of a user. In this approach, a prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with a volitional input of a user to compute the commands that are sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (i.e., sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens imposed on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, the volitional input to a specific degree of freedom (DoF) activated autonomous control of other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted in ten able-bodied subjects, and these subjects used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and resulted in less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and controller, the performance of SA modalities substantially decreased, even below the manual baseline. The sequential SA scheme was the least affected by these errors. The present study demonstrates that the specific approach for integrating volitional and autonomous control is indeed an important factor that significantly affects performance as well as physical and cognitive load, and should therefore be considered when designing SA prostheses.
Affiliation(s)
- Jérémy Mouchoux
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Miguel A. Bravo-Cabrera
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Strahinja Dosen
- Faculty of Medicine, Department of Health Science and Technology, Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
- Arndt F. Schilling
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Marko Markovic
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
19
Jamieson A, Murray L, Stankovic L, Stankovic V, Buis A. Human Activity Recognition of Individuals with Lower Limb Amputation in Free-Living Conditions: A Pilot Study. Sensors (Basel, Switzerland) 2021; 21:8377. [PMID: 34960463] [PMCID: PMC8704297] [DOI: 10.3390/s21248377]
Abstract
This pilot study aimed to investigate the implementation of supervised classifiers and a neural network for the recognition of activities carried out by Individuals with Lower Limb Amputation (ILLAs), as well as individuals without gait impairment, in free living conditions. Eight individuals with no gait impairments and four ILLAs wore a thigh-based accelerometer and walked on an improvised route in the vicinity of their homes across a variety of terrains. Various machine learning classifiers were trained and tested for recognition of walking activities. Additional investigations were made regarding the detail of the activity label versus classifier accuracy and whether the classifiers were capable of being trained exclusively on non-impaired individuals’ data and could recognize physical activities carried out by ILLAs. At a basic level of label detail, Support Vector Machines (SVM) and Long-Short Term Memory (LSTM) networks were able to acquire 77–78% mean classification accuracy, which fell with increased label detail. Classifiers trained on individuals without gait impairment could not recognize activities carried out by ILLAs. This investigation presents the groundwork for a HAR system capable of recognizing a variety of walking activities, both for individuals with no gait impairments and ILLAs.
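A windowed-feature pipeline of the kind used for thigh-accelerometer activity recognition (fixed-length windows, simple time-domain features, an SVM classifier) might look like the sketch below on synthetic data. The sampling rate, window length, features, and labels are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(acc, fs=50, win_s=2.0):
    """Split a 1-D accelerometer trace into fixed windows and compute
    simple time-domain features (mean, std, range) per window."""
    n = int(fs * win_s)
    windows = acc[: len(acc) // n * n].reshape(-1, n)
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        windows.max(axis=1) - windows.min(axis=1),
    ])

rng = np.random.default_rng(1)
walking = rng.normal(0.0, 1.0, 5000)    # high-variance signal (walking)
standing = rng.normal(0.0, 0.05, 5000)  # low-variance signal (standing)

X = np.vstack([window_features(walking), window_features(standing)])
y = np.array([1] * 50 + [0] * 50)       # 1 = walking, 0 = standing

clf = SVC(kernel="rbf").fit(X[::2], y[::2])  # train on even windows
acc = clf.score(X[1::2], y[1::2])            # test on held-out odd windows
```

On real free-living data, the study's harder finding, that models trained only on non-impaired individuals fail to generalize to ILLAs, would show up here as a train/test split across subject groups rather than across windows.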
Affiliation(s)
- Alexander Jamieson
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
- Laura Murray
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
- Lina Stankovic
- Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
- Vladimir Stankovic
- Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
- Arjan Buis
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
20
Chen X, Zhang K, Liu H, Leng Y, Fu C. A Probability Distribution Model-Based Approach for Foot Placement Prediction in the Early Swing Phase With a Wearable IMU Sensor. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2595-2604. [PMID: 34874865] [DOI: 10.1109/tnsre.2021.3133656]
Abstract
Predicting the next foot placement of humans during walking can help improve compliant interactions between humans and walking aid robots. Previous studies have focused on foot placement estimation with wearable inertial sensors after heel-strike, but few have predicted foot placements in advance during the early swing phase. In this study, a Bayesian inference-based foot placement prediction approach was proposed. Possible foot placements were modeled as a probability distribution grid map. With selected foot motion feature events detected sequentially in the early swing phase, the foot placement probability map could be updated iteratively using the feature models we built. The weighted center of the probability distribution was regarded as the predicted foot placement. Prediction errors were evaluated with collected walking data sets. When testing with the data from inertial measurement units, the prediction errors were (5.46 cm ± 10.89 cm, -0.83 cm ± 10.56 cm) for cross-velocity walking data and (-4.99 cm ± 12.31 cm, -11.27 cm ± 7.74 cm) for cross-subject-cross-velocity walking data. The results were comparable to previous works yet the prediction could be made earlier. For the subject who walked with more stable gaits, the prediction error can be further decreased. The proposed foot placement prediction approach can be utilized to help walking aid robots adjust their pose before each heel-strike event during walking, which will make human-robot interactions more compliant. This study is also expected to inspire additional probabilistic gait analysis works.
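The core mechanism above, an iteratively updated probability grid over possible foot placements whose weighted center is the prediction, can be sketched as below. The Gaussian feature-likelihood models, grid extents, and event parameters are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One iterative Bayesian update of the foot-placement grid."""
    post = prior * likelihood
    return post / post.sum()

def weighted_center(grid, xs, ys):
    """Probability-weighted center of the grid = predicted foot placement."""
    px = (grid.sum(axis=0) * xs).sum()  # marginal over x
    py = (grid.sum(axis=1) * ys).sum()  # marginal over y
    return px, py

xs = np.linspace(0.0, 1.0, 51)   # forward direction (m)
ys = np.linspace(-0.5, 0.5, 51)  # lateral direction (m)
XX, YY = np.meshgrid(xs, ys)

grid = np.full((51, 51), 1.0 / 51**2)  # uniform prior over placements

# Each feature event detected in the early swing phase contributes a
# likelihood centered on the placement it suggests (values illustrative).
for cx, cy, s in [(0.6, 0.0, 0.2), (0.65, 0.05, 0.1)]:
    like = np.exp(-((XX - cx) ** 2 + (YY - cy) ** 2) / (2 * s**2))
    grid = bayes_update(grid, like)

pred = weighted_center(grid, xs, ys)
```

Each successive event sharpens the posterior, so the prediction is available and refined before heel-strike, which is the point of predicting in the early swing phase.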
21
Babič J, Laffranchi M, Tessari F, Verstraten T, Novak D, Šarabon N, Ugurlu B, Peternel L, Torricelli D, Veneman JF. Challenges and solutions for application and wider adoption of wearable robots. Wearable Technologies 2021; 2:e14. [PMID: 38486636] [PMCID: PMC10936284] [DOI: 10.1017/wtc.2021.13]
Abstract
The science and technology of wearable robots are steadily advancing, and the use of such robots in our everyday life appears to be within reach. Nevertheless, widespread adoption of wearable robots should not be taken for granted, especially since many recent attempts to bring them to real-life applications resulted in mixed outcomes. The aim of this article is to address the current challenges that are limiting the application and wider adoption of wearable robots that are typically worn over the human body. We categorized the challenges into mechanical layout, actuation, sensing, body interface, control, human-robot interfacing and coadaptation, and benchmarking. For each category, we discuss specific challenges and the rationale for why solving them is important, followed by an overview of relevant recent works. We conclude with an opinion that summarizes possible solutions that could contribute to the wider adoption of wearable robots.
Affiliation(s)
- Jan Babič
- Laboratory for Neuromechanics and Biorobotics, Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia
- Matteo Laffranchi
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
- Federico Tessari
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
- Tom Verstraten
- Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
- Domen Novak
- University of Wyoming, Laramie, Wyoming, USA
- Nejc Šarabon
- Faculty of Health Sciences, University of Primorska, Izola, Slovenia
- Barkan Ugurlu
- Biomechatronics Laboratory, Faculty of Engineering, Ozyegin University, Istanbul, Turkey
- Luka Peternel
- Delft Haptics Lab, Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- Diego Torricelli
- Cajal Institute, Spanish National Research Council, Madrid, Spain
22
Laschowski B, McNally W, Wong A, McPhee J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4631-4635. [PMID: 34892246] [DOI: 10.1109/embc46164.2021.9630064]
Abstract
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database - the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
23
Zhang K, Luo J, Xiao W, Zhang W, Liu H, Zhu J, Lu Z, Rong Y, de Silva CW, Fu C. A Subvision System for Enhancing the Environmental Adaptability of the Powered Transfemoral Prosthesis. IEEE Trans Cybern 2021; 51:3285-3297. [PMID: 32203049] [DOI: 10.1109/tcyb.2020.2978216]
Abstract
Visual information is indispensable to human locomotion in complex environments. Although amputees can perceive environmental information with their eyes, they cannot transmit the neural signals to prostheses directly. To augment human-prosthesis interaction, this article introduces a subvision system that can perceive environments actively, help control the powered prosthesis predictively, and thereby reconstruct a complete vision-locomotion loop for transfemoral amputees. By using deep learning, the subvision system can classify common static terrains (e.g., level ground, stairs, and ramps) and estimate the corresponding motion intents of amputees with high accuracy (98%). After applying the subvision system to the locomotion control system, the powered prosthesis can help amputees achieve nonrhythmic locomotion naturally, including switching between different locomotion modes and crossing obstacles. The subvision system can also recognize dynamic objects, such as an unexpected obstacle approaching the amputee, and assist in generating an agile obstacle-avoidance reflex movement. The experimental results demonstrate that the subvision system can cooperate with the powered prosthesis to reconstruct a complete vision-locomotion loop, which enhances the environmental adaptability of the amputees.
24
Zhang K, Liu H, Fan Z, Chen X, Leng Y, de Silva CW, Fu C. Foot Placement Prediction for Assistive Walking by Fusing Sequential 3D Gaze and Environmental Context. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062003]
25
Laschowski B, McNally W, Wong A, McPhee J. ExoNet Database: Wearable Camera Images of Human Locomotion Environments. Front Robot AI 2021; 7:562061. [PMID: 33501327] [PMCID: PMC7805730] [DOI: 10.3389/frobt.2020.562061]
Affiliation(s)
- Brock Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
26
Labarrière F, Thomas E, Calistri L, Optasanu V, Gueugnon M, Ornetti P, Laroche D. Machine Learning Approaches for Activity Recognition and/or Activity Prediction in Locomotion Assistive Devices: A Systematic Review. Sensors (Basel, Switzerland) 2020; 20:6345. [PMID: 33172158] [PMCID: PMC7664393] [DOI: 10.3390/s20216345]
Abstract
Locomotion assistive devices equipped with a microprocessor can potentially adapt their behavior automatically when the user transitions from one locomotion mode to another. Many developments in the field have come from machine learning driven controllers on locomotion assistive devices that recognize or predict the current locomotion mode or the upcoming one. This review synthesizes the machine learning algorithms designed to recognize or to predict a locomotion mode in order to automatically adapt the behavior of a locomotion assistive device. A systematic search was conducted in the Web of Science and MEDLINE databases (as well as in the retrieved papers) to identify articles published between 1 January 2000 and 31 July 2020. This systematic review is reported in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines and is registered on Prospero (CRD42020149352). Study characteristics, sensors and algorithms used, accuracy, and robustness were also summarized. In total, 1343 records were identified and 58 studies were included in this review. The most frequently investigated experimental condition was level-ground walking, along with stair and ramp ascent/descent activities. The machine learning algorithms implemented in the included studies reached global mean accuracies of around 90%. However, the robustness of these algorithms still needs to be evaluated more broadly, notably in everyday-life conditions. We also propose some guidelines for homogenizing future reports.
Affiliation(s)
- Floriant Labarrière
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Elizabeth Thomas
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Laurine Calistri
- PROTEOR, 6 rue de la Redoute, CS 37833, CEDEX 21078 Dijon, France
- Virgil Optasanu
- ICB, UMR 6303 CNRS, Université de Bourgogne Franche-Comté, 9 Av. Alain Savary, CEDEX 21078 Dijon, France
- Mathieu Gueugnon
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Paul Ornetti
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Department of Rheumatology, Dijon University Hospital, 21079 Dijon, France
- Davy Laroche
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
27
Rast FM, Labruyère R. Systematic review on the application of wearable inertial sensors to quantify everyday life motor activity in people with mobility impairments. J Neuroeng Rehabil 2020; 17:148. [PMID: 33148315] [PMCID: PMC7640711] [DOI: 10.1186/s12984-020-00779-y]
Abstract
BACKGROUND Recent advances in wearable sensor technologies enable objective and long-term monitoring of motor activities in a patient's habitual environment. People with mobility impairments require appropriate data processing algorithms that deal with their altered movement patterns and determine clinically meaningful outcome measures. Over the years, a large variety of algorithms have been published and this review provides an overview of their outcome measures, the concepts of the algorithms, the type and placement of required sensors as well as the investigated patient populations and measurement properties. METHODS A systematic search was conducted in MEDLINE, EMBASE, and SCOPUS in October 2019. The search strategy was designed to identify studies that (1) involved people with mobility impairments, (2) used wearable inertial sensors, (3) provided a description of the underlying algorithm, and (4) quantified an aspect of everyday life motor activity. The two review authors independently screened the search hits for eligibility and conducted the data extraction for the narrative review. RESULTS Ninety-five studies were included in this review. They covered a large variety of outcome measures and algorithms which can be grouped into four categories: (1) maintaining and changing a body position, (2) walking and moving, (3) moving around using a wheelchair, and (4) activities that involve the upper extremity. The validity or reproducibility of these outcomes measures was investigated in fourteen different patient populations. Most of the studies evaluated the algorithm's accuracy to detect certain activities in unlabeled raw data. The type and placement of required sensor technologies depends on the activity and outcome measure and are thoroughly described in this review. The usability of the applied sensor setups was rarely reported. CONCLUSION This systematic review provides a comprehensive overview of applications of wearable inertial sensors to quantify everyday life motor activity in people with mobility impairments. It summarizes the state-of-the-art, it provides quick access to the relevant literature, and it enables the identification of gaps for the evaluation of existing and the development of new algorithms.
Affiliation(s)
- Fabian Marcel Rast
- Swiss Children’s Rehab, University Children’s Hospital Zurich, Mühlebergstrasse 104, 8910 Affoltern am Albis, Switzerland
- Children’s Research Center, University Children’s Hospital of Zurich, University of Zurich, Zurich, Switzerland
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Rob Labruyère
- Swiss Children’s Rehab, University Children’s Hospital Zurich, Mühlebergstrasse 104, 8910 Affoltern am Albis, Switzerland
- Children’s Research Center, University Children’s Hospital of Zurich, University of Zurich, Zurich, Switzerland
28
Chen C, Zhang Y, Li Y, Wang Z, Liu Y, Cao W, Wu X. Iterative Learning Control for a Soft Exoskeleton with Hip and Knee Joint Assistance. Sensors (Basel, Switzerland) 2020; 20:4333. [PMID: 32759646] [PMCID: PMC7435451] [DOI: 10.3390/s20154333]
Abstract
Walking on different terrains involves different biomechanics, which motivates the development of exoskeletons that assist walking according to the terrain type. This paper presents the design of a lightweight soft exoskeleton that simultaneously assists multiple joints of the lower limb. A single system assists both the hip and knee joints: the assistance force is applied directly to hip flexion and knee extension, and indirectly to hip extension. Based on the biological torque of human walking at three different slopes, a novel strategy is developed to improve the performance of assistance. A parameter-optimal iterative learning control (POILC) method is introduced to reduce the error caused by differences between the wearing position and the biological features of individual wearers. To obtain the metabolic rate, three subjects walked on a treadmill for 10 min on each terrain at a speed of 4 km/h, both with and without the soft exoskeleton. Results showed that the reduction in metabolic rate increased with the slope of the terrain. Compared to the condition of not wearing the soft exoskeleton, the reductions in net metabolic rate on downhill, flat-ground, and uphill walking were 9.86%, 12.48%, and 22.08%, respectively, with corresponding absolute values of 0.28 W/kg, 0.72 W/kg, and 1.60 W/kg.
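The POILC update described in this abstract can be illustrated with a minimal sketch: at each stride, the feedforward assistance profile is corrected by the previous stride's tracking error, with a learning gain chosen to minimize the predicted next-stride error. The plant model, reference profile, and weights below are toy values, not the exoskeleton dynamics or parameters from the paper.

```python
import numpy as np

# Lifted linear plant y = G u over one gait cycle (N samples). G is a toy
# lower-triangular impulse-response matrix, not the exoskeleton dynamics.
N = 50
G = np.tril(0.1 * np.ones((N, N)))

r = np.sin(np.linspace(0, np.pi, N))  # desired assistance profile (toy)
u = np.zeros(N)                       # feedforward input, refined per stride
w = 1e-3                              # penalty weight on the learning gain
errors = []

for stride in range(20):
    e = r - G @ u                     # tracking error of the current stride
    errors.append(np.linalg.norm(e))
    Ge = G @ e
    # Parameter-optimal gain: minimizes ||e - beta*G*e||^2 + w*beta^2,
    # i.e., the predicted next-stride error plus a gain penalty.
    beta = (e @ Ge) / (w + Ge @ Ge)
    u = u + beta * e                  # ILC update applied to the next stride
```

Because the gain is re-optimized every stride, the error norm is non-increasing from stride to stride for this plant, which is the property that lets such a scheme absorb wearer-to-wearer differences.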
Affiliation(s)
- Chunjie Chen
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- ShenZhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Yu Zhang
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Harbin Institute of Technology, School of Mechanical Engineering and Automation, Shenzhen 518055, China;
- Yanjie Li
- Harbin Institute of Technology, School of Mechanical Engineering and Automation, Shenzhen 518055, China;
- Zhuo Wang
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Harbin Institute of Technology, School of Mechanical Engineering and Automation, Shenzhen 518055, China;
- Yida Liu
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- ShenZhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Wujing Cao
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- ShenZhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Xinyu Wu
- CAS Key Laboratory of Human-Machine-Intelligence Synergic Systems, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China; (C.C.); (Y.Z.); (Z.W.); (Y.L.); (W.C.)
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- ShenZhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
29
Tschiedel M, Russold MF, Kaniusas E. Relying on more sense for enhancing lower limb prostheses control: a review. J Neuroeng Rehabil 2020; 17:99. [PMID: 32680530 PMCID: PMC7368691 DOI: 10.1186/s12984-020-00726-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 07/06/2020] [Indexed: 12/02/2022] Open
Abstract
Modern lower limb prostheses can replace missing body parts and improve patients' quality of life. However, missing environmental information often makes seamless adaptation to transitions between different forms of locomotion challenging. The aim of this review is to identify the progress made in this area over the last decade, addressing two main questions: which types of novel sensors for environmental awareness are used in lower limb prostheses, and how they enhance device control towards more comfort and safety. A literature search was conducted on two Internet databases, PubMed and IEEE Xplore. Based on the criteria for inclusion and exclusion, 32 papers were selected for the review analysis, 18 of which relate to explicit environmental sensing and 14 to implicit environmental sensing. Characteristics were discussed with a focus on update rate and resolution as well as on computing power and energy consumption. Our analysis identified numerous state-of-the-art sensors, some of which are able to "look through" clothing or cosmetic covers. Five control categories were identified that describe how "next generation prostheses" could be extended. There is a clear tendency towards object- and terrain-prediction concepts using all types of distance- and depth-based sensors. Other advanced strategies, such as bilateral gait segmentation from unilateral sensors, could also play an important role in movement-dependent control applications. The studies demonstrated promising accuracy in well-controlled laboratory settings, but it is unclear how the systems will perform in real-world environments, both indoors and outdoors. At the moment, the main limitation proves to be the necessity of an unobstructed field of view.
Affiliation(s)
- Michael Tschiedel
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, Vienna, 1040 Austria
- Global Research, Ottobock Healthcare Products GmbH, Vienna, 1110 Austria
- Eugenijus Kaniusas
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, Vienna, 1040 Austria
30
Gao F, Liu G, Liang F, Liao WH. IMU-Based Locomotion Mode Identification for Transtibial Prostheses, Orthoses, and Exoskeletons. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1334-1343. [PMID: 32286999 DOI: 10.1109/tnsre.2020.2987155] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Active transtibial prostheses, orthoses, and exoskeletons hold the promise of improving the mobility of individuals with lower-limb impairment or amputation. Locomotion mode identification (LMI) is essential for these devices to precisely reproduce the required function on different terrains. In this study, a terrain-geometry-based LMI algorithm is proposed. Built environments are typically constructed according to the inclination grade of the ground: when the inclination angle is between 7 and 15 degrees, the environment is usually a ramp, while at around 30 degrees it is typically equipped with stairs. The locomotion mode/terrain can therefore be classified by the inclination grade. Moreover, human feet move along the surface of the terrain, minimizing the energy expenditure of transporting the lower limbs while achieving the required foot clearance. Hence, the foot trajectory estimated by an IMU was used to derive the inclination grade of the traversed terrain and thereby identify the locomotion mode. In addition, a novel trigger condition (an elliptical boundary) is proposed to activate the decision-making of the LMI algorithm before the next foot strike, leaving enough time for preparatory work in the swing phase: when the estimated foot trajectory crosses the elliptical boundary, the decision-making is executed. Experimental results show that the average accuracy for three healthy subjects and three below-knee amputees is 98.5% across five locomotion modes: level-ground walking, up slope, down slope, stair descent, and stair ascent. Moreover, all locomotion modes can be identified before the next foot strike.
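The inclination-grade classification and elliptical trigger condition described above can be sketched as follows; the thresholds, ellipse semi-axes, and function names are illustrative assumptions, not values taken from the paper (which also includes the IMU trajectory-estimation step omitted here).

```python
import math

def classify_terrain(incline_deg):
    """Map an estimated ground-inclination grade to a locomotion mode.
    Thresholds are toy values inspired by the description above."""
    a = abs(incline_deg)
    if a < 7:
        return "level-ground walking"
    elif a <= 15:  # 7-15 degrees: built environments are usually ramps
        return "up slope" if incline_deg > 0 else "down slope"
    else:          # steeper grades (~30 degrees) are usually stairs
        return "stair ascent" if incline_deg > 0 else "stair descent"

def crossed_trigger(x, z, a=0.6, b=0.25, cx=0.0, cz=0.0):
    """Elliptical trigger: fire the classifier once the swing-phase foot
    position (x forward, z vertical, relative to the stance foot) leaves
    an ellipse, leaving time before the next foot strike. Semi-axes a, b
    are illustrative values in metres."""
    return ((x - cx) / a) ** 2 + ((z - cz) / b) ** 2 > 1.0

# Example: a foot displaced 0.7 m forward and 0.12 m up during swing
if crossed_trigger(0.7, 0.12):
    mode = classify_terrain(math.degrees(math.atan2(0.12, 0.7)))
```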
31
Zhang K, Wang J, de Silva CW, Fu C. Unsupervised Cross-Subject Adaptation for Predicting Human Locomotion Intent. IEEE Trans Neural Syst Rehabil Eng 2020; 28:646-657. [PMID: 31944980 DOI: 10.1109/tnsre.2020.2966749] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Accurately predicting human locomotion intent is beneficial for controlling wearable robots and for assisting humans to walk smoothly on different terrains. Traditional methods for predicting human locomotion intent require collecting and labeling human signals and training specific classifiers for each new subject, which places a heavy burden on both the subject and the researcher. To address this issue, the present study liberates the subject and the researcher from labeling a large amount of data by incorporating an unsupervised cross-subject adaptation method that predicts the locomotion intent of a target subject whose signals are not labeled. The adaptation is realized by designing two classifiers to maximize the classification discrepancy and a feature generator to align the hidden features of the source and target subjects to minimize that discrepancy. A neural network is trained on the labeled training set of the source subjects and the unlabeled training set of the target subjects, and is then validated and tested on the validation and test sets of the target subjects. Experimental results in the leave-one-subject-out test indicate that the present method can classify the locomotion intent and activities of target subjects at average accuracies of 93.60% and 94.59% on two public datasets. The present method increases the user independence of the classifiers, but it has been evaluated only on data from subjects without disabilities. Its potential to predict the locomotion intent of subjects with disabilities and to control wearable robots will be evaluated in future work.
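The classification discrepancy at the core of this adversarial adaptation scheme is commonly measured as the distance between the two classifiers' output distributions on unlabeled target samples. A minimal sketch of that quantity (function names assumed; the full three-step adversarial training loop is omitted) might look like:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discrepancy(logits_a, logits_b):
    """Mean L1 distance between the two classifiers' predicted
    distributions on (unlabeled) target samples. The classifiers are
    trained to maximize this on target data, while the feature
    generator is trained to minimize it."""
    return float(np.mean(np.abs(softmax(logits_a) - softmax(logits_b))))

# Two classifiers agreeing on a target sample give zero discrepancy;
# disagreement yields a positive value for the generator to reduce.
agree = discrepancy(np.array([[2.0, 0.0, 0.0]]), np.array([[2.0, 0.0, 0.0]]))
differ = discrepancy(np.array([[2.0, 0.0, 0.0]]), np.array([[0.0, 2.0, 0.0]]))
```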
32
Krausz NE, Hargrove LJ. A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices. SENSORS (BASEL, SWITZERLAND) 2019; 19:E5238. [PMID: 31795240 PMCID: PMC6928925 DOI: 10.3390/s19235238] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 11/04/2019] [Accepted: 11/21/2019] [Indexed: 11/24/2022]
Abstract
Teleception is defined as sensing that occurs remotely, with no physical contact with the object being sensed. To emulate innate control systems of the human body, a control system for a semi- or fully autonomous assistive device not only requires feedforward models of desired movement, but also the environmental or contextual awareness that could be provided by teleception. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example, for performing hand grasp prediction or endpoint control of an arm assistive device; and gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize the recent and ongoing published work in this promising new area of research.
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (Formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA;
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (Formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA;
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
33
Krausz NE, Hu BH, Hargrove LJ. Subject- and Environment-Based Sensor Variability for Wearable Lower-Limb Assistive Devices. SENSORS (BASEL, SWITZERLAND) 2019; 19:E4887. [PMID: 31717471 PMCID: PMC6891559 DOI: 10.3390/s19224887] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 10/29/2019] [Accepted: 11/06/2019] [Indexed: 02/08/2023]
Abstract
Significant research effort has gone toward the development of powered lower-limb prostheses that control power during gait. These devices use forward prediction based on electromyography (EMG), kinetics, and kinematics to inform the prosthesis which locomotion activity is desired. Unfortunately, these predictions can have substantial errors, which can potentially lead to trips or falls. It is hypothesized that one reason for the significant prediction errors in current control systems for powered lower-limb prostheses is the inter- and intra-subject variability of the data sources used for prediction. Environmental data recorded from a depth sensor worn on a belt should have less variability across trials and subjects than kinetic, kinematic, and EMG data, and their addition is therefore proposed. The variability of each normalized data source was analyzed to determine the intra-activity and intra-subject variability of each sensor modality. Measures of separability, repeatability, clustering, and overall desirability were then computed. Results showed that combining vision, EMG, IMU (inertial measurement unit), and goniometer features yielded the best separability, repeatability, clustering, and desirability across subjects and activities. This is likely to be useful in a future forward predictor that incorporates vision-based environmental data for powered lower-limb prostheses and exoskeletons.
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; (B.H.H.); (L.J.H.)
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Blair H. Hu
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; (B.H.H.); (L.J.H.)
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; (B.H.H.); (L.J.H.)
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
34
Zhang K, Zhang W, Xiao W, Liu H, De Silva CW, Fu C. Sequential Decision Fusion for Environmental Classification in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1780-1790. [PMID: 31425118 DOI: 10.1109/tnsre.2019.2935765] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Powered prostheses are effective for helping amputees walk in a single environment, but these devices are inconvenient to use in complex environments. To help amputees walk in complex environments, prostheses need to understand the motion intent of amputees. Recently, researchers have found that vision sensors can be utilized to classify environments and predict the motion intent of amputees. Although previous studies have been able to classify environments accurately in offline analysis, the corresponding time delay has not been considered. To increase the accuracy and decrease the time delay of environmental classification, the present paper proposes a new decision fusion method in which the sequential decisions of environmental classification are fused by constructing a hidden Markov model and designing a transition probability matrix. The developed method is evaluated by inviting five able-bodied subjects and three amputees to perform indoor and outdoor walking experiments. The results indicate that the proposed method can classify environments with accuracy improvements of 1.01% (indoor) and 2.48% (outdoor) over the previous voting method when a delay of only one frame is incorporated. The present method also achieves higher classification accuracy than recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) methods. At the same classification accuracy, the present method decreases the time delay by 67 ms (indoor) and 733 ms (outdoor) in comparison to the previous voting method. Besides classifying environments, the proposed decision fusion method may be able to optimize sequential predictions of human motion intent.
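The sequential fusion idea (per-frame classifier decisions weighted by a transition probability matrix) can be sketched with a standard HMM filtering recursion; the states, matrix values, and classifier scores below are illustrative, not the paper's.

```python
import numpy as np

# Hand-designed transition matrix: self-transitions dominate because the
# environment rarely changes between consecutive camera frames.
states = ["level", "up-stairs", "down-stairs"]
T = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.10, 0.05, 0.85]])

def fuse(frame_probs):
    """frame_probs: (n_frames, n_states) per-frame classifier posteriors.
    Returns the fused state decision per frame via the HMM forward
    (filtering) recursion: predict with T, then weight by the evidence."""
    belief = np.full(len(states), 1.0 / len(states))
    decisions = []
    for p in frame_probs:
        belief = p * (T.T @ belief)   # predict, then weight by evidence
        belief /= belief.sum()        # renormalize the posterior
        decisions.append(int(np.argmax(belief)))
    return decisions

# A spurious one-frame "down-stairs" vote amid level walking is absorbed
# by the temporal prior instead of flipping the decision.
noisy = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.1, 0.2],
                  [0.2, 0.1, 0.7],   # outlier frame
                  [0.8, 0.1, 0.1]])
fused = fuse(noisy)
```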
35
Hao M, Chen K, Fu C. Smoother-Based 3-D Foot Trajectory Estimation Using Inertial Sensors. IEEE Trans Biomed Eng 2019; 66:3534-3542. [PMID: 30932822 DOI: 10.1109/tbme.2019.2907322] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Measuring three-dimensional (3-D) foot trajectories with foot-worn inertial measurement units (IMUs) is essential for a variety of applications, such as gait analysis and fall risk assessment. An IMU-based foot trajectory is usually reconstructed by double-integrating the global-frame acceleration, in which signal drifts accumulate and lead to unbounded error growth. To reduce drift errors, a smoother-based method is proposed in this paper. METHODS The smoother-based method not only corrects the initial values of the integrations but also smooths the integration processes through a backward update. Both the orientation estimation and the velocity estimation are improved by this concept, which contributes to the improvement of the trajectory estimation. RESULTS The final results are compared with an optical motion capture system as reference. Accuracy is evaluated with ground-level walking of nine adult participants, 2302 strides in total. Compared with the estimation without our method, errors are reduced by 62% for stride length and 44% for stride width, with final errors of [Formula: see text] and [Formula: see text], respectively. CONCLUSION/SIGNIFICANCE The results prove that our method can improve the accuracy of 3-D foot trajectory estimation. Furthermore, this smoother-based method reduces drift-related errors when estimating trajectories, which will allow its application to extend to other IMU-based measurements.
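As a rough 1-D illustration of why drift correction matters in this setting, the sketch below forward-integrates a biased acceleration signal and removes the accumulated velocity drift using the known zero velocity at the next foot-flat instant, via a backward linear correction. This is a crude stand-in for the paper's smoother (which also corrects orientation and runs a full backward update); all signals and the bias value are synthetic.

```python
import numpy as np

def integrate_stride(acc, dt, zupt_end=True):
    """Integrate acceleration to position over one stride. If zupt_end is
    set, exploit the zero-velocity constraint at the next foot-flat by
    subtracting a linearly growing share of the terminal velocity drift."""
    v = np.cumsum(acc) * dt                              # forward integration
    if zupt_end:
        v = v - v[-1] * np.arange(1, len(v) + 1) / len(v)
    return np.cumsum(v) * dt                             # second integration

dt = 0.01
t = np.arange(0.0, 1.0, dt)
true_acc = np.pi * np.sin(2 * np.pi * t)   # synthetic swing acceleration;
                                           # the true stride length is 0.5 m
biased = true_acc + 0.05                   # constant accelerometer bias
p_raw = integrate_stride(biased, dt, zupt_end=False)
p_fix = integrate_stride(biased, dt, zupt_end=True)
# p_fix[-1] approaches the true 0.5 m stride length, while p_raw[-1]
# carries the drift accumulated from the uncorrected bias.
```

A constant bias produces a velocity error that grows linearly in time, so the backward linear correction removes it almost exactly; the paper's smoother generalizes this idea to time-varying drifts and 3-D orientation.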