1. Liu Y, Chen C, Wang Z, Tian Y, Wang S, Xiao Y, Yang F, Wu X. Continuous Locomotion Mode and Task Identification for an Assistive Exoskeleton Based on Neuromuscular-Mechanical Fusion. Bioengineering (Basel) 2024; 11:150. [PMID: 38391636] [PMCID: PMC10886133] [DOI: 10.3390/bioengineering11020150]
Abstract
Human walking parameters vary considerably with terrain, speed, and load. Current assistive exoskeletons focus on recognizing the locomotion terrain while ignoring the identification of locomotion tasks, which are equally essential for control strategies. The aim of this study was to develop an interface for locomotion mode and task identification based on a neuromuscular-mechanical fusion algorithm. The modes of level and incline walking and the tasks of varying speed and load were explored with seven able-bodied participants. Classifying locomotion continuously produced a stream of assistive decisions that supports timely exoskeleton control. We investigated the optimal algorithm, feature set, window increment, window length, and robustness for precise identification and for synchronization between the exoskeleton assistive force and human limb movements (human-machine collaboration). The best recognition results were obtained with a support vector machine, a root mean square/waveform length/acceleration feature set, a window length of 170, and a window increment of 20. The average identification accuracy reached 98.7% ± 1.3%. These results suggest that fused surface electromyography and acceleration signals can be used effectively for locomotion mode and task identification. This study contributes to the development of locomotion mode and task recognition as well as exoskeleton control for seamless transitions.
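To make the pipeline described above concrete, here is a minimal, illustrative sketch (not the authors' code) of sliding-window feature extraction with root mean square and waveform length features followed by a support vector machine; the channel count, window parameters (taken as samples here), and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(signals, win_len=170, win_inc=20):
    """signals: (n_samples, n_channels). Returns (n_windows, 2*n_channels)."""
    feats = []
    for start in range(0, signals.shape[0] - win_len + 1, win_inc):
        w = signals[start:start + win_len]
        rms = np.sqrt(np.mean(w ** 2, axis=0))           # root mean square
        wl = np.sum(np.abs(np.diff(w, axis=0)), axis=0)  # waveform length
        feats.append(np.concatenate([rms, wl]))
    return np.asarray(feats)

# Synthetic stand-in for sEMG + accelerometer recordings of two tasks.
rng = np.random.default_rng(0)
x0 = extract_features(rng.normal(0.0, 1.0, (2000, 8)))
x1 = extract_features(rng.normal(0.0, 2.0, (2000, 8)))
X = np.vstack([x0, x1])
y = np.r_[np.zeros(len(x0)), np.ones(len(x1))]

clf = SVC(kernel="rbf").fit(X, y)
print("train accuracy:", clf.score(X, y))
```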
Affiliation(s)
- Yao Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chunjie Chen
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhuo Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yongtang Tian
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Sheng Wang
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yang Xiao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fangliang Yang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xinyu Wu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2. Yang B, Chen X, Xiao X, Yan P, Hasegawa Y, Huang J. Gaze and Environmental Context-Guided Deep Neural Network and Sequential Decision Fusion for Grasp Intention Recognition. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3687-3698. [PMID: 37703142] [DOI: 10.1109/tnsre.2023.3314503]
Abstract
Grasp intention recognition plays a crucial role in controlling assistive robots to aid older people and individuals with limited mobility in restoring arm and hand function. Among the various modalities used for intention recognition, eye-gaze movement has emerged as a promising approach due to its simplicity, intuitiveness, and effectiveness. Existing gaze-based approaches integrate gaze data with environmental context insufficiently and underuse temporal information, leading to inadequate intention recognition performance. The objective of this study is to address these deficiencies and establish a gaze-based framework for object detection and the associated intention recognition. A novel gaze-based grasp intention recognition and sequential decision fusion framework (GIRSDF) is proposed. GIRSDF comprises three main components: gaze attention map generation, the Gaze-YOLO grasp intention recognition model, and sequential decision fusion models (HMM, LSTM, and GRU). To evaluate the performance of GIRSDF, a dataset named Invisible, containing data from healthy individuals and hemiplegic patients, was established. GIRSDF is validated by trial-based and subject-based experiments on Invisible and outperforms previous gaze-based grasp intention recognition methods. In terms of running efficiency, the proposed framework runs at about 22 Hz, which is sufficient for real-time grasp intention recognition. This study is expected to inspire further gaze-based grasp intention recognition work.
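As a rough illustration of the sequential decision fusion idea (the HMM variant among the HMM/LSTM/GRU options compared above), the sketch below smooths noisy per-frame intention posteriors with a forward filter; the transition matrix, class count, and data are invented, and this is not the GIRSDF code.

```python
import numpy as np

def forward_filter(frame_probs, trans, prior):
    """frame_probs: (T, K) per-frame class posteriors treated as emissions."""
    belief = prior.copy()
    fused = []
    for p in frame_probs:
        belief = p * (trans.T @ belief)   # predict via transitions, weight by evidence
        belief /= belief.sum()            # renormalize to a valid distribution
        fused.append(belief.copy())
    return np.asarray(fused)

K = 3                                               # e.g. three candidate grasp targets
trans = np.full((K, K), 0.05) + np.eye(K) * 0.85    # sticky transitions favor stable intent
rng = np.random.default_rng(1)
noisy = rng.dirichlet(np.ones(K) * 2.0, size=30)    # fake per-frame posteriors
fused = forward_filter(noisy, trans, prior=np.ones(K) / K)
print("fused decision per frame:", fused.argmax(axis=1))
```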
3. Murray R, Mendez J, Gabert L, Fey NP, Liu H, Lenzi T. Ambulation Mode Classification of Individuals with Transfemoral Amputation through A-Mode Sonomyography and Convolutional Neural Networks. Sensors (Basel) 2022; 22:9350. [PMID: 36502055] [PMCID: PMC9736589] [DOI: 10.3390/s22239350]
Abstract
Many people struggle with mobility impairments due to lower limb amputations. To participate in society, they need to be able to walk on a wide variety of terrains, such as stairs, ramps, and level ground. Current lower limb powered prostheses require different control strategies for different ambulation modes and use data from mechanical sensors within the prosthesis to determine which ambulation mode the user is in. However, it can be challenging to distinguish between ambulation modes. Efforts have been made to improve classification accuracy by adding electromyography information, but this requires a large number of sensors, suffers from a low signal-to-noise ratio, and cannot distinguish between superficial and deep muscle activations. An alternative sensing modality, A-mode ultrasound, can detect and distinguish between changes in superficial and deep muscles. It has also shown promising results in upper limb gesture classification. Despite these advantages, A-mode ultrasound has yet to be employed for lower limb activity classification. Here we show that A-mode ultrasound can classify ambulation mode with comparable, and in some cases superior, accuracy to mechanical sensing. In this study, seven transfemoral amputee subjects walked on an ambulation circuit while wearing A-mode ultrasound transducers, IMU sensors, and their passive prosthesis. The circuit consisted of sitting, standing, level-ground walking, ramp ascent, ramp descent, stair ascent, and stair descent, and a spatial-temporal convolutional network was trained to continuously classify these seven activities. Offline continuous classification with A-mode ultrasound alone achieved an accuracy of 91.8±3.4%, compared with 93.8±3.0% when using kinematic data alone. Combining kinematic and ultrasound data produced 95.8±2.3% accuracy. This suggests that A-mode ultrasound provides useful information about the user's gait beyond what mechanical sensors provide, and that it may improve ambulation mode classification. By incorporating these sensors into powered prostheses, users may enjoy more reliable prostheses and more seamless transitions between ambulation modes.
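The sketch below gives a minimal spatio-temporal convolutional classifier over windows of multi-channel signals, in the spirit of the network the abstract describes; the layer sizes, channel count, and window length are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AmbulationNet(nn.Module):
    def __init__(self, n_channels=10, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # temporal conv
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                              # collapse time axis
        )
        self.head = nn.Linear(64, n_classes)  # 7 activities incl. sit/stand/stairs

    def forward(self, x):                     # x: (batch, channels, time)
        return self.head(self.net(x).squeeze(-1))

model = AmbulationNet()
window = torch.randn(4, 10, 100)              # 4 windows, 10 channels, 100 samples
print(model(window).shape)                    # -> torch.Size([4, 7])
```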
Affiliation(s)
- Rosemarie Murray
- Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Joel Mendez
- Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Lukas Gabert
- Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
- Nicholas P. Fey
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Honghai Liu
- State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Shenzhen 518055, China
- School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
- Tommaso Lenzi
- Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
4. Chen C, Zhang K, Leng Y, Chen X, Fu C. Unsupervised Sim-to-Real Adaptation for Environmental Recognition in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1350-1360. [PMID: 35584064] [DOI: 10.1109/tnsre.2022.3176410]
Abstract
Powered lower-limb prostheses with vision sensors are expected to restore amputees' mobility in various environments through supervised learning-based environmental recognition. Due to the sim-to-real gap, such as real-world unstructured terrains and the perspective and performance limitations of the vision sensor, simulated data cannot meet the requirements of supervised learning. To mitigate this gap, this paper presents an unsupervised sim-to-real adaptation method that accurately classifies five common real-world terrains (level ground, stair ascent, stair descent, ramp ascent, and ramp descent) and assists amputees' terrain-adaptive locomotion. In this study, augmented simulated environments are generated from a virtual camera perspective to better simulate the real world. Then, unsupervised domain adaptation is incorporated: the proposed adaptation network, consisting of a feature extractor and two classifiers, is trained on simulated data and unlabeled real-world data to minimize the domain shift between the source domain (simulation) and the target domain (real world). To interpret the classification mechanism visually, essential features of different terrains extracted by the network are visualized. The classification results in walking experiments indicate that the average accuracy over eight subjects reaches 98.06% ± 0.71% in indoor environments and 95.91% ± 1.09% outdoors, which is close to the result of supervised learning using both types of labeled data (98.37% and 97.05%). These promising results demonstrate that the proposed method can realize accurate real-world environmental classification and successful sim-to-real transfer.
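A hedged sketch of the feature-extractor-plus-two-classifiers adaptation scheme described above, in the style of maximum classifier discrepancy: the classifiers are pushed to disagree on unlabeled target data while the shared feature extractor is trained to reduce that disagreement. All modules, dimensions, and data are toy stand-ins, and the loop is compressed to a single iteration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU())  # feature extractor
C1 = nn.Linear(32, 5)   # five terrain classes
C2 = nn.Linear(32, 5)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(list(C1.parameters()) + list(C2.parameters()), lr=1e-3)

def discrepancy(p1, p2):
    return (F.softmax(p1, 1) - F.softmax(p2, 1)).abs().mean()

xs, ys = torch.randn(16, 64), torch.randint(0, 5, (16,))  # labeled simulation
xt = torch.randn(16, 64)                                   # unlabeled real world

# Step A: supervised loss on source data updates everything.
f = G(xs)
loss_src = F.cross_entropy(C1(f), ys) + F.cross_entropy(C2(f), ys)
opt_g.zero_grad(); opt_c.zero_grad(); loss_src.backward(); opt_g.step(); opt_c.step()

# Step B: classifiers maximize disagreement on target features (frozen G).
ft = G(xt).detach()
loss_c = -discrepancy(C1(ft), C2(ft))
opt_c.zero_grad(); loss_c.backward(); opt_c.step()

# Step C: the generator minimizes that disagreement, aligning the domains.
ft = G(xt)
loss_g = discrepancy(C1(ft), C2(ft))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```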
5. Li M, Zhong B, Lobaton E, Huang H. Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1103-1112. [PMID: 35442889] [DOI: 10.1109/tnsre.2022.3168796]
Abstract
Predicting the user's intended locomotion mode is critical for wearable robot control to assist seamless transitions when walking on changing terrains. Although machine vision has recently proven to be a promising tool for identifying upcoming terrains in the travel path, existing approaches are limited to environment perception rather than the human intent recognition that is essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses human gaze (representing user intent) and machine vision (capturing environmental information) for accurate prediction of the user's locomotion mode. The system combines multimodal visual information to recognize the user's locomotion intent in complex scenes where multiple terrains are present. Additionally, a fusion strategy based on the dynamic time warping algorithm was developed to align temporal predictions from the individual modalities while producing flexible decisions on the timing of locomotion mode transitions for wearable robot control. System performance was validated using experimental data collected from five participants, showing high accuracy (over 96% on average) of intent recognition and reliable decision-making on locomotion transitions with adjustable lead time. These promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition in lower limb wearable robots.
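To illustrate the dynamic time warping building block underlying the fusion strategy, here is a textbook DTW alignment between two decision sequences that evolve at different rates; the sequences are fabricated and this is not the authors' fusion code.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW on 1-D sequences; returns alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

gaze = np.array([0, 0, 1, 1, 1, 2, 2])   # per-frame mode decisions from gaze
vision = np.array([0, 1, 1, 2, 2])        # vision stream runs at another rate
print("alignment cost:", dtw(gaze, vision))
```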
6. Laschowski B, McNally W, Wong A, McPhee J. Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks. Front Neurorobot 2022; 15:730965. [PMID: 35185507] [PMCID: PMC8855111] [DOI: 10.3389/fnbot.2021.730965]
Abstract
Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, therein allowing for more accurate and robust high-level control decisions. In this study, we first reviewed the development of our “ExoNet” database—the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labeling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for image classification and automatic feature engineering, including: EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Finally, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called “NetScore,” which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference with mobile computing devices). Our comparative analyses showed that the EfficientNetB0 network achieves the highest test accuracy; VGG16 the fastest inference time; and MobileNetV2 the best NetScore, which can inform the optimal architecture design or selection depending on the desired performance. Overall, this study provides a large-scale benchmark and reference for next-generation environment classification systems for robotic leg prostheses and exoskeletons.
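For readers unfamiliar with the NetScore metric used above, the sketch below implements its general form; the exponents (alpha = 2, beta = gamma = 0.5) follow the original NetScore proposal, and the example networks and numbers are invented.

```python
import math

def netscore(acc_pct, params_m, macs_g, alpha=2.0, beta=0.5, gamma=0.5):
    """Higher is better: rewards accuracy, penalizes parameters and compute."""
    return 20.0 * math.log10(acc_pct ** alpha / (params_m ** beta * macs_g ** gamma))

# Two hypothetical networks: big-and-accurate vs. small-and-slightly-worse.
print(netscore(acc_pct=75.0, params_m=25.0, macs_g=4.0))   # heavier model
print(netscore(acc_pct=72.0, params_m=3.5, macs_g=0.3))    # mobile-style model
```

Under these made-up numbers the smaller network wins, which mirrors the trade-off the metric is designed to expose.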
Affiliation(s)
- Brokoslaw Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Correspondence: Brokoslaw Laschowski
- William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
7. Ma T, Wang Y, Chen X, Chen C, Hou Z, Yu H, Fu C. A Piecewise Monotonic Smooth Phase Variable for Speed-Adaptation Control of Powered Knee-Ankle Prostheses. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3182536]
Affiliation(s)
- Teng Ma
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Yuxuan Wang
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Xinxing Chen
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Chuheng Chen
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
- Zhimin Hou
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Haoyong Yu
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Chenglong Fu
- Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems and Guangdong Provincial Key Laboratory of Human Augmentation and Rehabilitation Robotics in Universities, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
8. Jamieson A, Murray L, Stankovic L, Stankovic V, Buis A. Human Activity Recognition of Individuals with Lower Limb Amputation in Free-Living Conditions: A Pilot Study. Sensors (Basel) 2021; 21:8377. [PMID: 34960463] [PMCID: PMC8704297] [DOI: 10.3390/s21248377]
Abstract
This pilot study investigated supervised classifiers and a neural network for recognizing activities carried out by individuals with lower limb amputation (ILLAs), as well as individuals without gait impairment, in free-living conditions. Eight individuals with no gait impairments and four ILLAs wore a thigh-based accelerometer and walked on an improvised route in the vicinity of their homes across a variety of terrains. Various machine learning classifiers were trained and tested for recognition of walking activities. Additional investigations examined how the level of detail in the activity labels affected classifier accuracy, and whether classifiers trained exclusively on data from non-impaired individuals could recognize physical activities carried out by ILLAs. At a basic level of label detail, support vector machines (SVM) and long short-term memory (LSTM) networks achieved 77-78% mean classification accuracy, which fell with increased label detail. Classifiers trained on individuals without gait impairment could not recognize activities carried out by ILLAs. This investigation presents the groundwork for a human activity recognition (HAR) system capable of recognizing a variety of walking activities, both for individuals with no gait impairments and for ILLAs.
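As an illustration of the LSTM variant compared in this study, the sketch below defines a small recurrent classifier over windows of tri-axial thigh-accelerometer data; the hidden size, window length, and class count are assumptions.

```python
import torch
import torch.nn as nn

class HARLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, 3 accelerometer axes)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = HARLSTM()
windows = torch.randn(8, 128, 3)       # 8 windows of 128 samples each
print(model(windows).shape)            # -> torch.Size([8, 4])
```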
Affiliation(s)
- Alexander Jamieson
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
- Laura Murray
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
- Lina Stankovic
- Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
- Vladimir Stankovic
- Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
- Arjan Buis
- Wolfson Centre, Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0NW, UK
- Correspondence: Arjan Buis
9. Laschowski B, McNally W, Wong A, McPhee J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4631-4635. [PMID: 34892246] [DOI: 10.1109/embc46164.2021.9630064]
Abstract
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database - the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate, real-time, environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
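A minimal sketch of the transfer-learning setup the abstract implies, assuming a torchvision EfficientNetB0 backbone pretrained on ImageNet with a replaced classification head; the class count is a placeholder and this is not the authors' training pipeline.

```python
import torch
import torchvision.models as models

n_classes = 12  # placeholder for the ExoNet label set

# Load an ImageNet-pretrained backbone (downloads weights on first use)
# and swap the final linear layer for the new environment classes.
net = models.efficientnet_b0(weights="IMAGENET1K_V1")
net.classifier[1] = torch.nn.Linear(net.classifier[1].in_features, n_classes)

# Freeze the convolutional backbone; train only the new classification head.
for p in net.features.parameters():
    p.requires_grad = False

imgs = torch.randn(2, 3, 224, 224)   # two dummy wearable-camera frames
print(net(imgs).shape)               # -> torch.Size([2, 12])
```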
10. Zhang K, Luo J, Xiao W, Zhang W, Liu H, Zhu J, Lu Z, Rong Y, de Silva CW, Fu C. A Subvision System for Enhancing the Environmental Adaptability of the Powered Transfemoral Prosthesis. IEEE Trans Cybern 2021; 51:3285-3297. [PMID: 32203049] [DOI: 10.1109/tcyb.2020.2978216]
Abstract
Visual information is indispensable to human locomotion in complex environments. Although amputees can perceive environmental information with their eyes, they cannot transmit the neural signals to prostheses directly. To augment human-prosthesis interaction, this article introduces a subvision system that can perceive environments actively, help control the powered prosthesis predictively, and thereby reconstruct a complete vision-locomotion loop for transfemoral amputees. Using deep learning, the subvision system can classify common static terrains (e.g., level ground, stairs, and ramps) and estimate the corresponding motion intents of amputees with high accuracy (98%). After applying the subvision system to the locomotion control system, the powered prosthesis can help amputees achieve nonrhythmic locomotion naturally, including switching between different locomotion modes and crossing obstacles. The subvision system can also recognize dynamic objects, such as an unexpected obstacle approaching the amputee, and assist in generating an agile obstacle-avoidance reflex movement. The experimental results demonstrate that the subvision system can cooperate with the powered prosthesis to reconstruct a complete vision-locomotion loop, which enhances the environmental adaptability of amputees.
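Conceptually, the vision-to-control hand-off described above can be pictured as a confidence-gated mapping from terrain classes to locomotion modes, as in the hypothetical sketch below; the class names, mode names, and threshold are all invented and do not reflect the authors' controller.

```python
TERRAIN_TO_MODE = {
    "level_ground": "walk",
    "stair_ascent": "stair_up",
    "stair_descent": "stair_down",
    "ramp_ascent": "ramp_up",
    "ramp_descent": "ramp_down",
    "obstacle": "obstacle_avoidance_reflex",
}

def select_mode(terrain_probs, current_mode, threshold=0.9):
    """Switch modes only on confident predictions to avoid chattering."""
    terrain, conf = max(terrain_probs.items(), key=lambda kv: kv[1])
    if conf >= threshold:
        return TERRAIN_TO_MODE[terrain]
    return current_mode                       # otherwise keep the current mode

probs = {"level_ground": 0.03, "stair_ascent": 0.95, "stair_descent": 0.02}
print(select_mode(probs, current_mode="walk"))   # -> stair_up
```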
11. Lu Y, Wang H, Qi Y, Xi H. Evaluation of classification performance in human lower limb jump phases of signal correlation information and LSTM models. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102279]
12. Laschowski B, McNally W, Wong A, McPhee J. ExoNet Database: Wearable Camera Images of Human Locomotion Environments. Front Robot AI 2021; 7:562061. [PMID: 33501327] [PMCID: PMC7805730] [DOI: 10.3389/frobt.2020.562061]
Affiliation(s)
- Brokoslaw Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
13. Labarrière F, Thomas E, Calistri L, Optasanu V, Gueugnon M, Ornetti P, Laroche D. Machine Learning Approaches for Activity Recognition and/or Activity Prediction in Locomotion Assistive Devices-A Systematic Review. Sensors (Basel) 2020; 20:6345. [PMID: 33172158] [PMCID: PMC7664393] [DOI: 10.3390/s20216345]
Abstract
Locomotion assistive devices equipped with a microprocessor can potentially adapt their behavior automatically when the user transitions from one locomotion mode to another. Many developments in the field have come from machine-learning-driven controllers on locomotion assistive devices that recognize or predict the current locomotion mode or the upcoming one. This review synthesizes the machine learning algorithms designed to recognize or to predict a locomotion mode in order to automatically adapt the behavior of a locomotion assistive device. A systematic search was conducted on the Web of Science and MEDLINE databases (as well as in the retrieved papers) to identify articles published between 1 January 2000 and 31 July 2020. This systematic review is reported in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines and is registered on Prospero (CRD42020149352). Study characteristics, sensors and algorithms used, accuracy, and robustness were summarized. In total, 1343 records were identified and 58 studies were included in this review. The most frequently investigated experimental conditions were level-ground walking along with stair and ramp ascent/descent activities. The machine learning algorithms implemented in the included studies reached global mean accuracies of around 90%. However, the robustness of those algorithms still needs to be evaluated more broadly, notably in everyday-life conditions. We also propose some guidelines for homogenizing future reports.
Affiliation(s)
- Floriant Labarrière
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Elizabeth Thomas
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Laurine Calistri
- PROTEOR, 6 rue de la Redoute, CS 37833, CEDEX 21078 Dijon, France
- Virgil Optasanu
- ICB, UMR 6303 CNRS, Université de Bourgogne Franche Comté, 9 Av. Alain Savary, CEDEX 21078 Dijon, France
- Mathieu Gueugnon
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Paul Ornetti
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Department of Rheumatology, Dijon University Hospital, 21079 Dijon, France
- Davy Laroche
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
14. Tschiedel M, Russold MF, Kaniusas E. Relying on more sense for enhancing lower limb prostheses control: a review. J Neuroeng Rehabil 2020; 17:99. [PMID: 32680530] [PMCID: PMC7368691] [DOI: 10.1186/s12984-020-00726-x]
Abstract
Modern lower limb prostheses can replace missing body parts and improve patients' quality of life. However, missing environmental information often makes seamless adaptation to transitions between different forms of locomotion challenging. The aim of this review is to identify the progress made in this area over the last decade, addressing two main questions: which types of novel sensors for environmental awareness are used in lower limb prostheses, and how do they enhance device control towards more comfort and safety. A literature search was conducted on two Internet databases, PubMed and IEEE Xplore. Based on the inclusion and exclusion criteria, 32 papers were selected for the review analysis, 18 of which relate to explicit environmental sensing and 14 to implicit environmental sensing. Sensor characteristics were discussed with a focus on update rate and resolution as well as on computing power and energy consumption. Our analysis identified numerous state-of-the-art sensors, some of which are able to "look through" clothing or cosmetic covers. Five control categories were identified, describing how "next generation prostheses" could be extended. There is a clear tendency towards upcoming object or terrain prediction concepts using all types of distance- and depth-based sensors. Other advanced strategies, such as bilateral gait segmentation from unilateral sensors, could also play an important role in movement-dependent control applications. The studies demonstrated promising accuracy in well-controlled laboratory settings, but it is unclear how the systems will perform in real-world environments, both indoors and outdoors. At present, the main limitation is the need for an unobstructed field of view.
Affiliation(s)
- Michael Tschiedel
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, Vienna, 1040 Austria
- Global Research, Ottobock Healthcare Products GmbH, Vienna, 1110 Austria
- Eugenijus Kaniusas
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, Vienna, 1040 Austria
15. Zhang K, Wang J, de Silva CW, Fu C. Unsupervised Cross-Subject Adaptation for Predicting Human Locomotion Intent. IEEE Trans Neural Syst Rehabil Eng 2020; 28:646-657. [PMID: 31944980] [DOI: 10.1109/tnsre.2020.2966749]
Abstract
Accurately predicting human locomotion intent is beneficial for controlling wearable robots and for assisting humans to walk smoothly on different terrains. Traditional methods for predicting human locomotion intent require collecting and labeling human signals and training specific classifiers for each new subject, which places a heavy burden on both the subject and the researcher. To address this issue, the present study frees the subject and the researcher from labeling a large amount of data by incorporating an unsupervised cross-subject adaptation method that predicts the locomotion intent of a target subject whose signals are not labeled. The adaptation is realized by designing two classifiers to maximize the classification discrepancy and a feature generator to align the hidden features of the source and target subjects, thereby minimizing the classification discrepancy. A neural network is trained on the labeled training set of the source subjects and the unlabeled training set of the target subjects, and then validated and tested on the validation and test sets of the target subjects. Experimental results in the leave-one-subject-out test indicate that the present method can classify the locomotion intent and activities of target subjects with average accuracies of 93.60% and 94.59% on two public datasets. The present method increases the user-independence of the classifiers, but it has been evaluated only on data from subjects without disabilities. Its potential to predict the locomotion intent of subjects with disabilities and to control wearable robots will be evaluated in future work.
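To make the evaluation protocol concrete, here is a minimal leave-one-subject-out skeleton using scikit-learn; the classifier is a stand-in for the paper's discrepancy-based network and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))          # pooled feature windows from all subjects
y = rng.integers(0, 3, size=300)        # locomotion intent labels
subjects = np.repeat(np.arange(6), 50)  # 6 subjects, 50 windows each

# Each fold holds out one entire subject, never mixing a subject's
# windows between training and test sets.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print("leave-one-subject-out accuracy:", np.mean(scores))
```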