1. Wang C, Pei Z, Fan Y, Qiu S, Tang Z. Review of Vision-Based Environmental Perception for Lower-Limb Exoskeleton Robots. Biomimetics (Basel) 2024;9:254. PMID: 38667265; PMCID: PMC11048416; DOI: 10.3390/biomimetics9040254.
Abstract
The exoskeleton robot is a wearable electromechanical device inspired by animal exoskeletons. It combines sensing, control, information, and mobile computing technologies to enhance human physical abilities and assist in rehabilitation training. In recent years, with the development of visual sensors and deep learning, the environmental perception of exoskeletons has drawn widespread attention in the field. Environmental perception can provide exoskeletons with a degree of autonomous perception and decision-making ability, enhance their stability and safety in complex environments, and improve the human-machine-environment interaction loop. This paper reviews environmental perception and its related technologies for lower-limb exoskeleton robots. First, we briefly introduce visual sensors and control systems. Second, we analyze and summarize the key technologies of environmental perception, including related datasets, detection of critical terrains, and environment-oriented adaptive gait planning. Finally, we analyze the factors currently limiting the development of exoskeleton environmental perception and propose future directions.
Affiliations:
- Zhiyong Tang, School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China (shared by C.W., Z.P., Y.F., S.Q.)
2. Kurbis AG, Kuzmenko D, Ivanyuk-Skulskiy B, Mihailidis A, Laschowski B. StairNet: visual recognition of stairs for human-robot locomotion. Biomed Eng Online 2024;23:20. PMID: 38360664; PMCID: PMC10870468; DOI: 10.1186/s12938-024-01216-0.
Abstract
Human-robot walking with prosthetic legs and exoskeletons, especially over complex terrains such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale dataset with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet dataset. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as low as 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded slower inference times of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of the numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform to develop and study new deep learning models for visual perception of human-robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies.
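As an illustration of the kind of lightweight model the StairNet initiative benchmarks, the sketch below builds a stair-environment classifier from a MobileNetV2 backbone in PyTorch. This is a hedged example, not the authors' published code: the class labels, input size, and head replacement are assumptions.

```python
# Minimal sketch of a StairNet-style stair classifier; the labels and
# backbone choice are illustrative assumptions, not the published models.
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["level_ground", "stair_ascent", "stair_descent", "transition"]  # hypothetical

def build_model(num_classes: int = len(CLASSES)) -> nn.Module:
    backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    backbone.classifier[1] = nn.Linear(backbone.last_channel, num_classes)  # swap head
    return backbone

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def classify(model: nn.Module, image) -> str:
    """image: a PIL RGB frame from the wearable camera."""
    model.eval()
    x = preprocess(image).unsqueeze(0)        # (1, 3, 224, 224)
    probs = model(x).softmax(dim=1)
    return CLASSES[int(probs.argmax())]
```

The head-swap pattern (replace the final linear layer, keep the pretrained features) reflects the accuracy/size trade-off the abstract emphasizes for mobile deployment.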
Affiliations:
- Andrew Garrett Kurbis, Institute of Biomedical Engineering, University of Toronto, Toronto, Canada; KITE Research Institute, Toronto Rehabilitation Institute, Toronto, Canada
- Dmytro Kuzmenko, Department of Mathematics, National University of Kyiv-Mohyla Academy, Kyiv, Ukraine
- Alex Mihailidis, Institute of Biomedical Engineering, University of Toronto, Toronto, Canada; KITE Research Institute, Toronto Rehabilitation Institute, Toronto, Canada
- Brokoslaw Laschowski, Robotics Institute, University of Toronto, Toronto, Canada; KITE Research Institute, Toronto Rehabilitation Institute, Toronto, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
3. Zhang Y, Doyle T. Integrating intention-based systems in human-robot interaction: a scoping review of sensors, algorithms, and trust. Front Robot AI 2023;10:1233328. PMID: 37876910; PMCID: PMC10591094; DOI: 10.3389/frobt.2023.1233328.
Abstract
The increasing adoption of robot systems in industrial settings and their teaming with humans have led to growing interest in human-robot interaction (HRI) research. While many robots use sensors to avoid harming humans, they cannot interpret human actions or intentions, making them passive reactors rather than interactive collaborators. Intention-based systems can determine human motives and predict future movements, but their closer interaction with humans raises concerns about trust. This scoping review provides an overview of the sensors and algorithms used in intention-based systems and examines the trust aspect of such systems in HRI scenarios. We searched the MEDLINE, Embase, and IEEE Xplore databases to identify studies related to the aforementioned topics of intention-based systems in HRI. Results from each study were summarized and categorized according to different intention types, representing various designs. The literature shows a range of sensors and algorithms used to identify intentions, each with its own advantages and disadvantages in different scenarios. However, trust in intention-based systems is not well studied. Although some research in AI and robotics can be applied to intention-based systems, their unique characteristics warrant further study to maximize collaboration performance. This review highlights the need for more research on the trust aspects of intention-based systems to better understand and optimize their role in human-robot interactions, and at the same time establishes a foundation for future research in sensor and algorithm designs for intention-based systems.
Affiliations:
- Yifei Zhang, Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada
- Thomas Doyle, Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada; School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada
4. Kurbis AG, Laschowski B, Mihailidis A. Stair Recognition for Robotic Exoskeleton Control using Computer Vision and Deep Learning. IEEE Int Conf Rehabil Robot 2022;2022:1-6. PMID: 36176138; DOI: 10.1109/icorr55369.2022.9896501.
Abstract
Computer vision can be used in robotic exoskeleton control to improve transitions between different locomotion modes through the prediction of future environmental states. Here we present the development of a large-scale automated stair recognition system powered by convolutional neural networks to recognize indoor and outdoor real-world stair environments. Building on the ExoNet database, the largest and most diverse open-source dataset of wearable camera images of walking environments, we designed a new computer vision dataset, called StairNet, specifically for stair recognition, with over 515,000 images. We then developed and optimized an efficient deep learning model for automatic feature engineering and image classification. Our system was able to accurately predict complex stair environments with 98.4% classification accuracy. These promising results present an opportunity to increase the autonomy and safety of human-exoskeleton locomotion for real-world community mobility. Future work will explore the mobile deployment of our automated stair recognition system for onboard real-time inference.
5. Chen C, Zhang K, Leng Y, Chen X, Fu C. Unsupervised Sim-to-Real Adaptation for Environmental Recognition in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2022;30:1350-1360. PMID: 35584064; DOI: 10.1109/tnsre.2022.3176410.
Abstract
Powered lower-limb prostheses with vision sensors are expected to restore amputees' mobility in various environments with supervised learning-based environmental recognition. Due to the sim-to-real gap, caused by factors such as real-world unstructured terrains and the perspective and performance limitations of the vision sensor, simulated data alone cannot meet the requirements of supervised learning. To mitigate this gap, this paper presents an unsupervised sim-to-real adaptation method to accurately classify five common real-world terrains (level ground, stair ascent, stair descent, ramp ascent, and ramp descent) and assist amputees' terrain-adaptive locomotion. In this study, augmented simulated environments are generated from a virtual camera perspective to better simulate the real world. Then, unsupervised domain adaptation is incorporated: the proposed adaptation network, consisting of a feature extractor and two classifiers, is trained on simulated data and unlabeled real-world data to minimize the domain shift between the source domain (simulation) and the target domain (real world). To interpret the classification mechanism visually, essential features of different terrains extracted by the network are visualized. The classification results in walking experiments indicate that the average accuracy across eight subjects reaches 98.06% ± 0.71% and 95.91% ± 1.09% in indoor and outdoor environments respectively, which is close to the result of supervised learning using both types of labeled data (98.37% and 97.05%). These promising results suggest that the proposed method can achieve accurate real-world environmental classification and successful sim-to-real transfer.
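The two-classifier adaptation scheme described here can be sketched along the lines of maximum classifier discrepancy training: the classifiers are pushed apart on unlabeled target data, then the feature extractor is pushed to close that gap. The network sizes, optimizers, and schedule below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of discrepancy-based unsupervised domain adaptation with one
# feature generator and two classifiers (toy sizes; not the paper's nets).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):  # feature extractor (toy CNN stand-in)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.net(x)

def make_classifier(num_classes=5):  # five terrain classes
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, num_classes))

def discrepancy(logits1, logits2):  # L1 distance between class probabilities
    return (logits1.softmax(1) - logits2.softmax(1)).abs().mean()

G, F1, F2 = Generator(), make_classifier(), make_classifier()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(list(F1.parameters()) + list(F2.parameters()), lr=1e-4)

def train_step(xs, ys, xt):
    # (A) supervised loss on labeled simulated (source) data
    feats = G(xs)
    loss = F.cross_entropy(F1(feats), ys) + F.cross_entropy(F2(feats), ys)
    opt_g.zero_grad(); opt_f.zero_grad(); loss.backward(); opt_g.step(); opt_f.step()
    # (B) classifiers maximize discrepancy on unlabeled real (target) data
    feats = G(xs)
    ft = G(xt).detach()  # generator fixed in this step
    loss = (F.cross_entropy(F1(feats), ys) + F.cross_entropy(F2(feats), ys)
            - discrepancy(F1(ft), F2(ft)))
    opt_f.zero_grad(); loss.backward(); opt_f.step()
    # (C) generator minimizes discrepancy to align source and target features
    opt_g.zero_grad()
    discrepancy(F1(G(xt)), F2(G(xt))).backward()
    opt_g.step()

xs = torch.randn(4, 3, 64, 64)   # simulated images
ys = torch.randint(0, 5, (4,))   # terrain labels
xt = torch.randn(4, 3, 64, 64)   # unlabeled real images
train_step(xs, ys, xt)
```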
6. Sharma A, Rombokas E. Improving IMU-based prediction of lower limb kinematics in natural environments using egocentric optical flow. IEEE Trans Neural Syst Rehabil Eng 2022;30:699-708. PMID: 35245198; DOI: 10.1109/tnsre.2022.3156884.
Abstract
We seek to predict knee and ankle motion using wearable sensors. These predictions could serve as target trajectories for a lower limb prosthesis. In this manuscript, we investigate the use of egocentric vision for improving performance over kinematic wearable motion capture alone. We present an out-of-the-lab dataset of 23 healthy subjects navigating public classrooms, a large atrium, and stairs, for a total of almost 12 hours of recording. The prediction task is difficult because the movements include avoiding obstacles and other people, idiosyncratic movements such as traversing doorways, and individual choices in selecting the future path. We demonstrate that using vision improves the quality of the predicted knee and ankle trajectories, especially in congested spaces and when the visual environment provides information that does not appear simply in the movements of the body. Overall, including vision yields improvements of 7.9% and 7.0% in the root-mean-squared error of knee and ankle angle predictions, respectively, and improvements of 1.5% and 12.3% in the Pearson correlation coefficient. We discuss particular moments where vision greatly improved, or failed to improve, the prediction performance. We also find that the benefits of vision can be enhanced with more data. Lastly, we discuss the challenges of continuous estimation of gait in natural, out-of-the-lab datasets.
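A plausible sketch of an egocentric vision feature for augmenting IMU data is a grid-pooled dense optical-flow descriptor, shown below with OpenCV's Farneback method; the pooling grid and the downstream fusion with IMU windows are assumptions, not the authors' pipeline.

```python
# Sketch: compact dense optical-flow descriptor from consecutive
# egocentric frames (Farneback flow, averaged over a coarse grid).
import cv2
import numpy as np

def flow_feature(prev_bgr: np.ndarray, curr_bgr: np.ndarray, grid=(8, 8)) -> np.ndarray:
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev, curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)   # (H, W, 2)
    h, w = flow.shape[:2]
    gh, gw = grid
    # Average the flow over a coarse grid to get a fixed-size descriptor.
    pooled = flow[: h // gh * gh, : w // gw * gw].reshape(gh, h // gh, gw, w // gw, 2)
    return pooled.mean(axis=(1, 3)).ravel()                # (gh * gw * 2,)

prev = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
curr = np.roll(prev, 2, axis=1)          # simulate small camera motion
print(flow_feature(prev, curr).shape)    # (128,)
```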
7. Laschowski B, McNally W, Wong A, McPhee J. Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks. Front Neurorobot 2022;15:730965. PMID: 35185507; PMCID: PMC8855111; DOI: 10.3389/fnbot.2021.730965.
Abstract
Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, therein allowing for more accurate and robust high-level control decisions. In this study, we first reviewed the development of our “ExoNet” database—the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labeling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for image classification and automatic feature engineering, including: EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Finally, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called “NetScore,” which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference with mobile computing devices). Our comparative analyses showed that the EfficientNetB0 network achieves the highest test accuracy; VGG16 the fastest inference time; and MobileNetV2 the best NetScore, which can inform the optimal architecture design or selection depending on the desired performance. Overall, this study provides a large-scale benchmark and reference for next-generation environment classification systems for robotic leg prostheses and exoskeletons.
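The NetScore metric used to rank the benchmarked CNNs can be sketched as below, following the formulation in Wong's NetScore paper that the authors cite; the default exponents shown (alpha = 2, beta = 0.5, gamma = 0.5) are those reported there, with accuracy in percent and parameters/MACs in millions, and should be treated as an assumption about the exact configuration used.

```python
# Sketch of the NetScore metric: accuracy traded off against parameter
# count and multiply-accumulate operations (exponents per the NetScore
# paper; the study's exact settings are an assumption here).
import math

def netscore(accuracy_pct: float, params_millions: float, macs_millions: float,
             alpha: float = 2.0, beta: float = 0.5, gamma: float = 0.5) -> float:
    """Omega = 20 * log10(a^alpha / (p^beta * m^gamma))."""
    return 20.0 * math.log10(
        accuracy_pct ** alpha / (params_millions ** beta * macs_millions ** gamma))

# Example: a hypothetical model with 73% accuracy, 5.3M parameters, 300M MACs.
print(round(netscore(73.0, 5.3, 300.0), 1))
```

A higher NetScore favors small, cheap networks that lose little accuracy, which is why an efficiency-oriented architecture can beat a more accurate but heavier one under this metric.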
Affiliations:
- Brokoslaw Laschowski, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada (correspondence)
- William McNally, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
8. Laschowski B, McNally W, Wong A, McPhee J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:4631-4635. PMID: 34892246; DOI: 10.1109/embc46164.2021.9630064.
Abstract
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
9. Review of control strategies for lower-limb exoskeletons to assist gait. J Neuroeng Rehabil 2021;18:119. PMID: 34315499; PMCID: PMC8314580; DOI: 10.1186/s12984-021-00906-3.
Abstract
Background: Many lower-limb exoskeletons have been developed to assist gait, exhibiting a large range of control methods. The goal of this paper is to review and classify these control strategies, which determine how these devices interact with the user.
Methods: In addition to covering the recent publications on the control of lower-limb exoskeletons for gait assistance, an effort has been made to review the controllers independently of the hardware and implementation aspects. The common 3-level structure (high, middle, and low levels) is first used to separate the continuous behavior (mid-level) from the implementation of position/torque control (low-level) and the detection of the terrain or user's intention (high-level). Within these levels, different approaches (functional units) have been identified and combined to describe each considered controller.
Results: 291 references have been considered and sorted by the proposed classification. The methods identified at the high level are manual user input, brain interfaces, or automatic mode detection based on the terrain or the user's movements. At the mid level, the synchronization is most often based on manual triggers by the user, discrete events (followed by state machines or time-based progression), or continuous estimations using state variables. The desired action is determined based on position/torque profiles, model-based calculations, or other custom functions of the sensory signals. At the low level, position or torque controllers are used to carry out the desired actions. In addition to a more detailed description of these methods, the variants of implementation within each one are also compared and discussed in the paper.
Conclusions: By listing and comparing the features of the reviewed controllers, this work can help in understanding the numerous techniques found in the literature. The main identified trends are the use of pre-defined trajectories for full mobilization and event-triggered (or adaptive-frequency-oscillator-synchronized) torque profiles for partial assistance. More recently, advanced methods to adapt the position/torque profiles online and automatically detect terrains or locomotion modes have become more common, but these are largely still limited to laboratory settings. An analysis of the possible underlying reasons for the identified trends is also carried out, and opportunities for further studies are discussed.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12984-021-00906-3.
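The 3-level structure described in the Methods can be made concrete with a minimal sketch; all modes, torque profiles, and gains below are illustrative placeholders rather than any specific reviewed controller.

```python
# Sketch of the common 3-level exoskeleton control hierarchy:
# high level picks the locomotion mode, mid level maps gait phase to a
# desired torque profile, low level tracks the torque. Values are toys.
import math

def high_level(terrain: str) -> str:
    # e.g., automatic mode detection or manual user input
    return {"flat": "walk", "stairs": "stair_ascent"}.get(terrain, "stand")

def mid_level(mode: str, gait_phase: float) -> float:
    # phase-indexed torque profile (Nm), one per mode (placeholder shapes)
    profiles = {
        "walk": lambda p: 20.0 * math.sin(2 * math.pi * p),
        "stair_ascent": lambda p: 35.0 * math.sin(math.pi * p),
        "stand": lambda p: 0.0,
    }
    return profiles[mode](gait_phase)

def low_level(torque_desired: float, torque_measured: float, kp: float = 5.0) -> float:
    # proportional torque tracking -> motor command
    return kp * (torque_desired - torque_measured)

mode = high_level("stairs")
cmd = low_level(mid_level(mode, gait_phase=0.3), torque_measured=10.0)
print(mode, round(cmd, 2))
```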
10. Zhang K, Luo J, Xiao W, Zhang W, Liu H, Zhu J, Lu Z, Rong Y, de Silva CW, Fu C. A Subvision System for Enhancing the Environmental Adaptability of the Powered Transfemoral Prosthesis. IEEE Trans Cybern 2021;51:3285-3297. PMID: 32203049; DOI: 10.1109/tcyb.2020.2978216.
Abstract
Visual information is indispensable to human locomotion in complex environments. Although amputees can perceive environmental information with their eyes, they cannot transmit the corresponding neural signals to prostheses directly. To augment human-prosthesis interaction, this article introduces a subvision system that can perceive environments actively, assist in controlling the powered prosthesis predictively, and thereby reconstruct a complete vision-locomotion loop for transfemoral amputees. Using deep learning, the subvision system can classify common static terrains (e.g., level ground, stairs, and ramps) and estimate the corresponding motion intents of amputees with high accuracy (98%). After applying the subvision system to the locomotion control system, the powered prosthesis can help amputees achieve nonrhythmic locomotion naturally, including switching between different locomotion modes and crossing obstacles. The subvision system can also recognize dynamic objects, such as an unexpected obstacle approaching the amputee, and assist in generating an agile obstacle-avoidance reflex movement. The experimental results demonstrate that the subvision system can cooperate with the powered prosthesis to reconstruct a complete vision-locomotion loop, which enhances the environmental adaptability of amputees.
11. Zhang K, Liu H, Fan Z, Chen X, Leng Y, de Silva CW, Fu C. Foot Placement Prediction for Assistive Walking by Fusing Sequential 3D Gaze and Environmental Context. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3062003.
12. Laschowski B, McNally W, Wong A, McPhee J. ExoNet Database: Wearable Camera Images of Human Locomotion Environments. Front Robot AI 2021;7:562061. PMID: 33501327; PMCID: PMC7805730; DOI: 10.3389/frobt.2020.562061.
Affiliations:
- Brock Laschowski, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- William McNally, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee, Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
13. Tschiedel M, Russold MF, Kaniusas E. Relying on more sense for enhancing lower limb prostheses control: a review. J Neuroeng Rehabil 2020;17:99. PMID: 32680530; PMCID: PMC7368691; DOI: 10.1186/s12984-020-00726-x.
Abstract
Modern lower limb prostheses have the capability to replace missing body parts and improve patients' quality of life. However, missing environmental information often makes seamless adaptation to transitions between different forms of locomotion challenging. The aim of this review is to identify the progress made in this area over the last decade, addressing two main questions: which types of novel sensors for environmental awareness are used in lower limb prostheses, and how do they enhance device control towards more comfort and safety. A literature search was conducted on two Internet databases, PubMed and IEEE Xplore. Based on the criteria for inclusion and exclusion, 32 papers were selected for the review analysis; 18 of those are related to explicit environmental sensing and 14 to implicit environmental sensing. Characteristics were discussed with a focus on update rate and resolution as well as on computing power and energy consumption. Our analysis identified numerous state-of-the-art sensors, some of which are able to "look through" clothing or cosmetic covers. Five control categories were identified that describe how "next generation prostheses" could be extended. There is a clear tendency towards upcoming object or terrain prediction concepts using all types of distance- and depth-based sensors. Other advanced strategies, such as bilateral gait segmentation from unilateral sensors, could also play an important role in movement-dependent control applications. The studies demonstrated promising accuracy in well-controlled laboratory settings, but it is unclear how the systems will perform in real-world environments, both indoors and outdoors. At the moment, the main limitation proves to be the necessity of having an unobstructed field of view.
Affiliations:
- Michael Tschiedel, Research Group Biomedical Sensing, Institute of Electrodynamics, Microwave and Circuit Engineering, TU Wien, Vienna 1040, Austria; Global Research, Ottobock Healthcare Products GmbH, Vienna 1110, Austria
- Eugenijus Kaniusas, Research Group Biomedical Sensing, Institute of Electrodynamics, Microwave and Circuit Engineering, TU Wien, Vienna 1040, Austria
14. Gao F, Liu G, Liang F, Liao WH. IMU-Based Locomotion Mode Identification for Transtibial Prostheses, Orthoses, and Exoskeletons. IEEE Trans Neural Syst Rehabil Eng 2020;28:1334-1343. PMID: 32286999; DOI: 10.1109/tnsre.2020.2987155.
Abstract
Active transtibial prostheses, orthoses, and exoskeletons hold the promise of improving the mobility of lower-limb impaired or amputated individuals. Locomotion mode identification (LMI) is essential for these devices to precisely reproduce the required function on different terrains. In this study, a terrain geometry-based LMI algorithm is proposed. Terrains can be characterized by the inclination grade of the ground: for example, when the inclination angle is between 7 and 15 degrees, the environment is typically a ramp, and when the inclination angle is around 30 degrees, the environment is likely to be stairs. Given that, the locomotion mode/terrain can be classified by the inclination grade. Moreover, human feet tend to move along the surface of the terrain, both to minimize the energy expenditure of transporting the lower limbs and to obtain the required foot clearance. Hence, the foot trajectory estimated by an IMU is used to derive the inclination grade of the terrain being traversed and thereby identify the locomotion mode. In addition, a novel trigger condition (an elliptical boundary) is proposed to activate the decision-making of the LMI algorithm before the next foot strike, thus leaving enough time for preparatory work in the swing phase: when the estimated foot trajectory crosses the elliptical boundary, the decision-making is executed. Experimental results show that the average accuracy for three healthy subjects and three below-knee amputees is 98.5% across five locomotion modes: level-ground walking, up slope, down slope, stair descent, and stair ascent. Moreover, all the locomotion modes can be identified before the next foot strike.
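The inclination-grade classification and the elliptical trigger condition can be sketched as follows; the angle thresholds follow the abstract, while the ellipse semi-axes and the dead-reckoned foot position are assumptions.

```python
# Sketch: (1) classify locomotion mode from the inclination grade of the
# IMU-estimated foot trajectory; (2) fire the decision when the trajectory
# crosses an elliptical boundary (a simplified reading of the abstract).
import math

def locomotion_mode(incline_deg: float) -> str:
    a = abs(incline_deg)
    if a < 7.0:
        return "level_ground"
    if a <= 15.0:
        return "up_slope" if incline_deg > 0 else "down_slope"
    return "stair_ascent" if incline_deg > 0 else "stair_descent"

def crossed_trigger(x: float, z: float, a: float = 0.6, b: float = 0.25) -> bool:
    # elliptical boundary around the stance foot (semi-axes a, b in metres;
    # values are illustrative assumptions)
    return (x / a) ** 2 + (z / b) ** 2 >= 1.0

# foot position (x forward, z up) from IMU dead reckoning, in metres
x, z = 0.55, 0.12
if crossed_trigger(x, z):
    incline = math.degrees(math.atan2(z, x))
    print(locomotion_mode(incline))   # decision made before the next foot strike
```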
15. Design and verification of a human-robot interaction system for upper limb exoskeleton rehabilitation. Med Eng Phys 2020;79:19-25. PMID: 32205023; DOI: 10.1016/j.medengphy.2020.01.016.
Abstract
This paper presents the design of a motion intent recognition system, based on an altitude signal sensor, to improve the human-robot interaction performance of upper limb exoskeleton robots during rehabilitation training. A modified adaptive Kalman filter combined with clipping filtering is proposed for the control system to mitigate the noise and time delay of the collected signal. The clipping filtering method is used to filter out accidental errors and avoid the safety problems caused by a mis-trigger. The modified adaptive Kalman filter is used to account for sudden changes of the motion state during rehabilitation training. The results show that the intent recognition system designed herein can accurately recognize the human-robot interaction information and estimate the intent of human motion in time. Therefore, it can be concluded that the designed system effectively follows the predicted motion intent with the proposed method, which is a significant improvement for human-robot interaction control of upper limb rehabilitation robots.
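A minimal sketch of the described filtering pipeline, a clipping stage followed by a scalar Kalman filter whose measurement noise adapts to the innovation, is shown below; this is a common adaptive scheme, and the paper's exact adaptation law is not reproduced.

```python
# Sketch: clip mis-trigger spikes, then adaptively Kalman-filter the signal.
import numpy as np

def clip_outliers(z: float, z_prev: float, max_step: float = 0.5) -> float:
    # clipping filter: limit sample-to-sample jumps from accidental errors
    return z_prev + float(np.clip(z - z_prev, -max_step, max_step))

class AdaptiveKalman1D:
    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def update(self, z: float, forgetting: float = 0.95) -> float:
        self.p += self.q                       # predict (constant-state model)
        innov = z - self.x
        # adapt measurement noise R from the innovation (assumed scheme)
        self.r = forgetting * self.r + (1 - forgetting) * innov ** 2
        k = self.p / (self.p + self.r)         # Kalman gain
        self.x += k * innov
        self.p *= (1 - k)
        return self.x

kf, z_prev = AdaptiveKalman1D(), 0.0
for z in [0.10, 0.12, 2.50, 0.15]:             # 2.50 simulates a mis-trigger spike
    z_prev = clip_outliers(z, z_prev)
    print(round(kf.update(z_prev), 4))
```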
16. Laschowski B, McNally W, Wong A, McPhee J. Preliminary Design of an Environment Recognition System for Controlling Robotic Lower-Limb Prostheses and Exoskeletons. IEEE Int Conf Rehabil Robot 2019;2019:868-873. PMID: 31374739; DOI: 10.1109/icorr.2019.8779540.
Abstract
Drawing inspiration from autonomous vehicles, using future environment information could improve the control of wearable biomechatronic devices for assisting human locomotion. To the authors' knowledge, this research represents the first documented investigation using machine vision and deep convolutional neural networks for environment recognition to support the predictive control of robotic lower-limb prostheses and exoskeletons. One participant was instrumented with a battery-powered, chest-mounted RGB camera system. Approximately 10 hours of video footage were experimentally collected while ambulating throughout unknown outdoor and indoor environments. The sampled images were preprocessed and individually labelled. A deep convolutional neural network was developed and trained to automatically recognize three walking environments: level ground, incline staircases, and decline staircases. The environment recognition system achieved 94.85% overall image classification accuracy. Extending these preliminary findings, future research should incorporate other environment classes (e.g., incline ramps) and integrate the environment recognition system with electromechanical sensors and/or surface electromyography for automated locomotion mode recognition. The challenges associated with implementing deep learning on wearable biomechatronic devices are also discussed.
17. Zhang K, Wang J, de Silva CW, Fu C. Unsupervised Cross-Subject Adaptation for Predicting Human Locomotion Intent. IEEE Trans Neural Syst Rehabil Eng 2020;28:646-657. PMID: 31944980; DOI: 10.1109/tnsre.2020.2966749.
Abstract
Accurately predicting human locomotion intent is beneficial for controlling wearable robots and for assisting humans to walk smoothly on different terrains. Traditional methods for predicting human locomotion intent require collecting and labeling the human signals and training specific classifiers for each new subject, which places a heavy burden on both the subject and the researcher. To address this issue, the present study liberates the subject and the researcher from labeling a large amount of data by incorporating an unsupervised cross-subject adaptation method to predict the locomotion intent of a target subject whose signals are not labeled. The adaptation is realized by designing two classifiers to maximize the classification discrepancy and a feature generator to align the hidden features of the source and target subjects and thereby minimize that discrepancy. A neural network is trained on the labeled training set of source subjects and the unlabeled training set of target subjects, and then validated and tested on the validation and test sets of the target subjects. Experimental results in the leave-one-subject-out test indicate that the present method can classify the locomotion intent and activities of target subjects at averaged accuracies of 93.60% and 94.59% on two public datasets. The present method increases the user-independence of the classifiers, but it has been evaluated only on data from subjects without disabilities. Its potential to predict the locomotion intent of subjects with disabilities and to control wearable robots will be evaluated in future work.
18. Krausz NE, Hargrove LJ. A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices. Sensors (Basel) 2019;19:5238. PMID: 31795240; PMCID: PMC6928925; DOI: 10.3390/s19235238.
Abstract
Teleception is defined as sensing that occurs remotely, with no physical contact with the object being sensed. To emulate the innate control systems of the human body, a control system for a semi- or fully autonomous assistive device requires not only feedforward models of desired movement, but also the environmental or contextual awareness that teleception could provide. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example, for performing hand grasp prediction or endpoint control of an arm assistive device, and for gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize the recent and ongoing published work in this promising new area of research.
Affiliations:
- Nili E. Krausz, Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove, Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA; Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
19. Krausz NE, Hu BH, Hargrove LJ. Subject- and Environment-Based Sensor Variability for Wearable Lower-Limb Assistive Devices. Sensors (Basel) 2019;19:4887. PMID: 31717471; PMCID: PMC6891559; DOI: 10.3390/s19224887.
Abstract
Significant research effort has gone towards the development of powered lower-limb prostheses that control power during gait. These devices use forward prediction based on electromyography (EMG), kinetics, and kinematics to inform the prosthesis which locomotion activity is desired. Unfortunately, these predictions can have substantial errors, which can potentially lead to trips or falls. It is hypothesized that one reason for the significant prediction errors in current control systems for powered lower-limb prostheses is the inter- and intra-subject variability of the data sources used for prediction. Environmental data, recorded from a depth sensor worn on a belt, should have less variability across trials and subjects than kinetic, kinematic, and EMG data, and thus its addition is proposed. Each data source, once normalized, was analyzed to determine the intra-activity and intra-subject variability for each sensor modality. Then measures of separability, repeatability, clustering, and overall desirability were computed. Results showed that combining vision, EMG, IMU (inertial measurement unit), and goniometer features yielded the best separability, repeatability, clustering, and desirability across subjects and activities. This will likely be useful for future work incorporating vision-based environmental data into a forward predictor for powered lower-limb prostheses and exoskeletons.
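One plausible form of the separability measure compared here is a Fisher-style ratio of between-class to within-class variance, sketched below; the paper's exact definitions of separability, repeatability, and clustering are not reproduced.

```python
# Sketch: Fisher-style separability of normalized sensor features across
# activity classes (an assumed form, not the paper's exact measure).
import numpy as np

def separability(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n_samples, n_dims); labels: (n_samples,) activity classes."""
    mu = features.mean(axis=0)
    classes = np.unique(labels)
    between = sum(
        (labels == c).sum() * np.sum((features[labels == c].mean(axis=0) - mu) ** 2)
        for c in classes)
    within = sum(
        np.sum((features[labels == c] - features[labels == c].mean(axis=0)) ** 2)
        for c in classes)
    return between / within

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(separability(X, y))   # larger -> the modality separates activities better
```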
Affiliations:
- Nili E. Krausz, Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Blair H. Hu, Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove, Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA; Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA; Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
20. Zhang K, Zhang W, Xiao W, Liu H, de Silva CW, Fu C. Sequential Decision Fusion for Environmental Classification in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2019;27:1780-1790. PMID: 31425118; DOI: 10.1109/tnsre.2019.2935765.
Abstract
Powered prostheses are effective for helping amputees walk in a single environment, but these devices are inconvenient to use in complex environments. In order to help amputees walk in complex environments, prostheses need to understand the motion intent of amputees. Recently, researchers have found that vision sensors can be utilized to classify environments and predict the motion intent of amputees. Although previous studies have been able to classify environments accurately in offline analysis, the corresponding time delay has not been considered. To increase the accuracy and decrease the time delay of environmental classification, the present paper proposes a new decision fusion method in which the sequential decisions of environmental classification are fused by constructing a hidden Markov model and designing a transition probability matrix. The developed method is evaluated by inviting five able-bodied subjects and three amputees to perform indoor and outdoor walking experiments. The results indicate that the proposed method can classify environments with accuracy improvements of 1.01% (indoor) and 2.48% (outdoor) over the previous voting method when a delay of only one frame is incorporated. The present method also achieves higher classification accuracy than recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) methods. When achieving the same classification accuracy, the present method can decrease the time delay by 67 ms (indoor) and 733 ms (outdoor) in comparison to the previous voting method. Besides classifying environments, the proposed decision fusion method may also be able to optimize sequential predictions of human motion intent.
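The HMM-based decision fusion can be sketched as a forward recursion in which a transition probability matrix smooths the per-frame classifier outputs; the matrix values and the simulated classifier stream below are illustrative, not those designed in the paper.

```python
# Sketch: fuse sequential per-frame softmax decisions with an HMM forward
# recursion; a sticky transition matrix encodes that terrain rarely changes.
import numpy as np

TERRAINS = ["level", "stairs_up", "stairs_down", "ramp_up", "ramp_down"]
n = len(TERRAINS)
stay = 0.9                                   # assumed self-transition probability
T = np.full((n, n), (1 - stay) / (n - 1))
np.fill_diagonal(T, stay)                    # transition probability matrix

def fuse(belief: np.ndarray, frame_probs: np.ndarray) -> np.ndarray:
    b = (T.T @ belief) * frame_probs         # predict, then weight by new evidence
    return b / b.sum()

stream = [np.array([0.6, 0.3, 0.05, 0.03, 0.02]),   # noisy per-frame softmax outputs
          np.array([0.2, 0.7, 0.05, 0.03, 0.02])]
belief = np.full(n, 1.0 / n)
for frame_probs in stream:
    belief = fuse(belief, frame_probs)
print(TERRAINS[int(belief.argmax())])
```

The single-frame delay reported in the abstract corresponds to running this recursion once per incoming frame instead of waiting to vote over a window.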
21. Yan Q, Huang J, Tao C, Chen X, Xu W. Intelligent mobile walking-aids: perception, control and safety. Adv Robot 2019. DOI: 10.1080/01691864.2019.1653225.
Affiliations:
- Qingyang Yan, Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Jian Huang, Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Chunjing Tao, National Research Center for Rehabilitation Technical Aids, Beijing, People's Republic of China
- Xinxing Chen, Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Wenxia Xu, School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan, People's Republic of China
22. Deep Learning Based Object Recognition Using Physically-Realistic Synthetic Depth Scenes. Mach Learn Knowl Extr 2019. DOI: 10.3390/make1030051.
Abstract
Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, robots need the position and orientation of objects in 3D. The task becomes challenging in a cluttered environment with different types of objects. A popular approach to tackling this problem is to utilize a deep neural network for object recognition. However, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data requires time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box, and automatically labeled all objects with an occlusion rate below 70%. The Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was adopted for training using a dataset of 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset consisting of 2,000 samples. The deep object recognizer achieved 40.96% detection accuracy on the real depth images and 93.5% on the synthetic depth images. Training the deep learning model with noise-added synthetic images improved the recognition accuracy for real images to 46.3%. The object detection framework can thus be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
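Adapting Faster R-CNN to a custom class count follows the standard torchvision head-replacement pattern, sketched below; the 22-class setup is the paper's, while the depth-to-3-channel rendering and the single training step are assumptions about the surrounding pipeline.

```python
# Sketch: swap the Faster R-CNN box predictor for 22 object classes and
# run one training step on a dummy depth image rendered to 3 channels.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 22 + 1  # 22 objects + background
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One training step: images are a list of CHW tensors; targets hold boxes/labels.
images = [torch.rand(3, 480, 640)]          # stand-in for a rendered depth image
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)             # dict of RPN and ROI-head losses
total = sum(losses.values())
total.backward()
```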
23. Zhang K, Xiong C, Zhang W, Liu H, Lai D, Rong Y, Fu C. Environmental Features Recognition for Lower Limb Prostheses Toward Predictive Walking. IEEE Trans Neural Syst Rehabil Eng 2019;27:465-476. PMID: 30703033; DOI: 10.1109/tnsre.2019.2895221.
Abstract
This paper presents a robust environmental features recognition system (EFRS) for lower limb prostheses, which can assist prosthesis control by predicting the locomotion modes of amputees and estimating environmental features for the upcoming steps. A depth sensor and an inertial measurement unit are combined to stabilize the point cloud of the environment. Subsequently, a 2D point cloud is extracted from the original 3D point cloud and classified by a neural network. Environmental features, including the slope of the road and the width and height of stairs, are also estimated from the 2D point cloud. Finally, the EFRS is evaluated by classifying and recognizing five kinds of common environments in simulation, indoor experiments, and outdoor experiments with six healthy subjects and three transfemoral amputees; databases from five healthy subjects and three amputees are used for validation without training. The classification accuracy for the five kinds of common environments reaches 99.3% and 98.5% for the amputees in the indoor and outdoor experiments, respectively. The locomotion modes are predicted at least 0.6 s before the switch of the actual locomotion mode. Most estimation errors of indoor and outdoor environmental features are lower than 5% and 10%, respectively. The overall EFRS pipeline takes less than 0.023 s. These promising results demonstrate the robustness of the presented EFRS and its potential application in the control of lower limb prostheses.
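One step of the described feature estimation, deriving the road slope from the extracted 2D (sagittal-plane) point cloud, can be sketched as a least-squares line fit; the point-cloud extraction and the stair width/height measurements are omitted, and the noisy ramp data below is simulated.

```python
# Sketch: estimate terrain slope by fitting a line to the 2D point cloud
# extracted from the stabilized depth data (simulated points here).
import numpy as np

def slope_degrees(points_2d: np.ndarray) -> float:
    """points_2d: (n, 2) array of (forward x, height z) ground points, metres."""
    x, z = points_2d[:, 0], points_2d[:, 1]
    slope, _ = np.polyfit(x, z, deg=1)       # z ~ slope * x + intercept
    return float(np.degrees(np.arctan(slope)))

# Example: points sampled from a ~10-degree ramp with sensor noise.
rng = np.random.default_rng(1)
x = np.linspace(0.3, 2.0, 100)
z = np.tan(np.radians(10)) * x + rng.normal(0, 0.005, x.size)
print(round(slope_degrees(np.column_stack([x, z])), 2))   # ~10.0
```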