1. Cheng S, Laubscher CA, Gregg RD. Automatic Stub Avoidance for a Powered Prosthetic Leg Over Stairs and Obstacles. IEEE Trans Biomed Eng 2024;71:1499-1510. PMID: 38060364; PMCID: PMC11035099; DOI: 10.1109/tbme.2023.3340628.
Abstract
Passive prosthetic legs require undesirable compensations from amputee users to avoid stubbing obstacles and stairsteps. Powered prostheses can reduce those compensations by restoring normative joint biomechanics, but the absence of user proprioception and volitional control combined with the absence of environmental awareness by the prosthesis increases the risk of collisions. This article presents a novel stub avoidance controller that automatically adjusts prosthetic knee/ankle kinematics based on suprasensory measurements of environmental distance from a small, lightweight, low-power, low-cost ultrasonic sensor mounted above the prosthetic ankle. In a case study with two transfemoral amputee participants, this control method reduced the stub rate during stair ascent by 89.95% and demonstrated an 87.50% avoidance rate for crossing different obstacles on level ground. No thigh kinematic compensation was required to achieve these results. These findings demonstrate a practical perception solution for powered prostheses to avoid collisions with stairs and obstacles while restoring normative biomechanics during daily activities.
2. Sheng W, Gao T, Liang K, Wang Y. Bilateral Elimination Rule-Based Finite Class Bayesian Inference System for Circular and Linear Walking Prediction. Biomimetics (Basel) 2024;9:266. PMID: 38786476; PMCID: PMC11118229; DOI: 10.3390/biomimetics9050266.
Abstract
Objective: The prediction of upcoming circular walking during linear walking is important for the usability and safety of the interaction between a lower-limb assistive device and its wearer. This study builds a bilateral elimination rule-based finite class Bayesian inference system (BER-FC-BesIS) that predicts transitions between circular and linear walking using inertial measurement units. Methods: Bilateral motion data of the human body were used to improve the recognition and prediction accuracy of BER-FC-BesIS. Results: The mean prediction times for the left and right lower limbs' upcoming steady walking activities were 119.32 ± 9.71 ms and 113.75 ± 11.83 ms, respectively. The mean differences between the predicted and actual times were 14.22 ± 3.74 ms (left) and 13.59 ± 4.92 ms (right). The prediction accuracy of BER-FC-BesIS was 93.98%. Conclusion: Upcoming steady walking activities (e.g., linear and circular walking) can be accurately predicted by BER-FC-BesIS. Significance: This work could help improve the walking-activity prediction capabilities of lower-limb assistive devices, with emphasis on non-linear walking activities in daily living.
Affiliation(s)
- Wentao Sheng: School of Mechanical Engineering, Jiangsu University of Technology (JSUT), Changzhou 213001, China
- Tianyu Gao: School of Intelligent Manufacturing, Nanjing University of Science and Technology (NJUST), Nanjing 210094, China
- Keyao Liang: School of Mechatronics Engineering, Harbin Institute of Technology (HIT), Harbin 150001, China
- Yumo Wang: School of Intelligent Manufacturing, Nanjing University of Science and Technology (NJUST), Nanjing 210094, China
3. Cowan M, Creveling S, Sullivan LM, Gabert L, Lenzi T. A Unified Controller for Natural Ambulation on Stairs and Level Ground with a Powered Robotic Knee Prosthesis. Proc IEEE/RSJ Int Conf Intell Robots Syst (IROS) 2023;2023:2146-2151. PMID: 38562517; PMCID: PMC10984323; DOI: 10.1109/iros55552.2023.10341691.
Abstract
Powered lower-limb prostheses have the potential to improve amputee mobility by closely imitating the biomechanical function of the missing biological leg. To accomplish this goal, powered prostheses need controllers that can seamlessly adapt to the ambulation activity intended by the user. Most powered prosthesis control architectures address this issue by switching between specific controllers for each activity. This approach requires online classification of the intended ambulation activity. Unfortunately, any misclassification can cause the prosthesis to perform a different movement than the user expects, increasing the likelihood of falls and injuries. Therefore, classification approaches require near-perfect accuracy to be used safely in real life. In this paper, we propose a unified controller for powered knee prostheses which allows for walking, stair ascent, and stair descent without the need for explicit activity classification. Experiments with one individual with an above-knee amputation show that the proposed controller enables seamless transitions between activities. Moreover, transition between activities is possible while leading with either the sound-side or the prosthesis. A controller with these characteristics has the potential to improve amputee mobility.
Affiliation(s)
- Marissa Cowan: Department of Mechanical Engineering and the Robotics Center, University of Utah
- Suzi Creveling: Department of Mechanical Engineering and the Robotics Center, University of Utah
- Liam M Sullivan: Department of Mechanical Engineering and the Robotics Center, University of Utah
- Lukas Gabert: Department of Mechanical Engineering and the Robotics Center, University of Utah; Rocky Mountain Center for Occupational and Environmental Health
- Tommaso Lenzi: Department of Mechanical Engineering and the Robotics Center, University of Utah; Rocky Mountain Center for Occupational and Environmental Health; Department of Biomedical Engineering, University of Utah
4. Cheng S, Laubscher CA, Gregg RD. Controlling Powered Prosthesis Kinematics over Continuous Transitions Between Walk and Stair Ascent. Proc IEEE/RSJ Int Conf Intell Robots Syst (IROS) 2023;2023:2108-2115. PMID: 38130335; PMCID: PMC10732262; DOI: 10.1109/iros55552.2023.10341457.
Abstract
One of the primary benefits of emerging powered prosthetic legs is their ability to facilitate step-over-step stair ascent by providing positive mechanical work. Existing control methods typically have distinct steady-state activity modes for walking and stair ascent, where activity transitions involve discretely switching between controllers and often must be initiated with a particular leg. However, these discrete transitions do not necessarily replicate able-bodied joint biomechanics, which have been shown to continuously adjust over a transition stride. This paper presents a phase-based kinematic controller for a powered knee-ankle prosthesis that enables continuous, biomimetic transitions between walking and stair ascent. The controller tracks joint angles from a data-driven kinematic model that continuously interpolates between the steady-state kinematic models, and it allows both the prosthetic and intact leg to lead the transitions. Results from experiments with two transfemoral amputee participants indicate that knee and ankle kinematics smoothly transition between walking and stair ascent, with comparable or lower root mean square errors compared to variations from able-bodied data.
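The continuous interpolation between steady-state kinematic models described above can be illustrated with a minimal sketch. The sinusoidal trajectories, the `transition` blend parameter, and the linear interpolation below are illustrative assumptions, not the paper's data-driven kinematic models.

```python
import numpy as np

def blended_knee_angle(phase, transition, walk_model, stair_model):
    """Interpolate a joint angle between two steady-state kinematic models.

    phase      : gait phase in [0, 1) within the current stride
    transition : blend parameter in [0, 1]; 0 = walking, 1 = stair ascent
    walk_model, stair_model : callables mapping phase -> joint angle (deg)
    """
    return (1.0 - transition) * walk_model(phase) + transition * stair_model(phase)

# Illustrative stand-in trajectories (NOT the paper's fitted models):
walk = lambda s: 30.0 * np.sin(2 * np.pi * s)   # peak knee flexion ~30 deg
stair = lambda s: 60.0 * np.sin(2 * np.pi * s)  # deeper flexion on stairs

# Halfway through a transition stride, at 25% of the gait cycle:
angle = blended_knee_angle(0.25, 0.5, walk, stair)
```

Ramping `transition` from 0 to 1 across one stride gives the kind of smooth walk-to-stair kinematic transition the abstract describes, without a discrete controller switch.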
Affiliation(s)
- Shihao Cheng: Department of Robotics, University of Michigan, Ann Arbor, MI 48109, USA
- Curt A Laubscher: Department of Robotics, University of Michigan, Ann Arbor, MI 48109, USA
- Robert D Gregg: Department of Robotics, University of Michigan, Ann Arbor, MI 48109, USA
5. Ahkami B, Ahmed K, Thesleff A, Hargrove L, Ortiz-Catalan M. Electromyography-Based Control of Lower Limb Prostheses: A Systematic Review. IEEE Trans Med Robot Bionics 2023;5:547-562. PMID: 37655190; PMCID: PMC10470657; DOI: 10.1109/tmrb.2023.3282325.
Abstract
Most amputations occur in the lower limbs, and despite improvements in prosthetic technology, no commercially available prosthetic leg uses electromyography (EMG) as an input for control. Efforts to integrate EMG signals into the control strategy have increased in the last decade. In this systematic review, we summarize the research in the field of lower-limb prosthetic control using EMG. Four online databases were searched through June 2022: Web of Science, Scopus, PubMed, and Science Direct. We included articles that reported systems for controlling a prosthetic leg (with an ankle and/or knee actuator) by decoding gait intent from EMG signals alone or in combination with other sensors. A total of 1,331 papers were initially assessed, and 121 were included in this systematic review. The literature showed that despite burgeoning research interest, controlling a leg prosthesis using EMG signals remains challenging, specifically with respect to EMG signal quality and stability, electrode placement, prosthetic hardware, and control algorithms, all of which need to be more robust for everyday use. Across the investigated studies, large variations were found in control methodologies, research participants, recording protocols, assessments, and prosthetic hardware.
Affiliation(s)
- Bahareh Ahkami: Center for Bionics and Pain Research, 43130 Mölndal, Sweden; Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden
- Kirstin Ahmed: Center for Bionics and Pain Research, 43130 Mölndal, Sweden; Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden
- Alexander Thesleff: Center for Bionics and Pain Research, 43130 Mölndal, Sweden; Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; Integrum AB, 43153 Mölndal, Sweden
- Levi Hargrove: Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611, USA; Regenstein Foundation Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Max Ortiz-Catalan: Center for Bionics and Pain Research, 43130 Mölndal, Sweden; Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; Operational Area 3, Sahlgrenska University Hospital, 41345 Gothenburg, Sweden; Bionics Institute, Melbourne, VIC 3002, Australia
6. Islam MR, Haque MR, Imtiaz MH, Shen X, Sazonov E. Vision-Based Recognition of Human Motion Intent during Staircase Approaching. Sensors (Basel) 2023;23:5355. PMID: 37300082; DOI: 10.3390/s23115355.
Abstract
Walking in real-world environments involves constant decision-making; e.g., when approaching a staircase, an individual decides whether to engage (climb the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and a gradient boosting (GB) classifier were developed to recognize the individual's intention of engaging or avoiding the upcoming stairway. This method provided reliable recognition (97.69%) at least two steps before the potential mode transition, which is expected to give ample time for controller mode transitions in an assistive robot in real-world use.
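The engage/avoid decision stage can be illustrated with a bare-bones AdaBoost over decision stumps. The one-dimensional toy data and round count below are invented for illustration; the paper's actual classifier operates on features extracted from the detected staircases.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Train AdaBoost with single-feature decision stumps; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, uniform at start
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump (feature, threshold, sign) with lowest weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)                       # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)       # stump weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)              # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)

# Toy separable data: "avoid" (-1) vs. "engage" (+1)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost_train(X, y, n_rounds=3)
preds = adaboost_predict(model, X)
```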
Affiliation(s)
- Md Rafi Islam: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Md Rejwanul Haque: Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Masudul H Imtiaz: Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY 13699, USA
- Xiangrong Shen: Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Edward Sazonov: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
7. Gehlhar R, Tucker M, Young AJ, Ames AD. A Review of Current State-of-the-Art Control Methods for Lower-Limb Powered Prostheses. Annu Rev Control 2023;55:142-164. PMID: 37635763; PMCID: PMC10449377; DOI: 10.1016/j.arcontrol.2023.03.003.
Abstract
Lower-limb prostheses aim to restore ambulatory function for individuals with lower-limb amputations. While the design of lower-limb prostheses is important, this paper focuses on the complementary challenge: the control of lower-limb prostheses. Specifically, we focus on powered prostheses, a subset of lower-limb prostheses, which utilize actuators to inject mechanical power into the walking gait of a human user. In this paper, we present a review of existing control strategies for lower-limb powered prostheses, including the control objectives, sensing capabilities, and control methodologies. We separate the various control methods into three main tiers of prosthesis control: high-level control for task and gait phase estimation, mid-level control for desired torque computation (both with and without the use of reference trajectories), and low-level control for enforcing the computed torque commands on the prosthesis. In particular, we focus on the high- and mid-level control approaches in this review. Additionally, we outline existing methods for customizing prosthetic behavior for individual human users. Finally, we conclude with a discussion of future research directions for powered lower-limb prostheses based on the potential of current control methods and open problems in the field.
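The three-tier hierarchy surveyed in this review can be sketched as a skeleton control loop. The phase-portrait phase variable, the sinusoidal reference trajectory, and the gains below are illustrative assumptions, not any specific prosthesis controller.

```python
import numpy as np

def high_level(thigh_angle, thigh_velocity):
    """High level: estimate gait phase in [0, 1) from a thigh phase portrait."""
    s = np.arctan2(-thigh_velocity, thigh_angle) / (2 * np.pi)
    return s % 1.0

def mid_level(phase, knee_angle):
    """Mid level: desired torque from tracking a phase-indexed reference trajectory."""
    knee_ref = 30.0 * np.sin(2 * np.pi * phase)  # stand-in reference (deg)
    kp = 2.0                                     # impedance-like stiffness (Nm/deg)
    return kp * (knee_ref - knee_angle)

def low_level(torque_desired, torque_measured, k_i=0.1):
    """Low level: proportional motor-current correction enforcing the torque command."""
    return k_i * (torque_desired - torque_measured)

# One pass through the hierarchy with made-up sensor readings:
phase = high_level(thigh_angle=10.0, thigh_velocity=-50.0)
tau = mid_level(phase, knee_angle=5.0)
di = low_level(tau, torque_measured=0.0)
```

In a real device this loop runs at a fixed rate, with each tier consuming the output of the tier above, which is the structure the review's taxonomy captures.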
Affiliation(s)
- Rachel Gehlhar: Department of Mechanical and Civil Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
- Maegan Tucker: Department of Mechanical and Civil Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
- Aaron J Young: Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA; Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Aaron D Ames: Department of Mechanical and Civil Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA; Department of Computing and Mathematical Sciences, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
8. Murray R, Mendez J, Gabert L, Fey NP, Liu H, Lenzi T. Ambulation Mode Classification of Individuals with Transfemoral Amputation through A-Mode Sonomyography and Convolutional Neural Networks. Sensors (Basel) 2022;22:9350. PMID: 36502055; PMCID: PMC9736589; DOI: 10.3390/s22239350.
Abstract
Many people struggle with mobility impairments due to lower-limb amputations. To participate in society, they need to be able to walk on a wide variety of terrains, such as stairs, ramps, and level ground. Current lower-limb powered prostheses require different control strategies for varying ambulation modes, and use data from mechanical sensors within the prosthesis to determine which ambulation mode the user is in. However, it can be challenging to distinguish between ambulation modes. Efforts have been made to improve classification accuracy by adding electromyography information, but this requires a large number of sensors, has a low signal-to-noise ratio, and cannot distinguish between superficial and deep muscle activations. An alternative sensing modality, A-mode ultrasound, can detect and distinguish between changes in superficial and deep muscles, and has shown promising results in upper-limb gesture classification. Despite these advantages, A-mode ultrasound has yet to be employed for lower-limb activity classification. Here we show that A-mode ultrasound can classify ambulation mode with comparable, and in some cases superior, accuracy to mechanical sensing. In this study, seven transfemoral amputee subjects walked on an ambulation circuit while wearing A-mode ultrasound transducers, IMU sensors, and their passive prosthesis. The circuit consisted of sitting, standing, level-ground walking, ramp ascent, ramp descent, stair ascent, and stair descent, and a spatial-temporal convolutional network was trained to continuously classify these seven activities. Offline continuous classification with A-mode ultrasound alone achieved an accuracy of 91.8 ± 3.4%, compared with 93.8 ± 3.0% when using kinematic data alone. Combining kinematic and ultrasound data produced 95.8 ± 2.3% accuracy. This suggests that A-mode ultrasound provides additional useful information about the user's gait beyond what is provided by mechanical sensors, and that it may be able to improve ambulation mode classification. By incorporating these sensors into powered prostheses, users may enjoy higher reliability for their prostheses and more seamless transitions between ambulation modes.
Affiliation(s)
- Rosemarie Murray: Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Joel Mendez: Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA
- Lukas Gabert: Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA; Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
- Nicholas P. Fey: Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Honghai Liu: State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Shenzhen 518055, China; School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
- Tommaso Lenzi: Department of Mechanical Engineering and Robotics Center, The University of Utah, Salt Lake City, UT 84112, USA; Rocky Mountain Center for Occupational and Environmental Health, Salt Lake City, UT 84111, USA
9. Vu HTT, Cao HL, Dong D, Verstraten T, Geeroms J, Vanderborght B. Comparison of machine learning and deep learning-based methods for locomotion mode recognition using a single inertial measurement unit. Front Neurorobot 2022;16:923164. PMID: 36524219; PMCID: PMC9745042; DOI: 10.3389/fnbot.2022.923164.
Abstract
Locomotion mode recognition provides prosthesis control with the information on when to switch between different walking modes, whereas gait phase detection indicates where we are in the gait cycle; powered prostheses often implement a different control strategy for each locomotion mode to improve the functionality of the prosthesis. Existing studies employed several classical machine learning methods for locomotion mode recognition. However, these methods were less effective for data with complex decision boundaries and resulted in misclassifications. Deep learning-based methods can potentially resolve these limitations. This study therefore evaluated three deep learning-based models for locomotion mode recognition, namely a recurrent neural network (RNN), a long short-term memory (LSTM) network, and a convolutional neural network (CNN), and compared their recognition performance to a machine learning model with a random forest classifier (RFC). The models were trained on data from one inertial measurement unit (IMU) placed on the lower shank of four able-bodied subjects performing four walking modes: level-ground walking (LW), standing (ST), and stair ascent/stair descent (SA/SD). The results indicated that the CNN and LSTM models outperformed the other models and are promising for real-time locomotion mode recognition in robotic prostheses.
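Both the classical and deep models in such a comparison consume fixed-length windows of the IMU stream. A minimal windowing sketch follows; the window length, overlap, and channel count are illustrative choices, not the study's exact parameters.

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a (T, channels) IMU stream into overlapping windows.

    Returns an array of shape (n_windows, window, channels) that can feed
    either hand-crafted-feature classifiers (e.g., a random forest) or
    sequence models (RNN/LSTM/CNN).
    """
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step : i * step + window] for i in range(n)])

# 2 s of 6-axis IMU data at 100 Hz, 200 ms windows with 50% overlap:
stream = np.random.randn(200, 6)
windows = sliding_windows(stream, window=20, step=10)
# windows.shape == (19, 20, 6)
```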
Affiliation(s)
- Huong Thi Thu Vu: Brubotics, Vrije Universiteit Brussel and imec, Brussels, Belgium; Faculty of Electronics Engineering Technology, Hanoi University of Industry, Hanoi, Vietnam
- Hoang-Long Cao: Brubotics, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium; College of Engineering Technology, Can Tho University, Can Tho, Vietnam
- Dianbiao Dong: School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, China
- Tom Verstraten: Brubotics, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
- Joost Geeroms: Brubotics, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
10. Kouzbary HA, Kouzbary MA, Tham LK, Liu J, Shasmin HN, Abu Osman NA. Generating an Adaptive and Robust Walking Pattern for a Prosthetic Ankle-Foot by Utilizing a Nonlinear Autoregressive Network With Exogenous Inputs. IEEE Trans Neural Netw Learn Syst 2022;33:6297-6305. PMID: 33979293; DOI: 10.1109/tnnls.2021.3076060.
Abstract
One of the major challenges in developing powered lower-limb prostheses is emulating the behavior of an intact lower limb at different walking speeds over diverse terrains. Numerous studies have been conducted on control algorithms in the field of rehabilitation robotics to achieve this overarching goal. Recent studies on powered prostheses have frequently used a hierarchical control scheme consisting of three control levels. Most control structures have at least one element with discrete transition properties that requires numerous sensors to improve classification accuracy, consequently increasing computational load and cost. In this study, we proposed a user-independent, free-mode method that eliminates the need to switch among different controllers. We constructed a database using four OPAL wearable devices (Mobility Lab, APDM Inc., USA) with seven able-bodied subjects. We recorded each subject's gait at three ambulation speeds during ground-level walking to train a nonlinear autoregressive network with exogenous inputs recurrent neural network (NARX RNN) to estimate foot orientation (angular position) in the sagittal plane, using shank angular velocity as the external input. The trained NARX RNN estimated the foot orientation of all subjects at different walking speeds over flat terrain with an average root-mean-square error (RMSE) of 2.1° ± 1.7°. The minimum correlation between the estimated and measured values was 86%. Moreover, a t-test indicated that the error was normally distributed with a high certainty level (minimum p-value of 0.88).
11. Faridi P, Mehr JK, Wilson D, Sharifi M, Tavakoli M, Pilarski PM, Mushahwar VK. Machine-learned Adaptive Switching in Voluntary Lower-limb Exoskeleton Control: Preliminary Results. IEEE Int Conf Rehabil Robot 2022;2022:1-6. PMID: 36176101; DOI: 10.1109/icorr55369.2022.9896611.
Abstract
Lower-limb exoskeletons utilize fixed control strategies and are not adaptable to the user's intention. To this end, the goal of this study was to investigate the potential of using temporal-difference learning and general value functions to predict the next walking mode that users wearing exoskeletons will select, in order to reduce the effort and cognitive load of switching between different modes of walking. Experiments were performed with a user wearing the Indego exoskeleton who was given the authority to switch between five walking modes that differed in speed and turn direction. The user's switching preferences were learned and predicted from device-centric and room-centric measurements by considering similarities in the movements being performed, and a switching list was updated to show the most probable next modes to be selected by the user. In contrast to other approaches that either predict only a single time step or require intensive offline training, this work used a computationally inexpensive learning method that can provide temporally extended sets of predictions in real time. Comparing the number of required manual switches between the machine-learned switching list and the best possible static lists showed an average decrease of 42.44% in required switches for the machine-learned adaptive strategy. These promising results pave the way for real-time application of this technique.
12. Li M, Zhong B, Lobaton E, Huang H. Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode. IEEE Trans Neural Syst Rehabil Eng 2022;30:1103-1112. PMID: 35442889; DOI: 10.1109/tnsre.2022.3168796.
Abstract
Predicting the user's intended locomotion mode is critical for wearable robot control to assist the user's seamless transitions when walking over changing terrains. Although machine vision has recently proven to be a promising tool for identifying upcoming terrain in the travel path, existing approaches are limited to environment perception rather than the human intent recognition essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses human gaze (representing user intent) and machine vision (capturing environmental information) for accurate prediction of the user's locomotion mode. The system possesses multimodal visual information and recognizes the user's locomotion intent in a complex scene where multiple terrains are present. Additionally, a fusion strategy based on the dynamic time warping algorithm was developed to align temporal predictions from the individual modalities while producing flexible decisions on the timing of locomotion mode transitions for wearable robot control. System performance was validated using experimental data collected from five participants, showing high accuracy of intent recognition (over 96% on average) and reliable decision-making on locomotion transitions with adjustable lead time. These promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition in lower-limb wearable robots.
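The fusion strategy builds on dynamic time warping (DTW). A textbook DTW distance between two 1-D prediction sequences can be sketched as follows; the label sequences are invented for illustration and are not the study's data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two mode-label streams that disagree only in timing align with zero cost:
gaze_pred = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 2.0])
vision_pred = np.array([0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
d = dtw_distance(gaze_pred, vision_pred)
```

This timing-tolerant alignment is what lets a fusion scheme reconcile modalities whose predictions agree in content but differ in lead time.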
13. Kang I, Molinaro DD, Choi G, Camargo J, Young AJ. Subject-Independent Continuous Locomotion Mode Classification for Robotic Hip Exoskeleton Applications. IEEE Trans Biomed Eng 2022;69:3234-3242. PMID: 35389859; DOI: 10.1109/tbme.2022.3165547.
Abstract
Autonomous lower-limb exoskeletons must modulate assistance based on locomotion mode (e.g., ramp or stair ascent) to adapt to the corresponding changes in human biological joint dynamics. However, current mode classification strategies for exoskeletons often require user-specific tuning, have a slow update rate, and rely on additional sensors outside of the exoskeleton sensor suite. In this study, we introduce a deep convolutional neural network-based locomotion mode classifier for hip exoskeleton applications using an open-source gait biomechanics dataset with various wearable sensors. Our approach removes the limitations of previous systems in that it is 1) subject-independent (i.e., requires no user-specific data), 2) capable of continuous classification for smooth and seamless mode transitions, and 3) utilizes only minimal wearable sensors native to a conventional hip exoskeleton. We optimized our model based on several important factors contributing to overall performance, such as transition label timing, model architecture, and sensor placement, which provides a holistic understanding of mode classifier design. Our optimized deep learning model showed a 3.13% classification error (steady-state: 0.80 ± 0.38%; transitional: 6.49 ± 1.42%), outperforming other machine learning-based benchmarks commonly practiced in the field (p < 0.05). Furthermore, our multimodal analysis indicated that our model can maintain high performance in different settings, such as on unseen stair or ramp slopes. Thus, our study presents a novel locomotion mode classification framework capable of advancing robotic exoskeleton applications toward assisting community ambulation.
14. Application of Wearable Sensors in Actuation and Control of Powered Ankle Exoskeletons: A Comprehensive Review. Sensors (Basel) 2022;22:2244. PMID: 35336413; PMCID: PMC8954890; DOI: 10.3390/s22062244.
Abstract
Powered ankle exoskeletons (PAEs) are robotic devices developed for gait assistance, rehabilitation, and augmentation. To fulfil their purposes, PAEs rely heavily on their sensor systems. Human–machine interface sensors collect the biomechanical signals from the human user to inform the higher level of the control hierarchy about the user’s locomotion intention and requirements, whereas machine–machine interface sensors monitor the output of the actuation unit to ensure precise tracking of the high-level control commands via the low-level control scheme. The current article aims to provide a comprehensive review of how wearable sensor technology has contributed to the actuation and control of the PAEs developed over the past two decades. The control schemes and actuation principles employed in the reviewed PAEs, as well as their interaction with the integrated sensor systems, are investigated in this review. Further, the role of wearable sensors in overcoming the main challenges in developing fully autonomous portable PAEs is discussed. Finally, a brief discussion is provided on how recent technological advancements in wearable sensors, including environment–machine interface sensors, could promote the future generation of fully autonomous portable PAEs.
Collapse
|
15
|
Sharma A, Rombokas E. Improving IMU-based prediction of lower limb kinematics in natural environments using egocentric optical flow. IEEE Trans Neural Syst Rehabil Eng 2022; 30:699-708. [PMID: 35245198 DOI: 10.1109/tnsre.2022.3156884] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We seek to predict knee and ankle motion using wearable sensors. These predictions could serve as target trajectories for a lower-limb prosthesis. In this manuscript, we investigate the use of egocentric vision to improve performance over kinematic wearable motion capture. We present an out-of-the-lab dataset of 23 healthy subjects navigating public classrooms, a large atrium, and stairs, for a total of almost 12 hours of recording. The prediction task is difficult because the movements include avoiding obstacles and other people, idiosyncratic movements such as traversing doors, and individual choices in selecting the future path. We demonstrate that using vision improves the quality of the predicted knee and ankle trajectories, especially in congested spaces and when the visual environment provides information that does not appear simply in the movements of the body. Overall, including vision results in 7.9% and 7.0% improvements in the root mean squared error of knee and ankle angle predictions, respectively. The improvements in Pearson correlation coefficient for knee and ankle predictions are 1.5% and 12.3%, respectively. We discuss particular moments where vision greatly improved, or failed to improve, the prediction performance. We also find that the benefits of vision can be enhanced with more data. Lastly, we discuss challenges of continuous estimation of gait in natural, out-of-the-lab datasets.
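The improvements reported above are in root mean squared error (RMSE) of the predicted joint angles. A small sketch of that evaluation, with purely illustrative angle sequences (not the paper's data):

```python
import math

def rmse(pred, true):
    """Root mean squared error between two angle sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def pct_improvement(base_err, fused_err):
    """Percent RMSE reduction when vision is added to the IMU input."""
    return 100.0 * (base_err - fused_err) / base_err

# Hypothetical knee-angle traces (degrees), not the paper's data.
true_knee = [10.0, 20.0, 30.0, 40.0]
imu_only = [12.0, 18.0, 33.0, 44.0]
imu_plus_vision = [11.0, 19.0, 31.5, 42.0]

gain = pct_improvement(rmse(imu_only, true_knee),
                       rmse(imu_plus_vision, true_knee))
```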
Collapse
|
16
|
Laschowski B, McNally W, Wong A, McPhee J. Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks. Front Neurorobot 2022; 15:730965. [PMID: 35185507 PMCID: PMC8855111 DOI: 10.3389/fnbot.2021.730965] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Accepted: 12/20/2021] [Indexed: 01/25/2023] Open
Abstract
Robotic leg prostheses and exoskeletons can provide powered locomotor assistance to older adults and/or persons with physical disabilities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). Inspired by the human vision-locomotor control system, we developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, therein allowing for more accurate and robust high-level control decisions. In this study, we first reviewed the development of our “ExoNet” database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labeling architecture. We then trained and tested over a dozen state-of-the-art deep convolutional neural networks (CNNs) on the ExoNet database for image classification and automatic feature engineering, including EfficientNetB0, InceptionV3, MobileNet, MobileNetV2, VGG16, VGG19, Xception, ResNet50, ResNet101, ResNet152, DenseNet121, DenseNet169, and DenseNet201. Finally, we quantitatively compared the benchmarked CNN architectures and their environment classification predictions using an operational metric called “NetScore,” which balances the image classification accuracy with the computational and memory storage requirements (i.e., important for onboard real-time inference with mobile computing devices). Our comparative analyses showed that the EfficientNetB0 network achieves the highest test accuracy; VGG16 the fastest inference time; and MobileNetV2 the best NetScore, which can inform the optimal architecture design or selection depending on the desired performance. Overall, this study provides a large-scale benchmark and reference for next-generation environment classification systems for robotic leg prostheses and exoskeletons.
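The “NetScore” metric referenced above (Wong, 2018) trades classification accuracy off against parameter count and compute. A sketch assuming the standard definition Ω = 20·log10(a^α / (p^β·m^γ)) with its default exponents; the accuracy, parameter, and MAC values below are illustrative, not the paper's results:

```python
import math

def netscore(accuracy_pct, params_millions, macs_billions,
             alpha=2.0, beta=0.5, gamma=0.5):
    """NetScore: 20*log10(a^alpha / (p^beta * m^gamma)). Higher is
    better; accuracy helps, while parameters and multiply-accumulate
    operations (MACs) count against the network."""
    return 20.0 * math.log10(accuracy_pct ** alpha
                             / (params_millions ** beta
                                * macs_billions ** gamma))

# Illustrative values: a small efficient network vs. a large one.
small_net = netscore(accuracy_pct=73.0, params_millions=5.3, macs_billions=0.39)
large_net = netscore(accuracy_pct=75.0, params_millions=138.0, macs_billions=15.5)
```

Even with slightly lower accuracy, the smaller network wins on NetScore, which captures the abstract's point that the best architecture for onboard inference is not simply the most accurate one.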
Collapse
Affiliation(s)
- Brokoslaw Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- *Correspondence: Brokoslaw Laschowski
| | - William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| | - Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| | - John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| |
Collapse
|
17
|
Babič J, Laffranchi M, Tessari F, Verstraten T, Novak D, Šarabon N, Ugurlu B, Peternel L, Torricelli D, Veneman JF. Challenges and solutions for application and wider adoption of wearable robots. WEARABLE TECHNOLOGIES 2021; 2:e14. [PMID: 38486636 PMCID: PMC10936284 DOI: 10.1017/wtc.2021.13] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 08/25/2021] [Accepted: 09/18/2021] [Indexed: 03/17/2024]
Abstract
The science and technology of wearable robots are steadily advancing, and the use of such robots in our everyday life appears to be within reach. Nevertheless, widespread adoption of wearable robots should not be taken for granted, especially since many recent attempts to bring them to real-life applications resulted in mixed outcomes. The aim of this article is to address the current challenges that are limiting the application and wider adoption of wearable robots that are typically worn over the human body. We categorized the challenges into mechanical layout, actuation, sensing, body interface, control, human-robot interfacing and coadaptation, and benchmarking. For each category, we discuss specific challenges and the rationale for why solving them is important, followed by an overview of relevant recent works. We conclude with an opinion that summarizes possible solutions that could contribute to the wider adoption of wearable robots.
Collapse
Affiliation(s)
- Jan Babič
- Laboratory for Neuromechanics and Biorobotics, Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia
| | - Matteo Laffranchi
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
| | - Federico Tessari
- Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
| | - Tom Verstraten
- Robotics & Multibody Mechanics Research Group, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
| | - Domen Novak
- University of Wyoming, Laramie, Wyoming, USA
| | - Nejc Šarabon
- Faculty of Health Sciences, University of Primorska, Izola, Slovenia
| | - Barkan Ugurlu
- Biomechatronics Laboratory, Faculty of Engineering, Ozyegin University, Istanbul, Turkey
| | - Luka Peternel
- Delft Haptics Lab, Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
| | - Diego Torricelli
- Cajal Institute, Spanish National Research Council, Madrid, Spain
| | | |
Collapse
|
18
|
Su B, Liu YX, Gutierrez-Farewik EM. Locomotion Mode Transition Prediction Based on Gait-Event Identification Using Wearable Sensors and Multilayer Perceptrons. SENSORS 2021; 21:s21227473. [PMID: 34833549 PMCID: PMC8620781 DOI: 10.3390/s21227473] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Revised: 11/04/2021] [Accepted: 11/06/2021] [Indexed: 11/16/2022]
Abstract
People walk on different types of terrain daily; for instance, level-ground walking, ramp and stair ascent and descent, and stepping over obstacles are common activities in daily life. Movement patterns change as people move from one terrain to another. The prediction of transitions between locomotion modes is important for developing assistive devices, such as exoskeletons, as the optimal assistive strategies may differ for different locomotion modes. The prediction of locomotion mode transitions is often accompanied by gait-event detection, which provides important information about critical events during locomotion, such as foot contact (FC) and toe off (TO). In this study, we introduce a method to integrate locomotion mode prediction and gait-event identification into one machine learning framework comprising two multilayer perceptrons (MLPs). Input features to the framework were fused data from wearable sensors, specifically electromyography (EMG) sensors and inertial measurement units (IMUs). The first MLP successfully identified FC and TO: FC events were identified accurately, and a small number of misclassifications occurred only near TO events. A small time difference (2.5 ms and −5.3 ms for FC and TO, respectively) was found between predicted and true gait events. The second MLP correctly identified walking, ramp ascent, and ramp descent transitions with best aggregate accuracies of 96.3%, 90.1%, and 90.6%, respectively, with sufficient prediction time prior to the critical events. The models in this study demonstrate high accuracy in predicting transitions between different locomotion modes, using data from the EMG and IMU sensors, in the same side's mid- to late stance of the stride prior to the step into the new mode. Our results may help assistive devices achieve smooth and seamless transitions between different locomotion modes for those with motor disorders.
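The two-stage framework described above (gait-event detection feeding mode prediction) can be sketched as follows; simple threshold rules stand in for the two MLPs, and the signal, thresholds, and mode labels are illustrative assumptions:

```python
def detect_events(load, threshold=0.5):
    """Stage 1 stand-in: FC on an upward threshold crossing of the
    foot-load signal, TO on a downward crossing."""
    events = []
    for i in range(1, len(load)):
        if load[i - 1] < threshold <= load[i]:
            events.append((i, "FC"))
        elif load[i - 1] >= threshold > load[i]:
            events.append((i, "TO"))
    return events

def predict_mode(load, index):
    """Stage 2 stand-in: locomotion mode from the local slope."""
    return "ramp_ascent" if load[index] - load[max(0, index - 5)] > 0 else "walk"

def run_pipeline(load):
    """Detect gait events, then predict the mode at each event."""
    return [(i, kind, predict_mode(load, i)) for i, kind in detect_events(load)]

# Two strides of a toy foot-load signal: low in swing, high in stance.
gait = ([0.0] * 10 + [1.0] * 10) * 2
events = run_pipeline(gait)
```

Coupling the two stages this way means mode predictions are issued at gait events, which is how the framework obtains prediction time ahead of the step into the new mode.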
Collapse
Affiliation(s)
- Binbin Su
- KTH MoveAbility Lab, Department of Engineering Mechanics, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (B.S.); (Y.-X.L.)
| | - Yi-Xing Liu
- KTH MoveAbility Lab, Department of Engineering Mechanics, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (B.S.); (Y.-X.L.)
| | - Elena M. Gutierrez-Farewik
- KTH MoveAbility Lab, Department of Engineering Mechanics, KTH Royal Institute of Technology, 10044 Stockholm, Sweden; (B.S.); (Y.-X.L.)
- Department of Women’s and Children’s Health, Karolinska Institute, 17177 Stockholm, Sweden
- Correspondence:
| |
Collapse
|
19
|
Laschowski B, McNally W, Wong A, McPhee J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4631-4635. [PMID: 34892246 DOI: 10.1109/embc46164.2021.9630064] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database, the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, which were annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
Collapse
|
20
|
Narayan A, Reyes FA, Ren M, Haoyong Y. Real-Time Hierarchical Classification of Time Series Data for Locomotion Mode Detection. IEEE J Biomed Health Inform 2021; 26:1749-1760. [PMID: 34410932 DOI: 10.1109/jbhi.2021.3106110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Accurate real-time estimation of motion intent is critical for rendering useful assistance using wearable robotic prosthetic and exoskeleton devices during user-initiated motions. We aim to evaluate hierarchical classification as a strategy for real-time locomotion mode recognition for the control of wearable robotic prosthetics and exoskeletons during user-initiated motions. METHODS We collect motion data from 8 subjects using a set of 7 inertial sensors for 16 lower-limb locomotion modes of different specificities. A CNN-based hierarchical classifier is trained to classify the modes into a specified label hierarchy. We measure the accuracy, stability, behaviour during mode transitions, and suitability for real-time inference of the classifier. RESULTS The method achieves stable classification of locomotion modes using 1280 ms of time history data. It achieves an average classification accuracy of 94.34% and an average AU(PRC) of 0.773, comparable to similar classifiers. The method produces more informative classifications at transitions between modes. Less specific classes are classified earlier than more specific classes in the hierarchy. The inference step of the classifier can be executed in less than 2 ms on embedded hardware, indicating suitability for real-time operation. CONCLUSION Hierarchical classification can achieve accurate detection of locomotion modes and can break up mode transitions into multiple transitions between modes of different specificity. SIGNIFICANCE Multi-specific hierarchical classification of locomotion modes could lead to smoother, finer-grained control adaptation of wearable robots during locomotion mode transitions.
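The hierarchical strategy described above, in which less specific classes can be committed earlier than more specific ones, can be sketched as follows. The label hierarchy, confidences, and threshold are illustrative assumptions, not the authors' CNN outputs:

```python
HIERARCHY = {  # leaf mode -> path from coarse to fine (illustrative)
    "stair_ascent": ("locomotion", "stairs", "stair_ascent"),
    "stair_descent": ("locomotion", "stairs", "stair_descent"),
    "level_walk": ("locomotion", "walk", "level_walk"),
}

def hierarchical_label(leaf_confidences, threshold=0.8):
    """Report the most specific label whose aggregated confidence
    clears the threshold, so coarse labels commit before fine ones."""
    best = "unknown"
    for depth in range(3):  # walk the hierarchy coarse -> fine
        level = {}
        for leaf, path in HIERARCHY.items():
            node = path[depth]
            level[node] = level.get(node, 0.0) + leaf_confidences.get(leaf, 0.0)
        node, conf = max(level.items(), key=lambda kv: kv[1])
        if conf < threshold:
            break  # not confident enough to go deeper
        best = node
    return best

# Mid-transition: confident it is stairs, unsure of ascent vs. descent.
label = hierarchical_label(
    {"stair_ascent": 0.45, "stair_descent": 0.45, "level_walk": 0.10})
```

During a transition the classifier can thus report "stairs" before it can distinguish ascent from descent, which is the "more informative classifications at transitions" behavior the abstract describes.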
Collapse
|
21
|
Mouchoux J, Carisi S, Dosen S, Farina D, Schilling AF, Markovic M. Artificial Perception and Semiautonomous Control in Myoelectric Hand Prostheses Increases Performance and Decreases Effort. IEEE T ROBOT 2021. [DOI: 10.1109/tro.2020.3047013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
22
|
Review of control strategies for lower-limb exoskeletons to assist gait. J Neuroeng Rehabil 2021; 18:119. [PMID: 34315499 PMCID: PMC8314580 DOI: 10.1186/s12984-021-00906-3] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 06/25/2021] [Indexed: 12/20/2022] Open
Abstract
Background Many lower-limb exoskeletons have been developed to assist gait, exhibiting a large range of control methods. The goal of this paper is to review and classify these control strategies, which determine how these devices interact with the user. Methods In addition to covering the recent publications on the control of lower-limb exoskeletons for gait assistance, an effort has been made to review the controllers independently of the hardware and implementation aspects. The common 3-level structure (high, middle, and low levels) is first used to separate the continuous behavior (mid-level) from the implementation of position/torque control (low-level) and the detection of the terrain or user’s intention (high-level). Within these levels, different approaches (functional units) have been identified and combined to describe each considered controller. Results 291 references have been considered and sorted by the proposed classification. The methods identified at the high level are manual user input, brain interfaces, or automatic mode detection based on the terrain or the user’s movements. At the mid level, synchronization is most often based on manual triggers by the user, discrete events (followed by state machines or time-based progression), or continuous estimations using state variables. The desired action is determined based on position/torque profiles, model-based calculations, or other custom functions of the sensory signals. At the low level, position or torque controllers are used to carry out the desired actions. In addition to a more detailed description of these methods, the variants of implementation within each one are also compared and discussed in the paper. Conclusions By listing and comparing the features of the reviewed controllers, this work can help in understanding the numerous techniques found in the literature. The main identified trends are the use of pre-defined trajectories for full mobilization and event-triggered (or adaptive-frequency-oscillator-synchronized) torque profiles for partial assistance. More recently, advanced methods to adapt the position/torque profiles online and automatically detect terrains or locomotion modes have become more common, but these are largely still limited to laboratory settings. An analysis of the possible underlying reasons for the identified trends is also carried out and opportunities for further studies are discussed. Supplementary Information The online version contains supplementary material available at 10.1186/s12984-021-00906-3.
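The 3-level structure used for classification in this review can be sketched as a minimal skeleton; the mode rule, torque profile, and gain below are illustrative assumptions, not any specific reviewed controller:

```python
import math

def high_level(terrain_slope_deg):
    """High level: pick the locomotion mode (here from assumed
    terrain sensing; could equally be manual user input)."""
    return "ramp" if abs(terrain_slope_deg) > 3.0 else "level"

def mid_level(mode, gait_phase):
    """Mid level: desired joint torque from a per-mode profile
    evaluated at the current gait phase (0..1)."""
    amplitude = {"level": 10.0, "ramp": 18.0}[mode]  # N*m, illustrative
    return amplitude * math.sin(math.pi * gait_phase)

def low_level(desired_torque, measured_torque, kp=5.0):
    """Low level: proportional torque-tracking command."""
    return kp * (desired_torque - measured_torque)

mode = high_level(terrain_slope_deg=8.0)
desired = mid_level(mode, gait_phase=0.5)      # peak of the profile
command = low_level(desired, measured_torque=15.0)
```

The separation mirrors the review's classification: swapping the high-level detector, the mid-level profile, or the low-level tracker changes one functional unit without redesigning the others.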
Collapse
|
23
|
Zhang K, Luo J, Xiao W, Zhang W, Liu H, Zhu J, Lu Z, Rong Y, de Silva CW, Fu C. A Subvision System for Enhancing the Environmental Adaptability of the Powered Transfemoral Prosthesis. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:3285-3297. [PMID: 32203049 DOI: 10.1109/tcyb.2020.2978216] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Visual information is indispensable to human locomotion in complex environments. Although amputees can perceive environmental information with their eyes, they cannot transmit the neural signals to prostheses directly. To augment human-prosthesis interaction, this article introduces a subvision system that can perceive environments actively, assist in controlling the powered prosthesis predictively, and accordingly reconstruct a complete vision-locomotion loop for transfemoral amputees. By using deep learning, the subvision system can classify common static terrains (e.g., level ground, stairs, and ramps) and estimate the corresponding motion intents of amputees with high accuracy (98%). After applying the subvision system to the locomotion control system, the powered prosthesis can help amputees achieve nonrhythmic locomotion naturally, including switching between different locomotion modes and crossing obstacles. The subvision system can also recognize dynamic objects, such as an unexpected obstacle approaching the amputee, and assist in generating an agile obstacle-avoidance reflex movement. The experimental results demonstrate that the subvision system can cooperate with the powered prosthesis to reconstruct a complete vision-locomotion loop, which enhances the environmental adaptability of the amputees.
Collapse
|
24
|
Zhang K, Liu H, Fan Z, Chen X, Leng Y, de Silva CW, Fu C. Foot Placement Prediction for Assistive Walking by Fusing Sequential 3D Gaze and Environmental Context. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062003] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
25
|
Investigating neural correlates of locomotion transition via temporal relation of EEG and EOG-recorded eye movements. Comput Biol Med 2021; 132:104350. [PMID: 33799217 DOI: 10.1016/j.compbiomed.2021.104350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 03/17/2021] [Accepted: 03/17/2021] [Indexed: 11/18/2022]
Abstract
The present study examines the temporal relation of walking behavior during a locomotion transition (walking to stair ascent) to electrooculography (EOG) signals recorded from eye movement. Further, electroencephalography (EEG) signals from the occipital region of the brain are processed to understand the relative occurrence of events in the EOG and EEG signals during the transition. The dipole sources in the occipital region with reference to EOG detection were estimated from independent components and then clustered using the k-means algorithm. The dynamics of the dipoles in the occipital cluster in different frequency bands revealed significant desynchronization in the β and low γ bands, followed by resynchronization. This transitional behavior coincided with transient features suggesting possible saccadic movement of the eyes in the EOG signal. With data from six able-bodied participants, the desynchronization in EEG from the occipital region was detected nearly 2.2 ± 0.5 s before the transition. Using preprocessing techniques on the EOG signal followed by detecting saccades from the derivative of the EOG signal, the eye movements were detected nearly 2.5 ± 0.5 s before the transition. The EOG-decoded intention of transition appeared as early as 3.0 ± 1.63 s before desynchronization in the EEG. A paired t-test analysis showed that EOG-based intent decoding reflects the transition significantly earlier than occipital EEG (p < 0.00001). This study could lead to a multi-modal neural-machine interface that may produce results superior to previous attempts involving only EEG and EMG signals.
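The saccade-detection step described above (thresholding the derivative of the EOG signal) can be sketched minimally; the trace and threshold are illustrative assumptions, not the study's preprocessing pipeline:

```python
def detect_saccades(eog, threshold=0.3):
    """Flag samples where the per-sample derivative of the EOG trace
    exceeds the threshold in magnitude (candidate saccade samples)."""
    derivative = [eog[i + 1] - eog[i] for i in range(len(eog) - 1)]
    return [i for i, d in enumerate(derivative) if abs(d) > threshold]

# Slow drift, an abrupt gaze shift (saccade), then steady fixation.
trace = [0.00, 0.01, 0.02, 0.03, 0.53, 1.03, 1.04, 1.05]
onsets = detect_saccades(trace)
```

Saccades produce step-like jumps in the EOG trace, so their derivative stands out sharply against slow drift, which is why a simple derivative threshold can time-stamp the gaze shift ahead of the locomotion transition.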
Collapse
|
26
|
Laschowski B, McNally W, Wong A, McPhee J. ExoNet Database: Wearable Camera Images of Human Locomotion Environments. Front Robot AI 2021; 7:562061. [PMID: 33501327 PMCID: PMC7805730 DOI: 10.3389/frobt.2020.562061] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 11/06/2020] [Indexed: 12/02/2022] Open
Affiliation(s)
- Brock Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada.,Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| | - William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada.,Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| | - Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada.,Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| | - John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada.,Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
| |
Collapse
|
27
|
Han Y, Liu C, Yan L, Ren L. Design of Decision Tree Structure with Improved BPNN Nodes for High-Accuracy Locomotion Mode Recognition Using a Single IMU. SENSORS (BASEL, SWITZERLAND) 2021; 21:E526. [PMID: 33450967 PMCID: PMC7828453 DOI: 10.3390/s21020526] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 01/07/2021] [Accepted: 01/11/2021] [Indexed: 12/16/2022]
Abstract
Smart wearable robotic systems, such as exoskeleton assist devices and powered lower-limb prostheses, can rapidly and accurately realize human-machine interaction through a locomotion mode recognition system. However, previous locomotion mode recognition studies usually adopted more sensors for higher accuracy along with effective intelligent algorithms to recognize multiple locomotion modes simultaneously. To reduce the sensor burden on users and recognize more locomotion modes, we designed a novel decision tree structure (DTS) that uses an improved backpropagation neural network (IBPNN) as its judgment nodes, named IBPNN-DTS, after analyzing the experimental locomotion mode data using the original sensor values within a 200-ms time window from a single inertial measurement unit, to hierarchically identify nine common locomotion modes (level walking at three speeds, ramp ascent/descent, stair ascent/descent, Sit, and Stand). In addition, we reduced the number of parameters in the IBPNN for structure optimization and adopted the artificial bee colony (ABC) algorithm to perform a global search for initial weight and threshold values, eliminating system uncertainty, because randomly generated initial values tend to fail to converge or fall into local optima. Experimental results demonstrate that the recognition accuracy of the IBPNN-DTS with ABC optimization (ABC-IBPNN-DTS) was up to 96.71% (97.29% for the IBPNN-DTS). Compared to the IBPNN-DTS without optimization, the number of parameters in ABC-IBPNN-DTS shrank by 66% with only a 0.58% reduction in accuracy, while the classification model maintained high robustness.
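The decision-tree structure with learned judgment nodes can be sketched as follows; simple threshold rules stand in for the improved BPNN nodes, and the features and thresholds are illustrative assumptions:

```python
def node_is_dynamic(f):
    """Root node: moving (locomotion) vs. static (Sit/Stand)."""
    return f["accel_var"] > 0.1

def node_is_stairs(f):
    """Second node: stair modes vs. level/ramp walking."""
    return f["pitch_range_deg"] > 20.0

def classify(f):
    """Walk the tree from the root to a leaf locomotion mode."""
    if not node_is_dynamic(f):
        return "Stand" if f["trunk_upright"] else "Sit"
    if node_is_stairs(f):
        return "stair_ascent" if f["pitch_mean_deg"] > 0 else "stair_descent"
    return "level_walking"

mode = classify({"accel_var": 0.5, "pitch_range_deg": 25.0,
                 "pitch_mean_deg": 5.0, "trunk_upright": True})
```

The tree's appeal is that each node solves a small binary or few-way decision from one IMU's features, so many modes can be covered hierarchically without one large flat classifier.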
Collapse
Affiliation(s)
- Yang Han
- The School of Mechanical Science and Aerospace Engineering, Jilin University, Changchun 130000, China;
- Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130000, China
| | - Chunbao Liu
- The School of Mechanical Science and Aerospace Engineering, Jilin University, Changchun 130000, China;
- Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130000, China
| | - Lingyun Yan
- The School of Mechanical, Aerospace and Civil Engineering, University of Manchester, Manchester M13 9PL, UK;
| | - Lei Ren
- Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130000, China
- The School of Mechanical, Aerospace and Civil Engineering, University of Manchester, Manchester M13 9PL, UK;
| |
Collapse
|
28
|
Li M, Wen Y, Gao X, Si J, Huang H. Toward Expedited Impedance Tuning of a Robotic Prosthesis for Personalized Gait Assistance by Reinforcement Learning Control. IEEE T ROBOT 2021. [DOI: 10.1109/tro.2021.3078317] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
29
|
Hunt G, Hood S, Lenzi T. Stand-Up, Squat, Lunge, and Walk With a Robotic Knee and Ankle Prosthesis Under Shared Neural Control. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2021; 2:267-277. [PMID: 35402979 PMCID: PMC8901006 DOI: 10.1109/ojemb.2021.3104261] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 07/28/2021] [Accepted: 08/02/2021] [Indexed: 11/10/2022] Open
Abstract
Emerging robotic knee and ankle prostheses present an opportunity to restore the biomechanical function of missing biological legs, which is not possible with conventional passive prostheses. However, challenges in coordinating the robotic prosthesis movements with the user's neuromuscular system and transitioning between activities limit the real-world viability of these devices. Here we show that a shared neural control approach combining neural signals from the user's residual limb with robot control improves functional mobility in individuals with above-knee amputation. The proposed shared neural controller enables subjects to stand up and sit down under a variety of conditions, squat, lunge, walk, and seamlessly transition between activities without explicit classification of the intended movement. No other available technology can enable individuals with above-knee amputations to achieve this level of mobility. Further, we show that compared to using a conventional passive prosthesis, the proposed shared neural controller significantly reduced muscle effort in both the intact limb (21-51% decrease) and the residual limb (38-48% decrease). We also found that the body weight lifted by the prosthesis side increased significantly while standing up with the robotic leg prosthesis (49%-68% increase), leading to better loading symmetry (43-46% of body weight on the prosthesis side). By decreasing muscle effort and improving symmetry, the proposed shared neural controller has the potential to improve amputee mobility and decrease the risk of falls compared to using conventional passive prostheses.
Collapse
Affiliation(s)
- Grace Hunt
- Department of Mechanical Engineering and Utah Robotics CenterUniversity of Utah Salt Lake City UT 84112 USA
| | - Sarah Hood
- Department of Mechanical Engineering and Utah Robotics CenterUniversity of Utah Salt Lake City UT 84112 USA
| | - Tommaso Lenzi
- Department of Mechanical Engineering and Utah Robotics CenterUniversity of Utah Salt Lake City UT 84112 USA
| |
Collapse
|
30
|
Labarrière F, Thomas E, Calistri L, Optasanu V, Gueugnon M, Ornetti P, Laroche D. Machine Learning Approaches for Activity Recognition and/or Activity Prediction in Locomotion Assistive Devices-A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2020; 20:E6345. [PMID: 33172158 PMCID: PMC7664393 DOI: 10.3390/s20216345] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 10/22/2020] [Accepted: 11/04/2020] [Indexed: 01/16/2023]
Abstract
Locomotion assistive devices equipped with a microprocessor can potentially adapt their behavior automatically when the user transitions from one locomotion mode to another. Many developments in the field have come from machine-learning-driven controllers on locomotion assistive devices that recognize or predict the current locomotion mode or the upcoming one. This review synthesizes the machine learning algorithms designed to recognize or predict a locomotion mode in order to automatically adapt the behavior of a locomotion assistive device. A systematic review was conducted on the Web of Science and MEDLINE databases (as well as in the retrieved papers) to identify articles published between 1 January 2000 and 31 July 2020. This systematic review is reported in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines and is registered on Prospero (CRD42020149352). Study characteristics, sensors and algorithms used, accuracy, and robustness were summarized. In total, 1343 records were identified and 58 studies were included in this review. The most frequently investigated experimental conditions were level-ground walking along with stair and ramp ascent/descent activities. The machine learning algorithms implemented in the included studies reached global mean accuracies of around 90%. However, the robustness of those algorithms still needs to be evaluated more broadly, notably in everyday life. We also propose some guidelines for homogenizing future reports.
Affiliation(s)
- Floriant Labarrière
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Elizabeth Thomas
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- Laurine Calistri
- PROTEOR, 6 rue de la Redoute, CS 37833, CEDEX 21078 Dijon, France
- Virgil Optasanu
- ICB, UMR 6303 CNRS, Université de Bourgogne Franche-Comté, 9 Av. Alain Savary, CEDEX 21078 Dijon, France
- Mathieu Gueugnon
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Paul Ornetti
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
- Department of Rheumatology, Dijon University Hospital, 21079 Dijon, France
- Davy Laroche
- INSERM, UMR1093-CAPS, Université de Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000 Dijon, France
- INSERM, CIC 1432, CHU Dijon-Bourgogne, Centre d’Investigation Clinique, Module Plurithématique, Plateforme d’Investigation Technologique, 21079 Dijon, France
31
Khademi G, Simon D. Toward Minimal-Sensing Locomotion Mode Recognition for a Powered Knee-Ankle Prosthesis. IEEE Trans Biomed Eng 2020; 68:967-979. [PMID: 32784127 DOI: 10.1109/tbme.2020.3016129] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Locomotion mode recognition (LMR) enables seamless and natural transitions between low-level control systems in a powered prosthesis. We present a new optimization framework for LMR that eliminates irrelevant or redundant features and measurement signals while still maintaining performance. METHODS We use multi-objective biogeography-based optimization to find a compromise solution between performance and feature set size. Experimental data are collected from four transfemoral users walking with a powered knee-ankle prosthesis. We compare the performance of LMR systems trained with the optimal feature subsets and with the full feature set using a deep neural network classifier across six locomotion modes: standing, flat-ground walking, stair up/down, and ramp up/down. RESULTS Statistical tests indicate that classifier performance using the optimal feature subsets is statistically equal to that using the full feature set. The LMR system trained with an optimal subset achieves steady-state and transitional error rates of 1.98% and 4.09%, respectively, while using only approximately 41% of the available features and 53% of the sensors. CONCLUSION Results thus indicate the capability of the proposed framework to achieve LMR systems that are simultaneously accurate and low in complexity for transfemoral individuals with powered prostheses. SIGNIFICANCE This framework could reduce the frequency of clinical visits needed for sensor replacement and calibration, which may save health care costs as well as the prosthesis user's time and energy.
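The compromise between error rate and feature-set size described in this abstract is a Pareto trade-off. As a loose illustration only (not the paper's biogeography-based optimizer; the function name and candidate scores below are hypothetical), the non-dominated candidates can be filtered like this, where each candidate is a tuple of objectives to minimize, e.g. (classification error, feature count):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions; each solution is a tuple of
    objectives to MINIMIZE, e.g. (classification_error, n_features)."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical candidates: (error rate, number of features)
candidates = [(0.02, 40), (0.05, 10), (0.03, 20), (0.06, 30)]
front = pareto_front(candidates)  # (0.06, 30) is dominated by (0.05, 10)
```

A final "compromise" solution is then chosen from this front, trading a little accuracy for far fewer sensors.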
32
Mendez J, Hood S, Gunnel A, Lenzi T. Powered knee and ankle prosthesis with indirect volitional swing control enables level-ground walking and crossing over obstacles. Sci Robot 2020; 5:5/44/eaba6635. [PMID: 33022611 DOI: 10.1126/scirobotics.aba6635] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Accepted: 06/25/2020] [Indexed: 11/02/2022]
Abstract
Powered prostheses aim to mimic the missing biological limb with controllers that are finely tuned to replicate the nominal gait pattern of non-amputee individuals. Unfortunately, this control approach poses a problem with real-world ambulation, which includes tasks such as crossing over obstacles, where the prosthesis trajectory must be modified to provide adequate foot clearance and ensure timely foot placement. Here, we show an indirect volitional control approach that enables prosthesis users to walk at different speeds while smoothly and continuously crossing over obstacles of different sizes without explicit classification of the environment. At the high level, the proposed controller relies on a heuristic algorithm to continuously change the maximum knee flexion angle and the swing duration in harmony with the user's residual limb. At the low level, minimum-jerk planning is used to continuously adapt the swing trajectory while maximizing smoothness. Experiments with three individuals with above-knee amputation show that the proposed control approach allows for volitional control of foot clearance, which is necessary to negotiate environmental barriers. Our study suggests that a powered prosthesis controller with intrinsic, volitional adaptability may provide prosthesis users with functionality that is not currently available, facilitating real-world ambulation.
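The minimum-jerk planning mentioned in this abstract has a well-known closed form for point-to-point motion with zero velocity and acceleration at both endpoints. A minimal sketch of that profile (illustrative only, not the authors' controller; the angles and duration below are made up):

```python
def min_jerk(theta0, thetaf, T):
    """Return a minimum-jerk trajectory theta(t) from theta0 to thetaf over
    duration T. Zero velocity/acceleration at both ends makes the profile
    maximally smooth (it minimizes integrated squared jerk)."""
    def theta(t):
        s = min(max(t / T, 0.0), 1.0)  # normalized time, clamped to [0, 1]
        return theta0 + (thetaf - theta0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return theta

# Hypothetical replan mid-swing to a larger knee flexion for obstacle clearance
swing = min_jerk(theta0=5.0, thetaf=65.0, T=0.4)  # degrees, seconds
```

Because the boundary conditions are smooth, the swing target and duration can be re-planned continuously without introducing kinks in the commanded trajectory.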
Affiliation(s)
- Joel Mendez
- Department of Mechanical Engineering and Utah Robotics Center, University of Utah, Salt Lake City, UT, USA
- Sarah Hood
- Department of Mechanical Engineering and Utah Robotics Center, University of Utah, Salt Lake City, UT, USA
- Andy Gunnel
- Department of Mechanical Engineering and Utah Robotics Center, University of Utah, Salt Lake City, UT, USA
- Tommaso Lenzi
- Department of Mechanical Engineering and Utah Robotics Center, University of Utah, Salt Lake City, UT, USA
33
Tschiedel M, Russold MF, Kaniusas E. Relying on more sense for enhancing lower limb prostheses control: a review. J Neuroeng Rehabil 2020; 17:99. [PMID: 32680530 PMCID: PMC7368691 DOI: 10.1186/s12984-020-00726-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 07/06/2020] [Indexed: 12/02/2022] Open
Abstract
Modern lower limb prostheses have the capability to replace missing body parts and improve patients' quality of life. However, missing environmental information often makes a seamless adaptation to transitions between different forms of locomotion challenging. The aim of this review is to identify the progress made in this area over the last decade, addressing two main questions: which types of novel sensors for environmental awareness are used in lower limb prostheses, and how do they enhance device control towards more comfort and safety. A literature search was conducted on two Internet databases, PubMed and IEEE Xplore. Based on the inclusion and exclusion criteria, 32 papers were selected for analysis: 18 related to explicit environmental sensing and 14 to implicit environmental sensing. Characteristics were discussed with a focus on update rate and resolution as well as on computing power and energy consumption. Our analysis identified numerous state-of-the-art sensors, some of which are able to "look through" clothing or cosmetic covers. Five control categories were identified describing how "next-generation prostheses" could be extended. There is a clear tendency towards upcoming object or terrain prediction concepts using all types of distance- and depth-based sensors. Other advanced strategies, such as bilateral gait segmentation from unilateral sensors, could also play an important role in movement-dependent control applications. The studies demonstrated promising accuracy in well-controlled laboratory settings, but it is unclear how the systems will perform in real-world environments, both indoors and outdoors. At the moment, the main limitation is the necessity of an unobstructed field of view.
Affiliation(s)
- Michael Tschiedel
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, 1040 Vienna, Austria
- Global Research, Ottobock Healthcare Products GmbH, 1110 Vienna, Austria
- Eugenijus Kaniusas
- Research Group Biomedical Sensing, TU Wien, Institute of Electrodynamics, Microwave and Circuit Engineering, 1040 Vienna, Austria
34
Stolyarov R, Carney M, Herr H. Accurate Heuristic Terrain Prediction in Powered Lower-Limb Prostheses Using Onboard Sensors. IEEE Trans Biomed Eng 2020; 68:384-392. [PMID: 32406822 DOI: 10.1109/tbme.2020.2994152] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE This study describes the development and offline validation of a heuristic algorithm for accurate prediction of ground terrain in a lower limb prosthesis. This method is based on inference of the ground terrain geometry using estimation of prosthetic limb kinematics during gait with a single integrated inertial measurement unit. METHODS We asked five subjects with below-knee amputations to traverse level ground, stairs, and ramps using a high-range-of-motion powered prosthesis while internal sensor data were remotely logged. We used these data to develop three terrain prediction algorithms. The first two employed state-of-the-art machine learning approaches, while the third was a directly tuned heuristic using thresholds on estimated prosthetic ankle joint translations and ground slope. We compared the performance of these algorithms using resubstitution error for the machine learning algorithms and overall error for the heuristic algorithm. RESULTS Our optimal machine learning algorithm attained a resubstitution error of 3.4% using 45 features, while our heuristic method attained an overall prediction error of 2.8% using only 5 features derived from estimation of ground slope and horizontal and vertical ankle joint displacement. Compared with pattern recognition, the heuristic performed better on each individual subject, and across both level and non-level strides. CONCLUSION AND SIGNIFICANCE These results demonstrate a method for heuristic prediction of ground terrain in a powered prosthesis. The method is more accurate, more interpretable, and less computationally expensive than machine learning methods considered state-of-the-art for intent recognition, and relies only on integrated prosthesis sensors. Finally, the method provides intuitively tunable thresholds to improve performance for specific walking conditions.
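As a rough sketch of the heuristic idea described above, thresholds on estimated ankle-joint displacement and ground slope, one could write something like the following. The threshold values, class names, and function signature are illustrative, not the paper's tuned parameters:

```python
def classify_terrain(dz, slope_deg, dz_thresh=0.08, slope_thresh=3.0):
    """Toy threshold-based terrain classifier in the spirit of the abstract.

    dz        : estimated vertical ankle-joint displacement over a stride (m)
    slope_deg : estimated ground slope (degrees)
    Thresholds are illustrative placeholders, not the paper's values.
    """
    if dz > dz_thresh:
        return "stair_ascent"
    if dz < -dz_thresh:
        return "stair_descent"
    if slope_deg > slope_thresh:
        return "ramp_ascent"
    if slope_deg < -slope_thresh:
        return "ramp_descent"
    return "level_ground"
```

The appeal of this style of rule, as the abstract notes, is that each threshold is directly interpretable and can be re-tuned per user or walking condition.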
35
Gao F, Liu G, Liang F, Liao WH. IMU-Based Locomotion Mode Identification for Transtibial Prostheses, Orthoses, and Exoskeletons. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1334-1343. [PMID: 32286999 DOI: 10.1109/tnsre.2020.2987155] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Active transtibial prostheses, orthoses, and exoskeletons hold the promise of improving the mobility of lower-limb impaired or amputated individuals. Locomotion mode identification (LMI) is essential for these devices to precisely reproduce the required function on different terrains. In this study, a terrain geometry-based LMI algorithm is proposed. Built environments tend to follow the inclination grade of the ground: for example, when the inclination angle is between 7 and 15 degrees, the environment is typically a ramp, while an inclination angle of around 30 degrees is usually handled with stairs. Given that, the locomotion mode/terrain can be classified by the inclination grade. Besides, human feet tend to move along the surface of the terrain to minimize the energy expenditure of transporting the lower limbs while obtaining the required foot clearance. Hence, the foot trajectory estimated by an IMU was used to derive the inclination grade of the traversed terrain and thereby identify the locomotion mode. In addition, a novel trigger condition (an elliptical boundary) is proposed to activate the decision-making of the LMI algorithm before the next foot strike, thus leaving enough time for preparatory work in the swing phase. When the estimated foot trajectory crosses the elliptical boundary, the decision-making is executed. Experimental results show that the average accuracy for three healthy subjects and three below-knee amputees is 98.5% across five locomotion modes: level-ground walking, up slope, down slope, stair descent, and stair ascent. Moreover, all locomotion modes can be identified before the next foot strike.
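The inclination-grade bands quoted in this abstract (ramps between 7 and 15 degrees, stairs around 30 degrees) suggest a simple classification rule over the IMU-estimated foot displacement per stride. A toy sketch, with the band edges above 15 degrees and the function name chosen for illustration (the paper's elliptical trigger boundary is omitted):

```python
import math

def terrain_from_foot_trajectory(dx, dz):
    """Classify terrain from net horizontal (dx) and vertical (dz) foot
    displacement over a stride, using the inclination bands quoted in the
    abstract. Band edges here are illustrative simplifications."""
    grade = math.degrees(math.atan2(abs(dz), abs(dx)))
    ascending = dz > 0
    if grade < 7.0:
        return "level_ground"
    if grade <= 15.0:
        return "up_slope" if ascending else "down_slope"
    return "stair_ascent" if ascending else "stair_descent"
```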
36
Novo-Torres L, Ramirez-Paredes JP, Villarreal DJ. Obstacle Recognition using Computer Vision and Convolutional Neural Networks for Powered Prosthetic Leg Applications. Annu Int Conf IEEE Eng Med Biol Soc 2019:3360-3363. [PMID: 31946601 DOI: 10.1109/embc.2019.8857420] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
In this work we combine computer vision with a machine learning technique, convolutional neural networks (CNNs), to identify obstacles that powered prosthetic leg users might encounter during walking. Our motivation is that powered prosthetic legs could react in synchrony with their users by recognizing and anticipating the terrain in front of them. We focus on identifying stairs and doors that are within the visual field of a person. To achieve this, we used a compact CNN architecture to optimize image processing for real-time applications. We built and tested a wearable system prototype that included a camera mounted on a pair of glasses and a single-board computer. The prototype was used by able-bodied users to collect and label obstacle and non-obstacle videos, which were later used to train the CNN. In validation, the system was able to recognize around 90% of obstacles across different indoor and outdoor scenarios. The accuracy achieved and the practicality of the prototype show the potential of computer vision and machine learning in the field of powered prosthetic legs.
37
Laschowski B, McNally W, Wong A, McPhee J. Preliminary Design of an Environment Recognition System for Controlling Robotic Lower-Limb Prostheses and Exoskeletons. IEEE Int Conf Rehabil Robot 2020; 2019:868-873. [PMID: 31374739 DOI: 10.1109/icorr.2019.8779540] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Drawing inspiration from autonomous vehicles, using future environment information could improve the control of wearable biomechatronic devices for assisting human locomotion. To the authors' knowledge, this research represents the first documented investigation using machine vision and deep convolutional neural networks for environment recognition to support the predictive control of robotic lower-limb prostheses and exoskeletons. One participant was instrumented with a battery-powered, chest-mounted RGB camera system. Approximately 10 hours of video footage were experimentally collected while ambulating throughout unknown outdoor and indoor environments. The sampled images were preprocessed and individually labelled. A deep convolutional neural network was developed and trained to automatically recognize three walking environments: level ground, incline staircases, and decline staircases. The environment recognition system achieved 94.85% overall image classification accuracy. Extending these preliminary findings, future research should incorporate other environment classes (e.g., incline ramps) and integrate the environment recognition system with electromechanical sensors and/or surface electromyography for automated locomotion mode recognition. The challenges associated with implementing deep learning on wearable biomechatronic devices are discussed.
38
Krausz NE, Hargrove LJ. A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices. Sensors (Basel) 2019; 19:5238. [PMID: 31795240 PMCID: PMC6928925 DOI: 10.3390/s19235238] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 11/04/2019] [Accepted: 11/21/2019] [Indexed: 11/24/2022]
Abstract
Teleception is defined as sensing that occurs remotely, with no physical contact with the object being sensed. To emulate innate control systems of the human body, a control system for a semi- or fully autonomous assistive device not only requires feedforward models of desired movement, but also the environmental or contextual awareness that could be provided by teleception. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example, for performing hand grasp prediction or endpoint control of an arm assistive device; and gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize the recent and ongoing published work in this promising new area of research.
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
39
Krausz NE, Hu BH, Hargrove LJ. Subject- and Environment-Based Sensor Variability for Wearable Lower-Limb Assistive Devices. Sensors (Basel) 2019; 19:4887. [PMID: 31717471 PMCID: PMC6891559 DOI: 10.3390/s19224887] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 10/29/2019] [Accepted: 11/06/2019] [Indexed: 02/08/2023]
Abstract
Significant research effort has gone towards the development of powered lower limb prostheses that control power during gait. These devices use forward prediction based on electromyography (EMG), kinetics, and kinematics to tell the prosthesis which locomotion activity is desired. Unfortunately these predictions can have substantial errors, which can potentially lead to trips or falls. It is hypothesized that one reason for the significant prediction errors in the current control systems for powered lower-limb prostheses is the inter- and intra-subject variability of the data sources used for prediction. Environmental data, recorded from a depth sensor worn on a belt, should have less variability across trials and subjects than kinetics, kinematics, and EMG data, and thus its addition is proposed. The variability of each data source was analyzed after normalization to determine the intra-activity and intra-subject variability for each sensor modality. Then measures of separability, repeatability, clustering, and overall desirability were computed. Results showed that combining vision, EMG, IMU (inertial measurement unit), and goniometer features yielded the best separability, repeatability, clustering, and desirability across subjects and activities. These findings will likely be useful for a future forward predictor that incorporates vision-based environmental data for powered lower-limb prostheses and exoskeletons.
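"Separability" in this abstract can be quantified in several ways; one simple stand-in is a Fisher-style ratio of between-class to within-class scatter. This is an assumption for illustration, not necessarily the measure the authors used, and the function name and inputs are hypothetical:

```python
import numpy as np

def separability(features_by_class):
    """Fisher-style separability score: between-class scatter divided by
    within-class scatter. Input: {class_label: (n_samples, n_dims) array}.
    Higher values mean the classes are easier to tell apart."""
    means = {c: X.mean(axis=0) for c, X in features_by_class.items()}
    grand = np.mean(list(means.values()), axis=0)
    between = np.mean([np.sum((m - grand) ** 2) for m in means.values()])
    within = np.mean([np.mean(np.sum((X - means[c]) ** 2, axis=1))
                      for c, X in features_by_class.items()])
    return between / within
```

Scores like this can be compared per sensor modality (vision, EMG, IMU, goniometer) to rank which signals discriminate activities most reliably.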
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Blair H. Hu
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab (NEPOL), Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly RIC), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
40
Shim M, Han JI, Choi HS, Ha SM, Kim JH, Baek YS. Terrain Feature Estimation Method for a Lower Limb Exoskeleton Using Kinematic Analysis and Center of Pressure. Sensors (Basel) 2019; 19:4418. [PMID: 31614811 PMCID: PMC6832667 DOI: 10.3390/s19204418] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 10/02/2019] [Accepted: 10/10/2019] [Indexed: 11/16/2022]
Abstract
While controlling a lower limb exoskeleton that provides walking assistance to wearers, the walking terrain is an important factor that should be considered to meet performance and safety requirements. Therefore, we developed a method to estimate the slope and elevation using the contact points between the exoskeleton and the ground. We used the center of pressure as a contact point on the ground and calculated the location of the contact points on the walking terrain based on kinematic analysis of the exoskeleton. Then, the set of contact points collected from each step during walking was modeled, through the least-squares method, as a plane that represents the surface of the walking terrain. Finally, by comparing the normal vectors of the modeled planes for each step, features of the walking terrain were estimated. We analyzed the estimation accuracy of the proposed method through experiments on level ground, stairs, and a ramp. Classification using the estimated features showed recognition accuracy higher than 95% for all experimental motions. The proposed method approximately characterized the movement of the exoskeleton on various terrains even though no prior information on the walking terrain was provided. The method can enable exoskeleton systems to actively assist walking in various environments.
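The core computation described here, fitting a least-squares plane to per-step contact points and comparing surface normals, can be sketched compactly. Names are illustrative, and the paper's kinematic chain and center-of-pressure estimation are omitted:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c through 3D contact
    points; returns the (unnormalized) upward surface normal (-a, -b, 1)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return np.array([-a, -b, 1.0])

def slope_deg(normal):
    """Angle between the fitted surface normal and vertical, in degrees."""
    cosang = normal[2] / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(cosang)))
```

Comparing `slope_deg` (and the normal direction) between consecutive steps is one way to detect transitions such as level ground to ramp or to a step edge.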
Affiliation(s)
- Myounghoon Shim
- Motion Control Laboratory, Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
- Jong In Han
- Motion Control Laboratory, Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
- Ho Seon Choi
- Motion Control Laboratory, Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
- Seong Min Ha
- Motion Control Laboratory, Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
- Jung-Hoon Kim
- Construction Robot and Automation Laboratory, Department of Civil & Environmental Engineering, Yonsei University, Seoul 03722, Korea
- Yoon Su Baek
- Motion Control Laboratory, Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
41
Zhang K, Zhang W, Xiao W, Liu H, De Silva CW, Fu C. Sequential Decision Fusion for Environmental Classification in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1780-1790. [PMID: 31425118 DOI: 10.1109/tnsre.2019.2935765] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Powered prostheses are effective for helping amputees walk in a single environment, but these devices are inconvenient to use in complex environments. In order to help amputees walk in complex environments, prostheses need to understand the motion intent of amputees. Recently, researchers have found that vision sensors can be utilized to classify environments and predict the motion intent of amputees. Although previous studies have been able to classify environments accurately in offline analysis, the corresponding time delay has not been considered. To increase the accuracy and decrease the time delay of environmental classification, the present paper proposes a new decision fusion method in which the sequential decisions of environmental classification are fused by constructing a hidden Markov model and designing a transition probability matrix. The developed method is evaluated in indoor and outdoor walking experiments with five able-bodied subjects and three amputees. The results indicate that the proposed method can classify environments with accuracy improvements of 1.01% (indoor) and 2.48% (outdoor) over the previous voting method when a delay of only one frame is incorporated. The present method also achieves higher classification accuracy than recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) methods. At the same classification accuracy, the present method can decrease the time delay by 67 ms (indoor) and 733 ms (outdoor) in comparison to the previous voting method. Beyond classifying environments, the proposed decision fusion method may also be able to optimize sequential predictions of human motion intent.
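The fusion scheme described, per-frame classifier posteriors filtered through a hidden Markov model's transition matrix, amounts to forward filtering. A minimal sketch (the transition values, function name, and toy data are illustrative, not the paper's tuned matrix):

```python
import numpy as np

def fuse_decisions(frame_probs, T, prior=None):
    """Forward-filter a sequence of per-frame class posteriors through a
    transition matrix T (rows sum to 1); return the fused class per frame.
    A sticky diagonal in T suppresses spurious single-frame switches."""
    n = T.shape[0]
    belief = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, float)
    fused = []
    for p in frame_probs:
        belief = (T.T @ belief) * np.asarray(p, float)  # predict, then update
        belief /= belief.sum()
        fused.append(int(np.argmax(belief)))
    return fused
```

With a sticky matrix such as `[[0.9, 0.1], [0.1, 0.9]]`, a single noisy frame that would flip a per-frame argmax is outvoted by the accumulated belief, which is the effect the abstract compares against simple voting.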
42
Gupta R, Agarwal R. Single channel EMG-based continuous terrain identification with simple classifier for lower limb prosthesis. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.07.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
43
Fluit R, Prinsen EC, Wang S, van der Kooij H. A Comparison of Control Strategies in Commercial and Research Knee Prostheses. IEEE Trans Biomed Eng 2019; 67:277-290. [PMID: 31021749 DOI: 10.1109/tbme.2019.2912466] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
GOAL To provide an overview of control strategies in commercial and research microprocessor-controlled prosthetic knees (MPKs). METHODS Five commercially available MPKs described in patents and five research MPKs reported in the scientific literature were compared. Their working principles, intent recognition, and walking controllers were analyzed. Speed and slope adaptability of the walking controllers was considered as well. RESULTS Whereas commercial MPKs are mostly passive, i.e., do not inject energy into the system, and employ heuristic rule-based intent classifiers, research MPKs are all powered and often utilize machine learning algorithms for intention detection. Both commercial and research MPKs rely on finite state machine impedance controllers for walking. Yet while commercial MPKs require a prosthetist to adjust impedance settings, scientific research is focused on reducing the tunable parameter space and developing unified controllers that are independent of subject anthropometrics, walking speed, and ground slope. CONCLUSION The main challenges in the field of powered, active MPKs (A-MPKs) to boost commercial viability are, first, to demonstrate the benefit of A-MPKs compared to passive MPKs or mechanical non-microprocessor knees using biomechanical, performance-based, and patient-reported metrics; second, to evaluate control strategies and intent recognition in an uncontrolled environment, preferably outside the laboratory setting; and third, even though research MPKs favor sophisticated algorithms, to maintain the possibility of practical and comprehensible tuning of control parameters, considering that optimal control settings cannot be known a priori. SIGNIFICANCE This review identifies the main challenges in the development of A-MPKs, which have thus far hindered their broad availability on the market.
44
Zhang K, Xiong C, Zhang W, Liu H, Lai D, Rong Y, Fu C. Environmental Features Recognition for Lower Limb Prostheses Toward Predictive Walking. IEEE Trans Neural Syst Rehabil Eng 2019; 27:465-476. [PMID: 30703033 DOI: 10.1109/tnsre.2019.2895221] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
This paper presents a robust environmental features recognition system (EFRS) for lower limb prostheses, which can assist prosthesis control by predicting the locomotion modes of amputees and estimating environmental features in the following steps. A depth sensor and an inertial measurement unit are combined to stabilize the point cloud of the environment. Subsequently, a 2D point cloud is extracted from the original 3D point cloud and classified by a neural network. Environmental features, including road slope and stair width and height, are also estimated from the 2D point cloud. Finally, the EFRS is evaluated by classifying and recognizing five kinds of common environments in simulation, indoor experiments, and outdoor experiments with six healthy subjects and three transfemoral amputees; databases from five healthy subjects and three amputees are used for validation without retraining. The classification accuracy for the five kinds of common environments reaches up to 99.3% and 98.5% for the amputees in the indoor and outdoor experiments, respectively. The locomotion modes are predicted at least 0.6 s before the switch of the actual locomotion mode. Most estimation errors of indoor and outdoor environment features are lower than 5% and 10%, respectively. The overall process of the EFRS takes less than 0.023 s. These promising results demonstrate the robustness of the presented EFRS and its potential to support the control of lower limb prostheses.
Collapse
|
45
|
Feng Y, Chen W, Wang Q. A strain gauge based locomotion mode recognition method using convolutional neural network. Adv Robot 2019. [DOI: 10.1080/01691864.2018.1563500] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Yanggang Feng
- The Robotics Research Group, College of Engineering, Peking University, Beijing, People's Republic of China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Peking University, Beijing, People's Republic of China
| | - Wanwen Chen
- The Robotics Research Group, College of Engineering, Peking University, Beijing, People's Republic of China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Peking University, Beijing, People's Republic of China
| | - Qining Wang
- The Robotics Research Group, College of Engineering, Peking University, Beijing, People's Republic of China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Peking University, Beijing, People's Republic of China
- Beijing Innovation Center for Engineering Science and Advanced Technology (BIC-ESAT), Beijing, People's Republic of China
| |
Collapse
|
46
|
Khademi G, Mohammadi H, Simon D. Gradient-Based Multi-Objective Feature Selection for Gait Mode Recognition of Transfemoral Amputees. Sensors (Basel) 2019; 19:253. [PMID: 30634668 PMCID: PMC6359457 DOI: 10.3390/s19020253] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2018] [Revised: 01/03/2019] [Accepted: 01/04/2019] [Indexed: 01/27/2023]
Abstract
One control challenge in prosthetic legs is seamless transition from one gait mode to another. User intent recognition (UIR) is a high-level controller that tells a low-level controller to switch to the identified activity mode, depending on the user's intent and environment. We propose a new framework to design an optimal UIR system with simultaneous maximum performance and minimum complexity for gait mode recognition. We use multi-objective optimization (MOO) to find an optimal feature subset that creates a trade-off between these two conflicting objectives. The main contribution of this paper is two-fold: (1) a new gradient-based multi-objective feature selection (GMOFS) method for optimal UIR design; and (2) the application of advanced evolutionary MOO methods for UIR. GMOFS is an embedded method that simultaneously performs feature selection and classification by incorporating an elastic net in multilayer perceptron neural network training. Experimental data are collected from six subjects, including three able-bodied subjects and three transfemoral amputees. We implement GMOFS and four variants of multi-objective biogeography-based optimization (MOBBO) for optimal feature subset selection, and we compare their performances using normalized hypervolume and relative coverage. GMOFS demonstrates competitive performance compared to the four MOBBO methods. We achieve a mean classification accuracy of 97.14% ± 1.51% and 98.45% ± 1.22% with the optimal selected subset for able-bodied and amputee subjects, respectively, while using only 23% of the available features. Results thus indicate the potential of advanced optimization methods to simultaneously achieve accurate, reliable, and compact UIR for locomotion mode detection of lower-limb amputees with prostheses.
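The embedded feature-selection idea behind GMOFS can be illustrated in miniature: an elastic-net penalty (a weighted mix of L1 and L2 terms) on a classifier's input weights drives irrelevant-feature weights toward zero, so features are selected by thresholding weight magnitudes after training. In this toy sketch a single linear unit trained by gradient descent stands in for the paper's multilayer perceptron, and all data and hyperparameters are synthetic:

```python
# Toy elastic-net feature selection: train a linear unit on (X, y) with an
# alpha * L1 + (1 - alpha) * L2 penalty of overall strength lam, then keep
# features whose learned weights are not (numerically) zero.

def train_elastic_net(X, y, lam=0.1, alpha=0.8, lr=0.05, epochs=500):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        for j in range(d):
            # gradient of mean squared error w.r.t. w[j]
            g = sum((sum(w[k] * X[i][k] for k in range(d)) - y[i]) * X[i][j]
                    for i in range(n)) / n
            # elastic-net penalty gradient: L1 subgradient plus L2 term
            l1 = alpha * (1 if w[j] > 0 else -1 if w[j] < 0 else 0)
            g += lam * (l1 + (1 - alpha) * w[j])
            w[j] -= lr * g
    return w

def select_features(w, tol=1e-2):
    """Indices of features surviving the sparsity-inducing penalty."""
    return [j for j, wj in enumerate(w) if abs(wj) > tol]
```

The paper's GMOFS additionally treats accuracy and feature count as two explicit objectives and searches the trade-off front; the sketch only shows the sparsity mechanism that makes the embedded selection possible.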
Collapse
Affiliation(s)
- Gholamreza Khademi
- Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH 44115, USA.
| | - Hanieh Mohammadi
- Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH 44115, USA.
| | - Dan Simon
- Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH 44115, USA.
| |
Collapse
|
47
|
Diaz JP, da Silva RL, Zhong B, Huang HH, Lobaton E. Visual Terrain Identification and Surface Inclination Estimation for Improving Human Locomotion with a Lower-Limb Prosthetic. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:1817-1820. [PMID: 30440748 DOI: 10.1109/embc.2018.8512614] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Lower-limb robotic prostheses can benefit from context awareness to provide comfort and safety to the amputee. In this work, we developed a terrain identification and surface inclination estimation system for a prosthetic leg using visual and inertial sensors. We built a dataset in which high-sharpness images are selected using the IMU signal; the selected images are used for terrain identification while the surface inclination is computed simultaneously. With this information, the control of a robotic prosthetic leg can be adapted to changes in its surroundings.
Collapse
|
48
|
Rai V, Rombokas E. Evaluation of a Visual Localization System for Environment Awareness in Assistive Devices. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:5135-5141. [PMID: 30441496 DOI: 10.1109/embc.2018.8513442] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A major hurdle for the widespread use of wearable assistive devices is determining, moment-by-moment, the control mode appropriate for a given terrain when faced with a complex, multi-terrain environment. Current control strategies focus mainly on measurements of user behavior and less on environment information. Here we demonstrate the application of location estimates from a vision-based localization system to obtain environment awareness by delineating various terrains into regions. Given the current location and the region occupied by the user, a controller could select appropriate modes, predict transitions, or add error correction. We quantify the positional accuracy of the location estimates and how well they translate into classification of the current region and of transitions between regions. Performance was evaluated on eight participants without amputation wearing the sensor on the shank of the leg. We investigated the performance of an instantaneous region classifier, which used location estimates alone, and a time-history based region classifier, which used a neural network on a time history of location and height estimates to accomplish environment awareness. Four types of regions and six types of transitions were tested. The classifier using height estimates and time history provided accurate region labels at least 96% of the time, and accurately detected region transitions within 110 milliseconds. These results demonstrate the promise of localization for control of robotic assistive technology.
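The "instantaneous region classifier" idea above can be sketched as a point-in-region lookup over a terrain map, with transitions flagged whenever consecutive labels differ. The region map (axis-aligned boxes in meters) and region names below are invented for illustration; the paper does not specify its regions' geometry:

```python
# Illustrative region map: axis-aligned (x_min, x_max, y_min, y_max) boxes.
REGIONS = {
    "level_ground": (0.0, 10.0, 0.0, 5.0),
    "ramp":         (10.0, 14.0, 0.0, 5.0),
    "stairs":       (14.0, 17.0, 0.0, 5.0),
}

def classify(x, y):
    """Instantaneous region label for one localization fix."""
    for name, (x0, x1, y0, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return "unknown"

def transitions(track):
    """Return (index, from_region, to_region) for a sequence of (x, y)
    fixes; each entry marks the sample at which the label changed."""
    labels = [classify(x, y) for x, y in track]
    return [(i, labels[i - 1], labels[i])
            for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
```

The paper's time-history classifier replaces the single-fix lookup with a neural network over a window of location and height estimates, which is what buys its robustness near region boundaries.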
Collapse
|
49
|
Huang S, Huang H. Voluntary Control of Residual Antagonistic Muscles in Transtibial Amputees: Feedforward Ballistic Contractions and Implications for Direct Neural Control of Powered Lower Limb Prostheses. IEEE Trans Neural Syst Rehabil Eng 2018; 26:894-903. [PMID: 29641394 DOI: 10.1109/tnsre.2018.2811544] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Discrete, rapid (i.e., ballistic-like) muscle activation patterns have been observed in the ankle muscles (i.e., plantar flexors and dorsiflexors) of able-bodied individuals during voluntary posture control. This observation motivated us to investigate whether transtibial amputees can accurately generate such ballistic-like activation patterns with their residual ankle muscles, in order to assess whether volitional postural control of a powered ankle prosthesis using proportional myoelectric control via residual muscles could be feasible. In this paper, we asked ten transtibial amputees to generate ballistic-like activation patterns with their residual lateral gastrocnemius and residual tibialis anterior to control a computer cursor via proportional myoelectric control and hit targets positioned at 20% and 40% of maximum voluntary contraction of the corresponding residual muscle. During practice conditions, we asked amputees to hit a single target repeatedly; during testing conditions, we asked them to hit a random sequence of targets. We compared movement time to target and end-point accuracy, and examined motor recruitment synchronization via time-frequency representations of residual muscle activation. Median end-point error ranged from -0.6% to 1% of maximum voluntary contraction across subjects during practice, significantly lower than during testing. Average movement time for all amputees was 242 ms during practice and 272 ms during testing. Motor recruitment synchronization varied across subjects, and the amputees with the highest synchronization achieved the fastest movement times. End-point accuracy was independent of movement time. These results suggest that transtibial amputees can feasibly generate ballistic control signals with their residual muscles. Future work on volitional control of powered ankle prostheses might consider anticipatory postural control based on ballistic-like residual muscle activation patterns and direct continuous proportional myoelectric control.
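The target-hitting task under proportional myoelectric control can be sketched as follows: a rectified, low-pass-filtered EMG envelope is normalized to maximum voluntary contraction (%MVC) and drives the cursor, and end-point error is the settled cursor position minus the target. The first-order IIR envelope and the "last sample" settling rule are simplified placeholders, not the study's actual signal processing:

```python
def envelope(emg, alpha=0.1):
    """Rectify the raw EMG and smooth it with a first-order IIR filter,
    a simple stand-in for a clinical EMG envelope."""
    out, y = [], 0.0
    for s in emg:
        y = (1 - alpha) * y + alpha * abs(s)
        out.append(y)
    return out

def endpoint_error_pct(emg, mvc, target_pct):
    """End-point error in proportional myoelectric control: the final
    envelope value expressed in %MVC, minus the target %MVC."""
    env = envelope(emg)
    return 100.0 * env[-1] / mvc - target_pct
```

For a sustained contraction whose envelope settles at exactly the target level, the error approaches zero; the study's -0.6% to 1% MVC practice errors correspond to small over- and undershoots of this quantity.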
Collapse
|
50
|
Electromyographic Signal-Driven Continuous Locomotion Mode Identification Module Design for Lower Limb Prosthesis Control. Arab J Sci Eng 2018. [DOI: 10.1007/s13369-018-3193-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
|