1
Lin HP, Xu Y, Zhang X, Woolley D, Zhao L, Liang W, Huang M, Cheng HJ, Zhang L, Wenderoth N. A usability study on mobile EMG-guided wrist extension training in subacute stroke patients-MyoGuide. J Neuroeng Rehabil 2024; 21:39. [PMID: 38515192] [PMCID: PMC10956308] [DOI: 10.1186/s12984-024-01334-9]
Abstract
BACKGROUND Effective stroke rehabilitation requires high-dose, repetitive-task training, especially during the early recovery phase. However, the usability of upper-limb rehabilitation technology in acute and subacute stroke survivors remains relatively unexplored. In this study, we introduce subacute stroke survivors to MyoGuide, a mobile training platform that employs surface electromyography (sEMG)-guided neurofeedback training that specifically targets wrist extension. Notably, the study emphasizes evaluating the platform's usability within clinical contexts. METHODS Seven subacute post-stroke patients (1 female, mean age 53.7 years, mean time post-stroke 58.9 days, mean duration per training session 48.9 min) and three therapists (one for eligibility screening, two for conducting training) participated in the study. Participants underwent ten days of supervised one-on-one wrist extension training with MyoGuide, which encompassed calibration, stability assessment, and dynamic tasks. All training records, including the Level of Difficulty (LoD) and Stability Assessment Scores, were recorded within the application. Usability was assessed through the System Usability Scale (SUS), and participants' willingness to continue home-based training was gauged through a self-developed survey post-training. Therapists also documented the daily performance of participants and the extent of support required. RESULTS The usability analysis yielded positive results, with a median SUS score of 82.5. Compared to the first session, participants significantly improved their performance at the final session, as indicated by both the Stability Assessment Scores (p = 0.010, mean = 229.43, CI = [25.74, 433.11]) and the LoD (p < 0.001, mean = 45.43, CI = [25.56, 65.29]). The rate of progression differed based on the initial impairment levels of the patient. After training, participants expressed a keen interest in continuing home-based training.
However, they also acknowledged challenges related to independently using the Myo armband and software. CONCLUSIONS This study introduces the MyoGuide training platform and demonstrates its usability in a clinical setting for stroke rehabilitation, with the assistance of a therapist. The findings support the potential of MyoGuide for wrist extension training in patients across a wide range of impairment levels. However, certain usability challenges, such as donning/doffing the armband and navigating the application, need to be addressed to enable independent MyoGuide training requiring only minimal supervision by a therapist.
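The abstract above describes sEMG-guided neurofeedback in which wrist-extension effort drives on-screen feedback at a calibrated difficulty level. The paper does not detail its signal pipeline, so the sketch below is only a generic illustration of how such feedback is commonly derived (rectification, smoothing, normalization to a calibration maximum, quantization into discrete feedback levels); all function names and parameters are hypothetical, not MyoGuide's implementation.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Rectify the sEMG signal and smooth it with a moving average."""
    rectified = np.abs(raw)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def neurofeedback_level(envelope, calib_max, n_levels=10):
    """Map the smoothed envelope onto discrete feedback levels (0..n_levels),
    normalized to the maximum envelope recorded during calibration."""
    normalized = np.clip(envelope / calib_max, 0.0, 1.0)
    return np.round(normalized * n_levels).astype(int)

# Synthetic recording: baseline noise with a burst of muscle activity
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)
signal[400:600] += rng.normal(0, 0.8, 200)

env = emg_envelope(signal)
levels = neurofeedback_level(env, calib_max=env.max())
```

During the simulated contraction the feedback level rises toward the top of the scale, which is the visual cue a neurofeedback trainee would work against.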
Affiliation(s)
- Hao-Ping Lin
- Singapore-ETH Centre, Future Health Technologies Programme, CREATE campus, 1 Create Way, CREATE Tower, #06-01, Singapore, 138602, Singapore
- Yang Xu
- Department of Rehabilitation, Shengjing Hospital of China Medical University, 16 Puhe Road, Shenyang, Liaoning, 110134, China
- Xue Zhang
- Department of Health Sciences and Technology, Neural Control of Movement Lab, ETH Zurich, Gloriastrasse 37/39 GLC G17.2, Zurich, 8092, Switzerland
- Daniel Woolley
- Department of Health Sciences and Technology, Neural Control of Movement Lab, ETH Zurich, Gloriastrasse 37/39 GLC G17.2, Zurich, 8092, Switzerland
- Lina Zhao
- Department of Rehabilitation, Shengjing Hospital of China Medical University, 16 Puhe Road, Shenyang, Liaoning, 110134, China
- Weidi Liang
- Department of Rehabilitation, Shengjing Hospital of China Medical University, 16 Puhe Road, Shenyang, Liaoning, 110134, China
- Mengdi Huang
- Department of Rehabilitation, Shengjing Hospital of China Medical University, 16 Puhe Road, Shenyang, Liaoning, 110134, China
- Hsiao-Ju Cheng
- Singapore-ETH Centre, Future Health Technologies Programme, CREATE campus, 1 Create Way, CREATE Tower, #06-01, Singapore, 138602, Singapore
- Lixin Zhang
- Department of Rehabilitation, Shengjing Hospital of China Medical University, 16 Puhe Road, Shenyang, Liaoning, 110134, China
- Nicole Wenderoth
- Singapore-ETH Centre, Future Health Technologies Programme, CREATE campus, 1 Create Way, CREATE Tower, #06-01, Singapore, 138602, Singapore
- Department of Health Sciences and Technology, Neural Control of Movement Lab, ETH Zurich, Gloriastrasse 37/39 GLC G17.2, Zurich, 8092, Switzerland
2
Catalán JM, Trigili E, Nann M, Blanco-Ivorra A, Lauretti C, Cordella F, Ivorra E, Armstrong E, Crea S, Alcañiz M, Zollo L, Soekadar SR, Vitiello N, García-Aracil N. Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). J Neuroeng Rehabil 2023; 20:61. [PMID: 37149621] [PMCID: PMC10164333] [DOI: 10.1186/s12984-023-01185-w]
Abstract
BACKGROUND The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and might result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most state-of-the-art robotic systems for assisting with ADLs are external manipulators or exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface for performing ADLs when the user is controlling an exoskeleton rather than an external manipulator. METHODS Ten impaired participants (5 males and 5 females, mean age 52 ± 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: synchronous mode (the user received a visual cue indicating the sub-task to be performed at each time) and asynchronous mode (the user started and finished each of the sub-tasks independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate the task workload. For the trials involving the use of the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability. RESULTS All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s with the exoskeleton and below 5 s with the external manipulator).
CONCLUSIONS Although the results of our study in terms of fluency and reliability of EEG control suggest better performances of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.
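The 3 s / 5 s thresholds for fluent and reliable control lend themselves to a small worked example. The helper below is illustrative only (the initialization times are invented, not the study's data); it shows how the per-participant fractions of the kind reported in the abstract would be computed.

```python
def classify_control(init_times, fluent_s=3.0, reliable_s=5.0):
    """Return the fraction of sub-task initialization times that qualify
    as fluent (< 3 s) and reliable (< 5 s), per the study's thresholds."""
    n = len(init_times)
    fluent = sum(t < fluent_s for t in init_times) / n
    reliable = sum(t < reliable_s for t in init_times) / n
    return fluent, reliable

# Hypothetical initialization times (seconds) for one participant
exo_times = [1.2, 2.5, 2.8, 4.1, 2.0, 1.9, 2.6, 3.4]
fluent_frac, reliable_frac = classify_control(exo_times)
```

For this invented trial, 75% of initializations are fluent and all are reliable, matching the shape of the exoskeleton result quoted in the abstract.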
Affiliation(s)
- José M Catalán
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
- Emilio Trigili
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Marius Nann
- Clinical Neurotechnology Laboratory, Charité, Universitätsmedizin Berlin, 10117, Berlin, Germany
- Andrea Blanco-Ivorra
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
- Clemente Lauretti
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Francesca Cordella
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Eugenio Ivorra
- University Institute for Human-Centered Technology Research (Human-Tech), Universitat Politècnica de València, 46022, Valencia, Spain
- Simona Crea
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, Milan, Italy
- Mariano Alcañiz
- University Institute for Human-Centered Technology Research (Human-Tech), Universitat Politècnica de València, 46022, Valencia, Spain
- Loredana Zollo
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Surjo R Soekadar
- Clinical Neurotechnology Laboratory, Charité, Universitätsmedizin Berlin, 10117, Berlin, Germany
- Nicola Vitiello
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, Milan, Italy
- Nicolás García-Aracil
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
3
Sharma N, Prakash A, Sharma S. An optoelectronic muscle contraction sensor for prosthetic hand application. Rev Sci Instrum 2023; 94:035009. [PMID: 37012764] [DOI: 10.1063/5.0130394]
Abstract
Surface electromyography (sEMG) is considered an established means of controlling prosthetic devices. However, sEMG suffers from serious issues such as electrical noise, motion artifacts, complex acquisition circuitry, and high measurement costs, which has drawn attention to alternative techniques. This work presents a new optoelectronic muscle (OM) sensor setup as an alternative to the EMG sensor for precise measurement of muscle activity. The sensor integrates a near-infrared light-emitting diode and phototransistor pair along with suitable driver circuitry. The sensor measures skin surface displacement (which occurs during muscle contraction) by detecting backscattered infrared light from skeletal muscle tissue. With an appropriate signal processing scheme, the sensor produces a 0-5 V output proportional to the muscular contraction. The developed sensor exhibited good static and dynamic characteristics. When detecting contractions of the forearm muscles of subjects, the sensor's output agreed well with that of the EMG sensor. In addition, the sensor displayed higher signal-to-noise ratio values and better signal stability than the EMG sensor. Furthermore, the OM sensor setup was used to control the rotation of a servomotor using an appropriate control scheme. Hence, the developed sensing system can measure muscle contraction information for controlling assistive devices.
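The described sensor produces a 0-5 V output proportional to contraction, which is then used to drive a servomotor. As a rough illustration of that chain, the sketch below linearly maps a hypothetical photodetector ADC reading into the 0-5 V range and then onto a servo angle; the scaling constants and function names are assumptions for illustration, not values from the paper.

```python
def sensor_to_voltage(adc_reading, adc_min, adc_max, v_out_max=5.0):
    """Linearly map a raw photodetector ADC reading into the 0-5 V
    output range described for the sensor, clamping out-of-range input."""
    span = adc_max - adc_min
    normalized = min(max((adc_reading - adc_min) / span, 0.0), 1.0)
    return normalized * v_out_max

def voltage_to_servo_angle(voltage, v_max=5.0, angle_max=180.0):
    """Map the 0-5 V contraction signal onto a servo rotation angle."""
    return voltage / v_max * angle_max

# Hypothetical ADC calibration: 120 counts at rest, 920 at full contraction
v = sensor_to_voltage(620, adc_min=120, adc_max=920)
angle = voltage_to_servo_angle(v)
```

A mid-range contraction thus lands mid-range on both the voltage output and the servo command, mirroring the proportional behavior reported for the sensor.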
Affiliation(s)
- Neeraj Sharma
- School of Biomedical Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Alok Prakash
- CSIR-National Physical Laboratory, New Delhi 110012, India
- Shiru Sharma
- School of Biomedical Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
4
Xue X, Zhang B, Moon S, Xu GX, Huang CC, Sharma N, Jiang X. Development of a wearable ultrasound transducer for sensing muscle activities in assistive robotics applications. Biosensors 2023; 13:134. [PMID: 36671969] [PMCID: PMC9855872] [DOI: 10.3390/bios13010134]
Abstract
Robotic prostheses and powered exoskeletons are novel assistive robotic devices for modern medicine. Muscle activity sensing plays an important role in controlling assistive robotic devices. Most devices measure the surface electromyography (sEMG) signal for myoelectric control. However, sEMG is an integrated signal from muscle activities, and it is difficult to sense muscle movements in specific small regions, particularly at different depths. Alternatively, traditional ultrasound imaging has recently been proposed to monitor muscle activity due to its ability to directly visualize superficial and at-depth muscles. Despite these advantages, traditional ultrasound probes lack wearability. In this paper, a wearable ultrasound (US) transducer, based on lead zirconate titanate (PZT) and a polyimide substrate, was developed for a muscle activity sensing demonstration. The fabricated PZT-5A elements were arranged into a 4 × 4 array and then packaged in polydimethylsiloxane (PDMS). In vitro porcine tissue experiments were carried out by generating muscle activity artificially, and the muscle movements were detected by the proposed wearable US transducer via muscle movement imaging. Experimental results showed that all 16 elements had very similar acoustic behaviors: the average center frequency, -6 dB bandwidth, and electrical impedance in water were 10.59 MHz, 37.69%, and 78.41 Ω, respectively. The in vitro study successfully demonstrated the capability of monitoring local muscle activity using the prototyped wearable transducer. The findings indicate that ultrasonic sensing may be an alternative to standard myoelectric control for assistive robotics applications.
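The reported acoustic characterization (average center frequency 10.59 MHz, -6 dB bandwidth 37.69%) can be illustrated with a short sketch of how such figures are typically extracted from a transducer's magnitude spectrum. The spectrum below is synthetic, shaped so the numbers come out close to those reported; the function is a generic implementation, not the authors' code.

```python
import numpy as np

def fractional_bandwidth_6db(freqs, magnitude_db):
    """Find the band where the response stays within 6 dB of its peak and
    report the band center and the fractional bandwidth
    (f_hi - f_lo) / f_center * 100."""
    peak = magnitude_db.max()
    in_band = np.where(magnitude_db >= peak - 6.0)[0]
    f_lo, f_hi = freqs[in_band[0]], freqs[in_band[-1]]
    f_center = (f_lo + f_hi) / 2.0
    return f_center, (f_hi - f_lo) / f_center * 100.0

# Synthetic spectrum: parabolic (in dB) response peaking at 10.6 MHz,
# shaped to be 6 dB down at +/- 2 MHz from the peak
freqs = np.linspace(5e6, 16e6, 1101)
mag_db = -6.0 * ((freqs - 10.6e6) / 2e6) ** 2

f_center, fbw_percent = fractional_bandwidth_6db(freqs, mag_db)
```

For this synthetic spectrum the routine returns roughly a 10.6 MHz center and a ~37.7% fractional bandwidth, close to the element-averaged values quoted in the abstract.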
Affiliation(s)
- Xiangming Xue
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Bohua Zhang
- Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Sunho Moon
- Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Guo-Xuan Xu
- Department of Biomedical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Chih-Chung Huang
- Department of Biomedical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Nitin Sharma
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Xiaoning Jiang
- Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27695, USA
5
Das T, Gohain L, Kakoty NM, Malarvili MB, Widiyanti P, Kumar G. Hierarchical approach for fusion of electroencephalography and electromyography for predicting finger movements and kinematics using deep learning. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.061]
6
Zhang Q, Fragnito N, Bao X, Sharma N. A deep learning method to predict ankle joint moment during walking at different speeds with ultrasound imaging: a framework for assistive devices control. Wearable Technologies 2022; 3:e20. [PMID: 38486894] [PMCID: PMC10936300] [DOI: 10.1017/wtc.2022.18]
Abstract
Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders, as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual effort non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses brightness-mode (B-mode) ultrasound (US) imaging of skeletal muscles to predict the net ankle plantarflexion moment during walking. The designed structure of the customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging region of interest (ROI) on the net plantarflexion moment prediction performance. We also compared the CNN-based moment prediction performance using B-mode US and sEMG spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified the improved accuracy of the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, compared to the prediction performance obtained with sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% (p < .001) and increased the prediction coefficient of determination by 20.13% (p < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of human joint voluntary effort, which can be incorporated with assistive or rehabilitative devices to improve clinical performance based on the assist-as-needed control strategy.
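The two reported metrics, normalized prediction RMSE and the coefficient of determination, are standard and easy to state in code. The sketch below computes both on a synthetic joint-moment trace; the signals are stand-ins, and normalizing the RMSE by the signal range is one common convention (the paper may use a different normalizer).

```python
import numpy as np

def nrmse_percent(y_true, y_pred):
    """Normalized RMSE: RMSE divided by the range of the measured signal,
    expressed in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min()) * 100.0

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Stand-ins for a measured net joint moment and a model prediction
t = np.linspace(0, 1, 200)
moment = np.sin(2 * np.pi * t)
predicted = moment + 0.05 * np.cos(7 * t)

nrmse_val = nrmse_percent(moment, predicted)
r2_val = r_squared(moment, predicted)
```

With a small prediction error the normalized RMSE stays in the low single digits of percent and R^2 stays near 1, which is the regime the percentage improvements in the abstract are comparing within.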
Affiliation(s)
- Qiang Zhang
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Natalie Fragnito
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Xuefeng Bao
- Biomedical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Nitin Sharma
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
7
Gandolla M, Luciani B, Pirovano DE, Pedrocchi A, Braghin F. A force-based human machine interface to drive a motorized upper limb exoskeleton: a pilot study. IEEE Int Conf Rehabil Robot 2022; 2022:1-6. [PMID: 36176155] [DOI: 10.1109/icorr55369.2022.9896523]
Abstract
Muscular dystrophy is a severely disabling disease that causes the progressive loss of motor skills. The use of assistive devices, especially those supporting the upper limb, can increase the ability to perform daily-life activities and foster a partial recovery of lost motor function. However, for these devices to be truly effective and accepted by patients, their activation must coincide with the user's intention to move. This work describes a new human-machine interface based on the integration of a six-axis force sensor to drive an upper-limb motorized exoskeleton. This novel system can detect the patient's intention to move and produce displacements of the robotic device whose magnitude and direction are consistent with the user's wishes. The integration of the force-sensor interface in the BRIDGE/EMPATIA exoskeletal system was successful, and tests performed on both healthy and dystrophic subjects showed promising results, especially for the execution of planar movements.
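A common way to turn a measured interaction force into displacements whose magnitude and direction follow the user's wishes is admittance control, in which the force drives a virtual mass-damper whose velocity becomes the motion command. The paper does not specify its control law, so the following is only a minimal single-axis sketch with assumed parameters.

```python
def admittance_step(force, velocity, dt, mass=2.0, damping=8.0):
    """One step of a simple admittance controller: the measured interaction
    force F drives a virtual mass-damper, M * dv/dt + B * v = F, and the
    resulting velocity is the motion command sent to the exoskeleton joint."""
    accel = (force - damping * velocity) / mass
    return velocity + accel * dt

# A sustained 4 N push should settle toward v = F / B = 0.5 m/s
v = 0.0
for _ in range(2000):
    v = admittance_step(4.0, v, dt=0.005)
```

The steady-state velocity scales with the applied force and reverses sign with it, which is the behavior that makes the device feel like it "follows" the user's intention; with a six-axis sensor, the same update would run per axis.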
8
Gantenbein J, Meyer JT, Jager L, Sigrist R, Gassert R, Lambercy O. An analysis of intention detection strategies to control advanced assistive technologies at the CYBATHLON. IEEE Int Conf Rehabil Robot 2022; 2022:1-6. [PMID: 36176133] [DOI: 10.1109/icorr55369.2022.9896539]
Abstract
With the increasing range of functionalities of advanced assistive technologies (AAT), reliable control and initiation of the desired actions become increasingly challenging for users. In this work, we present an analysis of current practices, user preferences, and usability of AAT intention detection strategies based on a survey among participants with disabilities at the CYBATHLON 2020 Global Edition. We collected data from 35 respondents, using devices in various disciplines and levels of technology maturity. We found that conventional, direct inputs such as buttons and joysticks are used by the majority of AAT (71.4%) due to their simplicity and learnability. However, 22 respondents (62.8%) reported a desire for more natural control using muscle or non-invasive brain signals, and 37.1% even reported an openness to invasive strategies for potentially improved control. The usability of the used strategies in terms of the explored attributes (reliability, mental effort, required learning) was mainly perceived positively, whereas no significant difference was observed across intention detection strategies and device types. It can be assumed that the strategies used during the CYBATHLON realistically represent options to control an AAT in a dynamic, physically and mentally demanding environment. Thus, this work underlines the need for carefully considering user needs and preferences for the selection of intention detection strategies in a context of use outside the laboratory.
9
Castellini C. Peripheral nervous system interfaces: invasive or non-invasive? Front Neurorobot 2022; 16:846866. [PMID: 35574233] [PMCID: PMC9099407] [DOI: 10.3389/fnbot.2022.846866]
Affiliation(s)
- Claudio Castellini
- Chair of Medical Robotics, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany
- The Adaptive Bio-Interfaces Group, German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
10
Gantenbein J, Dittli J, Meyer JT, Gassert R, Lambercy O. Intention detection strategies for robotic upper-limb orthoses: a scoping review considering usability, daily life application, and user evaluation. Front Neurorobot 2022; 16:815693. [PMID: 35264940] [PMCID: PMC8900616] [DOI: 10.3389/fnbot.2022.815693]
Abstract
Wearable robotic upper limb orthoses (ULO) are promising tools to assist or enhance the upper-limb function of their users. While the functionality of these devices has continuously increased, the robust and reliable detection of the user's intention to control the available degrees of freedom remains a major challenge and a barrier to acceptance. As the information interface between device and user, the intention detection strategy (IDS) has a crucial impact on the usability of the overall device. Yet, this aspect, and the impact it has on device usability, is only rarely evaluated with respect to the context of use of ULO. A scoping literature review was conducted to identify non-invasive IDS applied to ULO that have been evaluated with human participants, with a specific focus on evaluation methods and findings related to functionality and usability and their appropriateness for specific contexts of use in daily life. A total of 93 studies were identified, describing 29 different IDS, which are summarized and classified according to a four-level classification scheme. The predominant user input signal associated with the described IDS was electromyography (35.6%), followed by manual triggers such as buttons, touchscreens, or joysticks (16.7%), and isometric force generated by residual movement in upper-limb segments (15.1%). We identify and discuss the strengths and weaknesses of IDS with respect to specific contexts of use and highlight a trade-off between performance and complexity in selecting an optimal IDS. With respect to evaluation practices, the included studies primarily assessed objective and quantitative usability attributes related to effectiveness or efficiency. The review also underlined the lack of a systematic way to determine whether the usability of an IDS is sufficiently high to be appropriate for daily-life applications.
This work highlights the importance of a user- and application-specific selection and evaluation of non-invasive IDS for ULO. For technology developers in the field, it further provides recommendations on the selection of IDS as well as on the design of corresponding evaluation protocols.
Affiliation(s)
- Jessica Gantenbein
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Jan Dittli
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Jan Thomas Meyer
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Roger Gassert
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
- Olivier Lambercy
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
11
Silverman JD, Balbinot G, Masani K, Zariffa J. Validity and reliability of surface electromyography features in lower extremity muscle contraction in healthy and spinal cord-injured participants. Top Spinal Cord Inj Rehabil 2021; 27:14-27. [PMID: 34866885] [DOI: 10.46292/sci20-00001]
Abstract
Background: Spinal cord injury (SCI) has a significant impact on motor control and active force generation. Quantifying muscle activation following SCI may help indicate the degree of motor impairment and predict the efficacy of rehabilitative interventions. In healthy persons, muscle activation is typically quantified by electromyographic (EMG) signal amplitude measures. However, in SCI, these measures may not reflect voluntary effort, and therefore other, non-amplitude-based features should be considered. Objectives: The purpose of this study was to assess the correlation of time-domain EMG features with the exerted joint torque (validity) and their test-retest repeatability (reliability), which may contribute to characterizing muscle activation following SCI. Methods: Surface EMG (SEMG) and torque were measured while nine uninjured participants and four participants with SCI performed isometric contractions of tibialis anterior (TA) and soleus (SOL). Data collection was repeated at a subsequent session for comparison across days. Validity and test-retest reliability of features were assessed by Spearman and intraclass correlation (ICC) of linear regression coefficients. Results: In healthy participants, SEMG features correlated well with torque (TA: ρ > 0.92; SOL: ρ > 0.94) and showed high reliability (mean ICC = 0.90; range 0.72-0.99). In an SCI case series, SEMG features also correlated well with torque (TA: ρ > 0.86; SOL: ρ > 0.86), and time-domain features appeared no less repeatable than amplitude-based measures. Conclusion: Time-domain SEMG features are valid and reliable measures of lower extremity muscle activity in healthy participants and may be valid measures of sublesional muscle activity following SCI. These features could be used to gauge motor impairment and the progression of rehabilitative interventions, or in controlling assistive technologies.
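The time-domain SEMG features discussed here, alongside amplitude measures such as mean absolute value, and the Spearman validity analysis can be sketched compactly. The code below implements a few standard features and a rank correlation on synthetic contraction data; the feature choices and signal model are illustrative, not the study's protocol.

```python
import numpy as np

def mav(x):
    """Mean absolute value, a standard amplitude-based sEMG feature."""
    return np.mean(np.abs(x))

def waveform_length(x):
    """Waveform length: cumulative absolute change, a time-domain feature."""
    return np.sum(np.abs(np.diff(x)))

def zero_crossings(x):
    """Number of sign changes in the signal, a time-domain feature."""
    return int(np.sum(np.diff(np.sign(x)) != 0))

def spearman_rho(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks
    (valid here because all values are distinct)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

# Synthetic isometric trials: signal amplitude grows with exerted torque
rng = np.random.default_rng(1)
torques = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
trials = [rng.normal(0, 0.1 * tq, 2000) for tq in torques]

mavs = np.array([mav(s) for s in trials])
rho = spearman_rho(torques, mavs)
```

Because the synthetic amplitude scales monotonically with torque, the rank correlation comes out at 1.0; the study's validity question is whether real SEMG features behave this way against measured torque.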
Affiliation(s)
- Jordan Daniel Silverman
- Division of Physical Medicine and Rehabilitation, University of Toronto, Toronto, Ontario, Canada
- KITE - Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Gustavo Balbinot
- KITE - Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Kei Masani
- KITE - Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- José Zariffa
- Division of Physical Medicine and Rehabilitation, University of Toronto, Toronto, Ontario, Canada
- KITE - Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
12
Veselic S, Zito C, Farina D. Human-robot interaction with robust prediction of movement intention surpasses manual control. Front Neurorobot 2021; 15:695022. [PMID: 34658829] [PMCID: PMC8514866] [DOI: 10.3389/fnbot.2021.695022]
Abstract
Physical human-robot interaction (pHRI) enables a user to interact with a physical robotic device to advance beyond the current capabilities of high-payload and high-precision industrial robots. This paradigm opens up novel applications where the cognitive capability of a user is combined with the precision and strength of robots. Yet, current pHRI interfaces suffer from low take-up and a high cognitive burden for the user. We propose a novel framework that robustly and efficiently assists users by reacting proactively to their commands. The key insight is to include context- and user-awareness in the controller, improving decision-making on how to assist the user. Context-awareness is achieved by inferring the candidate objects to be grasped in a task or scene and automatically computing plans for reaching them. User-awareness is implemented by facilitating the motion toward the most likely object that the user wants to grasp, as well as dynamically recovering from incorrect predictions. Experimental results in a virtual environment with two degrees of freedom of control show the capability of this approach to outperform manual control. By robustly predicting user intention, the proposed controller allows subjects to achieve superhuman performance in terms of accuracy and, thereby, usability.
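Context- and user-awareness as described, inferring which candidate object the user wants to grasp from their ongoing motion, is often implemented as a Bayesian update over goals. The paper does not give its predictor, so the following is a minimal sketch under that assumption: each movement step raises the probability of goals it points toward, with the likelihood model and all parameters invented for illustration.

```python
import numpy as np

def update_goal_belief(belief, cursor, goals, step, beta=5.0):
    """Bayesian update of the belief over candidate grasp targets: the
    likelihood of each goal rewards alignment between the user's step
    direction and the direction from the cursor to that goal."""
    step_dir = step / (np.linalg.norm(step) + 1e-9)
    likelihoods = []
    for g in goals:
        to_goal = g - cursor
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        likelihoods.append(np.exp(beta * step_dir @ to_goal))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

# Two candidate objects in a 2-DoF workspace, uniform prior
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
belief = np.array([0.5, 0.5])
cursor = np.array([0.0, 0.0])

# The user moves toward the first object; belief should shift accordingly
for _ in range(3):
    step = np.array([0.1, 0.0])
    belief = update_goal_belief(belief, cursor, goals, step)
    cursor = cursor + step
```

Once one goal dominates the belief, a controller can blend in assistance toward it, and a wrong prediction is naturally revised as soon as the user's steps start pointing elsewhere.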
Affiliation(s)
- Sebastijan Veselic
- Department of Clinical and Movement Neurosciences, University College London, London, United Kingdom; Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom; School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Claudio Zito
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom; Autonomous Robotics Research Centre, Technology Innovation Institute, Abu Dhabi, United Arab Emirates
- Dario Farina
- Department of Bioengineering, Imperial College London, London, United Kingdom
13
Battaglia E, Boehm J, Zheng Y, Jamieson AR, Gahan J, Majewicz Fey A. Rethinking Autonomous Surgery: Focusing on Enhancement over Autonomy. Eur Urol Focus 2021; 7:696-705. [PMID: 34246619; PMCID: PMC10394949; DOI: 10.1016/j.euf.2021.06.009]
Abstract
CONTEXT As robot-assisted surgery is increasingly used in surgical care, the engineering research effort towards surgical automation has also increased significantly. Automation promises to enhance surgical outcomes, offload mundane or repetitive tasks, and improve workflow. However, we must ask an important question: should autonomous surgery be our long-term goal? OBJECTIVE To provide an overview of the engineering requirements for automating control systems, summarize technical challenges in automated robotic surgery, and review sensing and modeling techniques to capture real-time human behaviors for integration into the robotic control loop for enhanced shared or collaborative control. EVIDENCE ACQUISITION We performed a nonsystematic search of the English language literature up to March 25, 2021. We included original studies related to automation in robot-assisted laparoscopic surgery and human-centered sensing and modeling. EVIDENCE SYNTHESIS We identified four comprehensive review papers that present techniques for automating portions of surgical tasks. Sixteen studies relate to human-centered sensing technologies and 23 to computer vision and/or advanced artificial intelligence or machine learning methods for skill assessment. Twenty-two studies evaluate or review the role of haptic or adaptive guidance during some learning task, with only a few applied to robotic surgery. Finally, only three studies discuss the role of some form of training in patient outcomes and none evaluated the effects of full or semi-autonomy on patient outcomes. CONCLUSIONS Rather than focusing on autonomy, which eliminates the surgeon from the loop, research centered on more fully understanding the surgeon's behaviors, goals, and limitations could facilitate a superior class of collaborative surgical robots that could be more effective and intelligent than automation alone. 
PATIENT SUMMARY We reviewed the literature for studies on automation in surgical robotics and on modeling of human behavior in human-machine interaction. The main application is to enhance the ability of surgical robotic systems to collaborate more effectively and intelligently with human surgeon operators.
Affiliation(s)
- Edoardo Battaglia
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Jacob Boehm
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Yi Zheng
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Andrew R Jamieson
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Jeffrey Gahan
- Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Ann Majewicz Fey
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA.
14
Geers AM, Prinsen EC, van der Pijl DJ, Bergsma A, Rietman JS, Koopman BFJM. Head support in wheelchairs (scoping review): state-of-the-art and beyond. Disabil Rehabil Assist Technol 2021:1-24. [PMID: 34000206; DOI: 10.1080/17483107.2021.1892840]
Abstract
BACKGROUND Many wheelchair users experience disabilities in stabilising and positioning of the head. For these users, adequate head support is required. Although several types of head supports are available, further development of these systems is needed to improve functionality and quality of life, especially for the group of severely challenged users. For this group, user needs have not been clearly established. In this article, we provide an overview of the state-of-the-art in wheelchair mounted head supports and associated scientific evidence in order to identify requirements for the next generation of head support systems. MATERIALS AND METHODS A scoping review was performed including scientific literature (PubMed/Scopus), patents (Espacenet/Google Scholar) and commercial information. Types of head support and important system characteristics for future head support systems were proposed from consultations with wheelchair users (n = 3), occupational therapists (n = 3) and an expert panel. RESULTS Forty scientific papers, 90 patents and 80 descriptions of commercial devices were included in the scoping review. The identified head support systems were categorised per head support type. Only limited scientific clinical evidence with respect to the effectiveness of existing head support systems was found. From the user and expert consultations, a need was identified for personalised head support systems that intuitively combine changes in sitting and head position with continuous optimal support of the head to accommodate severely challenged users. CONCLUSIONS This study presents the state-of-the-art in head support systems. 
Additionally, several important system characteristics are introduced that provide guidance for the development and improvement of head supports. IMPLICATIONS FOR REHABILITATION: Especially for the group of severely challenged wheelchair users, current head support systems require further development to improve their users' quality of life. The desired system characteristics which are discussed in this review are an important step in the definition of requirements for the next generation of head supports.
Affiliation(s)
- Anoek M Geers
- Department of Biomechanical Engineering, TechMed Centre, University of Twente, Enschede, The Netherlands; Focal Meditech B.V., Tilburg, The Netherlands
- Erik C Prinsen
- Department of Biomechanical Engineering, TechMed Centre, University of Twente, Enschede, The Netherlands; Roessingh Research and Development, Enschede, The Netherlands
- Arjen Bergsma
- Department of Biomechanical Engineering, TechMed Centre, University of Twente, Enschede, The Netherlands
- Johan S Rietman
- Department of Biomechanical Engineering, TechMed Centre, University of Twente, Enschede, The Netherlands; Roessingh Research and Development, Enschede, The Netherlands; Roessingh Centre of Rehabilitation, Enschede, The Netherlands
- Bart F J M Koopman
- Department of Biomechanical Engineering, TechMed Centre, University of Twente, Enschede, The Netherlands
15
Nsugbe E, Samuel OW, Asogbon MG, Li G. Contrast of multi-resolution analysis approach to transhumeral phantom motion decoding. CAAI Trans Intell Technol 2021. [DOI: 10.1049/cit2.12039]
Affiliation(s)
- Oluwarotimi William Samuel
- Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Mojisola Grace Asogbon
- Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Guanglin Li
- Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
16
Lee W, Seong JJ, Ozlu B, Shim BS, Marakhimov A, Lee S. Biosignal Sensors and Deep Learning-Based Speech Recognition: A Review. Sensors (Basel) 2021; 21:1399. [PMID: 33671282; PMCID: PMC7922488; DOI: 10.3390/s21041399]
Abstract
Voice is one of the essential mechanisms for communicating and expressing one's intentions as a human being. There are several causes of voice inability, including disease, accident, vocal abuse, medical surgery, ageing, and environmental pollution, and the risk of voice loss continues to increase. Because voice loss seriously undermines quality of life and can lead to isolation from society, novel approaches to speech recognition and production need to be developed. In this review, we survey mouth interface technologies, that is, mouth-mounted devices for speech recognition, production, and volitional control, and the corresponding research to develop artificial mouth technologies based on various sensors, including electromyography (EMG), electroencephalography (EEG), electropalatography (EPG), electromagnetic articulography (EMA), permanent magnet articulography (PMA), gyros, images and 3-axial magnetic sensors, especially with deep learning techniques. We pay particular attention to deep learning technologies related to voice recognition, including visual speech recognition and silent speech interfaces, analyze their processing flow, and systematize them into a taxonomy. Finally, we discuss methods to solve the communication problems of people with disabilities in speaking and future research directions with respect to deep learning components.
Affiliation(s)
- Wookey Lee
- Biomedical Science and Engineering & Dept. of Industrial Security Governance & IE, Inha University, 100 Inharo, Incheon 22212, Korea
- Jessica Jiwon Seong
- Department of Industrial Security Governance, Inha University, 100 Inharo, Incheon 22212, Korea
- Busra Ozlu
- Biomedical Science and Engineering & Department of Chemical Engineering, Inha University, 100 Inharo, Incheon 22212, Korea
- Bong Sup Shim
- Biomedical Science and Engineering & Department of Chemical Engineering, Inha University, 100 Inharo, Incheon 22212, Korea
- Suan Lee
- School of Computer Science, Semyung University, Jecheon 27136, Korea
17
Nsugbe E. Brain-machine and muscle-machine bio-sensing methods for gesture intent acquisition in upper-limb prosthesis control: a review. J Med Eng Technol 2021; 45:115-128. [PMID: 33475039; DOI: 10.1080/03091902.2020.1854357]
Abstract
This paper presents a review of a number of bio-sensing methods for gesture intent signal acquisition in control tasks for upper-limb prostheses. The paper specifically provides a breakdown of the control task in myoelectric prostheses and, in addition, highlights and describes the importance of acquiring a high-quality bio-signal. The paper also describes commonly used invasive and non-invasive brain- and muscle-machine interfaces such as electroencephalography, electrocorticography, electroneurography, surface electromyography, sonomyography, mechanomyography, near infra-red, force sensitive resistance/pressure, and magnetoencephalography. Each modality is reviewed based on its operating principle and limitations in gesture recognition, followed by respective advantages and disadvantages. Also described within this paper are multimodal sensing approaches, which involve data fusion of information from various sensing modalities for an enhanced neuromuscular bio-sensing source. Using a semi-systematic review methodology, we derive a novel tabular approach to contrasting the various strengths and weaknesses of the reviewed bio-sensing methods for gesture recognition in a prosthesis interface. This allows for a streamlined method of down-selecting an appropriate bio-sensor given specific prosthesis design criteria and requirements. The paper concludes by highlighting a number of research areas that require more work for strides to be made towards improving and enhancing the connection between man and machine as it concerns upper-limb prostheses. Such areas include classifier augmentation for gesture recognition, filtering techniques for sensor disturbance rejection, and the feeling of tactile sensations with an artificial limb.
Affiliation(s)
- Ejay Nsugbe
- University of Bristol, Bristol, United Kingdom
18
A low-cost transradial prosthesis controlled by the intention of muscular contraction. Phys Eng Sci Med 2021; 44:229-241. [PMID: 33469856; DOI: 10.1007/s13246-021-00972-w]
Abstract
Persons with upper-limb amputations face severe problems due to a reduced ability to perform the activities of daily living. Prostheses controlled by electromyography (EMG) or other signals from sensors, switches, or accelerometers can partly restore the lost capability of such individuals. However, these prostheses suffer from several issues, such as high cost, limited functionality, unnatural control, slow operating speed, complexity, heavy weight, and large size. This paper proposes an affordable transradial prosthesis controlled by the muscular contractions of user intention. A surface EMG sensor was purpose-built to capture muscle contraction information from the residual forearm of subjects with amputation. An underactuated 3D-printed hand was developed with a prosthetic socket assembly that attaches to the remaining upper limb of such subjects. The hand integrates an intuitive closed-loop control system that receives reference input from the designed sensor and feedback input from a force sensor installed at the thumb tip. The performance of the EMG sensor was compared with that of a traditional sensor in detecting muscle contractions from the subjects. The designed sensor showed a good correlation (r > 0.93) and a better signal-to-noise ratio (SNR) than the conventional sensor. Further, the developed hand prosthesis was successfully trialled on five different subjects with transradial amputation. Users wearing the hand prototype were able to perform faster and more delicate grasping of various objects. The implemented control system allowed prosthesis users to control the grasp force of the hand's fingers through intended muscular contractions.
19
Elbow Motion Trajectory Prediction Using a Multi-Modal Wearable System: A Comparative Analysis of Machine Learning Techniques. Sensors (Basel) 2021; 21:498. [PMID: 33445601; PMCID: PMC7827251; DOI: 10.3390/s21020498]
Abstract
Motion intention detection is fundamental in the implementation of human-machine interfaces applied to assistive robots. In this paper, multiple machine learning techniques have been explored for creating upper limb motion prediction models, which generally depend on three factors: the signals collected from the user (such as kinematic or physiological), the extracted features and the selected algorithm. We explore the use of different features extracted from various signals when used to train multiple algorithms for the prediction of elbow flexion angle trajectories. The accuracy of the prediction was evaluated based on the mean velocity and peak amplitude of the trajectory, which are sufficient to fully define it. Results show that prediction accuracy when using solely physiological signals is low, however, when kinematic signals are included, it is largely improved. This suggests kinematic signals provide a reliable source of information for predicting elbow trajectories. Different models were trained using 10 algorithms. Regularization algorithms performed well in all conditions, whereas neural networks performed better when the most important features are selected. The extensive analysis provided in this study can be consulted to aid in the development of accurate upper limb motion intention detection models.
20
Stalljann S, Wöhle L, Schäfer J, Gebhard M. Performance Analysis of a Head and Eye Motion-Based Control Interface for Assistive Robots. Sensors (Basel) 2020; 20:7162. [PMID: 33327500; PMCID: PMC7764952; DOI: 10.3390/s20247162]
Abstract
Assistive robots support people with limited mobility in their everyday life activities and work. However, most of the assistive systems and technologies for supporting eating and drinking require a residual mobility in arms or hands. For people without residual mobility, different hands-free controls have been developed. For hands-free control, the combination of different modalities can lead to great advantages and improved control. The novelty of this work is a new concept to control a robot using a combination of head and eye motions. The control unit is a mobile, compact and low-cost multimodal sensor system. A Magnetic Angular Rate Gravity (MARG)-sensor is used to detect head motion and an eye tracker enables the system to capture the user’s gaze. To analyze the performance of the two modalities, an experimental evaluation with ten able-bodied subjects and one subject with tetraplegia was performed. To assess discrete control (event-based control), a button activation task was performed. To assess two-dimensional continuous cursor control, a Fitts’s Law task was performed. The usability study was related to a use-case scenario with a collaborative robot assisting a drinking action. The results of the able-bodied subjects show no significant difference between eye motions and head motions for the activation time of the buttons and the throughput, while, using the eye tracker in the Fitts’s Law task, the error rate was significantly higher. The subject with tetraplegia showed slightly better performance for button activation when using the eye tracker. In the use-case, all subjects were able to use the control unit successfully to support the drinking action. Due to the limited head motion of the subject with tetraplegia, button activation with the eye tracker was slightly faster than with the MARG-sensor. A further study with more subjects with tetraplegia is planned, in order to verify these results.
21
Prakash A, Sahi AK, Sharma N, Sharma S. Force myography controlled multifunctional hand prosthesis for upper-limb amputees. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102122]
22
Zhang Q, Iyer A, Kim K, Sharma N. Evaluation of Non-Invasive Ankle Joint Effort Prediction Methods for Use in Neurorehabilitation Using Electromyography and Ultrasound Imaging. IEEE Trans Biomed Eng 2020; 68:1044-1055. [PMID: 32759078; DOI: 10.1109/TBME.2020.3014861]
Abstract
OBJECTIVE Reliable measurement of voluntary human effort is essential for effective and safe interaction between the wearer and an assistive robot. Existing voluntary effort prediction methods that use surface electromyography (sEMG) are susceptible to prediction inaccuracies due to non-selectivity in measuring muscle responses. This technical challenge motivates an investigation into alternative non-invasive effort prediction methods that directly visualize the muscle response and improve effort prediction accuracy. The paper is a comparative study of ultrasound imaging (US)-derived neuromuscular signals and sEMG signals for their use in predicting isometric ankle dorsiflexion moment. Furthermore, the study evaluates the prediction accuracy of model-based and model-free voluntary effort prediction approaches that use these signals. METHODS The study evaluates sEMG signals and three US imaging-derived signals: pennation angle, muscle fascicle length, and echogenicity; and three voluntary effort prediction methods: linear regression (LR), feedforward neural network (FFNN), and Hill-type neuromuscular model (HNM). RESULTS In all the prediction methods, pennation angle and fascicle length significantly improve the prediction accuracy of dorsiflexion moment, when compared to echogenicity. Also, compared to LR, both FFNN and HNM improve dorsiflexion moment prediction accuracy. CONCLUSION The findings indicate that an FFNN or HNM approach using pennation angle or fascicle length predicts human ankle movement intent with higher accuracy. SIGNIFICANCE Accurate ankle effort prediction will pave the path to safe and reliable robotic assistance in patients with drop foot.
23
AlMohimeed I, Ono Y. Ultrasound Measurement of Skeletal Muscle Contractile Parameters Using Flexible and Wearable Single-Element Ultrasonic Sensor. Sensors (Basel) 2020; 20:3616. [PMID: 32605006; PMCID: PMC7374409; DOI: 10.3390/s20133616]
Abstract
Skeletal muscle is considered a near-constant-volume system, and the contractions of the muscle are related to the changes in tissue thickness. Assessment of skeletal muscle contractile parameters such as maximum contraction thickness (Th), contraction time (Tc), contraction velocity (Vc), sustain time (Ts), and half-relaxation time (Tr) provides valuable information for various medical applications. This paper presents a single-element wearable ultrasonic sensor (WUS) and a method to measure the skeletal muscle contractile parameters in A-mode ultrasonic data acquisition. The developed WUS was made of double-layer polyvinylidene fluoride (PVDF) piezoelectric polymer films with a simple and low-cost fabrication process. A flexible, lightweight, thin, and small WUS provides a secure attachment to the skin surface without affecting the muscle contraction dynamics of interest. The developed WUS was employed to monitor the contractions of the gastrocnemius (GC) muscle of a human subject. The GC muscle contractions were evoked by electrical muscle stimulation (EMS) at EMS frequencies varying from 2 Hz up to 30 Hz. The tissue thickness changes due to the muscle contractions were measured by utilizing a time-of-flight method in the ultrasonic through-transmission mode. The developed WUS demonstrated the capability to monitor the tissue thickness changes during unfused and fused tetanic contractions. The tetanic progression level was quantitatively assessed using the fusion index (FI) obtained. In addition, the contractile parameters (Th, Tc, Vc, Ts, and Tr) were successfully extracted from the measured tissue thickness changes, and the unfused and fused tetanus frequencies were estimated from the obtained FI-EMS frequency curve. The WUS and ultrasonic method proposed in this study could be a valuable tool for inexpensive, non-invasive, and continuous monitoring of skeletal muscle contractile properties.
Affiliation(s)
- Ibrahim AlMohimeed
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
- Department of Medical Equipment Technology, Majmaah University, Majmaah 11952, Saudi Arabia
- Yuu Ono
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
24
Hameed HK, Wan Hasan WZ, Shafie S, Ahmad SA, Jaafar H, Inche Mat LN. Investigating the performance of an amplitude-independent algorithm for detecting the hand muscle activity of stroke survivors. J Med Eng Technol 2020; 44:139-148. [PMID: 32396756; DOI: 10.1080/03091902.2020.1753838]
Abstract
To make robotic hand devices controlled by surface electromyography (sEMG) signals feasible and practical tools for assisting patients with hand impairments, the problems that prevent these devices from being widely used have to be overcome. The most significant problem is the involuntary amplitude variation of the sEMG signals due to the movement of electrodes during forearm motion. Moreover, for patients who have had a stroke or another neurological disease, the muscle activity of the impaired hand is weak and has a low signal-to-noise ratio (SNR). Thus, muscle activity detection methods intended for controlling robotic hand devices should not depend mainly on the amplitude characteristics of the sEMG signal in the detection process, and they need to be more reliable for sEMG signals that have a low SNR. Since amplitude-independent muscle activity detection methods meet these requirements, this paper investigates the performance of such a method on people who have had a stroke in terms of the detection of weak muscle activity and resistance to false alarms caused by the involuntary amplitude variation of sEMG signals; these two parameters are very important for achieving the reliable control of robotic hand devices intended for people with disabilities. A comparison between the performance of an amplitude-independent muscle activity detection algorithm and three amplitude-dependent algorithms was conducted by using sEMG signals recorded from six hemiparesis stroke survivors and from six healthy subjects. The results showed that the amplitude-independent algorithm performed better in terms of detecting weak muscle activity and resisting false alarms.
Affiliation(s)
- Husamuldeen Khalid Hameed
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia
- Wan Zuha Wan Hasan
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia
- Suhaidi Shafie
- Institute of Advanced Technology (ITMA), Universiti Putra Malaysia, Selangor, Malaysia
- Siti Anom Ahmad
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia
- Haslina Jaafar
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia
- Liyana Najwa Inche Mat
- Department of Medicine, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Selangor, Malaysia
25
Nowak M, Eiband T, Ramírez ER, Castellini C. Action interference in simultaneous and proportional myocontrol: comparing force- and electromyography. J Neural Eng 2020; 17:026011. [PMID: 32109906; DOI: 10.1088/1741-2552/ab7b1e]
Abstract
Myocontrol, that is, control of a prosthesis via muscle signals, is still a surprisingly hard problem. Recent research indicates that surface electromyography (sEMG), the traditional technique used to detect a subject's intent, could proficiently be replaced, or conjoined with, other techniques (multi-modal myocontrol), with the aim to improve both on dexterity and reliability. Objective. In this paper we present an online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. Approach. Twenty sEMG and FMG sensors in total were used to enforce simultaneous and proportional control of hand opening/closing, wrist pronation/supination and wrist flexion/extension of 12 intact subjects. Main results and Significance. We found that FMG yields in general a better performance than sEMG, and that the main drawback of the sEMG array we used is not the inability to perform a desired action, but rather action interference, that is, the undesired concurrent activation of another action. FMG, on the other hand, causes less interference.
Affiliation(s)
- Markus Nowak
- Institute of Robotics and Mechatronics, DLR-German Aerospace Center, Wessling, Germany
26
Assisted Grasping in Individuals with Tetraplegia: Improving Control through Residual Muscle Contraction and Movement. Sensors (Basel) 2019; 19:4532. [PMID: 31635286; PMCID: PMC6832396; DOI: 10.3390/s19204532]
Abstract
Individuals who sustained a spinal cord injury often lose important motor skills, and cannot perform basic daily living activities. Several assistive technologies, including robotic assistance and functional electrical stimulation, have been developed to restore lost functions. However, designing reliable interfaces to control assistive devices for individuals with C4–C8 complete tetraplegia remains challenging. Although with limited grasping ability, they can often control upper arm movements via residual muscle contraction. In this article, we explore the feasibility of drawing upon these residual functions to pilot two devices, a robotic hand and an electrical stimulator. We studied two modalities, supra-lesional electromyography (EMG), and upper arm inertial sensors (IMU). We interpreted the muscle activity or arm movements of subjects with tetraplegia attempting to control the opening/closing of a robotic hand, and the extension/flexion of their own contralateral hand muscles activated by electrical stimulation. Two groups were recruited: eight subjects issued EMG-based commands; nine other subjects issued IMU-based commands. For each participant, we selected at least two muscles or gestures detectable by our algorithms. Despite little training, all participants could control the robot’s gestures or electrical stimulation of their own arm via muscle contraction or limb motion.

27
Toxiri S, Näf MB, Lazzaroni M, Fernández J, Sposito M, Poliero T, Monica L, Anastasi S, Caldwell DG, Ortiz J. Back-Support Exoskeletons for Occupational Use: An Overview of Technological Advances and Trends. IISE Trans Occup Ergon Hum Factors 2019. [DOI: 10.1080/24725838.2019.1626303]
Affiliation(s)
- Stefano Toxiri, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Matthias B. Näf, Robotics and Multibody Mechanics Research Group, Department of Mechanical Engineering, Vrije Universiteit Brussel and Flanders Make, Brussels, Belgium
- Maria Lazzaroni, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Jorge Fernández, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Matteo Sposito, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Tommaso Poliero, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Luigi Monica, INAIL—Italian Workers' Compensation Authority, Rome, Italy
- Sara Anastasi, INAIL—Italian Workers' Compensation Authority, Rome, Italy
- Darwin G. Caldwell, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Jesús Ortiz, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy

28
Kyrarini M, Zheng Q, Haseeb MA, Graser A. Robot Learning of Assistive Manipulation Tasks by Demonstration via Head Gesture-based Interface. IEEE Int Conf Rehabil Robot 2019; 2019:1139-1146. [PMID: 31374783 DOI: 10.1109/icorr.2019.8779379]
Abstract
Assistive robotic manipulators have the potential to support the lives of people with severe motor impairments. They can enable individuals with disabilities to independently perform activities of daily living, such as drinking, eating, manipulating objects, and opening doors. An attractive solution is to let motor-impaired users teach a robot by providing demonstrations of daily living tasks: the user controls the robot 'manually' through an intuitive human-robot interface to provide a demonstration, after which the robot learns the performed task. However, the control of robotic manipulators by motor-impaired individuals is a challenging topic. In this paper, a novel head gesture-based interface for hands-free robot control and a framework for robot learning from demonstration are presented. The head gesture-based interface consists of a camera mounted on the user's hat, which records the changes in the viewed scene caused by head motion. Head gestures are recognized using optical flow for feature extraction and a support vector machine for gesture classification. The recognized head gestures are then mapped into robot control commands to perform object manipulation tasks. The robot learns the demonstrated task by generating a sequence of actions, and a Gaussian Mixture Model is used to segment the demonstrated path of the robot's end-effector. During robotic reproduction of the task, a modified Gaussian Mixture Model and Gaussian Mixture Regression are used to adapt to environmental changes. The proposed framework was evaluated in a real-world assistive robotic scenario in a small study involving 13 participants: 12 able-bodied and one tetraplegic. The results demonstrate the potential of the proposed framework to enable severely motor-impaired individuals to demonstrate daily living tasks to robotic manipulators.
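The gesture-classification stage described above (optical-flow features fed to a support vector machine) can be sketched as follows. This is a minimal illustration on synthetic flow features, not the authors' code: the two "gestures" and their mean-flow signatures are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative stand-in for optical-flow feature extraction: each gesture
# sample is summarized by mean horizontal/vertical flow over the camera frame.
def make_gesture_features(n, mean_flow):
    return rng.normal(loc=mean_flow, scale=0.1, size=(n, 2))

# Two hypothetical head gestures: a nod (vertical flow) and a shake (horizontal flow).
X = np.vstack([make_gesture_features(50, [0.0, 1.0]),   # nod
               make_gesture_features(50, [1.0, 0.0])])  # shake
y = np.array([0] * 50 + [1] * 50)

# Train the SVM gesture classifier on the flow features.
clf = SVC(kernel="rbf").fit(X, y)

# A new nod-like flow pattern maps to the "nod" command.
pred = clf.predict([[0.05, 0.9]])
print(pred[0])
```

In the paper, the predicted gesture label would then be mapped to a robot control command; here it is simply printed.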

29
Kaneishi D, Matthew RP, Leu JE, O'Donnell J, Zhang B, Tomizuka M, Stuart H. Hybrid Control Interface of a Semi-soft Assistive Glove for People with Spinal Cord Injuries. IEEE Int Conf Rehabil Robot 2019; 2019:132-138. [PMID: 31374619 DOI: 10.1109/icorr.2019.8779427]
Abstract
Active assistive devices have been designed to augment the hand-grasping capabilities of individuals with spinal cord injuries (SCI). An intuitive bio-signal, wrist extension, has been utilized for device control, imitating the passive grasping effect of tenodesis. However, controlling these devices in this manner restricts wrist joint motion while grasping. This paper presents a novel hybrid control interface and corresponding algorithms (i.e., a hybrid control method) for the Semi-soft Assistive Glove (SAG), developed for individuals with C6/C7 SCI. A secondary control interface enables or disables the grasp trigger signal generated by the primary interface, which detects wrist extension. A simulation study shows that the hybrid control method can handle grasping situations faced in daily activities. Empirical results with three healthy subjects suggest that the proposed method can help the user reach and grasp objects naturally with the SAG.

30
Abstract
In this review, we present an overview of the applications and computed parameters of electromyography (EMG) and near-infrared spectroscopy (NIRS) methods on patients in clinical practice. The eligible studies were those in which both techniques were combined to assess muscle characteristics from the electrical and hemodynamic points of view. With this aim, a comprehensive screening of the literature based on related keywords in the most widely used scientific databases allowed us to identify 17 papers that met the research criteria. We also present a brief overview of the devices designed specifically for muscular applications with EMG and NIRS sensors (a total of eight papers). A critical analysis of the results suggests that the combined use of EMG and NIRS on muscle has been only partially exploited for assessment and evaluation in clinical practice; this field thus shows promise for future developments.

31
Shen Y, Sun J, Ma J, Rosen J. Admittance Control Scheme Comparison of EXO-UL8: A Dual-Arm Exoskeleton Robotic System. IEEE Int Conf Rehabil Robot 2019; 2019:611-617. [PMID: 31374698 DOI: 10.1109/icorr.2019.8779545]
Abstract
In physical rehabilitation, exoskeleton assistive devices aim to restore lost motor functions in patients suffering from neuromuscular or musculoskeletal disorders. These devices operate in one of two modes: (1) passive mode, in which the exoskeleton moves its joints through the full range (or a subset) of the patient's motion, or (2) assist-as-needed (AAN) mode, in which the exoskeleton assists the patient's joints, either by initiating movements or by helping the patient complete the task at hand. Achieving high physical human-robot interaction (pHRI) transparency is an open problem for redundant exoskeletons with multiple degrees of freedom (DOFs). Using the EXO-UL8 exoskeleton, this study compares two multi-joint admittance control schemes (hyperparameter-based and Kalman filter-based) with comfort optimization to improve human-exoskeleton transparency. The control schemes were tested by three healthy subjects who completed reaching tasks while assisted by the exoskeleton. Kinematic information in both joint and task space, as well as force- and torque-based power exchange between the human arm and the exoskeleton, was collected and analyzed. The results show that the preliminary Kalman filter-based control scheme matches the performance of the existing hyperparameter-based scheme, highlighting the potential of the Kalman filter-based approach for further performance gains.

32
Nizamis K, Stienen AHA, Kamper DG, Keller T, Plettenburg DH, Rouse EJ, Farina D, Koopman BFJM, Sartori M. Transferrable Expertise From Bionic Arms to Robotic Exoskeletons: Perspectives for Stroke and Duchenne Muscular Dystrophy. IEEE Trans Med Robot Bionics 2019. [DOI: 10.1109/tmrb.2019.2912453]

33
Verros S, Lucassen K, Hekman EEG, Bergsma A, Verkerke GJ, Koopman BFJM. Evaluation of intuitive trunk and non-intuitive leg sEMG control interfaces as command input for a 2-D Fitts's law style task. PLoS One 2019; 14:e0214645. [PMID: 30943235 PMCID: PMC6447183 DOI: 10.1371/journal.pone.0214645]
Abstract
Duchenne muscular dystrophy (DMD) is a muscular condition that leads to muscle loss. Orthotic devices may offer people with DMD a way to perform activities of daily living (ADL). One such device is the active trunk support, which needs a control interface to identify the user's intention; myoelectric control interfaces can detect that intention and consequently control the support. Current research on the control of orthotic devices using surface electromyography (sEMG) signals as control inputs focuses mainly on muscles that are directly linked to the movement being performed (intuitive control). In some cases, however, it is hard to detect a proper sEMG signal (e.g., when there is a significant amount of fat), which can result in poor control performance. A way to overcome this problem might be the introduction of other, non-intuitive forms of control. This paper presents an explorative study comparing the learning behavior of two control interfaces that could be used for an active trunk support: one using sEMG of trunk muscles (intuitive) and one using sEMG of leg muscles (non-intuitive). Six healthy subjects undertook a 2-D Fitts's law style task, steering a cursor into targets distributed radially and symmetrically in five directions. The results show that the subjects were generally able to learn to control the task with either interface and to improve their performance over time. Comparison of the two interfaces showed that subjects learned the leg control task faster than the trunk control task. Moreover, performance on the diagonal targets was significantly lower than on the single-direction targets for both interfaces. Overall, the results show that the subjects were able to control a non-intuitive interface with high performance, indicating that non-intuitive control may be a viable solution for controlling an active trunk support.
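As background for the Fitts's-law style tasks recurring in these studies, the standard (Shannon) index of difficulty and the resulting throughput can be sketched as follows; the target distance, width, and movement time are illustrative numbers, not values from the study.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for a single pointing movement."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative target: the cursor travels 210 px to a 30 px-wide target in 1.5 s.
ID = index_of_difficulty(210, 30)  # log2(210/30 + 1) = log2(8) = 3.0 bits
tp = throughput(210, 30, 1.5)      # 3.0 / 1.5 = 2.0 bits/s
print(ID, tp)
```

Harder targets (farther away or narrower) raise the index of difficulty, which is why diagonal or small targets typically yield longer movement times.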
Affiliation(s)
- Stergios Verros, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Koen Lucassen, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Edsko E. G. Hekman, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Arjen Bergsma, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Gijsbertus J. Verkerke, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands; University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Bart F. J. M. Koopman, Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands

34
Tailor-Made Hand Exoskeletons at the University of Florence: From Kinematics to Mechatronic Design. Machines 2019. [DOI: 10.3390/machines7020022]
Abstract
Recently, robotics has increasingly become a companion for human beings, and assisting physically impaired people with robotic devices is showing encouraging signs for the application of this extensively investigated technology to the clinical field. As of today, however, exoskeleton design remains a demanding task and, even in modern robotics, aiding patients who have lost or injured their limbs is surely one of the most challenging goals. In this framework, the research activity carried out by the Department of Industrial Engineering of the University of Florence has concentrated on the development of portable, wearable and highly customizable hand exoskeletons to aid patients suffering from hand disabilities, and on the definition of patient-centered design strategies for tailor-made devices developed around the different users' needs. Three hand exoskeleton versions are presented in this paper, illustrating the major steps taken in the mechanical design and control of a compact and lightweight solution. The performance of the resulting systems has been tested in a real-use scenario. The obtained results are satisfying, indicating that the derived solutions may constitute a valid alternative to the hand exoskeletons studied so far in the rehabilitation and assistance fields.

35
Li R, Zhang X, Lu Z, Liu C, Li H, Sheng W, Odekhe R. An Approach for Brain-Controlled Prostheses Based on a Facial Expression Paradigm. Front Neurosci 2018; 12:943. [PMID: 30618572 PMCID: PMC6305548 DOI: 10.3389/fnins.2018.00943]
Abstract
One of the most exciting areas of rehabilitation research is brain-controlled prostheses, which translate electroencephalography (EEG) signals into control commands that operate prostheses. However, existing brain-control methods face a gap between the choice of brain-computer interface (BCI) paradigm and the performance it delivers. In this paper, a novel BCI system based on a facial expression paradigm is proposed for prosthesis control, exploiting the characteristics of the theta and alpha rhythms of the prefrontal and motor cortices. A portable brain-controlled prosthesis system was constructed to validate the feasibility of the facial-expression-based BCI (FE-BCI) system. Four types of facial expressions were used in this study. An effective filtering algorithm based on noise-assisted multivariate empirical mode decomposition (NA-MEMD) and sample entropy (SampEn) was used to remove electromyography (EMG) artifacts. A wavelet transform (WT) was applied to calculate the feature set, and a back-propagation neural network (BPNN) was employed as the classifier. To prove the effectiveness of the FE-BCI system for prosthesis control, 18 subjects took part in both offline and online experiments. The grand-average accuracy over the 18 subjects was 81.31 ± 5.82% in the online experiment. The experimental results indicate that the proposed FE-BCI system achieved good performance and can be efficiently applied to prosthesis control.
Affiliation(s)
- Rui Li, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Xiaodong Zhang, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Zhufeng Lu, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Chang Liu, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Hanzhe Li, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China
- Weihua Sheng, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK, United States; Shenzhen Academy of Robotics, Shenzhen, China
- Randolph Odekhe, Shaanxi Key Laboratory of Intelligent Robot, Xi'an Jiaotong University, Xi'an, China

36
Zhang N, Li X, Samuel OW, Huang PG, Fang P, Li G. A Pilot Study on Using Forcemyography to Record Upper-limb Movements for Human-machine Interactive Control. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:3788-3791. [PMID: 30441191 DOI: 10.1109/embc.2018.8513366]
Abstract
Forcemyography (FMG) is a useful method for recording body motions in real time, with potential applications in human-machine interactive control. FMG registers the change in the normal-direction force distribution on the muscle surface during limb movements, and body motions can be recognized by decoding the FMG patterns. In this study, we used FMG to record upper-limb movements and evaluated how different configurations of signal channels and features affect motion-classification performance. A four-channel wearable FMG acquisition system was developed to record seven upper-limb movements in each of six able-bodied subjects. The preliminary results showed that the number of signal channels has a significant influence on classification performance, whereas the influence of the number of signal features was insignificant. The effects of channel combination and feature combination are also discussed. This work supports the potential of FMG for body-motion recording and may provide useful guidance for applying FMG in human-machine interactive control.

37
Fonseca L, Bo A, Guiraud D, Navarro B, Gelis A, Azevedo-Coste C. Investigating Upper Limb Movement Classification on Users with Tetraplegia as a Possible Neuroprosthesis Interface. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:5053-5056. [PMID: 30441476 DOI: 10.1109/embc.2018.8513418]
Abstract
Spinal cord injury (SCI), stroke and other nervous system conditions can result in partial or total paralysis of an individual's limbs. Numerous technologies have been proposed to assist neurorehabilitation or movement restoration, e.g., robotics or neuroprostheses. However, individuals with tetraplegia often find it difficult to pilot these devices. We developed a system, based on a single inertial measurement unit located on the upper limb, that classifies performed movements using principal component analysis. We analyzed three calibration algorithms: unsupervised learning, supervised learning and adaptive learning. Eight participants with tetraplegia (C4–C7) piloted three different postures of a robotic hand. We achieved 89% accuracy with the supervised learning algorithm. Through offline simulation, we found accuracies of 76% for the unsupervised algorithm and 88% for the adaptive one.
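A minimal sketch of the supervised variant described above: project IMU feature windows with principal component analysis, then classify in the reduced space. The synthetic 6-axis "signatures" stand in for real IMU data and are not taken from the paper; the nearest-centroid classifier is an illustrative choice, not necessarily the authors'.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)

# Illustrative stand-in for IMU data: 6-axis (accel + gyro) feature windows
# for two upper-limb movements, drawn around movement-specific signatures.
def make_windows(n, signature):
    return rng.normal(loc=signature, scale=0.2, size=(n, 6))

X = np.vstack([make_windows(40, [1, 0, 0, 0.5, 0, 0]),
               make_windows(40, [0, 1, 0, 0, 0.5, 0])])
y = np.array([0] * 40 + [1] * 40)

# Reduce dimensionality with PCA, then classify in the principal-component space.
pca = PCA(n_components=2).fit(X)
clf = NearestCentroid().fit(pca.transform(X), y)

acc = clf.score(pca.transform(X), y)
print(acc)
```

In a real pipeline the classifier would be evaluated on held-out windows; training-set accuracy is used here only to keep the sketch short.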

38
A survey of human shoulder functional kinematic representations. Med Biol Eng Comput 2018; 57:339-367. [PMID: 30367391 PMCID: PMC6347660 DOI: 10.1007/s11517-018-1903-3]
Abstract
In this survey, we review the field of human shoulder functional kinematic representations. The central question of this review is whether current approaches in shoulder kinematics can meet the high-reliability computational challenge posed by applications such as robot-assisted rehabilitation, where the role of kinematic representations has mostly been overlooked. We therefore systematically searched and summarised the existing literature on shoulder kinematics. The shoulder is an important functional joint, and its large range of motion (ROM) poses several mathematical and practical challenges. In kinematic analysis, the shoulder articulation is frequently approximated as a ball-and-socket joint. In light of the high-reliability computational challenge, our review questions this inappropriate reductionism and proposes that the challenge could be met by kinematic representations that are redundant, that use an active interpretation, and that emphasise functional understanding.

39
Verros S, Mahmood N, Peeters L, Lobo-Prat J, Bergsma A, Hekman E, Verkerke GJ, Koopman B. Evaluation of Control Interfaces for Active Trunk Support. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1965-1974. [PMID: 30137011 DOI: 10.1109/tnsre.2018.2866956]
Abstract
A feasibility study was performed to evaluate four control interfaces for a novel trunk support assistive device (Trunk Drive) intended for adult men with Duchenne muscular dystrophy: joystick, force on the sternum, force on the feet, and electromyography (EMG). The objective was to evaluate the performance of the different control interfaces during a discrete position-tracking task. We built a one-degree-of-freedom flexion-extension active trunk support device and tested it on 10 healthy men. In an experiment based on Fitts's law, subjects were asked to steer a cursor representing the angle of the Trunk Drive into a target shown on a graphical user interface, using each of the above control interfaces. All subjects could operate the Trunk Drive via each interface. In general, the joystick and sternum force were the fastest in movement time (by more than 40%) with no significant difference between them, but there was a significant difference between sternum force on the one hand, and EMG and foot force on the other. All control interfaces proved to be feasible solutions for controlling an active trunk support, each with specific advantages.

40
A Piezoresistive Sensor to Measure Muscle Contraction and Mechanomyography. Sensors 2018; 18:2553. [PMID: 30081541 PMCID: PMC6111775 DOI: 10.3390/s18082553]
Abstract
Measurement of muscle contraction is mainly achieved through electromyography (EMG) and is of interest for many biomedical applications, including prosthesis control and human-machine interfaces. However, EMG has some drawbacks, and there are alternative methods for measuring muscle activity, such as monitoring the mechanical variations that occur during contraction. In this study, a new, simple, non-invasive sensor based on a force-sensitive resistor (FSR), able to measure muscle contraction, is presented. The sensor, applied to the skin through a rigid dome, senses the mechanical force exerted by the underlying contracting muscles. Although FSR creep causes output drift, appropriate FSR conditioning reduces the drift by fixing the voltage across the FSR, and provides a voltage output proportional to force. In addition to the larger contraction signal, the sensor was able to detect the mechanomyogram (MMG), i.e., the small vibrations that occur during muscle contraction. The frequency response of the FSR sensor was found to be large enough to measure the MMG correctly. Simultaneous recordings from the flexor carpi ulnaris showed a high correlation (Pearson's r > 0.9) between the FSR output and the EMG linear envelope. Preliminary validation tests on healthy subjects showed that the FSR sensor, used in place of EMG, could proportionally control a hand prosthesis with comparable performance.
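The conditioning idea described above can be sketched numerically: holding the FSR at a fixed voltage (for example, as the input resistor of an inverting op-amp stage) makes the output proportional to FSR conductance, which rises roughly linearly with applied force. All component values and the toy FSR model below are illustrative assumptions, not taken from the paper.

```python
V_REF = 0.5        # fixed voltage held across the FSR (V), illustrative
R_FEEDBACK = 10e3  # feedback resistor of the inverting amplifier (ohms), illustrative

def fsr_resistance(force_n, k=50e3):
    """Toy FSR model: conductance proportional to applied force (N)."""
    if force_n <= 0:
        return float("inf")
    return k / force_n

def output_voltage(force_n):
    """Inverting-amplifier output magnitude: Vout = V_REF * Rf / R_fsr."""
    return V_REF * R_FEEDBACK / fsr_resistance(force_n)

# Under this model the output scales linearly with force: doubling the
# force doubles the conductance, and hence the output voltage.
print(output_voltage(1.0), output_voltage(2.0))
```

Because the voltage across the FSR stays fixed regardless of its resistance, slow resistance creep no longer shifts the operating point, which is the drift-reduction effect the abstract describes.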

41
Toxiri S, Koopman AS, Lazzaroni M, Ortiz J, Power V, de Looze MP, O'Sullivan L, Caldwell DG. Rationale, Implementation and Evaluation of Assistive Strategies for an Active Back-Support Exoskeleton. Front Robot AI 2018; 5:53. [PMID: 33500935 PMCID: PMC7805873 DOI: 10.3389/frobt.2018.00053]
Abstract
Active exoskeletons are potentially more effective and versatile than passive ones, but designing them poses a number of additional challenges. An important open challenge in the field concerns the assistive strategy, by which the actuation forces are modulated to the user's needs during physical activity. This paper addresses that challenge on an active exoskeleton prototype aimed at reducing compressive low-back loads, which are associated with the risk of musculoskeletal injury during manual material handling (i.e., repeatedly lifting objects). An analysis of the biomechanics of the physical task reveals two key factors that determine low-back loads, and a suitable control strategy is implemented for each. The first strategy is based on user posture and modulates the assistance to support the wearer's own upper body. The second adapts to the mass of the lifted object and is a practical implementation of electromyographic control. A third strategy is devised as a generalized combination of the first two. With these strategies, the proposed exoskeleton can quickly adjust to different task conditions (making it versatile compared with using multiple task-specific devices) as well as to individual preference (promoting user acceptance). The presented implementation is also potentially applicable to more powerful exoskeletons capable of generating larger forces. The strategies were implemented on the exoskeleton and tested on 11 participants in an experiment reproducing the lifting task. The resulting data show that the strategies modulate the assistance as intended by design, i.e., they effectively adjust the commanded assistive torque during operation based on user posture and external mass. The experiment also provides evidence of a significant reduction (around 30%) in muscular activity at the lumbar spine associated with using the exoskeleton. This reduction is well in line with previous literature and may correspond to a lower risk of injury.
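The generalized combination of the posture-based and EMG-based strategies can be sketched as a weighted blend of the two assistive-torque terms. All gains, masses, and lever arms below are illustrative assumptions for the sketch, not values from the paper.

```python
import math

def posture_torque(trunk_angle_rad, upper_body_mass=40.0, com_dist=0.25, g=9.81):
    """Gravitational torque at the lumbar joint needed to support the upper body."""
    return upper_body_mass * g * com_dist * math.sin(trunk_angle_rad)

def emg_torque(emg_norm, gain=30.0):
    """Assistive torque proportional to normalized lumbar EMG amplitude (0..1)."""
    return gain * emg_norm

def assistive_torque(trunk_angle_rad, emg_norm, alpha=0.5):
    """Generalized combination of the two strategies (alpha in [0, 1])."""
    return alpha * posture_torque(trunk_angle_rad) + (1 - alpha) * emg_torque(emg_norm)

# Mid-flexion posture (45 degrees) with moderate muscle activity.
tau = assistive_torque(math.radians(45), 0.4)
print(round(tau, 2))
```

Setting alpha to 1 recovers the pure posture-based strategy and alpha to 0 the pure EMG-based one, which is the sense in which the third strategy generalizes the first two.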
Affiliation(s)
- Stefano Toxiri, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Axel S Koopman, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam, Netherlands
- Maria Lazzaroni, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Jesús Ortiz, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Valerie Power, School of Design, University of Limerick, Limerick, Ireland
- Michiel P de Looze, Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam, Netherlands; TNO, Leiden, Netherlands
- Leonard O'Sullivan, School of Design, University of Limerick, Limerick, Ireland; Health Research Institute, University of Limerick, Limerick, Ireland
- Darwin G Caldwell, Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy

42
A historical evaluation of Chinese tongue diagnosis in the treatment of septicemic plague in the pre-antibiotic era, and as a new direction for revolutionary clinical research applications. J Integr Med 2018; 16:141-146. [PMID: 29691189 DOI: 10.1016/j.joim.2018.04.001]
Abstract
Chinese tongue diagnosis was initially developed to diagnose and prescribe medicine quickly and efficiently while allowing the doctor minimal contact with the patient. At the time of its compilation, the spread of Yersinia pestis, often causing septicaemia and gangrene of the extremities, may have discouraged doctors from coming into direct contact with their patients to take the pulse. In recent decades, however, modern developments in traditional Chinese medicine, together with the spread of antibiotics and advances in microbiology, have overshadowed the original purpose of this methodology. Nevertheless, the fast-approaching post-antibiotic era and the development of artificial intelligence may hold new applications for tongue diagnosis. This article focuses on the historical development of the world's earliest tongue diagnosis monograph and discusses directions in which such knowledge may be used in future clinical research.

43
Wang Z, Majewicz Fey A. Human-centric predictive model of task difficulty for human-in-the-loop control tasks. PLoS One 2018; 13:e0195053. [PMID: 29621301 PMCID: PMC5886487 DOI: 10.1371/journal.pone.0195053]
Abstract
Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, performed through a haptic device interface, using human-centric measurement data on cognition, physical effort, and motion kinematics. Noninvasive sensors recorded the multimodal responses of 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compared four models for predicting task difficulty to evaluate the roles of the various metric types: (I) a movement-time model; (II) a fusion model using both physiological and kinematic metrics; (III) a model with only kinematic metrics; and (IV) a model with only physiological metrics. The results show significant correlation between task difficulty and the user's sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by the model using only kinematic metrics (R2 = 0.921). Both were better predictors of task difficulty than the movement-time model (R2 = 0.847) derived from Fitts's law, a well-studied difficulty model for human psychomotor control.
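The movement-time baseline above is the classic Fitts's-law regression MT = a + b * ID, scored by its coefficient of determination. A minimal sketch on synthetic data (the coefficients and noise level are invented for the example, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pointing data: movement times generated from MT = a + b * ID plus noise.
ID = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # index of difficulty (bits)
MT = 0.2 + 0.35 * ID + rng.normal(0, 0.02, ID.size)  # movement time (s)

# Fit the movement-time model by least squares (polyfit returns slope first).
b, a = np.polyfit(ID, MT, 1)

# Coefficient of determination R^2 of the fitted model.
pred = a + b * ID
ss_res = np.sum((MT - pred) ** 2)
ss_tot = np.sum((MT - MT.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

The fusion and kinematic models in the paper would be scored with the same R^2 statistic, just with richer predictors than the index of difficulty alone.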
Affiliation(s)
- Ziheng Wang
  - Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX 75080, United States of America
- Ann Majewicz Fey
  - Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX 75080, United States of America
  - Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, United States of America
44
Abstract
This work reports preliminary results on hand movement recognition with Near-InfraRed Spectroscopy (NIRS) and surface ElectroMyoGraphy (sEMG). Whether based on physical contact (touchscreens, data gloves, etc.), vision techniques (Microsoft Kinect, Sony PlayStation Move, etc.), or other modalities, hand movement recognition is a pervasive function in today's environment and underlies many gaming, social, and medical applications. Although in recent years the use of muscle information extracted by sEMG has spread from medical applications into the consumer world, this technique still falls short when dealing with movements of the hand. We tested NIRS as a technique that provides another point of view on muscle phenomena and showed that, within a specific selection of movements, NIRS can be used to recognize movements and return information about muscles at different depths. Furthermore, we propose three different multimodal movement-recognition approaches and compare their performances.
45
Jackowski A, Gebhard M, Thietje R. Head Motion and Head Gesture-Based Robot Control: A Usability Study. IEEE Trans Neural Syst Rehabil Eng 2018; 26:161-170. [PMID: 29324407 DOI: 10.1109/tnsre.2017.2765362] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The assistive robot system adaptive head motion control for user-friendly support (AMiCUS) has been developed to increase the autonomy of motion-impaired people. The six-degree-of-freedom robot arm with gripper is controlled with head motion and head gestures only, so tetraplegics in particular benefit from collaboration with AMiCUS. In this paper, a usability study with a total of 30 subjects was conducted to validate the AMiCUS interaction technology and design. 24 able-bodied subjects from demographically diverse groups and 6 tetraplegics participated in the study. All subjects performed different pick-and-place tasks by controlling AMiCUS. The evaluation of the interaction design was carried out subjectively with a questionnaire as well as objectively by measuring time, completion rate, and the number of trials needed for correct head-gesture performance. The influence of several factors, such as age, sex, motion impairment, and previous experience, on head-motion-based human-robot interaction was analyzed. The interaction design proved successful in a laboratory environment and was assessed positively overall by the subjects. The results of the presented study confirm the usability of the assistive robot AMiCUS. AMiCUS has the potential to benefit tetraplegics by improving their independence in activities of daily living and adapted workplaces.
46
47
Lobo-Prat J, Nizamis K, Janssen MMHP, Keemink AQL, Veltink PH, Koopman BFJM, Stienen AHA. Comparison between sEMG and force as control interfaces to support planar arm movements in adults with Duchenne: a feasibility study. J Neuroeng Rehabil 2017; 14:73. [PMID: 28701169 PMCID: PMC5508565 DOI: 10.1186/s12984-017-0282-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2016] [Accepted: 06/26/2017] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Adults with Duchenne muscular dystrophy (DMD) can benefit from devices that actively support their arm function. A critical component of such devices is the control interface, as it is responsible for the human-machine interaction. Our previous work indicated that surface electromyography (sEMG) and force-based control with active gravity and joint-stiffness compensation were feasible solutions for the support of elbow movements (one degree of freedom). In this paper, we extend the evaluation of sEMG- and force-based control interfaces to simultaneous and proportional control of planar arm movements (two degrees of freedom). METHODS Three men with DMD (18-23 years old) with different levels of arm function (i.e. Brooke scores of 4, 5 and 6) performed a series of line-tracing tasks over a tabletop surface using an experimental active arm support. The arm movements were controlled using three control methods: sEMG-based control, force-based control with stiffness compensation (FSC), and force-based control with no compensation (FNC). Movement performance was evaluated in terms of percentage of task completion, tracing error, smoothness and speed. RESULTS For subject S1 (Brooke 4), FNC was the preferred method and performed better than FSC and sEMG. FNC was not usable for subjects S2 (Brooke 5) and S3 (Brooke 6). Subject S2 presented significantly lower movement speed with sEMG than with FSC, yet he preferred sEMG since FSC was perceived as too fatiguing. Subject S3 could not successfully use either of the two force-based control methods, whereas with sEMG he could reach almost his entire workspace. CONCLUSIONS Movement performance and subjective preference for the three control methods differed with the level of arm function of the participants. Our results indicate that all three control methods have to be considered in real applications, as they present complementary advantages and disadvantages.
The fact that the two weaker subjects (S2 and S3) experienced the force-based control interfaces as fatiguing suggests that sEMG-based control interfaces could be a better solution for adults with DMD. Yet force-based control interfaces can be a better alternative in cases where voluntary forces are higher than the stiffness forces of the arms.
Affiliation(s)
- Joan Lobo-Prat
  - Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Kostas Nizamis
  - Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Mariska M H P Janssen
  - Department of Rehabilitation, Radboud University Medical Center, Reinier Postlaan 4, Nijmegen, 6500 HB, The Netherlands
- Arvid Q L Keemink
  - Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Peter H Veltink
  - Department of Biomedical Signals and Systems, University of Twente, Drienerlolaan 5, Enschede, 7500 AE, The Netherlands
- Bart F J M Koopman
  - Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
- Arno H A Stienen
  - Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
  - Department of Physical Therapy and Human Movement Sciences, Northwestern University, 645 N Michigan Ave Suite 1100, Chicago, IL 60611, USA
48
Nowak M, Eiband T, Castellini C. Multi-modal myocontrol: Testing combined force- and electromyography. IEEE Int Conf Rehabil Robot 2017; 2017:1364-1368. [PMID: 28814010 DOI: 10.1109/icorr.2017.8009438] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Myocontrol, that is, control of prostheses using bodily signals, has proved over the decades to be a surprisingly hard problem for the scientific community of assistive and rehabilitation robotics. In particular, traditional surface electromyography (sEMG) seems to be no longer enough to guarantee dexterity (i.e., control over several degrees of freedom) and, most importantly, reliability. Multi-modal myocontrol is concerned with the idea of using novel signal-gathering techniques as a replacement for, or alongside, sEMG, to provide high-density and diverse signals that improve dexterity and make the control more reliable. In this paper we present an offline and online assessment of multi-modal sEMG and force myography (FMG) targeted at hand and wrist myocontrol. A total of twenty sEMG and FMG sensors were used simultaneously, in several combined configurations, to predict opening/closing of the hand and activation of two degrees of freedom of the wrist in ten intact subjects. The analysis aimed at determining the optimal sensor combination and control parameters; the experimental results indicate that sEMG sensors alone perform worst, yielding an nRMSE of 9.1%, while mixing FMG and sEMG or using FMG only reduces the nRMSE to 5.2-6.6%. To validate these results, we engaged the subject with median performance in an online goal-reaching task. Analysis of this further experiment reveals that the online behaviour is similar to the offline one.
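The nRMSE figures quoted above normalize the prediction error by the range of the target signal and report it as a percentage. A minimal sketch of that metric on synthetic data (the target trajectory and both predictors below are placeholders, not the study's recorded signals):

```python
import numpy as np

# Synthetic "desired activation" trajectory and two noisy predictors standing
# in for regression-based myocontrol outputs. Noise levels are arbitrary.
rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 2 * np.pi, 200))
pred_emg = target + rng.normal(0, 0.15, target.shape)    # e.g. an sEMG-only predictor
pred_fused = target + rng.normal(0, 0.08, target.shape)  # e.g. a fused sEMG+FMG predictor

def nrmse(y_true, y_pred):
    """RMSE normalized by the target's range, in percent."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

print(f"sEMG only: {nrmse(target, pred_emg):.1f}%")
print(f"fused    : {nrmse(target, pred_fused):.1f}%")
```

Normalizing by the target range makes the error comparable across degrees of freedom with different excursion amplitudes, which is why it is a common choice for scoring proportional myocontrol.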
49
Abstract
OBJECTIVE Disease processes are often marked by both neural and muscular changes that alter movement control and execution, but these adaptations are difficult to tease apart because they occur simultaneously. This is addressed by swapping an individual's limb dynamics with a neurally controlled facsimile using an interactive musculoskeletal simulator (IMS) that allows controlled modifications of musculoskeletal dynamics. This paper details the design and operation of the IMS, quantifies and describes human adaptation to the IMS, and determines whether the IMS allows users to move naturally, a prerequisite for manipulation experiments. METHODS Healthy volunteers (n = 4) practiced a swift goal-directed task (back-and-forth elbow flexion/extension) for 90 trials with the IMS off (normal dynamics) and 240 trials with the IMS on, i.e., the actions of a user's personalized electromyography-driven musculoskeletal model are robotically imposed back onto the user. RESULTS After practicing with the IMS on, subjects could complete the task with end-point errors of 1.56°, close to the speed-matched IMS-off error of 0.57°. Muscle activity, joint torque, and arm kinematics for the IMS-on and IMS-off conditions were well matched for three subjects (root-mean-squared error [RMSE] = 0.16 N·m), but the error was higher for one subject of small stature (RMSE = 0.25 N·m). CONCLUSION A well-matched musculoskeletal model allowed IMS users to perform a goal-directed task nearly as well as when the IMS was not active. SIGNIFICANCE This advancement permits real-time manipulations of musculoskeletal dynamics, which could increase our understanding of muscular and neural co-adaptations to injury, disease, disuse, and aging.
50
Li X, Samuel OW, Zhang X, Wang H, Fang P, Li G. A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees. J Neuroeng Rehabil 2017; 14:2. [PMID: 28061779 PMCID: PMC5219671 DOI: 10.1186/s12984-016-0212-z] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2016] [Accepted: 12/14/2016] [Indexed: 12/02/2022] Open
Abstract
Background Most modern motorized prostheses are controlled with surface electromyography (sEMG) recorded on the residual muscles of amputated limbs. However, the residual muscles are usually limited, especially after above-elbow amputations, and would not provide enough sEMG for the control of prostheses with multiple degrees of freedom. Signal fusion is a possible approach to the problem of insufficient control commands, in which non-EMG signals are combined with sEMG signals to provide sufficient information for motion intention decoding. In this study, a motion-classification method that combines sEMG and electroencephalography (EEG) signals was proposed and investigated in order to improve the control performance of upper-limb prostheses. Methods Four transhumeral amputees without any form of neurological disease were recruited in the experiments. Five motion classes were specified: hand-open, hand-close, wrist-pronation, wrist-supination, and no-movement. During the motion performances, sEMG and EEG signals were simultaneously acquired from the skin surface and scalp of the amputees, respectively. The two types of signals were independently preprocessed and then combined as a parallel control input. Four time-domain features were extracted and fed into a classifier trained with the Linear Discriminant Analysis (LDA) algorithm for motion recognition. In addition, channel selection was performed with the Sequential Forward Selection (SFS) algorithm to optimize the performance of the proposed method. Results The classification performance achieved by the fusion of sEMG and EEG signals was significantly better than that obtained from either signal source alone. An increment of more than 14% in classification accuracy was achieved when using a combination of 32-channel sEMG and 64-channel EEG.
Furthermore, based on the SFS algorithm, two optimized electrode arrangements (10-channel sEMG + 10-channel EEG, and 10-channel sEMG + 20-channel EEG) were obtained with classification accuracies of 84.2% and 87.0%, respectively, about 7.2% and 10% higher than the accuracy using only the 32-channel sEMG input. Conclusions This study demonstrated the feasibility of fusing sEMG and EEG signals to improve motion-classification accuracy for above-elbow amputees, which might enhance the control performance of multifunctional myoelectric prostheses in clinical applications. Trial registration The study was approved by the ethics committee of the Institutional Review Board of Shenzhen Institutes of Advanced Technology; the reference number is SIAT-IRB-150515-H0077.
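As a rough illustration of a time-domain-feature + LDA pipeline like the one this abstract describes, the sketch below computes the classic Hudgins feature set (mean absolute value, zero crossings, slope-sign changes, waveform length) on synthetic signal windows and trains scikit-learn's LDA. The signals, window length, threshold, and two-class setup are assumptions for illustration, not the study's settings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(x, thr=0.01):
    """Hudgins time-domain features for one signal window (hypothetical threshold)."""
    mav = np.mean(np.abs(x))                        # mean absolute value
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > thr))     # thresholded zero crossings
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]),
                             np.abs(d[1:])) > thr)) # slope-sign changes
    wl = np.sum(np.abs(d))                          # waveform length
    return np.array([mav, zc, ssc, wl], dtype=float)

# Two fake "motion classes": low- vs high-amplitude noise windows (200 samples each),
# standing in for rectified sEMG/EEG channels. Purely synthetic data.
rng = np.random.default_rng(1)
X = np.array([td_features(rng.normal(0, a, 200)) for a in [0.2] * 40 + [1.0] * 40])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])  # train on even-indexed windows
acc = clf.score(X[1::2], y[1::2])                       # evaluate on odd-indexed windows
print(f"held-out accuracy: {acc:.2f}")
```

In a real fusion setup, the feature vectors from the sEMG and EEG channels would be concatenated before training, and a channel-selection loop (such as SFS) would grow the channel subset greedily by held-out accuracy.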
Affiliation(s)
- Xiangxin Li
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China
  - Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518055, China
- Oluwarotimi Williams Samuel
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China
  - Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518055, China
- Xu Zhang
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China
  - Department of Biology, South University of Science and Technology of China, Shenzhen, 518055, China
- Hui Wang
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China
  - Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518055, China
- Peng Fang
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China
- Guanglin Li
  - Chinese Academy of Sciences (CAS) Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, China