1
Losanno E, Ceradini M, Agnesi F, Righi G, Del Popolo G, Shokur S, Micera S. A Virtual Reality-Based Protocol to Determine the Preferred Control Strategy for Hand Neuroprostheses in People With Paralysis. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2261-2269. [PMID: 38865234 DOI: 10.1109/tnsre.2024.3413192]
Abstract
Hand neuroprostheses restore voluntary movement in people with paralysis through neuromodulation protocols. There are a variety of strategies for controlling hand neuroprostheses, based either on residual body movements or on brain activity. There is no universally superior solution; rather, the best approach may vary from patient to patient. Here, we propose a protocol based on an immersive virtual reality (VR) environment that simulates the use of a hand neuroprosthesis, allowing patients to experience and familiarize themselves with various control schemes in clinically relevant tasks and choose their preferred one. We used our VR environment to compare two alternative control strategies over 5 days of training in four patients with C6 spinal cord injury: (a) control via the ipsilateral wrist and (b) control via the contralateral shoulder. We did not find a one-size-fits-all solution but rather a subject-specific preference that could not be predicted from a general clinical assessment alone. The main results were that the VR simulation allowed participants to experience the pros and cons of the proposed strategies and make an educated choice, and that performance improved longitudinally. This shows that our VR-based protocol is a useful tool for personalizing and training the control strategy of hand neuroprostheses, which could help promote user comfort and thus acceptance.
2
Hussain I, Jany R. Interpreting Stroke-Impaired Electromyography Patterns through Explainable Artificial Intelligence. Sensors (Basel) 2024; 24:1392. [PMID: 38474928 DOI: 10.3390/s24051392]
Abstract
Electromyography (EMG) provides an invaluable myoelectric window into the neuromuscular alterations resulting from ischemic stroke, serving as a potential marker for diagnosing gait impairments caused by ischemia. This study aims to develop an interpretable machine learning (ML) framework capable of distinguishing the myoelectric patterns of stroke patients from those of healthy individuals through Explainable Artificial Intelligence (XAI) techniques. The research included 48 stroke patients (average age 70.6 years, 65% male) undergoing treatment at a rehabilitation center, alongside 75 healthy adults (average age 76.3 years, 32% male) as the control group. EMG signals were recorded from wearable devices positioned on the biceps femoris and lateral gastrocnemius muscles of both lower limbs during indoor ground walking in a gait laboratory. Boosting ML techniques were deployed to identify stroke-related gait impairments using EMG gait features. Furthermore, we employed XAI techniques, such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Anchors, to interpret the role of EMG variables in the stroke-prediction models. Among the ML models assessed, the GBoost model demonstrated the highest classification performance (AUROC: 0.94) during cross-validation with the training dataset, and it also performed well (AUROC: 0.92, accuracy: 85.26%) when evaluated on the testing EMG dataset. Through SHAP and LIME analyses, the study identified that the EMG spectral features distinguishing the stroke group from the control group were associated with the right biceps femoris and lateral gastrocnemius muscles. This interpretable EMG-based stroke prediction model holds promise as an objective tool for predicting post-stroke gait impairments. Its potential application could greatly assist post-stroke rehabilitation management by providing reliable EMG biomarkers and by addressing potential gait impairment in individuals recovering from ischemic stroke.
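The model-agnostic attribution idea behind SHAP and LIME can be illustrated with a simpler cousin, permutation importance: shuffle one feature and measure how much a fixed classifier's accuracy drops. The sketch below uses synthetic data and invented feature names (`mnf_right_lg`, etc.), not the study's dataset or its GBoost model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Synthetic "gait features": only column 0 carries class information.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
feature_names = ["mnf_right_lg", "mdf_left_bf", "mnp_left_lg"]

def accuracy(X, y):
    # A fixed, pre-"trained" decision rule standing in for the boosted model.
    return np.mean((X[:, 0] > 0).astype(int) == y)

base = accuracy(X, y)
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature/label link
    importances.append(base - accuracy(Xp, y))

best = feature_names[int(np.argmax(importances))]
```

Shuffling the informative feature collapses accuracy toward chance, so its importance dominates; uninformative features score near zero.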
Affiliation(s)
- Iqram Hussain: Department of Anesthesiology, Weill Cornell Medicine, Cornell University, New York, NY 10065, USA
- Rafsan Jany: Department of Computer Science and Engineering, Islamic University of Technology (IUT), Gazipur 1704, Bangladesh

3
Buczak MK, Zollinger JM, Alsaleem A, Imburgia R, Rosenbluth J, George JA. Intuitive, Myoelectric Control of Adaptive Sports Equipment for Individuals with Tetraplegia. IEEE Int Conf Rehabil Robot 2023; 2023:1-6. [PMID: 37941260 DOI: 10.1109/icorr58425.2023.10304759]
Abstract
This research aims to develop safe, robust, and easy-to-use adaptive technology for individuals with tetraplegia. After a debilitating spinal cord injury, clinical care focuses on improving quality of life. Participation in adaptive sports has been shown to improve several aspects of participants' well-being. The TetraSki is a power-assisted ski chair that allows individuals with tetraplegia to participate in downhill skiing by sipping and puffing air on an integrated straw to turn their skis. Here, we introduce a new intuitive and dexterous control strategy for the TetraSki using surface electromyography (sEMG) from the neck and shoulder muscles. As an initial assessment, six healthy participants completed a virtual ski racecourse using sEMG and Sip-and-Puff control. Participants also completed a detection response task of cognitive load and the NASA-TLX survey of subjective workload. No significant differences were observed between the performance of sEMG control and the performance of Sip-and-Puff control. However, sEMG control required significantly less cognitive load and subjective workload than Sip-and-Puff control. These results indicate that sEMG can effectively control the equipment and is significantly more intuitive than traditional Sip-and-Puff control. This suggests that sEMG is a promising control method for further validation with individuals with tetraplegia. Ultimately, long-term use of sEMG control may promote neuroplasticity and drive rehabilitation.
4
Pinheiro DJLL, Faber J, Micera S, Shokur S. Human-machine interface for two-dimensional steering control with the auricular muscles. Front Neurorobot 2023; 17:1154427. [PMID: 37342389 PMCID: PMC10277645 DOI: 10.3389/fnbot.2023.1154427]
Abstract
Human-machine interfaces (HMIs) can be used to decode a user's motor intention to control an external device. People with motor disabilities, such as spinal cord injury, can benefit from the use of these interfaces. While many solutions exist in this direction, there is still room for improvement from the decoding, hardware, and subject motor-learning perspectives. Here we show, in a series of experiments with non-disabled participants, a novel decoding and training paradigm allowing naïve participants to use their auricular muscles (AM) to control two degrees of freedom with a virtual cursor. AMs are particularly interesting because they are vestigial muscles and are often preserved after neurological diseases. Our method relies on surface electromyographic recordings and uses the contraction levels of both AMs to modulate the velocity and direction of a cursor in a two-dimensional paradigm. We used a locking mechanism to fix the current position of each axis separately, enabling the user to stop the cursor at a certain location. A five-session training procedure (20-30 min per session) with a 2D center-out task was performed by five volunteers. All participants increased their success rate (initial: 52.78 ± 5.56%; final: 72.22 ± 6.67%; median ± median absolute deviation) and improved their trajectory performance throughout the training. We implemented a dual task with visual distractors to assess the mental challenge of controlling the cursor while executing another task; our results suggest that participants could perform the task in cognitively demanding conditions (success rate of 66.67 ± 5.56%). Finally, using the NASA Task Load Index questionnaire, we found that participants reported lower mental demand and effort in the last two sessions. In summary, all subjects learned to control the movement of a cursor with two degrees of freedom using their AM, with a low impact on cognitive load. Our study is a first step toward developing AM-based decoders for HMIs for people with motor disabilities, such as spinal cord injury.
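The velocity mapping with per-axis locking described in this abstract can be sketched in a few lines. This is a plausible reconstruction under stated assumptions (each auricular muscle's contraction level drives one axis's velocity; a locked axis holds its position), not the paper's actual decoder.

```python
import numpy as np

def update_cursor(pos, left_level, right_level, locks, gain=1.0, dt=0.05):
    """One update step: contraction levels (0..1) set per-axis velocity;
    any axis flagged in `locks` is frozen at its current position."""
    # Assumed mapping: right AM drives x, left AM drives y.
    vel = gain * np.array([right_level, left_level])
    vel[locks] = 0.0            # locking mechanism: freeze that axis
    return pos + vel * dt

pos = np.zeros(2)
for _ in range(10):             # 10 steps with the y-axis locked
    pos = update_cursor(pos, left_level=0.5, right_level=1.0,
                        locks=np.array([False, True]))
```

With the y-axis locked, only x advances (here by 1.0 x 0.05 per step), mirroring how a user would stop one axis while steering the other.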
Affiliation(s)
- Daniel J. L. L. Pinheiro: Division of Neuroscience, Department of Neurology and Neurosurgery, Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, Brazil; Translational Neural Engineering Lab, Institute Neuro X, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Jean Faber: Division of Neuroscience, Department of Neurology and Neurosurgery, Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Universidade Federal de São Paulo, São Paulo, Brazil; Neuroengineering Laboratory, Division of Biomedical Engineering, Instituto de Ciência e Tecnologia, Universidade Federal de São Paulo, São José dos Campos, Brazil
- Silvestro Micera: Translational Neural Engineering Lab, Institute Neuro X, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Department of Excellence in Robotics and AI, Institute of BioRobotics Interdisciplinary Health Center, Scuola Superiore Sant'Anna, Pisa, Italy
- Solaiman Shokur: Translational Neural Engineering Lab, Institute Neuro X, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland

5
Deng CL, Tian CY, Kuai SG. A combination of eye-gaze and head-gaze interactions improves efficiency and user experience in an object positioning task in virtual environments. Appl Ergon 2022; 103:103785. [PMID: 35490546 DOI: 10.1016/j.apergo.2022.103785]
Abstract
Eye-gaze and head-gaze are two hands-free interaction modes in virtual reality, each with demonstrated strengths. Selecting suitable interaction modes in different scenarios is important for efficient interaction in virtual scenes. This study compared movement time in an object positioning task, examining eye-gaze and head-gaze interaction under various conditions and identifying the zones in which each mode is superior. Based on this information, we designed a combination mode, using eye-gaze interaction in the acceleration and deceleration phases and head-gaze interaction in the correction phase, to achieve an optimal interaction mode that yielded higher efficiency and subjective satisfaction. This study provides a comprehensive analysis of the characteristics of the eye-gaze and head-gaze interaction modes and offers valuable insights for selecting appropriate interaction modes in virtual reality applications.
Affiliation(s)
- Cheng-Long Deng: Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China
- Chen-Yu Tian: Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China
- Shu-Guang Kuai: Institute of Brain and Education Innovation, Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, 200031, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China

6
Mitchell CL, Cler GJ, Fager SK, Contessa P, Roy SH, De Luca G, Kline JC, Vojtech JM. Ability-Based Methods for Personalized Keyboard Generation. Multimodal Technol Interact 2022; 6:67. [PMID: 36313956 PMCID: PMC9608338 DOI: 10.3390/mti6080067]
Abstract
This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. The characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities through capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly increased communication rates using the personalized keyboard (52.0 bits/min) when compared to a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces.
Affiliation(s)
- Gabriel J. Cler: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
- Susan K. Fager: Institute of Rehabilitation Science and Engineering, Madonna Rehabilitation Hospital, Lincoln, NE 68506, USA
- Paola Contessa: Delsys, Inc., Natick, MA 01760, USA; Altec, Inc., Natick, MA 01760, USA
- Serge H. Roy: Delsys, Inc., Natick, MA 01760, USA; Altec, Inc., Natick, MA 01760, USA
- Gianluca De Luca: Delsys, Inc., Natick, MA 01760, USA; Altec, Inc., Natick, MA 01760, USA
- Joshua C. Kline: Delsys, Inc., Natick, MA 01760, USA; Altec, Inc., Natick, MA 01760, USA

7
Rahmaniar W, Ma'Arif A, Lin TL. Touchless Head-Control (THC): Head Gesture Recognition for Cursor and Orientation Control. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1817-1828. [PMID: 35771790 DOI: 10.1109/tnsre.2022.3187472]
Abstract
Touchless techniques in human-computer interaction (HCI) can effectively expand computer access for people with disabilities. This paper presents Touchless Head-Control (THC), an assistive method for computer cursor control based on head pose captured with an RGB camera. Our work aimed to replace standard cursor control with control via the user's head pose. Convolutional neural networks with predicted fine-grained feature maps and binned classification were applied to estimate the head pose angles. The mouse pointer is moved to locations on the screen based on head movement (yaw and pitch) and the center position of the face, while tilting the head to the right or left (roll) controls the mouse button. In addition, the proposed method can simulate robot or joystick movement, using the head to control objects in three degrees of freedom (DOF). Various participants were involved in the interaction design evaluation, in which target selection accuracy, travel time, and path efficiency were measured. This technology allows people with limited motor skills to easily control a PC cursor and 3D object orientation without additional equipment or sensors.
8
Nawfel JL, Englehart KB, Scheme EJ. The Influence of Training with Visual Biofeedback on the Predictability of Myoelectric Control Usability. IEEE Trans Neural Syst Rehabil Eng 2022; 30:878-892. [PMID: 35333717 DOI: 10.1109/tnsre.2022.3162421]
Abstract
Studies have shown that closed-loop myoelectric control schemes can lead to changes in user performance and behavior compared to open-loop systems. When users are placed within the control loop, such as during real-time use, they must correct for errors made by the controller and learn what behavior is necessary to produce desired outcomes. Augmented feedback, consequently, has been used to incorporate the user throughout the training process and to facilitate learning. This work explores the effect of visual feedback presented during user training on both the performance and predictability of a myoelectric classification-based control system. Our results suggest that properly designed feedback mechanisms and training tasks can influence the quality of the training data and the ability to predict usability using linear combinations of metrics derived from feature space. Furthermore, our results confirm that the most common in-lab training protocol, screen-guided training, may yield training data that are less representative of online use than training protocols that incorporate the user in the loop. These results suggest that training protocols should be designed to better parallel the testing environment, more effectively preparing both the algorithms and users for real-time control.
9
Schultz JR, Slifkin AB, Schearer EM. Controlling an effector with eye movements: The effect of entangled sensory and motor responsibilities. PLoS One 2022; 17:e0263440. [PMID: 35113943 PMCID: PMC8812848 DOI: 10.1371/journal.pone.0263440]
Abstract
Restoring arm and hand function has been indicated by individuals with tetraplegia as one of the most important factors for regaining independence. The overall goal of our research is to develop assistive technologies that allow individuals with tetraplegia to control functional reaching movements. This study served as an initial step toward our overall goal by assessing the feasibility of using eye movements to control the motion of an effector in an experimental environment. We aimed to understand how additional motor requirements placed on the eyes affected eye-hand coordination during functional reaching. We were particularly interested in how eye fixation error was affected when the sensory and motor functions of the eyes were entangled due to the additional motor responsibility. We recorded participants’ eye and hand movements while they reached for targets on a monitor. We presented a cursor at the participant’s point of gaze position which can be thought of as being similar to the control of an assistive robot arm. To measure eye fixation error, we used an offline filter to extract eye fixations from the raw eye movement data. We compared the fixations to the locations of the targets presented on the monitor. The results show that not only are humans able to use eye movements to direct the cursor to a desired location (1.04 ± 0.15 cm), but they can do so with error similar to that of the hand (0.84 ± 0.05 cm). In other words, despite the additional motor responsibility placed on the eyes during direct eye-movement control of an effector, the ability to coordinate functional reaching movements was unaffected. The outcomes of this study support the efficacy of using the eyes as a direct command input for controlling movement.
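The offline fixation filter mentioned in this abstract is commonly a dispersion-threshold (I-DT) algorithm; the authors do not specify theirs, so the following is a generic sketch: a window of gaze samples counts as a fixation while its x-range plus y-range stays under a dispersion threshold, and the fixation's location is the window centroid.

```python
import numpy as np

def idt_fixations(gaze, max_dispersion=1.0, min_len=5):
    """Dispersion-threshold (I-DT) fixation detection.

    gaze: (n, 2) array of x/y samples (e.g. cm on screen).
    Returns a list of (start_idx, end_idx, centroid) tuples.
    """
    fixations, i, n = [], 0, len(gaze)
    while i < n:
        j = i + min_len
        if j > n:
            break
        window = gaze[i:j]
        disp = (window.max(0) - window.min(0)).sum()  # x range + y range
        if disp <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j < n:
                w = gaze[i:j + 1]
                if (w.max(0) - w.min(0)).sum() > max_dispersion:
                    break
                j += 1
            fixations.append((i, j, gaze[i:j].mean(0)))
            i = j
        else:
            i += 1
    return fixations

# Two steady fixations separated by a saccade-like jump:
a = np.tile([0.0, 0.0], (20, 1))
b = np.tile([5.0, 5.0], (20, 1))
fix = idt_fixations(np.vstack([a, b]))
```

Comparing each centroid to the target location then gives the fixation error the study reports.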
Affiliation(s)
- John R. Schultz: Mechanical Engineering/Center for Human Machine Systems, Cleveland State University, Cleveland, Ohio, United States of America
- Andrew B. Slifkin: Department of Psychology, Cleveland State University, Cleveland, Ohio, United States of America
- Eric M. Schearer: Mechanical Engineering/Center for Human Machine Systems, Cleveland State University, Cleveland, Ohio, United States of America

10
Taheri A, Weissman Z, Sra M. Design and Evaluation of a Hands-Free Video Game Controller for Individuals With Motor Impairments. Front Comput Sci 2021. [DOI: 10.3389/fcomp.2021.751455]
Abstract
Over the past few decades, video gaming has evolved at a tremendous rate, although game input methods have been slower to change. Game input continues to rely on two-handed control of the joystick and D-pad, or the keyboard and mouse, for simultaneously controlling player movement and camera actions. Bimanual input poses a significant play impediment to those with severe motor impairments. In this work, we propose and evaluate a hands-free game input control method that uses real-time facial expression recognition. Through our novel input method, our goal is to enable and empower individuals with neurological and neuromuscular diseases, who may lack hand muscle control, to independently play video games. To evaluate the usability and acceptance of our system, we conducted a remote user study with eight severely motor-impaired individuals. Our results indicate high user satisfaction and a strong preference for our input system, with participants rating it as easy to learn. With this work, we aim to highlight that facial expression recognition can be a valuable input method.
11
Development of Surface EMG Game Control Interface for Persons with Upper Limb Functional Impairments. Signals 2021. [DOI: 10.3390/signals2040048]
Abstract
In recent years, surface electromyography (sEMG) signals have been effectively applied in various fields such as control interfaces, prosthetics, and rehabilitation. We propose estimating neck rotation from sEMG and applying the estimate as a game control interface for people with disabilities or patients with functional impairment of the upper limb. This paper uses both an equation-based estimator and a machine learning model to translate the signals into corresponding neck rotations. For testing, we designed two custom-made game scenes in Unity 3D, a dynamic 1D object-interception task and a 2D maze, to be controlled by the sEMG signal in real time. Twenty-two (22) test subjects (mean age 27.95, SD 13.24) participated in the experiment to verify the usability of the interface. In object interception, subjects showed stable control, intercepting objects with more than 73% accuracy. In the 2D maze, male and female subjects reported completion times of 98.84 ± 50.2 s and 112.75 ± 44.2 s, respectively, with no significant difference in means by one-way ANOVA (p = 0.519). The results confirmed the usefulness of neck sEMG from the sternocleidomastoid (SCM) as a control interface requiring little or no calibration. The equation-based control models provide intuitive direction and speed control, while the machine learning schemes offer more stable directional control. Such control interfaces can be applied in several areas that involve neck activity, e.g., robot control and rehabilitation, as well as game interfaces, to enable entertainment for people with disabilities.
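The one-way ANOVA used above to compare male and female completion times reduces to an F ratio of between-group to within-group mean squares. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Identical group means -> F = 0; well-separated means -> large F.
f_same = one_way_anova_f(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
f_diff = one_way_anova_f(np.array([1.0, 2.0, 3.0]), np.array([11.0, 12.0, 13.0]))
```

A p-value then comes from the F distribution with (k-1, n-k) degrees of freedom; a small F, as in the maze comparison, corresponds to a large p.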
12
Segil JL, Lukyanenko P, Lambrecht J, Weir RFF, Tyler D. Comparison of Myoelectric Control Schemes for Simultaneous Hand and Wrist Movement using Chronically Implanted Electromyography: A Case Series. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6224-6230. [PMID: 34892537 PMCID: PMC10964936 DOI: 10.1109/embc46164.2021.9630845]
Abstract
OBJECTIVE A current biomedical engineering challenge is the development of a system that allows fluid control of multi-functional prosthetic devices through a human-machine interface. Here we probe this challenge by studying two subjects with trans-radial limb loss as they control a virtual hand and wrist system using 6 or 8 chronically implanted intramuscular electromyographic (iEMG) signals. The subjects successfully controlled 4-, 5-, and 6-degree-of-freedom (DoF) virtual hand and wrist systems to perform a target matching task. APPROACH Two control systems were evaluated: one tied EMG features directly to movement directions (Direct Control), and the other determined user intent in the context of prior training data (Linear Interpolation). MAIN RESULTS Subjects successfully matched most targets with both controllers, but differences emerged as the complexity of the virtual limb system increased. The Direct Control method encountered difficulty due to crosstalk at higher DoFs. The Linear Interpolation method reduced crosstalk effects and outperformed Direct Control at higher DoFs. This work also studied the use of the Postural Control (PC) Algorithm to control hand postures simultaneously with wrist degrees of freedom. SIGNIFICANCE This work presents preliminary evidence that the PC Algorithm can be used in conjunction with wrist control, that Direct Control with iEMG signals allows stable 4-DoF control, and that EMG pre-processing using the Linear Interpolation method can improve performance at 5 and 6 DoFs.
13
Prediction of Myoelectric Biomarkers in Post-Stroke Gait. Sensors (Basel) 2021; 21:5334. [PMID: 34450776 PMCID: PMC8399186 DOI: 10.3390/s21165334]
Abstract
Electromyography (EMG) is sensitive to neuromuscular changes resulting from ischemic stroke and is considered a potential predictive tool for post-stroke gait and rehabilitation management. This study aimed to evaluate potential myoelectric biomarkers for classifying the stroke-impaired muscular activity of the stroke patient group versus the muscular activity of the healthy adult control group. We also proposed an EMG-based gait monitoring system consisting of a portable EMG device, cloud-based data processing, data analytics, and a health advisor service. This system was investigated with 48 stroke patients (mean age 70.6 years, 65% male) admitted to the emergency unit of a hospital and 75 healthy elderly volunteers (mean age 76.3 years, 32% male). EMG was recorded during walking using the portable device at two muscle positions: the biceps femoris and the lateral gastrocnemius of both lower limbs. The statistical results showed that the mean power frequency (MNF), median power frequency (MDF), peak power frequency (PKF), and mean power (MNP) of the stroke group differed significantly from those of the healthy control group. In the machine learning analysis, the neural network model showed the highest classification performance with the training dataset (precision: 88%, specificity: 89%, accuracy: 80%) and with the testing dataset (precision: 72%, specificity: 74%, accuracy: 65%). This study will be helpful for understanding stroke-impaired gait changes and informing post-stroke rehabilitation.
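The four spectral features named above (MNF, MDF, PKF, MNP) can all be computed from a one-sided power spectrum. A minimal sketch, using a synthetic 80 Hz tone standing in for an EMG burst (the study's exact estimation pipeline is not specified):

```python
import numpy as np

def emg_spectral_features(emg, fs):
    """Mean (MNF), median (MDF), and peak (PKF) power frequency,
    plus mean power (MNP), from a one-sided power spectrum."""
    psd = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    psd, freqs = psd[1:], freqs[1:]          # drop the DC bin
    mnf = np.sum(freqs * psd) / np.sum(psd)  # power-weighted mean frequency
    cum = np.cumsum(psd)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]  # splits power in half
    pkf = freqs[np.argmax(psd)]
    mnp = np.mean(psd)
    return mnf, mdf, pkf, mnp

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
emg = np.sin(2 * np.pi * 80.0 * t)   # pure 80 Hz "burst"
mnf, mdf, pkf, mnp = emg_spectral_features(emg, fs)
```

For the pure tone, all three frequency features coincide at 80 Hz; on real EMG they diverge, which is what makes them useful as separate biomarkers.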
14
Kamavuako EN, Brown M, Bao X, Chihi I, Pitou S, Howard M. Affordable Embroidered EMG Electrodes for Myoelectric Control of Prostheses: A Pilot Study. Sensors (Basel) 2021; 21:5245. [PMID: 34372482 PMCID: PMC8347069 DOI: 10.3390/s21155245]
Abstract
Commercial myoelectric prostheses are costly to purchase and maintain, making their provision challenging for developing countries. Recent research indicates that embroidered EMG electrodes may provide a more affordable alternative to the sensors used in current prostheses. This pilot study investigates the usability of such electrodes for myoelectric control by comparing online and offline performance against conventional gel electrodes. Offline performance is evaluated through the classification of nine different hand and wrist gestures. Online performance is assessed with a crossover two-degree-of-freedom real-time experiment using Fitts’ Law. Two performance metrics (throughput and completion rate) are used to quantify usability. The mean classification accuracy of the nine gestures is approximately 98% for subject-specific models trained on both gel and embroidered electrode offline data from individual subjects, and 97% and 96% for general models trained on gel and embroidered offline data, respectively, from all subjects. Throughput (0.3 bits/s) and completion rate (95–97%) are similar in the online test. The results indicate that embroidered electrodes can achieve performance similar to gel electrodes, paving the way for low-cost myoelectric prostheses.
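Fitts'-law throughput, one of the two usability metrics above, is typically the index of difficulty ID = log2(D/W + 1) (Shannon formulation) divided by movement time, averaged over trials. A minimal sketch with made-up trial values, not the study's data:

```python
import math

def throughput(trials):
    """Fitts'-law throughput in bits/s: mean over trials of ID / MT,
    with ID = log2(D/W + 1) (Shannon formulation)."""
    tps = []
    for d, w, mt in trials:  # target distance, target width, movement time (s)
        index_of_difficulty = math.log2(d / w + 1.0)
        tps.append(index_of_difficulty / mt)
    return sum(tps) / len(tps)

# Illustrative trials: both have ID = log2(6) ~ 2.58 bits, MT = 5 s.
trials = [(10.0, 2.0, 5.0), (20.0, 4.0, 5.0)]
tp = throughput(trials)
```

Slow movements on easy targets give low throughput, which is why a value like 0.3 bits/s is plausible for a two-DoF myoelectric interface.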
Affiliation(s)
- Ernest N. Kamavuako: Department of Engineering, King’s College London, London WC2R 2LS, UK; Faculté de Médecine, Université de Kindu, Kindu, DR Congo (corresponding author; Tel.: +44-207-848-8666)
- Mitchell Brown: Department of Engineering, King’s College London, London WC2R 2LS, UK
- Xinqi Bao: Department of Engineering, King’s College London, London WC2R 2LS, UK
- Ines Chihi: National Engineering School of Bizerta, Carthage University, Tunis 2070, Tunisia; Department of Engineering (DOE), The Faculty of Science, Technology and Medicine (FSTM), University of Luxembourg, 4365 Luxembourg, Luxembourg
- Samuel Pitou: Department of Engineering, King’s College London, London WC2R 2LS, UK
- Matthew Howard: Department of Engineering, King’s College London, London WC2R 2LS, UK

15
Nawfel JL, Englehart KB, Scheme EJ. A Multi-Variate Approach to Predicting Myoelectric Control Usability. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1312-1327. [PMID: 34214042 DOI: 10.1109/tnsre.2021.3094324] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Pattern recognition techniques leveraging electromyography signals have become a popular approach to provide intuitive control of myoelectric devices. Performance of these control interfaces is commonly quantified using offline classification accuracy, despite studies having shown that this metric is a poor indicator of usability. Researchers have identified alternative offline metrics that better correlate with online performance; however, the relationship has yet to be fully defined in the literature. This has necessitated the continued trial-and-error-style online testing of algorithms developed using offline approaches. To bridge this information divide, we conducted an exploratory study in which thirty-two different metrics were extracted from the offline training data. A correlation analysis and an ordinary least squares regression were implemented to investigate the relationship between the offline metrics and six aspects of online use. The results indicate that the current offline standard, classification accuracy, is a poor indicator of usability and that other metrics may hold predictive power. The metrics identified in this work may also constitute more representative evaluation criteria when designing and reporting new control schemes. Furthermore, linear combinations of offline training metrics generate substantially more accurate predictions than individual metrics. We found that the offline metric feature efficiency generated the best predictions for the usability metric throughput. A combination of two offline metrics (mean semi-principal axes and mean absolute value) significantly outperformed feature efficiency alone, with a 166% increase in the predicted R2 value (i.e., VEcv). These findings suggest that combinations of metrics could provide a more robust framework for predicting usability.
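The ordinary least squares regression described above can be sketched as follows. This is a minimal illustration: the metric values are synthetic stand-ins for two offline metrics and a throughput target, not data from the study, and `numpy.linalg.lstsq` is one of several ways to fit the model.

```python
import numpy as np

# Synthetic values standing in for two offline training metrics (e.g. mean
# semi-principal axes, mean absolute value) and the online usability metric
# (throughput) they are regressed against. Noiseless for clarity.
rng = np.random.default_rng(0)
n = 40
offline = rng.normal(size=(n, 2))           # two offline metrics per subject
weights_true = np.array([0.8, -0.3])
throughput = offline @ weights_true + 1.5

# Ordinary least squares with an intercept column, as in a multi-variate
# combination of offline metrics.
X = np.column_stack([np.ones(n), offline])
coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((throughput - pred) ** 2) / np.sum(
    (throughput - throughput.mean()) ** 2)
```

With noiseless synthetic data the fit recovers the generating weights and an R2 near 1; real offline/online data would of course be far noisier.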
Collapse
|
16
|
Olsen S, Zhang J, Liang KF, Lam M, Riaz U, Kao JC. An artificial intelligence that increases simulated brain-computer interface performance. J Neural Eng 2021; 18. [PMID: 33978599 DOI: 10.1088/1741-2552/abfaaa] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 04/22/2021] [Indexed: 12/14/2022]
Abstract
Objective. Brain-computer interfaces (BCIs) translate neural activity into control signals for assistive devices in order to help people with motor disabilities communicate effectively. In this work, we introduce a new BCI architecture that improves control of a BCI computer cursor to type on a virtual keyboard. Approach. Our BCI architecture incorporates an external artificial intelligence (AI) that beneficially augments the movement trajectories of the BCI. This AI-BCI leverages past user actions, at both long (hundreds of seconds ago) and short (hundreds of milliseconds ago) timescales, to modify the BCI's trajectories. Main results. We tested our AI-BCI in a closed-loop BCI simulator with nine human subjects performing a typing task. We demonstrate that our AI-BCI achieves: (1) categorically higher information communication rates, (2) quicker ballistic movements between targets, (3) improved precision control to 'dial in' on targets, and (4) more efficient movement trajectories. We further show that our AI-BCI increases performance across a wide control quality spectrum, from poor to proficient control. Significance. This AI-BCI architecture, by increasing BCI performance across all key metrics evaluated, may increase the clinical viability of BCI systems.
Collapse
Affiliation(s)
- Sebastian Olsen
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
| | - Jianwei Zhang
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
| | - Ken-Fu Liang
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
| | - Michelle Lam
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America
| | - Usama Riaz
- Department of Computer Science, University of California, Los Angeles, CA 90024, United States of America
| | - Jonathan C Kao
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90024, United States of America.,Neurosciences Program, University of California, Los Angeles, CA 90024, United States of America
| |
Collapse
|
17
|
Borish CN, Bertucco M, Berger DJ, d’Avella A, Sanger TD. Can spatial filtering separate voluntary and involuntary components in children with dyskinetic cerebral palsy? PLoS One 2021; 16:e0250001. [PMID: 33852638 PMCID: PMC8046213 DOI: 10.1371/journal.pone.0250001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 03/30/2021] [Indexed: 11/18/2022] Open
Abstract
The design of myocontrolled devices faces particular challenges in children with dyskinetic cerebral palsy because the electromyographic signal used for control contains both voluntary and involuntary components. We hypothesized that voluntary and involuntary components of movements would be uncorrelated and thus detectable as different synergistic patterns of muscle activity, and that removal of the involuntary components would improve online EMG-based control. We therefore performed a synergy-based decomposition of EMG-guided movements and evaluated which components were most controllable using a Fitts' law task. We also tested which muscles were most controllable. We then tested whether removing the uncontrollable components or muscles improved overall function in terms of movement time, success rate, and throughput. We found that removal of less controllable components or muscles did not improve EMG control performance, and in many cases worsened it. These results suggest that abnormal movement in dyskinetic CP is consistent with a pervasive distortion of voluntary movement rather than a superposition of separable voluntary and involuntary components of movement.
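Synergy-based decomposition of EMG is commonly performed with non-negative matrix factorization. A minimal multiplicative-update sketch follows; this is our illustration of the general technique, not the authors' exact decomposition, and the "EMG" matrix is synthetic.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    """Non-negative matrix factorization V ~ W @ H by Lee-Seung
    multiplicative updates. V is a rectified EMG matrix
    (channels x samples); W holds synergy weights, H activations."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-6
    H = rng.random((k, V.shape[1])) + 1e-6
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic "EMG" built from 2 known synergies should be reconstructed well.
rng = np.random.default_rng(1)
W0 = rng.random((8, 2))
H0 = rng.random((2, 100))
V = W0 @ H0
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On exactly factorizable data the relative reconstruction error drops close to zero; on real EMG the number of synergies k is chosen by the variance each additional component explains.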
Collapse
Affiliation(s)
- Cassie N. Borish
- Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America
| | - Matteo Bertucco
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
| | - Denise J. Berger
- Laboratory of Neuromotor Physiology, Foundation Santa Lucia, Rome, Italy
- Department of Systems Medicine and Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
| | - Andrea d’Avella
- Laboratory of Neuromotor Physiology, Foundation Santa Lucia, Rome, Italy
- Department of Biomedical, Dental, Morphological and Functional Imaging Sciences, University of Messina, Messina, Italy
| | - Terence D. Sanger
- School of Engineering, University of California, Irvine, California, United States of America
- School of Medicine, University of California, Irvine, California, United States of America
- Children’s Hospital of Orange County, Orange, California, United States of America
| |
Collapse
|
18
|
Olsson AE, Malešević N, Björkman A, Antfolk C. Learning regularized representations of categorically labelled surface EMG enables simultaneous and proportional myoelectric control. J Neuroeng Rehabil 2021; 18:35. [PMID: 33588868 PMCID: PMC7885418 DOI: 10.1186/s12984-021-00832-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 02/02/2021] [Indexed: 11/18/2022] Open
Abstract
Background Processing the surface electromyogram (sEMG) to decode movement intent is a promising approach for natural control of upper extremity prostheses. To this end, this paper introduces and evaluates a new framework which allows for simultaneous and proportional myoelectric control over multiple degrees of freedom (DoFs) in real-time. The framework uses multitask neural networks and domain-informed regularization in order to automatically find nonlinear mappings from the forearm sEMG envelope to multivariate and continuous encodings of concurrent hand and wrist kinematics, despite only requiring categorical movement instruction stimuli signals for calibration. Methods Forearm sEMG with 8 channels was collected from healthy human subjects (N = 20) and used to calibrate two myoelectric control interfaces, each with two output DoFs. The interfaces were built from (I) the proposed framework, termed Myoelectric Representation Learning (MRL), and, to allow for comparisons, from (II) a standard pattern recognition framework based on Linear Discriminant Analysis (LDA). The online performances of both interfaces were assessed with a Fitts' law-type test generating 5 quantitative performance metrics. The temporal stabilities of the interfaces were evaluated by conducting identical tests without recalibration 7 days after the initial experiment session. Results Metric-wise two-way repeated measures ANOVA with factors method (MRL vs LDA) and session (day 1 vs day 7) revealed a significant (p < 0.05) advantage for MRL over LDA in 5 out of 5 performance metrics, with metric-wise effect sizes (Cohen's d) separating MRL from LDA ranging from |d| = 0.62 to |d| = 1.13. No significant effect on any metric was detected for either session or the interaction between method and session, indicating that neither method deteriorated significantly in control efficacy during one week of intermission. Conclusions The results suggest that MRL is able to successfully generate stable mappings from EMG to kinematics, thereby enabling myoelectric control with real-time performance superior to that of the current commercial standard for pattern recognition (as represented by LDA). It is thus postulated that the presented MRL approach can be of practical utility for muscle-computer interfaces.
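The LDA baseline referenced above reduces, in the two-class case, to projecting features onto w = S^-1 (mu1 - mu0) and thresholding at the midpoint of the projected class means. A minimal sketch with synthetic feature clusters (labels and values are purely illustrative):

```python
import numpy as np

def lda_fit(X0, X1):
    """Two-class LDA: pooled covariance S, direction w, bias b."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)   # pooled covariance
    w = np.linalg.solve(S, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)                      # midpoint threshold
    return w, b

def lda_predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Two well-separated synthetic clusters standing in for EMG feature classes.
rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))   # e.g. "rest" features
X1 = rng.normal([2.0, 2.0], 0.3, size=(50, 2))   # e.g. "grasp" features
w, b = lda_fit(X0, X1)
acc = np.mean(np.r_[lda_predict(w, b, X0) == 0, lda_predict(w, b, X1) == 1])
```

Multi-class pattern recognition as used in commercial systems applies the same idea with one discriminant per class pair (or a shared covariance across all classes).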
Collapse
Affiliation(s)
- Alexander E Olsson
- Department of Biomedical Engineering, Faculty of Engineering, Lund University, Lund, Sweden.
| | - Nebojša Malešević
- Department of Biomedical Engineering, Faculty of Engineering, Lund University, Lund, Sweden
| | - Anders Björkman
- Department of Hand Surgery, Institute of Clinical Sciences, Sahlgrenska Academy, Sahlgrenska University Hospital and University of Gothenburg, Gothenburg, Sweden.,Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden
| | - Christian Antfolk
- Department of Biomedical Engineering, Faculty of Engineering, Lund University, Lund, Sweden.
| |
Collapse
|
19
|
Piazza C, Rossi M, Catalano MG, Bicchi A, Hargrove LJ. Evaluation of a Simultaneous Myoelectric Control Strategy for a Multi-DoF Transradial Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2286-2295. [PMID: 32804650 DOI: 10.1109/tnsre.2020.3016909] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
While natural movements result from fluid coordination of multiple joints, commercial upper-limb prostheses are still limited to sequential control of multiple degrees of freedom (DoFs) or constrained to move along predefined patterns. To control multiple DoFs simultaneously, a probability-weighted regression (PWR) method has been proposed and has previously shown good performance with intramuscular electromyographic (EMG) sensors. This study aims to evaluate the PWR method for the simultaneous and proportional control of multiple DoFs using surface EMG sensors and to compare its performance with a classical direct control strategy. To extract the maximum number of DoFs manageable by a user, a first analysis was conducted in a virtually simulated environment with eight able-bodied and four amputee subjects. Results show that, while using surface EMG degraded PWR performance for 3-DoF control, the algorithm performed very well in the 2-DoF case. Finally, the two methods were compared in a physical experiment with amputee subjects using a hand-wrist prosthesis composed of the SoftHand Pro and the RIC Wrist Flexor. Results show comparable outcomes between the two controllers but a significantly higher wrist activation time for the PWR method, suggesting that this novel method is a viable direction toward more natural multi-DoF control.
Collapse
|
20
|
Waris A, Zia ur Rehman M, Niazi IK, Jochumsen M, Englehart K, Jensen W, Haavik H, Kamavuako EN. A Multiday Evaluation of Real-Time Intramuscular EMG Usability with ANN. SENSORS (BASEL, SWITZERLAND) 2020; 20:E3385. [PMID: 32549396 PMCID: PMC7349229 DOI: 10.3390/s20123385] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Revised: 06/12/2020] [Accepted: 06/12/2020] [Indexed: 12/05/2022]
Abstract
Recent developments in implantable technology, such as high-density recording and wireless transmission of signals to a prosthetic hand, may pave the way for intramuscular electromyography (iEMG)-based myoelectric control in the future. This study aimed to investigate the real-time control performance of iEMG over time. A novel protocol was developed to quantify the robustness of the real-time performance parameters. Intramuscular wires were used to record EMG signals and were kept inside the muscles for five consecutive days. Tests were performed on multiple days using Fitts' law. Throughput, completion rate, path efficiency, and overshoot were evaluated as performance metrics using three train/test strategies. Each train/test scheme was categorized on the basis of data quantity and the time difference between training and testing data. An artificial neural network (ANN) classifier was (i) trained and tested on data from the same day (WDT), (ii) trained on data from the previous day and tested on the present day (BDT), and (iii) trained on all previous days including the present day and tested on the present day (CDT). It was found that the completion rate of CDT (91.6 ± 3.6%) was significantly better (p < 0.01) than that of BDT (74.02 ± 5.8%) and WDT (88.16 ± 3.6%). For BDT, on average, the first session of each day was significantly better (p < 0.01) than the second and third sessions for completion rate (77.9 ± 14.0%) and path efficiency (88.9 ± 16.9%). Subjects demonstrated the ability to achieve targets successfully with wire electrodes. The results also suggest that time variations in the iEMG signal can be accommodated by concatenating the data over several days. This scheme can be helpful in attaining stable and robust performance.
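The three train/test schemes reduce to a simple rule for selecting which days' data train the classifier. A sketch with our own naming (not the authors' code), indexing days from 1:

```python
def training_days(scheme, test_day):
    """Return the list of day indices whose data trains the classifier
    when testing on `test_day` (days numbered from 1)."""
    if scheme == "WDT":                     # within-day: same day only
        return [test_day]
    if scheme == "BDT":                     # between-day: previous day only
        return [test_day - 1]
    if scheme == "CDT":                     # concatenated: all days so far
        return list(range(1, test_day + 1))
    raise ValueError(f"unknown scheme: {scheme}")
```

The reported advantage of CDT over BDT and WDT is consistent with the growing training set covering more of the day-to-day signal variation.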
Collapse
Affiliation(s)
- Asim Waris
- Department of Biomedical Engineering and Sciences, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan;
| | - Muhammad Zia ur Rehman
- Faculty of Engineering and Applied Sciences, Riphah International University, Islamabad 46000, Pakistan;
| | - Imran Khan Niazi
- Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, 9220 Aalborg, Denmark; (M.J.); (W.J.)
- Center of Chiropractic Research, New Zealand College of Chiropractic, Auckland 1060, New Zealand;
- Faculty of Health and Environmental Sciences, Health and Rehabilitation Research Institute, AUT University, Auckland 0627, New Zealand
| | - Mads Jochumsen
- Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, 9220 Aalborg, Denmark; (M.J.); (W.J.)
| | - Kevin Englehart
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada;
| | - Winnie Jensen
- Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, 9220 Aalborg, Denmark; (M.J.); (W.J.)
| | - Heidi Haavik
- Center of Chiropractic Research, New Zealand College of Chiropractic, Auckland 1060, New Zealand;
| | - Ernest Nlandu Kamavuako
- Centre for Robotics Research, Department of Informatics, King’s College London, London WC2R 2LS, UK;
| |
Collapse
|
21
|
Jaramillo-Yánez A, Benalcázar ME, Mena-Maldonado E. Real-Time Hand Gesture Recognition Using Surface Electromyography and Machine Learning: A Systematic Literature Review. SENSORS 2020; 20:s20092467. [PMID: 32349232 PMCID: PMC7250028 DOI: 10.3390/s20092467] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 02/24/2020] [Accepted: 02/25/2020] [Indexed: 11/16/2022]
Abstract
Today, daily life involves many computing systems; interacting with them in a natural way therefore makes the communication process more comfortable. Human-Computer Interaction (HCI) has been developed to overcome the communication barriers between humans and computers. One form of HCI is Hand Gesture Recognition (HGR), which predicts the class and the instant of execution of a given movement of the hand. One possible input for these models is surface electromyography (EMG), which records the electrical activity of skeletal muscles. EMG signals contain information about the intention of movement generated by the human brain. This systematic literature review analyzes the state of the art of real-time hand gesture recognition models using EMG data and machine learning. We selected and assessed 65 primary studies following the Kitchenham methodology. Based on a common structure of machine learning-based systems, we analyzed the structure of the proposed models and standardized concepts with regard to the types of models, data acquisition, segmentation, preprocessing, feature extraction, classification, postprocessing, real-time processing, types of gestures, and evaluation metrics. Finally, we identified trends and gaps that could open new directions for future research in the area of gesture recognition using EMG.
Collapse
Affiliation(s)
- Andrés Jaramillo-Yánez
- Artificial Intelligence and Computer Vision Research Lab, Department of Informatics and Computer Science, Escuela Politécnica Nacional, Quito 170517, Ecuador; (M.E.B.)
- School of Science, Royal Melbourne Institute of Technology (RMIT), Melbourne 3000, Australia
- Correspondence: or
| | - Marco E. Benalcázar
- Artificial Intelligence and Computer Vision Research Lab, Department of Informatics and Computer Science, Escuela Politécnica Nacional, Quito 170517, Ecuador; (M.E.B.)
| | - Elisa Mena-Maldonado
- Artificial Intelligence and Computer Vision Research Lab, Department of Informatics and Computer Science, Escuela Politécnica Nacional, Quito 170517, Ecuador; (M.E.B.)
| |
Collapse
|
23
|
Shuggi IM, Oh H, Wu H, Ayoub MJ, Moreno A, Shaw EP, Shewokis PA, Gentili RJ. Motor Performance, Mental Workload and Self-Efficacy Dynamics during Learning of Reaching Movements throughout Multiple Practice Sessions. Neuroscience 2019; 423:232-248. [PMID: 31325564 DOI: 10.1016/j.neuroscience.2019.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2018] [Revised: 06/29/2019] [Accepted: 07/01/2019] [Indexed: 10/26/2022]
Abstract
The human capability to learn new motor skills depends on the efficient engagement of cognitive-motor resources, as reflected by mental workload, and on psychological mechanisms such as self-efficacy. While numerous investigations have examined the relationship between motor behavior and mental workload or self-efficacy in a performance context, fairly limited effort has focused on the combined examination of these notions during learning. Thus, this study aimed to examine their concomitant dynamics during the learning of a novel reaching skill practiced over multiple sessions. Individuals had to learn to control a virtual robotic arm via a human-machine interface using limited head motion throughout eight practice sessions while motor performance, mental workload, and self-efficacy were assessed. The results revealed that as individuals learned to control the robotic arm, performance improved at the fastest rate, followed by a more gradual reduction of mental workload and finally an increase in self-efficacy. These results suggest that once performance improved, fewer cognitive-motor resources were recruited, leading to an attenuated mental workload. Considering that attention is a primary cognitive resource driving mental workload, it is suggested that during early learning, attentional resources are primarily allocated to address task demands and not enough are available to assess self-efficacy. However, as performance becomes more automatic, a lower level of mental workload is attained, driven by decreased recruitment of attentional resources. These freed resources allow for a reliable assessment of self-efficacy, resulting in a subsequent observable change. These results are also discussed in terms of their application to the training and design of assistive technologies.
Collapse
Affiliation(s)
- Isabelle M Shuggi
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
| | - Hyuk Oh
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
| | - Helena Wu
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA
| | - Maria J Ayoub
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA
| | - Arianna Moreno
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA
| | - Emma P Shaw
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
| | - Patricia A Shewokis
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, USA; Nutrition Sciences Department, College of Nursing and Health Professions, Drexel University, Philadelphia, PA, USA
| | - Rodolphe J Gentili
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA; Maryland Robotics Center, University of Maryland, College Park, MD, USA.
| |
Collapse
|
24
|
Stable, three degree-of-freedom myoelectric prosthetic control via chronic bipolar intramuscular electrodes: a case study. J Neuroeng Rehabil 2019; 16:147. [PMID: 31752886 PMCID: PMC6868792 DOI: 10.1186/s12984-019-0607-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Accepted: 10/10/2019] [Indexed: 11/30/2022] Open
Abstract
Background Modern prosthetic hands are typically controlled using skin-surface electromyographic (EMG) signals from remaining muscles in the residual limb. However, surface electrode performance is limited by changes in skin impedance over time, day-to-day variations in electrode placement, and relative motion between the electrodes and underlying muscles during movement: these limitations require frequent retraining of controllers. In the presented study, we used chronically implanted intramuscular electrodes to minimize these effects and thus create a more robust prosthetic controller. Methods A study participant with a transradial amputation was chronically implanted with 8 intramuscular EMG electrodes. A K-Nearest Neighbor (KNN) regression velocity controller was trained to predict intended joint movement direction using EMG data collected during a single training session. The resulting KNN was evaluated over 12 weeks and in multiple arm posture configurations, with the participant controlling a 3 degree-of-freedom (DOF) virtual reality (VR) hand to match target VR hand postures. The performance of this EMG-based controller was compared to a position-based controller that used movement measured from the participant's opposite (intact) hand. Surface EMG was also collected for signal quality comparisons. Results Signals from the implanted intramuscular electrodes exhibited less crosstalk between channels and had a higher signal-to-noise ratio (SNR) than surface electrode signals. The performance of the intramuscular EMG-based KNN controller in the VR control task showed no degradation over time and was stable across the 6 different arm postures. Both the EMG-based KNN controller and the intact-hand-based controller had 100% hand posture matching success rates, but the intact-hand-based controller was slightly superior with regard to speed (trial time) and directness of the VR hand control (path efficiency).
Conclusions Chronically implanted intramuscular electrodes provide negligible crosstalk, a high SNR, and strong VR control performance, including the ability to use a fixed controller over 12 weeks and under different arm positions. This approach can thus be a highly effective platform for advanced, multi-DOF prosthetic control.
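A KNN regression velocity controller of the kind described above stores calibration pairs of EMG features and intended velocities, then averages the velocities of the k nearest stored features at run time. The calibration data below are toy values, not the study's recordings:

```python
import numpy as np

def knn_velocity(train_X, train_v, x, k=3):
    """Predict a joint velocity for feature vector `x` by averaging the
    stored velocities of its k nearest calibration samples."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return train_v[nearest].mean(axis=0)

# Toy calibration set: two EMG-feature clusters mapped to -1 / +1
# flexion velocity for a single joint.
train_X = np.array([[0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
train_v = np.array([[-1.0], [-1.0], [-1.0], [1.0], [1.0], [1.0]])
v = knn_velocity(train_X, train_v, np.array([0.95, 1.05]))
```

Averaging neighbor velocities (rather than voting on a class) is what makes this a regression controller: features between clusters yield intermediate, proportional velocities.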
Collapse
|
25
|
Kong F, Sahadat MN, Ghovanloo M, Durgin GD. A Stand-Alone Intraoral Tongue-Controlled Computer Interface for People With Tetraplegia. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:848-857. [PMID: 31283486 DOI: 10.1109/tbcas.2019.2926755] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The intraoral Tongue Drive System (iTDS) is an embedded wireless tongue-operated assistive technology developed for people with tetraplegia to provide a higher level of independence in performing daily living tasks, such as accessing computers and smartphones and driving wheelchairs. The iTDS was built as an arch-shaped dental retainer, hermetically sealed and placed in the buccal shelf area of the mouth, completely hidden from sight. To provide a high level of comfort, the iTDS is customized to the user's oral anatomy so that it fixes stably onto the lower teeth. We present a standalone version of the iTDS, capable of recognizing tongue gestures/commands by processing raw magnetic sensor data with a built-in pattern recognition algorithm in real time. The iTDS then sends the commands out in 10-bit packets through a custom-designed high-gain intraoral antenna at 2.4 GHz to an external receiver. To evaluate the standalone iTDS performance, four subjects performed a computer access task by issuing random tongue commands over five sessions. Subjects completed 99.2% of the commands and achieved an information transfer rate of 150.1 bits/min. Moreover, a new typing method, designed specifically for the iTDS, resulted in a typing rate of 3.76 words/min with an error rate of 2.23%.
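Information transfer rates like the 150.1 bits/min above are conventionally computed with the Wolpaw formula; the sketch below shows that convention and is our illustration, since we do not know the authors' exact computation.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate: bits/selection times selection rate.

    bits = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    where N is the number of commands and P the selection accuracy.
    """
    p = accuracy
    bits = math.log2(n_targets)
    if 0 < p < 1:  # the correction terms vanish at P = 1 and are undefined at P = 0
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min

# Perfect accuracy over 8 commands at 30 selections/min: 3 bits each, 90 bits/min.
rate = itr_bits_per_min(8, 1.0, 30)
```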
Collapse
|
26
|
Biological surface electromyographic switch and necklace-type button switch control as an augmentative and alternative communication input device: a feasibility study. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2019; 42:839-851. [PMID: 31161594 DOI: 10.1007/s13246-019-00766-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2018] [Accepted: 05/24/2019] [Indexed: 10/26/2022]
Abstract
Augmentative and alternative communication (AAC) is an approach used to supplement, improve, and support the communication of people with speech or language impairments. We developed an AAC device supporting diverse approaches, using an electromyographic (EMG) switch and a necklace-type button switch. The EMG switch comprised an EMG signal processor and a switch interface processor. EMG signals were acquired through an electrode and processed through the stages of amplification, filtering, rectification, and smoothing. In the switch interface processor, the microprocessor determined whether the switch was ON or OFF in response to the input EMG signal and then converted the EMG signal into a keyboard signal, which was transmitted to a smart device via Bluetooth communication. A similar transmission process was used for the necklace-type button switch, whose switch signals were input and processed with general-purpose input/output. The first and second feasibility tests for the EMG switch and button switch were conducted over a total of three test sessions. The results indicated that the major inconvenience and desired improvement associated with the EMG switch was the intricacy of the AAC device settings. The major inconveniences and desired improvements for the necklace-type button switch involved device shifting, volume and weight, and difficulty fixing the switch in various directions. Based on the first and second feasibility tests, we therefore developed an additional device. Finally, the EMG switch and necklace-type button switch, revised to remedy these inconveniences, showed high feasibility.
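The rectify-smooth-threshold logic described above can be sketched in a few lines; the window length and threshold below are illustrative values, not the device's parameters.

```python
def emg_switch(samples, window=4, threshold=0.5):
    """Turn a raw EMG sample stream into ON/OFF switch states:
    full-wave rectification, moving-average smoothing, fixed threshold."""
    states = []
    buf = []
    for s in samples:
        buf.append(abs(s))               # full-wave rectification
        if len(buf) > window:
            buf.pop(0)
        envelope = sum(buf) / len(buf)   # moving-average smoothing
        states.append(envelope > threshold)
    return states

# Quiet baseline stays OFF; a sustained burst drives the envelope over
# the threshold and the switch ON.
quiet = [0.05, -0.04, 0.03, -0.05]
burst = [0.9, -0.8, 0.85, -0.95]
states = emg_switch(quiet + burst)
```

Note the one- or two-sample lag before the switch turns ON: the moving average must fill with burst samples first, which is the usual latency/debounce trade-off for such switches.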
Collapse
|
27
|
Maimeri M, Della Santina C, Piazza C, Rossi M, Catalano MG, Grioli G. Design and Assessment of Control Maps for Multi-Channel sEMG-Driven Prostheses and Supernumerary Limbs. Front Neurorobot 2019; 13:26. [PMID: 31191285 PMCID: PMC6548824 DOI: 10.3389/fnbot.2019.00026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2019] [Accepted: 05/01/2019] [Indexed: 11/13/2022] Open
Abstract
Proportional and simultaneous control algorithms are considered among the most effective ways of mapping electromyographic signals to an artificial device. However, the applicability of these methods is limited by the high number of electromyographic features that they require to operate, typically twice as many as the actuators to be controlled. Indeed, extracting many independent electromyographic signals is challenging for a number of reasons, ranging from technological to anatomical. At the same time, the number of actively moving parts in classic prostheses or extra limbs is often high. This paper addresses this issue by proposing and experimentally assessing a set of algorithms capable of proportionally and simultaneously controlling as many actuators as there are independent electromyographic signals available. Two sets of solutions are considered. The first uses electromyographic signals only as input, while the second adds postural measurements to the sources of information. First, all the proposed algorithms are experimentally tested in terms of precision, efficiency, and usability on twelve able-bodied subjects in a virtual environment. A state-of-the-art controller using twice the number of electromyographic signals as input is adopted as a benchmark. We then performed qualitative tests in which the maps are used to control a prototype of an upper limb prosthesis. The device is composed of a robotic hand and a wrist implementing active prono-supination. Eight able-bodied subjects participated in this second round of testing. Finally, the proposed strategies were tested in exploratory experiments involving two subjects with limb loss. Results from the evaluations in virtual and realistic settings are encouraging and suggest the effectiveness of the proposed approach.
Affiliation(s)
- Michele Maimeri
- Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, Genoa, Italy
- Cosimo Della Santina
- Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria Informatica, University of Pisa, Pisa, Italy
- Cristina Piazza
- Research Center "Enrico Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria Informatica, University of Pisa, Pisa, Italy
- Matteo Rossi
- Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, Genoa, Italy
- Manuel G Catalano
- Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgio Grioli
- Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, Genoa, Italy
|
28
|
Lu Z, Zhou P. Hands-Free Human-Computer Interface Based on Facial Myoelectric Pattern Recognition. Front Neurol 2019; 10:444. [PMID: 31114539 PMCID: PMC6503102 DOI: 10.3389/fneur.2019.00444] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Accepted: 04/11/2019] [Indexed: 11/13/2022] Open
Abstract
Patients with no or limited hand function usually have difficulty using conventional input devices such as a mouse or a touch screen. The ability to manipulate electronic devices can give patients full access to the digital world, thereby increasing their independence and confidence and enriching their lives. In this study, a hands-free human-computer interface was developed to help patients operate computers using facial movements. Five facial movement patterns were detected by four electromyography (EMG) sensors and classified using myoelectric pattern recognition algorithms. Facial movement patterns were mapped to cursor actions, including movements in different directions and clicks. A typing task and a drawing task were designed to assess the interaction performance of the interface in daily use. Ten able-bodied subjects participated in the experiment. In the typing task, the median path efficiency was 80.4% and the median input rate was 5.9 letters per minute. In the drawing task, the median completion time was 239.9 s. Moreover, all subjects achieved high classification accuracy (median: 98.0%). The interface driven by facial EMG achieved high performance and will be assessed in patients with limited hand function in the future.
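Path efficiency, one of the metrics reported above, is the ratio of the straight-line distance between the start and end of a cursor trajectory to the distance actually travelled. A minimal sketch; the trajectories below are made-up examples, not study data:

```python
import math

def path_efficiency(points):
    """Path efficiency (%) of a cursor trajectory.

    Ratio of the straight-line distance between the first and last
    points to the length actually travelled; 100% means a perfectly
    direct path.
    """
    travelled = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return 100.0 * direct / travelled

# A detour through (3, 4) lengthens the travelled path:
direct_path = [(0, 0), (10, 0)]
detour_path = [(0, 0), (3, 4), (10, 0)]
```

A median path efficiency of 80.4%, as reported above, means the typical trajectory was about 25% longer than the straight line to the target.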
Affiliation(s)
- Zhiyuan Lu
- Department of Physical Medicine and Rehabilitation, University of Texas Health Science Center at Houston, and TIRR Memorial Hermann Research Center, Houston, TX, United States
- Ping Zhou
- Department of Physical Medicine and Rehabilitation, University of Texas Health Science Center at Houston, and TIRR Memorial Hermann Research Center, Houston, TX, United States
|
29
|
Ameri A, Akhaee MA, Scheme E, Englehart K. Regression convolutional neural network for improved simultaneous EMG control. J Neural Eng 2019; 16:036015. [DOI: 10.1088/1741-2552/ab0e2e] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
30
|
Verros S, Lucassen K, Hekman EEG, Bergsma A, Verkerke GJ, Koopman BFJM. Evaluation of intuitive trunk and non-intuitive leg sEMG control interfaces as command input for a 2-D Fitts's law style task. PLoS One 2019; 14:e0214645. [PMID: 30943235 PMCID: PMC6447183 DOI: 10.1371/journal.pone.0214645] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 03/18/2019] [Indexed: 11/18/2022] Open
Abstract
Duchenne muscular dystrophy (DMD) is a muscular condition that leads to progressive muscle loss. Orthotic devices may offer people with DMD a way to perform activities of daily living (ADL). One such device is the active trunk support, but it needs a control interface to identify the user's intention. Myoelectric control interfaces can be used to detect the user's intention and consequently control an active trunk support. Current research on the control of orthotic devices that use surface electromyography (sEMG) signals as control inputs focuses mainly on muscles that are directly linked to the movement being performed (intuitive control). However, in some cases it is hard to detect a proper sEMG signal (e.g., when there is a significant amount of fat), which can result in poor control performance. A way to overcome this problem might be the introduction of other, non-intuitive forms of control. This paper presents an explorative study on the comparison and learning behavior of two different control interfaces that could potentially be used for an active trunk support: one using sEMG of trunk muscles (intuitive) and one using sEMG of leg muscles (non-intuitive). Six healthy subjects undertook a 2-D Fitts's law style task. They were asked to steer a cursor into targets that were radially distributed symmetrically in five directions. The results show that the subjects were generally able to learn to control the task using either of the control interfaces and to improve their performance over time. Comparison of the two control interfaces demonstrated that the subjects learned the leg control interface task faster than the trunk control interface task. Moreover, performance on the diagonal targets was significantly lower than on the single-direction targets for both control interfaces. Overall, the results show that the subjects were able to operate a non-intuitive control interface with high performance, and indicate that non-intuitive control may be a viable solution for controlling an active trunk support.
Affiliation(s)
- Stergios Verros
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Koen Lucassen
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Edsko E. G. Hekman
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Arjen Bergsma
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
- Gijsbertus J. Verkerke
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands; University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Bart F. J. M. Koopman
- Department of Biomechanical Engineering, University of Twente, Enschede, The Netherlands
|
31
|
Zhang H, Chang BC, Rue YJ, Agrawal SK. Using the Motion of the Head-Neck as a Joystick for Orientation Control. IEEE Trans Neural Syst Rehabil Eng 2019; 27:236-243. [PMID: 30676970 DOI: 10.1109/tnsre.2019.2894517] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Head-neck interfaces have the potential to command and control orientation tasks when the hand-wrist is not available for use as a joystick. In this paper we pose the question: how well can the head-neck be used to perform orientation tasks compared to the hand-wrist? Anatomically, the motion of the head-neck is similar to that of the hand-wrist. We hypothesize that head-neck motion can be as effective as hand-wrist motion for controlling orientation tasks. A study was designed to characterize the ability of the head-neck to command and control general orientation tasks. Fourteen healthy participants were asked to control the head orientation of an avatar on a computer screen using the motion of their head-neck and hand-wrist, measured by a robotic neck brace and a conventional joystick, respectively. Visual feedback was given to the participants by displaying the target and actual head orientations of the avatar. The outcomes for comparing head-neck and hand-wrist motions were defined as follows: 1) mean absolute error; 2) time delay in tracking continuous orientation trajectories; and 3) settling time to reach target orientations. The results showed that performance was significantly better with the hand-wrist than with the head-neck when used as a joystick. However, all participants successfully completed the tasks with the head-neck. This demonstrates that the head-neck can be used as a joystick for controlling three-dimensional object orientations, even though it may not be as dexterous as the hand-wrist. These results have fundamental implications for the design of devices and interfaces using the human head-neck.
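Two of the outcome measures above, mean absolute error and settling time, can be sketched as follows. The tolerance band, sampling interval, and error trace here are illustrative assumptions, not values from the study:

```python
def mean_absolute_error(target, actual):
    """Mean absolute tracking error over a trial (same units as input)."""
    return sum(abs(t - a) for t, a in zip(target, actual)) / len(target)

def settling_time(errors, dt, tol):
    """Time until the orientation error enters and stays within tol.

    errors is sampled at interval dt (seconds); returns None if the
    error never settles. The tolerance band is an assumption here.
    """
    for i in range(len(errors)):
        if all(abs(e) <= tol for e in errors[i:]):
            return i * dt
    return None

# Orientation error (degrees) sampled every 0.1 s during a reach:
errors = [30, 18, 9, 4, 6, 2, 1, 1]
t_settle = settling_time(errors, dt=0.1, tol=5)
```

Note that the brief excursion back to 6 degrees delays settling: the error must enter the band and stay there, so settling is counted from the last re-entry.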
|
32
|
Kong F, Zada M, Yoo H, Ghovanloo M. Adaptive Matching Transmitter With Dual-Band Antenna for Intraoral Tongue Drive System. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2018; 12:1279-1288. [PMID: 30605083 DOI: 10.1109/tbcas.2018.2866960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The intraoral Tongue Drive System (iTDS) is a wireless assistive technology that detects users' voluntary tongue gestures and converts them to user-defined commands, enabling them to access computers and navigate powered wheelchairs. In this paper, we present a transmitter (Tx) with adaptive matching and three bands (27, 433, and 915 MHz) to create a robust wireless link between the iTDS and an external receiver (Rx) by addressing the effects of external RF interference and impedance variations of the Tx antenna in the dynamic mouth environment. The upper two Tx bands share a dual-band antenna, while the lower band drives a coil. The Tx antenna was simulated in a simplified human mouth model in HFSS as well as in a real human head model. The adaptive triple-band Tx chip was fabricated in a 0.35-μm 4P2M standard CMOS process. The Tx chip and antenna were characterized in a human subject as part of an iTDS prototype under open- and closed-mouth scenarios, which present peak gains of -24.4 and -15.63 dBi at 433 and 915 MHz, respectively. Two adaptive matching networks for these bands compensate for variations of the Tx antenna impedance via a feedback mechanism. The measured S11 tuning range of the proposed network can cover up to 60 and 75 jΩ at 433 and 915 MHz, respectively.
|
33
|
Struijk LNSA, Bentsen B, Gaihede M, Lontis R. Speaking Ability while Using an Inductive Tongue-Computer Interface for Individuals with Tetraplegia: Talking and Driving a Powered Wheelchair - a Case Study. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:2483-2486. [PMID: 30440911 DOI: 10.1109/embc.2018.8512834] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper assesses the ability to speak while using an inductive tongue-computer interface. Recently, tongue-computer interfaces have been proposed for computer/robotic interfacing for individuals with tetraplegia. To be useful in home settings, these interfaces should be aesthetic and interfere as little as possible with the limited preserved functionality of individuals with tetraplegia. Since tongue interfaces are, from an aesthetic point of view, preferably entirely intra-oral, it is relevant to address their effect on speech. Here we show that reading more than 566 words while using an inductive tongue-computer interface results in a maximum sensor activation time of less than 0.6 s, which means that false activations can be avoided with a sensor dwell time of 0.6 s. Furthermore, we show that it is possible to speak while controlling a powered wheelchair with the inductive tongue-computer interface.
|
34
|
Robertson JW, Englehart KB, Scheme EJ. Effects of Confidence-Based Rejection on Usability and Error in Pattern Recognition-Based Myoelectric Control. IEEE J Biomed Health Inform 2018; 23:2002-2008. [PMID: 30387754 DOI: 10.1109/jbhi.2018.2878907] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Rejection of movements based on the confidence in the classification decision has previously been demonstrated to improve the usability of pattern recognition based myoelectric control. To this point, however, the optimal rejection threshold has been determined heuristically, and it is not known how different thresholds affect the tradeoff between error mitigation and false rejections in real-time closed-loop control. To answer this question, 24 able-bodied subjects completed a real-time Fitts' law-style virtual cursor control task using a support vector machine classifier. It was found that rejection improved information throughput at all thresholds, with the best performance coming at thresholds between 0.60 and 0.75. Two fundamental types of error were defined and identified: operator error (identifiable, repeatable behaviors, directly attributable to the user), and systemic error (other errors attributable to misclassification or noise). The incidence of both operator and systemic errors were found to decrease as rejection threshold increased. Moreover, while the incidence of all error types correlated strongly with path efficiency, only systemic errors correlated strongly with throughput and trial completion rate. Interestingly, more experienced users were found to commit as many errors as novice users, despite performing better in the Fitts' task, suggesting that there is more to usability than error prevention alone. Nevertheless, these results demonstrate the usability gains possible with rejection across a range of thresholds for both novice and experienced users alike.
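Confidence-based rejection, as studied above, outputs no movement whenever the classifier's top posterior probability falls below a threshold, trading some false rejections for fewer misclassified movements. A minimal sketch; the class names, probabilities, and the 0.7 default are illustrative (the study found thresholds between 0.60 and 0.75 performed best):

```python
def rejection_decision(class_probs, threshold=0.7):
    """Confidence-based rejection for a movement classifier.

    class_probs maps movement classes to posterior probabilities
    (e.g. from a probabilistic SVM). If the most likely class falls
    below threshold, the decision is rejected and 'no-motion' is
    output instead of a possibly wrong movement.
    """
    best_class = max(class_probs, key=class_probs.get)
    if class_probs[best_class] < threshold:
        return "no-motion"  # reject: not confident enough to act
    return best_class

# A confident decision passes; an ambiguous one is rejected:
confident = {"open": 0.85, "close": 0.10, "rest": 0.05}
ambiguous = {"open": 0.40, "close": 0.35, "rest": 0.25}
```

Raising the threshold rejects more ambiguous frames, which, per the findings above, reduces both operator and systemic errors at the cost of more false rejections.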
|
35
|
Waris A, Mendez I, Englehart K, Jensen W, Kamavuako EN. On the robustness of real-time myoelectric control investigations: a multiday Fitts' law approach. J Neural Eng 2018; 16:026003. [PMID: 30524028 DOI: 10.1088/1741-2552/aae9d4] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Real-time myoelectric experimental protocols are considered a means to quantify the usability of myoelectric control schemes. While usability should be considered over time to assure clinical robustness, all real-time studies reported thus far are limited to a single session or day, so the influence of time on real-time performance is still unexplored. In this study, the aim was to develop a novel experimental protocol to quantify the effect of time on real-time performance measures over multiple days using a Fitts' law approach. APPROACH Four metrics (throughput, completion rate, path efficiency, and overshoot) were assessed using three train-test strategies in a week-long experimental protocol: (i) an artificial neural network (ANN) classifier trained on data collected on the previous day and tested on the present day (BDT), (ii) trained and tested on the same day (WDT), and (iii) trained on all previous days including the present day and tested on the present day (CDT). MAIN RESULTS It was found that, on average, the completion rate of CDT (98.37% ± 1.47%) was significantly better (P < 0.01) than that of BDT (86.25% ± 3.46%) and WDT (94.22% ± 2.74%). The throughput of CDT (0.40 ± 0.03 bits/s) was significantly better (P = 0.001) than that of BDT (0.38 ± 0.03 bits/s). Offline analysis showed a different trend due to the difference in training strategies. SIGNIFICANCE The results suggest that increasing the size of the training set over time can be beneficial for assuring robust performance of the system over time.
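Fitts'-law throughput, the headline metric above, is the index of difficulty of a target acquisition divided by the movement time. A minimal sketch using the Shannon formulation; the distance, width, and time values are illustrative, and study-specific corrections (e.g. effective target width) are omitted:

```python
import math

def throughput(distance, width, movement_time):
    """Fitts' law throughput in bits/s for one target acquisition.

    Uses the Shannon formulation of the index of difficulty,
    ID = log2(D/W + 1), divided by the observed movement time.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# A target 3 widths away reached in 5 s: ID = 2 bits, TP = 0.4 bits/s,
# the same order as the CDT throughput reported above.
tp = throughput(distance=3.0, width=1.0, movement_time=5.0)
```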
Affiliation(s)
- Asim Waris
- SMI, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; SMME, National University of Sciences and Technology, Islamabad, Pakistan
|
36
|
Ameri A, Akhaee MA, Scheme E, Englehart K. Real-time, simultaneous myoelectric control using a convolutional neural network. PLoS One 2018; 13:e0203835. [PMID: 30212573 PMCID: PMC6136764 DOI: 10.1371/journal.pone.0203835] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Accepted: 08/28/2018] [Indexed: 11/18/2022] Open
Abstract
The evolution of deep learning techniques has been transformative as they have allowed complex mappings to be trained between control inputs and outputs without the need for feature engineering. In this work, a myoelectric control system based on convolutional neural networks (CNN) is proposed as a possible alternative to traditional approaches that rely on specifically designed features. This CNN-based system is validated using a real-time Fitts' law style target acquisition test requiring single and combined wrist motions. The performance of the proposed system is then compared to that of a standard support vector machine (SVM) based myoelectric system using a set of time-domain features. Despite the prevalence and demonstrated performance of these well-known features, no significant difference (p>0.05) was found between the two methods for any of the computed control metrics. This demonstrates the potential for automated learning approaches to extract complex and rich information from stochastic biological signals. This first evaluation of the usability of a CNN in a real-time myoelectric control environment provides a basis for further exploration.
Affiliation(s)
- Ali Ameri
- Department of Biomedical Engineering, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Ali Akhaee
- School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Erik Scheme
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Kevin Englehart
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
|
37
|
Sahadat MN, Alreja A, Mikail N, Ghovanloo M. Comparing the Use of Single vs. Multiple Combined Abilities in Conducting Complex Computer Tasks Hands-free. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1868-1877. [PMID: 30106683 DOI: 10.1109/tnsre.2018.2864120] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Assistive technologies often focus on a remaining ability of their users, particularly those with physical disabilities such as tetraplegia, to facilitate computer access. We hypothesized that by combining multiple remaining abilities of the end users in an intuitive fashion, it is possible to improve the quality of computer access. In this study, 15 able-bodied subjects completed four computer access tasks without using their hands: center-out tapping, on-screen maze navigation, playing a game, and sending an email. They used the multimodal Tongue Drive System (mTDS), which offers proportional cursor control via head motion, discrete clicks via tongue gestures, and typing via speech recognition, all simultaneously. Their performance was compared against unimodal tongue gestures (TDS) and the keyboard-and-mouse combination (KnM) as the gold standard. RESULTS Center-out tapping task average throughputs using mTDS and TDS were 0.84 bps and 0.94 bps, which were 21% and 22.4% of the throughput using the mouse, respectively, while the average error rate and missed targets using mTDS were 4.1% and 25.5% less than with TDS. Maze navigation throughputs using mTDS and TDS were 0.35 bps and 0.46 bps, which were 16.6% and 21.8% of the throughput using the mouse, respectively. Participants achieved a 72.32% higher score using mTDS than TDS when playing a simple game. Average email-generation time with mTDS was ~2x longer than with KnM, with a mean typing accuracy of 78.1%. CONCLUSION Engaging multimodal abilities helped participants perform considerably better in complex tasks, such as sending an email, compared to a unimodal system (TDS). Performance was similar for simpler tasks, while multimodal inputs improved interaction accuracy. Cursor navigation with head motion led to higher scores in less constrained tasks, such as the game, than in the highly constrained maze task.
|
38
|
Shin S, Tafreshi R, Langari R. EMG and IMU based real-time HCI using dynamic hand gestures for a multiple-DoF robot arm. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2018. [DOI: 10.3233/jifs-171562] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Sungtae Shin
- Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
- Reza Tafreshi
- Department of Mechanical Engineering, Texas A&M University at Qatar, Doha, Qatar
- Reza Langari
- Department of Mechanical Engineering, Texas A&M University, College Station, TX, USA
|
39
|
Williams MR. A pilot study into reaching performance after severe to moderate stroke using upper arm support. PLoS One 2018; 13:e0200787. [PMID: 30016364 PMCID: PMC6049950 DOI: 10.1371/journal.pone.0200787] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 07/03/2018] [Indexed: 11/30/2022] Open
Abstract
Stroke affects millions of people each year and can have a significant impact on the ability to use the impaired arm and hand. One of the results of stroke is the development of an abnormal shoulder-elbow flexion synergy, in which lifting the arm can cause the elbow, wrist, and finger flexors to involuntarily contract, reducing the ability to reach with the arm and open the hand. This study explored the effect of support at the upper arm on hand and arm reaching performance. Nine participants were studied while performing a virtual reaching task under three conditions: with the weight of their impaired arm supported by a robot arm, unsupported, and using their non-impaired arm. Most subjects exhibited faster and more accurate reaching while supported than unsupported. Of the subjects who could voluntarily open their hand, most were able to open it more swiftly when using upper arm support. In many cases, performance with support was not statistically different from that of the unaffected arm and hand. Muscle activity of the impaired limb with upper arm support showed decreased effort to lift the arm and reduced biceps activity in most subjects, pointing to a reduction in the abnormal flexion synergy while using upper arm support. While arm support can help to reduce the activation of abnormal synergies, weakness resulting from hemiparesis remains an issue impacting performance. Future systems will need to address both of these causes of disability to more fully restore function after stroke.
Affiliation(s)
- Matthew R. Williams
- Louis Stokes Cleveland VA Medical Center, Cleveland, OH, United States of America
- Cleveland FES Center, Cleveland, OH, United States of America
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America
|
40
|
Vojtech JM, Cler GJ, Stepp CE. Prediction of Optimal Facial Electromyographic Sensor Configurations for Human-Machine Interface Control. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1566-1576. [PMID: 29994124 DOI: 10.1109/tnsre.2018.2849202] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Surface electromyography (sEMG) is a promising computer access method for individuals with motor impairments. However, optimal sensor placement is a tedious task requiring trial and error by an expert, particularly when recording from facial musculature likely to be spared in individuals with neurological impairments. We sought to reduce sEMG sensor configuration complexity by using quantitative signal features extracted from a short calibration task to predict human-machine interface (HMI) performance. A cursor control system allowed individuals to activate specific sEMG-targeted muscles to control an onscreen cursor and navigate a target selection task. The task was repeated for a range of sensor configurations to elicit a range of signal qualities. Signal features were extracted from the calibration of each configuration and examined via a principal component factor analysis in order to predict HMI performance during subsequent tasks. Feature components most influenced by the energy and complexity of the EMG signal and by muscle activity between the sensors were significantly predictive of HMI performance. However, configuration order had a greater effect on performance than the configurations themselves, suggesting that non-experts can place sEMG sensors in the vicinity of usable muscle sites for computer access and that healthy individuals will learn to efficiently control the HMI system.
|
41
|
Wang Z, Majewicz Fey A. Human-centric predictive model of task difficulty for human-in-the-loop control tasks. PLoS One 2018; 13:e0195053. [PMID: 29621301 PMCID: PMC5886487 DOI: 10.1371/journal.pone.0195053] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2017] [Accepted: 03/07/2018] [Indexed: 11/18/2022] Open
Abstract
Quantitatively measuring the difficulty of a manipulation task in human-in-the-loop control systems is ill-defined. Currently, systems are typically evaluated through task-specific performance measures and post-experiment user surveys; however, these methods do not capture the real-time experience of human users. In this study, we propose to analyze and predict the difficulty of a bivariate pointing task, with a haptic device interface, using human-centric measurement data in terms of cognition, physical effort, and motion kinematics. Noninvasive sensors were used to record the multimodal responses of 14 subjects performing the task. A data-driven approach for predicting task difficulty was implemented based on several task-independent metrics. We compare four possible models for predicting task difficulty in order to evaluate the roles of the various types of metrics: (I) a movement time model, (II) a fusion model using both physiological and kinematic metrics, (III) a model with only kinematic metrics, and (IV) a model with only physiological metrics. The results show a significant correlation between task difficulty and the user's sensorimotor response. The fusion model, integrating user physiology and motion kinematics, provided the best estimate of task difficulty (R2 = 0.927), followed by the model using only kinematic metrics (R2 = 0.921). Both models were better predictors of task difficulty than the movement time model (R2 = 0.847) derived from Fitts' law, a well-studied difficulty model for human psychomotor control.
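The movement time model above is the classic Fitts'-law regression MT = a + b·ID, scored by R². A minimal least-squares sketch of that kind of model; the ID/MT data below are illustrative, not the study's:

```python
def fit_movement_time(ids, times):
    """Least-squares fit of Fitts' movement-time model MT = a + b * ID.

    Returns (a, b, r_squared). ids are indices of difficulty (bits),
    times are observed movement times (s).
    """
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(times) / n
    sxx = sum((x - mean_id) ** 2 for x in ids)
    sxy = sum((x - mean_id) * (y - mean_mt) for x, y in zip(ids, times))
    b = sxy / sxx                      # slope: seconds per bit
    a = mean_mt - b * mean_id          # intercept: reaction/setup time
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(ids, times))
    ss_tot = sum((y - mean_mt) ** 2 for y in times)
    return a, b, 1 - ss_res / ss_tot   # R^2 as in the model comparison

ids = [1.0, 2.0, 3.0, 4.0]      # index of difficulty (bits)
times = [0.6, 1.0, 1.4, 1.8]    # movement time (s), perfectly linear here
a, b, r2 = fit_movement_time(ids, times)
```

The study's point is that adding physiological and kinematic predictors to this one-variable model raised R² from 0.847 to 0.927.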
Affiliation(s)
- Ziheng Wang
- Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX 75080, United States of America
- Ann Majewicz Fey
- Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX 75080, United States of America; Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, United States of America
|
42
|
Vujaklija I, Shalchyan V, Kamavuako EN, Jiang N, Marateb HR, Farina D. Online mapping of EMG signals into kinematics by autoencoding. J Neuroeng Rehabil 2018. [PMID: 29534764 PMCID: PMC5850983 DOI: 10.1186/s12984-018-0363-1] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
Abstract
Background In this paper, we propose a nonlinear, minimally supervised method based on autoencoding (AEN) of EMG for myocontrol. The proposed method was tested against a state-of-the-art (SOA) control scheme using a Fitts' law approach. Methods Seven able-bodied subjects performed a series of target acquisition myoelectric control tasks using the AEN and SOA algorithms for controlling two degrees of freedom (radial/ulnar deviation and flexion/extension of the wrist), and their online performance was characterized by six metrics. Results Both methods allowed a completion rate close to 100%; however, AEN outperformed SOA on all other performance metrics, e.g., it allowed the tasks to be performed in, on average, half the time required by SOA. Moreover, the amount of information transferred by the proposed method in bits/s was nearly twice the throughput of SOA. Conclusions These results show that autoencoders can map EMG signals into kinematics, with the potential to provide intuitive and dexterous control of artificial limbs for amputees.
Collapse
Affiliation(s)
- Ivan Vujaklija
- Department of Bioengineering, Imperial College London, London, UK
| | - Vahid Shalchyan
- Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Ernest N Kamavuako
- Centre for Robotics Research, Department of Informatics, King's College London, London, UK
| | - Ning Jiang
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
| | - Hamid R Marateb
- Biomedical Engineering Department, Engineering Faculty, University of Isfahan, Isfahan, Iran
| | - Dario Farina
- Department of Bioengineering, Imperial College London, London, UK.
| |
Collapse
|
43
|
Sahadat MN, Alreja A, Ghovanloo M. Simultaneous Multimodal PC Access for People With Disabilities by Integrating Head Tracking, Speech Recognition, and Tongue Motion. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2018; 12:192-201. [PMID: 29377807 DOI: 10.1109/tbcas.2017.2771235] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
The multimodal Tongue Drive System (mTDS) is a highly integrated wireless assistive technology (AT) in the form of a lightweight wearable headset that utilizes three remaining key control and communication abilities in people with severe physical disabilities, such as tetraplegia, to provide them with effective access to computers: 1) tongue motion for discrete/switch-based control (e.g., clicking), 2) head tracking for proportional control (e.g., mouse pointer movements), and 3) speech recognition for typing, all available simultaneously. The mTDS architecture is presented here with a new sensor signal processing algorithm for head tracking. To evaluate device performance, it was compared against the keyboard-and-mouse (KnM) combination, the gold standard in computer input methods, by 15 able-bodied participants, who used both mTDS and KnM to generate and send an email with randomly selected content under a 5-minute time constraint. Over four repetitions, by the last trial it took participants on average only 1.8 times longer to complete the email task using the mTDS versus KnM, at 82.4% typing accuracy. Mean task completion time and typing accuracy improved by 24.6% and 18.8%, respectively, from the first to the fourth trial using mTDS. The multimodal simultaneous discrete and proportional control input options of mTDS, plus rapid typing, are expected to provide more effective computer access to people with severe physical disabilities.
Collapse
|
44
|
Kilgore KL, Peckham PH. Stimulation for Return of Upper-Extremity Function. Neuromodulation 2018. [DOI: 10.1016/b978-0-12-805353-9.00096-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
45
|
Shuggi IM, Shewokis PA, Herrmann JW, Gentili RJ. Changes in motor performance and mental workload during practice of reaching movements: a team dynamics perspective. Exp Brain Res 2017; 236:433-451. [PMID: 29214390 DOI: 10.1007/s00221-017-5136-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Accepted: 11/14/2017] [Indexed: 10/18/2022]
Abstract
Few investigations have examined mental workload during motor practice or learning in a context of team dynamics. This study examines the underlying cognitive-motor processes of motor practice by assessing changes in motor performance and mental workload during practice of reaching movements. Individuals moved a robotic arm to reach targets as fast and as straight as possible while satisfying the task requirement of avoiding collisions between the end-effector and the workspace limits. Individuals practiced the task either alone (HA group) or with a synthetic teammate (HRT group) that regulated the effector velocity to help satisfy the task requirements. The findings revealed that the performance of both groups improved similarly throughout practice. However, compared to the individuals of the HA group, those in the HRT group (1) had a lower risk of collisions, (2) exhibited higher performance consistency, and (3) revealed a higher level of mental workload while generally perceiving the robotic teammate as interfering with their performance. As the synthetic teammate changed the effector velocity in specific regions near the workspace boundaries, individuals may have been constrained to learn a piecewise visuomotor map. This piecewise map made the task more challenging, which increased mental workload and the perception of the synthetic teammate as a burden. The examination of both motor performance and mental workload revealed a combination of adaptive and maladaptive team dynamics. This work is a first step toward examining the human cognitive-motor processes underlying motor practice in a context of team dynamics and helps inform human-robot applications.
Collapse
Affiliation(s)
- Isabelle M Shuggi
- Systems Engineering Program, University of Maryland, College Park, MD, 20742, USA.,Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, 20742, USA.,Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA
| | - Patricia A Shewokis
- School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, PA, 19102, USA.,Nutrition Sciences Department, College of Nursing and Health Professions, Drexel University, Philadelphia, PA, 19102, USA
| | - Jeffrey W Herrmann
- Department of Mechanical Engineering, University of Maryland, College Park, MD, 20742, USA.,Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA
| | - Rodolphe J Gentili
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, MD, 20742, USA. .,Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA. .,Maryland Robotics Center, University of Maryland, College Park, MD, USA.
| |
Collapse
|
46
|
Gusman J, Mastinu E, Ortiz-Catalan M. Evaluation of Computer-Based Target Achievement Tests for Myoelectric Control. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE-JTEHM 2017; 5:2100310. [PMID: 29255654 PMCID: PMC5731324 DOI: 10.1109/jtehm.2017.2776925] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2017] [Revised: 09/11/2017] [Accepted: 10/22/2017] [Indexed: 11/10/2022]
Abstract
Real-time evaluation of novel prosthetic control schemes is critical for translational research on artificial limbs. Recently, two computer-based, real-time evaluation tools, the target achievement control (TAC) test and the Fitts' law test (FLT), have been proposed to assess real-time controllability. Whereas the TAC test provides an anthropomorphic visual representation of the limb at the cost of potentially confusing visual feedback, the FLT clarifies the current and target locations using simplified non-anthropomorphic representations. Here, we investigated these two approaches and quantified the differences in common performance metrics that can result from the chosen method of visual feedback. Ten able-bodied subjects and one amputee subject performed target achievement tasks corresponding to the FLT and TAC test with equivalent indices of difficulty. Able-bodied subjects exhibited significantly (p < 0.05) better completion rate, path efficiency, and overshoot when performing the FLT, although no significant difference was seen in throughput. The amputee subject showed significantly better overshoot performance in the FLT, but no significant difference in completion rate, path efficiency, or throughput. Results from the FLT showed a strong linear relationship between movement time and index of difficulty (R2 = 0.96), whereas the TAC test results showed no apparent linear relationship (R2 = 0.19). These results suggest that, in otherwise similar conditions, the confusing placement of the virtual limb representation used in the TAC test contributed to poorer performance. Establishing an understanding of the biases of various evaluation protocols is critical to the translation of research into clinical practice.
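The performance metrics compared in this study — throughput, path efficiency, and overshoot — have simple, widely used definitions. The sketch below gives plausible implementations; the function names and the exact overshoot convention (counting exits from the target after first entry) are our assumptions, as published protocols differ in such details:

```python
import numpy as np

def throughput(distance, width, movement_time):
    """Mean throughput in bits/s: Fitts index of difficulty over movement time."""
    ID = np.log2(np.asarray(distance, dtype=float) / np.asarray(width, dtype=float) + 1.0)
    return float(np.mean(ID / np.asarray(movement_time, dtype=float)))

def path_efficiency(path, start, target):
    """Straight-line distance divided by actual path length (1.0 = ideal)."""
    path = np.asarray(path, dtype=float)
    travelled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    ideal = np.linalg.norm(np.asarray(target, dtype=float) - np.asarray(start, dtype=float))
    return float(ideal / travelled)

def overshoot_count(path, target, width):
    """Number of times the cursor exits the target region after first entering it."""
    d = np.linalg.norm(np.asarray(path, dtype=float) - np.asarray(target, dtype=float), axis=1)
    inside = d <= width / 2.0
    entered = np.maximum.accumulate(inside)   # True once the target was first reached
    exits = np.sum(inside[:-1] & ~inside[1:] & entered[:-1])
    return int(exits)
```

For example, a target at distance 3 with width 1 acquired in 1 s yields a throughput of log2(4) = 2 bits/s, and a perfectly straight path has efficiency 1.0.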
Collapse
Affiliation(s)
- Jacob Gusman
- Center for Biomedical Engineering, Brown University
| | - Enzo Mastinu
- Department of Electrical Engineering, Chalmers University of Technology
| | - Max Ortiz-Catalan
- Department of Electrical Engineering, Chalmers University of Technology.,Integrum AB
| |
Collapse
|
47
|
Andreasen Struijk LNS, Egsgaard LL, Lontis R, Gaihede M, Bentsen B. Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia. J Neuroeng Rehabil 2017; 14:110. [PMID: 29110736 PMCID: PMC5674819 DOI: 10.1186/s12984-017-0330-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Accepted: 10/31/2017] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods that allow these individuals to fully control such arms. METHODS Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. RESULTS After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. CONCLUSION To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within a reasonable time.
Collapse
Affiliation(s)
- Lotte N S Andreasen Struijk
- Center for Sensory Motor Interaction, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark.
| | - Line Lindhardt Egsgaard
- Center for Sensory Motor Interaction, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| | - Romulus Lontis
- Center for Sensory Motor Interaction, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| | - Michael Gaihede
- Department of Otolaryngology, Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark.,Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
| | - Bo Bentsen
- Center for Sensory Motor Interaction, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| |
Collapse
|
48
|
Safavi SM, Sundaram SM, Gorji AH, Udaiwal NS, Chou PH. Application of infrared scanning of the neck muscles to control a cursor in Human-Computer Interface. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:787-790. [PMID: 29059990 DOI: 10.1109/embc.2017.8036942] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The feasibility of using infrared (IR) spectroscopy of the neck muscles to control a cursor on a 2-dimensional screen was assessed. The proposed technique utilizes two IR photoplethysmography sensors (λ = 940 nm) to monitor the morphological changes of the scalene and sternocleidomastoid muscles. Since the reflected light carries valuable information about the type of contraction, the direction of movement (right/left, up/down) can be simply derived using two sensors. A MATLAB platform was developed in which a cursor is moved using the recorded signal. Three scenarios, high sensitivity, low sensitivity, and joystick mode, were tested. The results from 4 different healthy subjects show the feasibility of control in terms of throughput, overshoot, and path efficiency.
Collapse
|
49
|
Shuggi IM, Oh H, Shewokis PA, Gentili RJ. Mental workload and motor performance dynamics during practice of reaching movements under various levels of task difficulty. Neuroscience 2017; 360:166-179. [PMID: 28757242 DOI: 10.1016/j.neuroscience.2017.07.048] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 07/17/2017] [Accepted: 07/19/2017] [Indexed: 10/19/2022]
|
50
|
Lobo-Prat J, Nizamis K, Janssen MMHP, Keemink AQL, Veltink PH, Koopman BFJM, Stienen AHA. Comparison between sEMG and force as control interfaces to support planar arm movements in adults with Duchenne: a feasibility study. J Neuroeng Rehabil 2017; 14:73. [PMID: 28701169 PMCID: PMC5508565 DOI: 10.1186/s12984-017-0282-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2016] [Accepted: 06/26/2017] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Adults with Duchenne muscular dystrophy (DMD) can benefit from devices that actively support their arm function. A critical component of such devices is the control interface, as it is responsible for the human-machine interaction. Our previous work indicated that surface electromyography (sEMG) and force-based control with active gravity and joint-stiffness compensation were feasible solutions for the support of elbow movements (one degree of freedom). In this paper, we extend the evaluation of sEMG- and force-based control interfaces to simultaneous and proportional control of planar arm movements (two degrees of freedom). METHODS Three men with DMD (18-23 years old) with different levels of arm function (i.e. Brooke scores of 4, 5 and 6) performed a series of line-tracing tasks over a tabletop surface using an experimental active arm support. The arm movements were controlled using three control methods: sEMG-based control, force-based control with stiffness compensation (FSC), and force-based control with no compensation (FNC). Movement performance was evaluated in terms of percentage of task completion, tracing error, smoothness and speed. RESULTS For subject S1 (Brooke 4), FNC was the preferred method and performed better than FSC and sEMG. FNC was not usable for subjects S2 (Brooke 5) and S3 (Brooke 6). Subject S2 presented significantly lower movement speed with sEMG than with FSC, yet he preferred sEMG since FSC was perceived as too fatiguing. Subject S3 could not successfully use either of the two force-based control methods, while with sEMG he could reach almost his entire workspace. CONCLUSIONS Movement performance and subjective preference among the three control methods differed with the level of arm function of the participants. Our results indicate that all three control methods have to be considered in real applications, as they present complementary advantages and disadvantages. The fact that the two weaker subjects (S2 and S3) experienced the force-based control interfaces as fatiguing suggests that sEMG-based control interfaces could be a better solution for adults with DMD. Yet force-based control interfaces can be a better alternative in those cases in which voluntary forces are higher than the stiffness forces of the arms.
Collapse
Affiliation(s)
- Joan Lobo-Prat
- Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522, NB, The Netherlands.
| | - Kostas Nizamis
- Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522, NB, The Netherlands
| | - Mariska M H P Janssen
- Department of Rehabilitation, Radboud University Medical Center, Reinier Postlaan 4, Nijmegen, 6500, HB, The Netherlands
| | - Arvid Q L Keemink
- Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522, NB, The Netherlands
| | - Peter H Veltink
- Department of Biomedical Signals and Systems, University of Twente, Drienerlolaan 5, Enschede, 7500, AE, The Netherlands
| | - Bart F J M Koopman
- Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522, NB, The Netherlands
| | - Arno H A Stienen
- Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522, NB, The Netherlands
- Department of Physical Therapy and Human Movement Sciences, Northwestern University, 645 N Michigan Ave Suite 1100, Chicago (IL), 60611, USA
| |
Collapse
|