1. Davarinia F, Maleki A. EMG and SSVEP-based bimodal estimation of elbow angle trajectory. Neuroscience 2024;562:1-9. PMID: 39454713. DOI: 10.1016/j.neuroscience.2024.10.030.
Abstract
Detecting intentions and estimating movement trajectories from electromyogram (EMG) signals in a human-machine interface (HMI) is particularly challenging, especially for individuals with movement impairments. Supplementing the EMG signal with information from other biological sources, including discrete information about the movement goal, can therefore be practical. This study combined EMG and target information to enhance estimation performance during reaching movements. EMG activity of the shoulder and arm muscles, the elbow angle, and electroencephalogram signals of ten healthy subjects were recorded while they reached toward blinking targets. The reaching target was recognized from the steady-state visual evoked potential (SSVEP). The selected target's final angle and the EMG were then mapped to the elbow angle trajectory. The proposed bimodal structure, which integrates EMG and final elbow angle information, outperformed the EMG-only decoder, and it retained this advantage even under higher fatigue. Including information about the recognized reaching target in the trajectory model improved estimation of the reaching profile. These findings suggest that bimodal decoders are highly beneficial for assistive robotic devices and prostheses, especially for real-time upper limb rehabilitation.
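To make the bimodal idea concrete, the following minimal Python sketch augments windowed EMG features with an SSVEP-decoded final target angle and regresses the elbow angle by ridge regression. The feature choice, the ridge model and the synthetic data are illustrative assumptions, not the authors' implementation.

import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    A = X1.T @ X1 + lam * np.eye(X1.shape[1])
    return np.linalg.solve(A, X1.T @ y)

def ridge_predict(w, X):
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return X1 @ w

# emg_feats: (n_samples, n_muscles) windowed EMG envelopes (simulated here)
rng = np.random.default_rng(0)
emg_feats = rng.random((500, 4))
target_angle = np.full((500, 1), 75.0)            # SSVEP-decoded final angle (deg), assumed
elbow_angle = 30 + 50 * emg_feats.mean(1)         # synthetic ground-truth trajectory

X_bimodal = np.hstack([emg_feats, target_angle])  # EMG features + discrete goal information
w = ridge_fit(X_bimodal, elbow_angle)
est = ridge_predict(w, X_bimodal)
print("RMSE (deg):", np.sqrt(np.mean((est - elbow_angle) ** 2)))

An EMG-only decoder would use emg_feats alone; the bimodal variant simply widens the feature vector with the decoded goal, which is the structural difference the paper evaluates.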
Affiliation(s)
- Ali Maleki
- Biomedical Engineering Department, Semnan University, Semnan, Iran.
2. Yang B, Chen X, Xiao X, Yan P, Hasegawa Y, Huang J. Gaze and Environmental Context-Guided Deep Neural Network and Sequential Decision Fusion for Grasp Intention Recognition. IEEE Trans Neural Syst Rehabil Eng 2023;31:3687-3698. PMID: 37703142. DOI: 10.1109/tnsre.2023.3314503.
Abstract
Grasp intention recognition plays a crucial role in controlling assistive robots that help older people and individuals with limited mobility restore arm and hand function. Among the modalities used for intention recognition, eye-gaze movement has emerged as a promising approach because it is simple, intuitive, and effective. Existing gaze-based approaches integrate gaze data with environmental context insufficiently and underuse temporal information, which limits recognition performance. The objective of this study is to address these deficiencies and establish a gaze-based framework for object detection and the associated intention recognition. A novel gaze-based grasp intention recognition and sequential decision fusion framework (GIRSDF) is proposed. GIRSDF comprises three main components: gaze attention map generation, the Gaze-YOLO grasp intention recognition model, and sequential decision fusion models (HMM, LSTM, and GRU). To evaluate its performance, a dataset named Invisible, containing data from healthy individuals and hemiplegic patients, was established. GIRSDF was validated in trial-based and subject-based experiments on Invisible and outperformed previous gaze-based grasp intention recognition methods. The framework runs at about 22 Hz, fast enough for real-time grasp intention recognition. This study is expected to inspire further gaze-based grasp intention recognition work.
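The sequential decision fusion step can be illustrated with a minimal HMM-style forward filter that smooths noisy per-frame intention probabilities so that momentary misclassifications do not flip the decision. The sticky transition prior and the synthetic detector output below are assumptions, not the GIRSDF code.

import numpy as np

def forward_filter(frame_probs, stay=0.95):
    """frame_probs: (T, K) per-frame class likelihoods from the detector."""
    T, K = frame_probs.shape
    trans = np.full((K, K), (1 - stay) / (K - 1))   # small chance of switching
    np.fill_diagonal(trans, stay)                   # "sticky" self-transitions
    belief = np.full(K, 1.0 / K)
    fused = np.empty_like(frame_probs)
    for t in range(T):
        belief = (trans.T @ belief) * frame_probs[t]  # predict, then update
        belief /= belief.sum()
        fused[t] = belief
    return fused

# Noisy detector output: true intention is class 0, with occasional errors
rng = np.random.default_rng(1)
probs = np.tile([0.6, 0.2, 0.2], (30, 1)) + 0.1 * rng.random((30, 3))
probs /= probs.sum(1, keepdims=True)
print(forward_filter(probs).argmax(1))  # stabilizes on class 0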
3. Hasse BA, Sheets DEG, Holly NL, Gothard KM, Fuglevand AJ. Restoration of complex movement in the paralyzed upper limb. J Neural Eng 2022;19. PMID: 35728568. DOI: 10.1088/1741-2552/ac7ad7.
Abstract
OBJECTIVE Functional electrical stimulation (FES) involves artificial activation of skeletal muscles to reinstate motor function in paralyzed individuals. While FES applied to the upper limb has improved the ability of tetraplegics to perform activities of daily living, key shortcomings impede its widespread use. One major limitation is that the range of motor behaviors that can be generated is restricted to a small set of simple, preprogrammed movements. This limitation stems from the substantial difficulty of determining the patterns of stimulation across many muscles required to produce more complex movements. The objective of this study was therefore to use machine learning to flexibly identify patterns of muscle stimulation needed to evoke a wide array of multi-joint arm movements. APPROACH Arm kinematics and electromyographic activity from 29 muscles were recorded while a 'trainer' monkey made an extensive range of arm movements. Those data were used to train an artificial neural network that predicted the patterns of muscle activity associated with a new set of movements. Those patterns were converted into trains of stimulus pulses that were delivered to upper limb muscles in two other, temporarily paralyzed monkeys. RESULTS Machine-learning-based prediction of EMG was good for within-subject predictions but appreciably poorer for across-subject predictions. Evoked responses matched the desired movements with good fidelity only in some cases. Means of mitigating errors associated with FES-evoked movements are discussed. SIGNIFICANCE Because the range of movements that can be produced with this approach is virtually unlimited, the system could greatly expand the repertoire of movements available to individuals with high-level paralysis.
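A minimal sketch of the central mapping, assuming a small feedforward network and synthetic data in place of the paper's 29-muscle recordings; the architecture and feature set are illustrative assumptions.

from sklearn.neural_network import MLPRegressor
import numpy as np

rng = np.random.default_rng(2)
kinematics = rng.standard_normal((2000, 6))    # e.g., joint angles and velocities
emg = np.abs(kinematics @ rng.standard_normal((6, 29))) / 6  # synthetic envelopes

# Train on one set of movements, predict muscle activity for held-out movements
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(kinematics[:1500], emg[:1500])

# Predicted envelopes would then be converted to stimulus pulse trains,
# e.g., by mapping envelope amplitude to pulse frequency or pulse width.
pred = net.predict(kinematics[1500:])
print("held-out R^2:", net.score(kinematics[1500:], emg[1500:]))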
Affiliation(s)
- Brady A Hasse
- Department of Physiology, The University of Arizona College of Medicine Tucson, 1501 N Campbell Avenue, Tucson, Arizona, 85724-5051, USA
- Drew E G Sheets
- Department of Organismal Biology & Anatomy, University of Chicago, 1027 E 57th Street, Chicago, Illinois, 60637-5416, USA
- Nicole L Holly
- Department of Physiology, The University of Arizona College of Medicine Tucson, 1501 N Campbell Avenue, Tucson, Arizona, 85724-5051, USA
- Katalin M Gothard
- Department of Physiology, The University of Arizona College of Medicine Tucson, 1501 N Campbell Avenue, Tucson, Arizona, 85724-5051, USA
- Andrew J Fuglevand
- Department of Physiology, University of Arizona, Arizona Health Sciences Center, 1501 N Campbell Avenue, Tucson, Arizona, 85724-5051, USA
4. Davarinia F, Maleki A. SSVEP-gated EMG-based decoding of elbow angle during goal-directed reaching movement. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103222.
5. Krausz NE, Lamotte D, Batzianoulis I, Hargrove LJ, Micera S, Billard A. Intent Prediction Based on Biomechanical Coordination of EMG and Vision-Filtered Gaze for End-Point Control of an Arm Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020;28:1471-1480. PMID: 32386160. DOI: 10.1109/tnsre.2020.2992885.
Abstract
We propose a novel controller for powered prosthetic arms in which fused EMG and gaze data predict the desired end-point for a full arm prosthesis, which could drive the forward motion of individual joints. We recorded EMG, gaze, and motion-tracking data during pick-and-place trials with 7 able-bodied subjects. Subjects positioned an object above a random target on a virtual interface, each completing around 600 trials. On average across all trials and subjects, gaze preceded EMG and followed a repeatable pattern that allowed for prediction. A computer vision algorithm extracted the initial and target fixations and estimated the target position in 2D space. Two support vector regressions (SVRs) were trained on EMG data to predict the x- and y-positions of the hand; the y-estimate was significantly better than the x-estimate. The EMG and gaze predictions were fused using a Kalman-filter-based approach, and the positional error using EMG alone was significantly higher than with the fusion of EMG and gaze. The root mean squared error (RMSE) of the final target position decreased from 9.28 cm with an EMG-only prediction to 6.94 cm with gaze-EMG fusion. This error also increased significantly when some or all arm muscle signals were removed. With fused EMG and gaze, however, there was no significant difference between predictors that included all muscles and those using only a subset.
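The fusion step can be illustrated with a minimal Kalman-style update in which two noisy end-point estimates, one from EMG regression and one from the gaze-derived target, are combined by inverse-variance weighting at each step. The random-walk process model and the noise values are assumptions, not the paper's filter.

import numpy as np

def fuse(emg_xy, gaze_xy, var_emg=9.0, var_gaze=4.0, var_proc=1.0):
    est = emg_xy[0].copy()
    p = var_emg
    track = []
    for z_emg, z_gaze in zip(emg_xy, gaze_xy):
        p += var_proc                              # predict (random-walk model)
        for z, r in ((z_emg, var_emg), (z_gaze, var_gaze)):
            k = p / (p + r)                        # Kalman gain
            est = est + k * (z - est)              # measurement update
            p = (1 - k) * p
        track.append(est.copy())
    return np.array(track)

rng = np.random.default_rng(3)
true_xy = np.array([20.0, 35.0])                   # actual target position (cm)
emg_meas = true_xy + 3.0 * rng.standard_normal((50, 2))   # noisy EMG decode
gaze_meas = true_xy + 2.0 * rng.standard_normal((50, 2))  # noisy gaze estimate
fused = fuse(emg_meas, gaze_meas)
print("final error (cm):", np.linalg.norm(fused[-1] - true_xy))

Because the gaze channel is given a lower variance, it pulls the estimate toward the fixated target, mirroring the paper's finding that fusion reduces end-point RMSE relative to EMG alone.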
6. Krausz NE, Hargrove LJ. A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices. Sensors (Basel) 2019;19:E5238. PMID: 31795240. PMCID: PMC6928925. DOI: 10.3390/s19235238.
Abstract
Teleception is sensing that occurs remotely, with no physical contact with the object being sensed. To emulate the innate control systems of the human body, a control system for a semi- or fully autonomous assistive device requires not only feedforward models of desired movement but also the environmental or contextual awareness that teleception could provide. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example for hand grasp prediction or end-point control of an arm assistive device, and for gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize recent and ongoing published work in this promising new area of research.
Affiliation(s)
- Nili E. Krausz
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Levi J. Hargrove
- Neural Engineering for Prosthetics and Orthotics Lab, Center of Bionic Medicine, Shirley Ryan AbilityLab (formerly Rehabilitation Institute of Chicago), Chicago, IL 60611, USA
- Biomedical Engineering Department, Northwestern University, Evanston, IL 60208, USA
- Physical Medicine and Rehabilitation Department, Northwestern University, Evanston, IL 60208, USA
7. Mick S, Lapeyre M, Rouanet P, Halgand C, Benois-Pineau J, Paclet F, Cattaert D, Oudeyer PY, de Rugy A. Reachy, a 3D-Printed Human-Like Robotic Arm as a Testbed for Human-Robot Control Strategies. Front Neurorobot 2019;13:65. PMID: 31474846. PMCID: PMC6703080. DOI: 10.3389/fnbot.2019.00065.
Abstract
To this day, despite the increasing motor capability of robotic devices, elaborating efficient control strategies remains a key challenge in the field of humanoid robotic arms. In particular, providing a human "pilot" with efficient ways to drive such a robotic arm requires thorough testing prior to integration into a finished system. When anatomical consistency between pilot and robot must be preserved, such testing additionally requires devices with human-like features. To fulfill this need for a biomimetic test platform, we present Reachy, a human-like, life-scale robotic arm with seven joints from shoulder to wrist. Although Reachy does not include a poly-articulated hand and is therefore more suitable for studying reaching than manipulation, a robotic hand prototype from available third-party projects could be integrated with it. Its 3D-printed structure and off-the-shelf actuators make it inexpensive relative to an industrial-grade robot. Built on an open-source architecture, its design is broadly connectable and customizable, so it can be integrated into many applications. To illustrate how Reachy can connect to external devices, this paper presents several proofs of concept in which it is operated with various control strategies, such as tele-operation or gaze-driven control. In this way, Reachy can help researchers explore, develop and test innovative control strategies and interfaces on a human-like robot.
Affiliation(s)
- Sébastien Mick
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS & Univ. Bordeaux, Bordeaux, France
- Christophe Halgand
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS & Univ. Bordeaux, Bordeaux, France
- Jenny Benois-Pineau
- Laboratoire Bordelais de Recherche en Informatique, UMR 5800, CNRS & Univ. Bordeaux & Bordeaux INP, Talence, France
- Florent Paclet
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS & Univ. Bordeaux, Bordeaux, France
- Daniel Cattaert
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS & Univ. Bordeaux, Bordeaux, France
- Aymar de Rugy
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS & Univ. Bordeaux, Bordeaux, France
- Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, QLD, Australia
8. Noronha B, Dziemian S, Zito GA, Konnaris C, Faisal AA. "Wink to grasp" - comparing eye, voice & EMG gesture control of grasp with soft-robotic gloves. IEEE Int Conf Rehabil Robot 2017:1043-1048. PMID: 28813959. DOI: 10.1109/icorr.2017.8009387.
Abstract
The ability of robotic rehabilitation devices to support paralysed end-users is ultimately limited by how effectively and efficiently the human-machine interaction translates user intention into robotic action. Specifically, we evaluate the novel possibility of using binocular eye-tracking technology to distinguish voluntary winks from involuntary blinks, establishing winks as a low-latency control signal for triggering robotic action. By wearing binocular eye-tracking glasses, users can directly observe their environment or the actuator and trigger movement actions without having to interact with a visual display unit or user interface. We compare this novel approach to two conventional approaches for controlling robotic devices, based on electromyography (EMG) and on speech-based human-computer interaction technology. We present an integrated software framework based on ROS that allows transparent integration of these multiple modalities with a robotic system. We use a soft-robotic SEM glove (Bioservo Technologies AB, Sweden) to evaluate how the three modalities support the performance and subjective experience of end-users receiving movement assistance. All three modalities are evaluated in streaming, closed-loop control for grasping physical objects. Wink control shows the lowest mean error rate (0.23 ± 0.07, mean ± SEM), followed by speech control (0.35 ± 0.13) and EMG gesture control using the Myo armband by Thalmic Labs, which has the highest (0.46 ± 0.16). We conclude that our eye-tracking-based approach to controlling assistive technologies is a well-suited alternative to conventional approaches, especially when combined with 3D eye-tracking-based robotic end-point control.
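A minimal sketch of how binocular tracking can separate voluntary winks from involuntary blinks: a blink closes both eyes almost simultaneously and briefly, while a wink closes one eye only, for longer. The per-sample eye-openness flags and the duration threshold are illustrative assumptions, not the authors' detector.

def detect_winks(left_open, right_open, dt=0.01, min_wink_s=0.15):
    """left_open/right_open: per-sample eye-openness booleans from the tracker."""
    events, run = [], 0
    for l, r in zip(left_open, right_open):
        if l != r:                      # exactly one eye closed: wink candidate
            run += 1
            continue
        if run * dt >= min_wink_s:
            events.append("wink")       # long unilateral closure -> command
        run = 0                         # bilateral closure (blink) or both open
    if run * dt >= min_wink_s:
        events.append("wink")
    return events

left = [True] * 10 + [False] * 20 + [True] * 10   # left eye shut for 0.2 s
right = [True] * 40                               # right eye stays open
print(detect_winks(left, right))                  # -> ['wink']

Blinks never increment the unilateral counter because both eyes read closed together, and any brief asymmetry at a blink's onset is rejected by the duration threshold.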
9. Maimon-Dror RO, Fernandez-Quesada J, Zito GA, Konnaris C, Dziemian S, Faisal AA. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking. IEEE Int Conf Rehabil Robot 2017:1049-1054. PMID: 28813960. DOI: 10.1109/icorr.2017.8009388.
Abstract
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this, eye tracking is not widely used as a control interface for robotic devices in movement-impaired patients, largely due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head-tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement while achieving high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This yields a fully automated calibration procedure with several thousand calibration points, versus the dozen or so points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
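The automated calibration idea can be sketched as an ordinary least-squares regression from binocular eye features to known 3D end-effector positions collected while the robot sweeps the calibration path. The affine model and the four-dimensional feature vector are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(4)
n = 5000                                      # thousands of calibration samples
eye_feats = rng.standard_normal((n, 4))       # e.g., left/right pupil (x, y)
true_map = rng.standard_normal((5, 3))        # unknown eye-to-space mapping
targets = np.hstack([eye_feats, np.ones((n, 1))]) @ true_map  # known robot path
targets += 0.02 * rng.standard_normal((n, 3))                 # sensor noise

X = np.hstack([eye_feats, np.ones((n, 1))])   # affine model per axis
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

test = np.array([[0.1, -0.3, 0.2, 0.0, 1.0]])
print("estimated 3D gaze point:", test @ W)   # would drive the robot end-point

With thousands of (eye feature, robot position) pairs, even a simple regression is heavily overdetermined, which is why the space-filling-curve procedure can outperform a dozen-point manual calibration.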
10. Menegaldo LL. Real-time muscle state estimation from EMG signals during isometric contractions using Kalman filters. Biol Cybern 2017;111:335-346. PMID: 28766051. DOI: 10.1007/s00422-017-0724-z.
Abstract
State-space control of myoelectric devices and real-time visualization of muscle forces in virtual rehabilitation require measuring or estimating the muscle's dynamic states: neuromuscular activation, tendon force and muscle length. This paper investigates whether regular (KF) and extended (eKF) Kalman filters, derived directly from Hill-type muscle mechanics equations, can serve as real-time muscle state estimators for isometric contractions using raw electromyography (EMG) signals as the only available measurement. The estimators' amplitude error, computational cost, filtering lag and smoothness are compared with the usual EMG-driven analysis, performed offline by integrating the nonlinear Hill-type muscle model differential equations (offline simulations, OS). EMG activity of the three triceps surae components (soleus, gastrocnemius medialis and gastrocnemius lateralis) was collected at three torque levels for ten subjects. The actualization interval (AI) between two updates of the KF and eKF was also varied. The results show that computational costs are significantly reduced (70× for the KF and 17× for the eKF). The filtering lags showed sharp linear relationships with the AI (0-300 ms), depending on the state and activation level. Under maximum excitation, amplitude errors varied in the range 10-24% for activation, 5-8% for tendon force and 1.4-1.8% for muscle length, decreasing linearly with the excitation level. Smoothness, measured by the ratio between the average standard deviations of the KF/eKF and OS estimates, was greatly reduced for activation but converged exponentially to 1 for the other states as the AI increased. Compared with the regular KF, the extended KF does not seem to improve estimation accuracy significantly. Depending on the particular application requirements, the most appropriate KF actualization interval can be selected.
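A minimal sketch of the estimator concept, reduced to a scalar Kalman filter that tracks neuromuscular activation with first-order dynamics and treats the rectified, normalized EMG envelope as the measurement. The paper's full Hill-type state vector (tendon force, muscle length) is not reproduced; the time constant and noise levels below are assumptions.

import numpy as np

def kf_activation(emg_env, dt=0.001, tau=0.04, q=1e-4, r=1e-2):
    a_est, p = 0.0, 1.0
    phi = 1.0 - dt / tau                  # discretized first-order decay
    out = np.empty_like(emg_env)
    for i, z in enumerate(emg_env):
        # predict: activation driven toward the excitation (approximated by z)
        a_pred = phi * a_est + (dt / tau) * z
        p = phi * p * phi + q
        # update: EMG envelope as a direct, noisy measurement of activation
        k = p / (p + r)
        a_est = a_pred + k * (z - a_pred)
        p = (1 - k) * p
        out[i] = a_est
    return out

rng = np.random.default_rng(5)
t = np.arange(0, 2, 0.001)
true_a = 0.5 * (1 - np.exp(-t / 0.2))         # ramping isometric contraction
emg_env = np.clip(true_a + 0.05 * rng.standard_normal(t.size), 0, 1)
print("final estimate:", kf_activation(emg_env)[-1])

Running the update only every few samples would mimic the paper's actualization interval, trading filtering lag against computational cost.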
Affiliation(s)
- Luciano L Menegaldo
- Biomedical Engineering Program, Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering (PEB/COPPE), Federal University of Rio de Janeiro, Av. Horacio Macedo 2030, Bloco H-338, Rio de Janeiro, 21941-914, Brazil.
11. Wright J, Macefield VG, van Schaik A, Tapson JC. A Review of Control Strategies in Closed-Loop Neuroprosthetic Systems. Front Neurosci 2016;10:312. PMID: 27462202. PMCID: PMC4940409. DOI: 10.3389/fnins.2016.00312.
Abstract
It has been widely recognized that closed-loop neuroprosthetic systems achieve more favorable outcomes for users than equivalent open-loop devices. Improved task performance, better usability, and greater embodiment have all been reported in systems utilizing some form of feedback. However, interdisciplinary work on neuroprosthetic systems can lead to miscommunication due to similarities in well-established nomenclature across fields. Here we present a review of control strategies in existing experimental, investigational and clinical neuroprosthetic systems in order to establish a baseline and promote a common understanding of different feedback modes and closed-loop controllers. The first section provides a brief discussion of feedback control and control theory. The second section reviews the control strategies of recent brain-machine interfaces, neuromodulatory implants, neuroprosthetic systems, and assistive neurorobotic devices. The final section examines the different approaches to feedback in current neuroprosthetic and neurorobotic systems.
Affiliation(s)
- James Wright
- Biomedical Engineering and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Vaughan G Macefield
- Biomedical Engineering and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- School of Medicine, University of Western Sydney, Sydney, NSW, Australia
- Neuroscience Research Australia, Sydney, NSW, Australia
- André van Schaik
- Biomedical Engineering and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson
- Biomedical Engineering and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
12. Novak D, Riener R. Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection. IEEE Int Conf Rehabil Robot 2013:6650507. PMID: 24187322. DOI: 10.1109/icorr.2013.6650507.
Abstract
Several design strategies in rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario was developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support from a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of under a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot and found to be technically feasible and usable. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.
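A dwell-time trigger is one simple way to realize such gaze-based intention detection; the sketch below, with its illustrative object regions and threshold, is an assumption rather than the paper's algorithm.

def detect_intent(gaze_stream, objects, dt=0.02, dwell_s=0.8):
    """objects: {name: (xmin, ymin, xmax, ymax)}; gaze_stream: (x, y) samples."""
    timer = {name: 0.0 for name in objects}
    for gx, gy in gaze_stream:
        for name, (x0, y0, x1, y1) in objects.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                timer[name] += dt
                if timer[name] >= dwell_s:
                    return name            # intended action detected
            else:
                timer[name] = 0.0          # dwell must be continuous
    return None

kitchen = {"cup": (0, 0, 10, 10), "tap": (20, 0, 30, 10)}
gaze = [(5, 5)] * 50                       # 1 s of fixation on the cup
print(detect_intent(gaze, kitchen))        # -> 'cup'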
13. Markovic M, Dosen S, Cipriani C, Popovic D, Farina D. Stereovision and augmented reality for closed-loop control of grasping in hand prostheses. J Neural Eng 2014;11:046001. PMID: 24891493. DOI: 10.1088/1741-2560/11/4/046001.
Abstract
OBJECTIVE Technologically advanced assistive devices are now available to restore grasping, but effective and effortless control integrating both feed-forward (commands) and feedback (sensory information) is still missing. The goal of this work was to develop a user-friendly interface for semi-automatic, closed-loop control of grasping and to test its feasibility. APPROACH We developed a controller based on stereovision, to automatically select grasp type and size, and augmented reality (AR), to provide artificial proprioceptive feedback. The system was tested experimentally in healthy subjects using a dexterous hand prosthesis to grasp a set of daily objects. The subjects wore AR glasses with an integrated stereo-camera pair and triggered the system via a simple myoelectric interface. MAIN RESULTS The subjects became easily acquainted with the semi-autonomous control. The stereovision grasp decoder successfully estimated grasp type and size in realistic, cluttered environments. When allowed (forced) to correct the automatic system's decisions, the subjects successfully utilized the AR feedback and achieved close-to-ideal system performance. SIGNIFICANCE The new method implements high-level, low-effort control of complex functions in addition to low-level closed-loop control, the latter achieved by providing rich visual feedback integrated into the real-life environment. The proposed system is an effective interface applicable, with small alterations, to many advanced prosthetic, orthotic and therapeutic rehabilitation devices.
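The semi-automatic decision step can be sketched as a rule that maps stereovision estimates of an object's dimensions to a grasp type and hand aperture, which the user may then confirm or correct via the myoelectric trigger. The thresholds and grasp taxonomy below are illustrative assumptions, not the paper's decoder.

def select_grasp(width_mm, height_mm, depth_mm):
    size = max(width_mm, depth_mm)        # horizontal extent facing the hand
    if height_mm < 20 and size < 60:
        grasp = "pinch"                   # flat, small objects (e.g., a card)
    elif size < 40:
        grasp = "tripod"                  # small round objects
    elif height_mm > 1.5 * size:
        grasp = "cylindrical"             # tall objects (e.g., a bottle)
    else:
        grasp = "power"                   # large objects, whole-hand wrap
    aperture_mm = min(size + 20, 110)     # open slightly wider than the object
    return grasp, aperture_mm

print(select_grasp(70, 200, 70))          # -> ('cylindrical', 90)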
Affiliation(s)
- Marko Markovic
- Department of NeuroRehabilitation Engineering, Bernstein Focus Neurotechnology Göttingen, Bernstein Center for Computational Neuroscience, University Medical Center Göttingen, Georg-August University, D-37075 Göttingen, Germany
14. Corbett EA, Sachs NA, Körding KP, Perreault EJ. Multimodal decoding and congruent sensory information enhance reaching performance in subjects with cervical spinal cord injury. Front Neurosci 2014;8:123. PMID: 24904265. PMCID: PMC4033069. DOI: 10.3389/fnins.2014.00123.
Abstract
Cervical spinal cord injury (SCI) paralyzes muscles of the hand and arm, making it difficult to perform activities of daily living. Restoring the ability to reach can dramatically improve quality of life for people with cervical SCI. Any reaching system requires a user interface to decode parameters of an intended reach, such as trajectory and target. A challenge in developing such decoders is that often few physiological signals related to the intended reach remain under voluntary control, especially in patients with high cervical injuries. Furthermore, the decoding problem changes when the user is controlling the motion of their own limb rather than an external device. The purpose of this study was to investigate the benefits of combining disparate signal sources to control reach in people with a range of impairments, and to consider the effect of two feedback approaches. Subjects with cervical SCI performed robot-assisted reaching, controlling trajectories with either shoulder electromyograms (EMGs) or EMGs combined with gaze. We then evaluated how reaching performance was influenced by task-related sensory feedback, testing the EMG-only decoder in two conditions. In the first, the robot moved the arm, providing congruent sensory feedback through the subjects' remaining sense of proprioception. In the second, the subjects moved the robot without the arm attached, as in applications that control external devices. The multimodal decoding algorithm worked well for all subjects, enabling them to perform straight, accurate reaches. The inclusion of gaze information, used to estimate target location, was especially important for the most impaired subjects. In the absence of gaze information, congruent sensory feedback improved performance. These results highlight the importance of proprioceptive feedback and suggest that multimodal decoders are likely to be most beneficial for highly impaired subjects and in tasks where such feedback is unavailable.
Affiliation(s)
- Elaine A. Corbett
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, VIC, Australia
- Nicholas A. Sachs
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Konrad P. Körding
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Eric J. Perreault
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
15. Corbett EA, Körding KP, Perreault EJ. Dealing with target uncertainty in a reaching control interface. PLoS One 2014;9:e86811. PMID: 24489788. PMCID: PMC3904937. DOI: 10.1371/journal.pone.0086811.
Abstract
Prosthetic devices need to be controlled by their users, typically via physiological signals. People tend to look at objects before reaching for them, and we have shown that combining eye movements with other continuous physiological signal sources enhances control. This approach suffers when subjects also look at non-targets, a problem we previously addressed with a probabilistic mixture over targets in which gaze information identifies target candidates. However, that approach would be ineffective if a user wanted to move toward a target that had not been foveated. Here we evaluated how the accuracy of prior target information influenced decoding accuracy as the availability of neural control signals was varied. We also considered a mixture model that allows for the possibility that the target either has or has not been foveated. We tested the accuracy of the models at decoding natural reaching data, and also in a closed-loop robot-assisted reaching task. The mixture model worked well in the face of high target uncertainty. Furthermore, errors due to inaccurate target information were reduced by including a generic model that relied on neural signals only.
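A minimal sketch of the mixture idea: gaze-identified candidate targets receive prior weight, while a generic component driven by the neural signals alone keeps non-foveated targets reachable. The weights and the simple convex combination below are assumptions, not the paper's estimator.

import numpy as np

def mixture_estimate(neural_xy, gaze_targets, gaze_weight=0.8):
    """neural_xy: end-point decoded from neural/EMG signals only.
    gaze_targets: list of (xy, dwell_fraction) candidates from eye tracking."""
    est = (1 - gaze_weight) * np.asarray(neural_xy)   # generic, gaze-free component
    total_dwell = sum(d for _, d in gaze_targets)
    for xy, dwell in gaze_targets:
        resp = dwell / total_dwell                    # responsibility of this target
        est = est + gaze_weight * resp * np.asarray(xy)
    return est

neural = (22.0, 30.0)                                 # noisy EMG-only decode
targets = [((20.0, 32.0), 0.7), ((40.0, 10.0), 0.3)]  # foveated candidates
print(mixture_estimate(neural, targets))

Setting gaze_weight to zero recovers the neural-signal-only model, which is how the mixture degrades gracefully when the prior target information is inaccurate.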
Affiliation(s)
- Elaine A. Corbett
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Konrad P. Körding
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Department of Physiology, Northwestern University, Chicago, Illinois, United States of America
- Eric J. Perreault
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Chicago, Illinois, United States of America