Abstract
BACKGROUND
Prediction of movement intentions from electromyographic (EMG) signals is typically performed with a pattern-recognition approach, wherein a short frame of raw EMG is compressed into an instantaneous feature encoding that is meaningful for classification. However, EMG signals are time-varying, so a frame-wise approach may not incorporate sufficient temporal context into its predictions, leading to erratic and unstable prediction behavior.
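To make the frame-wise approach concrete, the sketch below compresses one EMG frame into an instantaneous feature vector using the classic time-domain feature set (mean absolute value, root mean square, waveform length, zero crossings). This is an illustrative assumption, not necessarily the encoding used in the study, and `frame_features` is a hypothetical helper name.

```python
import numpy as np

def frame_features(frame):
    """Compress one EMG frame of shape (n_samples, n_channels) into an
    instantaneous feature vector (illustrative time-domain feature set:
    mean absolute value, RMS, waveform length, zero-crossing count)."""
    mav = np.mean(np.abs(frame), axis=0)                      # mean absolute value
    rms = np.sqrt(np.mean(frame ** 2, axis=0))                # root mean square
    wl = np.sum(np.abs(np.diff(frame, axis=0)), axis=0)       # waveform length
    sign = np.signbit(frame).astype(int)
    zc = np.sum(np.abs(np.diff(sign, axis=0)), axis=0)        # zero crossings
    return np.concatenate([mav, rms, wl, zc.astype(float)])

# A 200-sample frame over 8 channels collapses to a 32-dimensional encoding,
# discarding the within-frame temporal ordering.
rng = np.random.default_rng(0)
frame = rng.standard_normal((200, 8))
print(frame_features(frame).shape)  # (32,)
```

The point of the sketch is that the encoding is instantaneous: each frame is summarized independently, so any context beyond the frame boundary is lost.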
OBJECTIVE
We demonstrate that sequential prediction models, and specifically temporal convolutional networks, can leverage useful temporal information from EMG to achieve superior predictive performance.
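The core mechanism of a temporal convolutional network is a stack of causal, dilated 1-D convolutions, so each output depends only on past samples while the receptive field grows exponentially with depth. The minimal NumPy sketch below illustrates that mechanism only; it is not the paper's architecture, and all shapes and weights are placeholder assumptions.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal dilated 1-D convolution.
    x: (T, C_in) input sequence; w: (K, C_in, C_out) kernel.
    Output at time t depends only on x[t], x[t-d], ..., x[t-(K-1)*d]."""
    K, T = w.shape[0], x.shape[0]
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)  # left-pad: causality
    out = np.zeros((T, w.shape[2]))
    for k in range(K):
        out += xp[k * dilation : k * dilation + T] @ w[k]
    return out

# Stack three layers with exponentially increasing dilation (1, 2, 4):
# with kernel size 3 this gives a receptive field of 15 past samples.
rng = np.random.default_rng(0)
T, C = 64, 8
x = rng.standard_normal((T, C))
h = x
for d in (1, 2, 4):
    w = rng.standard_normal((3, h.shape[1], 16)) * 0.1
    h = np.maximum(causal_dilated_conv(h, w, d), 0)  # ReLU nonlinearity
print(h.shape)  # (64, 16)
```

Because the convolutions are strictly causal, the network can emit a prediction at every time step without peeking ahead, which is what allows stable, low-latency sequential decoding.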
METHODS
We compare this approach to other sequential and frame-wise models, predicting 3 simultaneous hand and wrist degrees of freedom from 2 amputee and 13 non-amputee human subjects in a minimally constrained experiment. We also compare these models on the publicly available Ninapro and CapgMyo amputee and non-amputee datasets.
RESULTS
Temporal convolutional networks yield predictions that are more accurate and stable than those of frame-wise models, especially during inter-class transitions, achieving an average response delay of 4.6 ms with a simpler feature encoding. Their performance can be further improved with adaptive reinforcement training.
SIGNIFICANCE
Sequential models that incorporate temporal information from EMG achieve superior movement prediction performance, and these models allow for novel types of interactive training.
CONCLUSIONS
Addressing EMG decoding as a sequential modeling problem will lead to enhancements in the reliability, responsiveness, and movement complexity available from prosthesis control systems.