1
Davarinia F, Maleki A. EMG and SSVEP-based bimodal estimation of elbow angle trajectory. Neuroscience 2024; 562:1-9. [PMID: 39454713] [DOI: 10.1016/j.neuroscience.2024.10.030]
Abstract
Detecting intentions and estimating movement trajectories in a human-machine interface (HMI) from electromyogram (EMG) signals is particularly challenging, especially for individuals with movement impairments. Supplementing the EMG signal with information from other biological sources, such as discrete information about the movement, can therefore be practical. This study combined EMG and target information to enhance estimation performance during reaching movements. EMG activity of the shoulder and arm muscles, the elbow angle, and the electroencephalogram signals of ten healthy subjects were recorded while they reached toward blinking targets. The reaching target was recognized from the steady-state visual evoked potential (SSVEP). The selected target's final angle and the EMG were then mapped to the elbow angle trajectory. The proposed bimodal structure, which integrates EMG and final elbow angle information, outperformed the EMG-based decoder, and it maintained this advantage even under conditions of higher fatigue. Including information about the recognized reaching target in the trajectory model improved estimation of the reaching profile. These findings suggest that bimodal decoders are highly beneficial for enhancing assistive robotic devices and prostheses, especially for real-time upper limb rehabilitation.
Affiliation(s)
- Ali Maleki
- Biomedical Engineering Department, Semnan University, Semnan, Iran.
2
Gao G, Zhang X, Chen X, Chen Z. Mitigating the Concurrent Interference of Electrode Shift and Loosening in Myoelectric Pattern Recognition Using Siamese Autoencoder Network. IEEE Trans Neural Syst Rehabil Eng 2024; 32:3388-3398. [PMID: 39196739] [DOI: 10.1109/tnsre.2024.3450854]
Abstract
The objective of this work is to develop a novel myoelectric pattern recognition (MPR) method that mitigates the concurrent interference of electrode shift and loosening, thereby improving the practicality of MPR-based gestural interfaces for intelligent control. A Siamese autoencoder network (SAEN) was established to learn feature representations robust to random occurrences of both electrode shift and loosening. The SAEN model was trained with a variety of shifted-view and masked-view feature maps, which were simulated through feature transformations applied to the original feature maps. Specifically, three mean square error (MSE) losses were devised to ensure the trained model's capability to adaptively recover any given interfered data. The SAEN was deployed as an independent feature extractor followed by a common support vector machine acting as the classifier. To evaluate the effectiveness of the proposed method, an eight-channel armband was adopted to collect surface electromyography (EMG) signals from nine subjects performing six gestures. Under the condition of concurrent interference, the proposed method achieved the highest classification accuracy in both offline and online testing compared to five common methods, with statistical significance (p < 0.05). The proposed method was demonstrated to be effective in mitigating electrode shift and loosening interference. Our work offers a valuable solution for enhancing the robustness of myoelectric control systems.
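The shifted-view and masked-view training data described above can be simulated with simple array operations. A minimal sketch (the function names are my own, not from the paper): circularly rotating the channel axis of an 8-channel feature map mimics electrode shift around the armband, and zeroing channels mimics loosening.

```python
import numpy as np

def shifted_view(feat, offset):
    """Mimic electrode shift: circularly rotate the channel axis of a
    (channels, features) map, as armband electrodes wrap around the limb."""
    return np.roll(feat, offset, axis=0)

def masked_view(feat, loose):
    """Mimic electrode loosening: zero out channels that lost skin contact."""
    out = feat.copy()
    out[list(loose)] = 0.0
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4))       # 8-channel armband, 4 features/channel
shifted = shifted_view(feat, 1)          # shifted by one electrode position
masked = masked_view(feat, {2, 5})       # channels 2 and 5 loosened
```

Training the autoencoder to map such interfered views back to the clean `feat` is what gives the learned features their robustness.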
3
Tigrini A, Mobarak R, Mengarelli A, Khushaba RN, Al-Timemy AH, Verdini F, Gambi E, Fioretti S, Burattini L. Phasor-Based Myoelectric Synergy Features: A Fast Hand-Crafted Feature Extraction Scheme for Boosting Performance in Gait Phase Recognition. Sensors (Basel) 2024; 24:5828. [PMID: 39275739] [PMCID: PMC11397962] [DOI: 10.3390/s24175828]
Abstract
Gait phase recognition systems based on surface electromyographic signals (EMGs) are crucial for developing advanced myoelectric control schemes that enhance the interaction between humans and lower limb assistive devices. However, machine learning models used in this context, such as Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), typically experience performance degradation when modeling the gait cycle with more than just stance and swing phases. This study introduces a generalized phasor-based feature extraction approach (PHASOR) that captures spatial myoelectric features to improve the performance of LDA and SVM in gait phase recognition. A publicly available dataset of 40 subjects was used to evaluate PHASOR against state-of-the-art feature sets in a five-phase gait recognition problem. Additionally, fully data-driven deep learning architectures, such as Rocket and Mini-Rocket, were included for comparison. The separability index (SI) and mean semi-principal axis (MSA) analyses showed mean SI and MSA metrics of 7.7 and 0.5, respectively, indicating the proposed approach's ability to effectively decode gait phases through EMG activity. The SVM classifier demonstrated the highest accuracy of 82% using a five-fold leave-one-trial-out testing approach, outperforming Rocket and Mini-Rocket. This study confirms that in gait phase recognition based on EMG signals, novel and efficient muscle synergy information feature extraction schemes, such as PHASOR, can compete with deep learning approaches that require greater processing time for feature extraction and classification.
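The phasor idea can be illustrated with a minimal sketch (my own simplification, not the paper's exact PHASOR pipeline): each channel's RMS amplitude is placed at an equally spaced angle on the unit circle, and the complex resultant summarizes the spatial distribution of muscle activity.

```python
import numpy as np

def phasor_feature(window):
    """window: (samples, channels) sEMG segment.
    Returns magnitude and angle of the complex resultant obtained by
    placing each channel's RMS amplitude at an equally spaced angle."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))      # per-channel amplitude
    n_ch = window.shape[1]
    angles = 2 * np.pi * np.arange(n_ch) / n_ch
    resultant = np.sum(rms * np.exp(1j * angles))
    return np.abs(resultant), np.angle(resultant)

rng = np.random.default_rng(1)
mag, ang = phasor_feature(rng.standard_normal((200, 8)))
```

Perfectly balanced activity across all channels cancels, so the resultant highlights asymmetric activation patterns, which is what makes it informative for phase discrimination.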
Affiliation(s)
- Andrea Tigrini
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Rami Mobarak
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Alessandro Mengarelli
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Rami N Khushaba
- Transport for NSW Alexandria, Haymarket, NSW 2008, Australia
- Ali H Al-Timemy
- Biomedical Engineering Department, Al-Khwarizmi College of Engineering, University of Baghdad, Baghdad 10066, Iraq
- Federica Verdini
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Ennio Gambi
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Sandro Fioretti
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
- Laura Burattini
- Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
4
Lykourinas A, Rottenberg X, Catthoor F, Skodras A. Unsupervised Domain Adaptation for Inter-Session Re-Calibration of Ultrasound-Based HMIs. Sensors (Basel) 2024; 24:5043. [PMID: 39124090] [PMCID: PMC11314926] [DOI: 10.3390/s24155043]
Abstract
Human-Machine Interfaces (HMIs) have gained popularity because they allow effortless, natural interaction between the user and the machine by processing information gathered from one or more sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration with newly acquired data, because they must adapt to dynamic environments where test-time data continuously change in unforeseen ways; this need significantly contributes to their abandonment and remains unexplored by the Ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable with the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy enhancement compared to the no-re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, observed enhancements may be small or even unnoticeable.
Affiliation(s)
- Antonios Lykourinas
- Department of Electrical and Computer Engineering, University of Patras, 26504 Patras, Greece
- Imec, 3001 Leuven, Belgium
- Athanassios Skodras
- Department of Electrical and Computer Engineering, University of Patras, 26504 Patras, Greece
5
Fan J, Hu X. Towards Efficient Neural Decoder for Dexterous Finger Force Predictions. IEEE Trans Biomed Eng 2024; 71:1831-1840. [PMID: 38215325] [DOI: 10.1109/tbme.2024.3353145]
Abstract
OBJECTIVE: Dexterous control of robot hands requires a robust neural-machine interface capable of accurately decoding multiple finger movements. Existing studies primarily focus on single-finger movement or rely heavily on multi-finger data for decoder training, which requires large datasets and high computation demand. In this study, we investigated the feasibility of using limited single-finger surface electromyogram (sEMG) data to train a neural decoder capable of predicting the forces of unseen multi-finger combinations. METHODS: We developed a deep forest-based neural decoder to concurrently predict the extension and flexion forces of three fingers (index, middle, and ring-pinky). We trained the model using varying amounts of high-density EMG data in a limited condition (i.e., single-finger data). RESULTS: We showed that the deep forest decoder could achieve consistently commendable performance, with force prediction errors of 7.0% and an R2 value of 0.874, significantly surpassing the conventional EMG amplitude method and a convolutional neural network approach. However, the deep forest decoder's accuracy degraded when a smaller amount of data was used for training and when the testing data became noisy. CONCLUSION: The deep forest decoder shows accurate performance in multi-finger force prediction tasks. The efficiency of the deep forest lies in its short training time and small volume of training data, two critical factors in current neural decoding applications. SIGNIFICANCE: This study offers insights into efficient and accurate neural decoder training for advanced robotic hand control, which has potential for real-life applications during human-machine interactions.
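The "conventional EMG amplitude method" used here as a baseline can be sketched as a moving average of rectified EMG scaled linearly to force; the gain and window length below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def amplitude_force(emg, gain, win=50):
    """Baseline force estimate: moving average of rectified EMG, scaled
    by a gain that would be calibrated from single-finger trials."""
    kernel = np.ones(win) / win
    amplitude = np.convolve(np.abs(emg), kernel, mode="same")
    return gain * amplitude

rng = np.random.default_rng(2)
emg = 0.2 * rng.standard_normal(1000)     # dummy single-channel sEMG
force = amplitude_force(emg, gain=5.0)
```

The deep forest decoder replaces this fixed linear scaling with an ensemble learned from high-density EMG, which is what allows it to generalize to unseen finger combinations.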
6
Li W, Zhang X, Shi P, Li S, Li P, Yu H. Across Sessions and Subjects Domain Adaptation for Building Robust Myoelectric Interface. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2005-2015. [PMID: 38147425] [DOI: 10.1109/tnsre.2023.3347540]
Abstract
Gesture interaction via surface electromyography (sEMG) signals is a promising approach for advanced human-computer interaction systems. However, improving the performance of the myoelectric interface is challenging due to the domain shift caused by the signal's inherent variability. To enhance the interface's robustness, we propose a novel adaptive information fusion neural network (AIFNN) framework, which effectively reduces the effects of domain shift across multiple scenarios. Specifically, domain adversarial training is established to inhibit the shared network's weights from exploiting domain-specific representations, thus allowing the extraction of domain-invariant features. Classification loss, domain divergence loss and domain discrimination loss are employed, which improve classification performance while reducing distribution mismatches between the two domains. To simulate the application of a myoelectric interface, experiments were carried out involving three scenarios (intra-session, inter-session and inter-subject). Ten non-disabled subjects were recruited to perform sixteen gestures for ten consecutive days. The experimental results indicated that the performance of AIFNN was better than two other state-of-the-art transfer learning approaches, namely fine-tuning (FT) and the domain adversarial network (DANN). This study demonstrates the capability of AIFNN to maintain robustness over time and generalize across users in practical myoelectric interface implementations. These findings could serve as a foundation for future deployments.
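The three-term objective can be illustrated numerically. In the sketch below the loss weights and the simple mean-discrepancy divergence are my own stand-ins for the paper's exact terms, and the adversarial gradient-reversal mechanics are omitted; it only shows how the forward losses combine.

```python
import numpy as np

def softmax_xent(logits, labels):
    """Cross-entropy over logits, averaged over the batch."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def divergence(fs, ft):
    """Stand-in domain divergence: squared distance of feature means."""
    return float(np.sum((fs.mean(axis=0) - ft.mean(axis=0)) ** 2))

def aifnn_style_loss(cls_logits, y, fs, ft, dom_logits, d, a=1.0, b=1.0):
    """Classification + domain divergence + domain discrimination."""
    return softmax_xent(cls_logits, y) + a * divergence(fs, ft) \
        + b * softmax_xent(dom_logits, d)

rng = np.random.default_rng(3)
fs = rng.standard_normal((16, 8))          # source-domain features
ft = fs + 0.5                              # uniformly shifted target features
loss = aifnn_style_loss(np.zeros((4, 3)), np.array([0, 1, 2, 0]),
                        fs, ft, np.zeros((4, 2)), np.array([0, 1, 0, 1]))
```

In actual adversarial training the discrimination term is maximized with respect to the shared feature extractor, which is what drives the features toward domain invariance.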
7
Igual C, Igual J. Simultaneous Three-Degrees-of-Freedom Prosthetic Control Based on Linear Regression and Closed-Loop Training Protocol. Sensors (Basel) 2024; 24:3101. [PMID: 38793955] [PMCID: PMC11124855] [DOI: 10.3390/s24103101]
Abstract
Machine learning-based controllers of prostheses using electromyographic signals have become very popular in the last decade. The regression approach allows simultaneous and proportional control of the intended movement in a more natural way than the classification approach, where the number of movements is discrete by definition. However, it is not common to find regression-based controllers working for more than two degrees of freedom at the same time. In this paper, we present the application of an adaptive linear regressor in a relatively low-dimensional feature space, with only eight sensors, to the problem of simultaneous and proportional control of three degrees of freedom (left-right, up-down and open-close hand movements). We show that a key element usually overlooked in the learning process of the regressor is the training paradigm. We propose a closed-loop procedure in which the human learns how to improve the quality of the generated EMG signals, which also helps to obtain a better controller. We applied it to 10 healthy and 3 limb-deficient subjects. Results show that the combination of the multidimensional targets and the closed-loop training protocol significantly improves performance, increasing the average completion rate from 53% to 65% for the most complicated case of simultaneously controlling all three degrees of freedom.
Affiliation(s)
- Jorge Igual
- Instituto de Telecomunicaciones y Aplicaciones Multimedia (ITEAM), Universitat Politècnica de València, 46022 Valencia, Spain
8
Liu Y, Peng X, Tan Y, Oyemakinde TT, Wang M, Li G, Li X. A novel unsupervised dynamic feature domain adaptation strategy for cross-individual myoelectric gesture recognition. J Neural Eng 2024; 20:066044. [PMID: 38134446] [DOI: 10.1088/1741-2552/ad184f]
Abstract
Objective. Surface electromyography pattern recognition (sEMG-PR) is considered a promising control method for human-machine interaction systems. However, the performance of a trained classifier greatly degrades for novel users, since sEMG signals are user-dependent and largely affected by individual factors such as the quantity of subcutaneous fat and the skin impedance. Approach. To solve this issue, we propose a novel unsupervised cross-individual motion recognition method that aligns sEMG features from different individuals by self-adaptive dimensional dynamic distribution adaptation (SD-DDA). In the method, the distances of both marginal and conditional distributions between source and target features are minimized by automatically selecting the optimal feature domain dimension using a small amount of unlabeled target data. Main results. The effectiveness of the proposed method was tested on four different feature sets, and results showed that the average classification accuracy improved by more than 10% on our collected dataset, with the best accuracy reaching 90.4%. Compared to six classic transfer learning methods, the proposed method showed an outstanding performance, with improvements of 3.2%-13.8%. Additionally, the proposed method achieved an approximately 9% improvement on a publicly available dataset. Significance. These results suggest that the proposed SD-DDA method is feasible for cross-individual motion intention recognition and would aid the application of sEMG-PR based systems.
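The marginal-distribution part of such an alignment can be sketched with a CORAL-style transform (a stand-in for illustration: SD-DDA additionally matches conditional distributions and selects the feature-domain dimension, which this sketch omits).

```python
import numpy as np

def align_marginal(Xs, Xt, eps=1e-5):
    """CORAL-style marginal alignment: whiten target features with their own
    covariance, re-colour with the source covariance, and shift to the source
    mean, so a source-trained classifier can be applied to the new user."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):                 # SPD matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * w ** p) @ V.T

    Xt_c = Xt - Xt.mean(axis=0)
    return Xt_c @ mat_pow(Ct, -0.5) @ mat_pow(Cs, 0.5) + Xs.mean(axis=0)

rng = np.random.default_rng(5)
Xs = rng.standard_normal((400, 4)) @ rng.standard_normal((4, 4)) + 1.0
Xt = 2.0 * rng.standard_normal((400, 4)) - 3.0     # different user statistics
Xa = align_marginal(Xs, Xt)
```

After the transform, the new user's feature cloud has (approximately) the source mean and covariance, which is exactly the unlabeled-data re-calibration setting the paper targets.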
Affiliation(s)
- Yan Liu
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xinhao Peng
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Yingxiao Tan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Tolulope Tofunmi Oyemakinde
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Mengtao Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Guanglin Li
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xiangxin Li
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Research Center for Neural Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
9
Yu G, Deng Z, Bao Z, Zhang Y, He B. Gesture Classification in Electromyography Signals for Real-Time Prosthetic Hand Control Using a Convolutional Neural Network-Enhanced Channel Attention Model. Bioengineering (Basel) 2023; 10:1324. [PMID: 38002448] [PMCID: PMC10669079] [DOI: 10.3390/bioengineering10111324]
Abstract
Accurate and real-time gesture recognition is required for the autonomous operation of prosthetic hand devices. This study employs a convolutional neural network-enhanced channel attention (CNN-ECA) model to provide a unique approach to surface electromyography (sEMG) gesture recognition. The introduction of the ECA module improves the model's capacity to extract features and focus on critical information in the sEMG data, equipping sEMG-controlled prosthetic hand systems with both accurate gesture detection and real-time control. Furthermore, we suggest a preprocessing strategy for extracting envelope signals that incorporates Butterworth low-pass filtering and the fast Hilbert transform (FHT), which can successfully reduce noise interference and capture essential physiological information. Finally, the majority voting window technique is adopted to smooth the prediction results, further improving the accuracy and stability of the model. Overall, our multi-layered convolutional neural network model, in conjunction with envelope signal extraction and attention mechanisms, offers a promising and innovative approach for real-time control systems in prosthetic hands, allowing for precise fine motor actions.
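The envelope-plus-voting pipeline can be sketched in pure NumPy: an FFT-based Hilbert transform yields the amplitude envelope (the Butterworth stage is omitted here for brevity), and a sliding majority vote suppresses transient misclassifications in the label stream.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def majority_vote(preds, win=5):
    """Replace each gesture label by the majority over the last `win` labels."""
    out = []
    for i in range(len(preds)):
        seg = preds[max(0, i - win + 1):i + 1]
        vals, counts = np.unique(seg, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)

t = np.arange(512) / 512.0
env = envelope(np.sin(2 * np.pi * 50 * t))      # envelope of a pure tone is ~1
smoothed = majority_vote(np.array([0, 0, 1, 0, 0, 2, 2, 2, 2, 2]))
```

Note how the isolated spurious `1` in the label stream is voted away while the genuine transition to gesture `2` survives, just with a short delay.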
Affiliation(s)
- Guangjie Yu
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Ziting Deng
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Zhenchen Bao
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Yue Zhang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou 350108, China
- Bingwei He
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou 350108, China
10
Zabihi S, Rahimian E, Asif A, Mohammadi A. TraHGR: Transformer for Hand Gesture Recognition via Electromyography. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4211-4224. [PMID: 37831560] [DOI: 10.1109/tnsre.2023.3324252]
Abstract
Deep learning-based Hand Gesture Recognition (HGR) via surface Electromyogram (sEMG) signals has recently shown considerable potential for the development of advanced myoelectric-controlled prostheses. Although deep learning techniques can improve HGR accuracy compared to their classical counterparts, classifying hand movements based on sparse multichannel sEMG signals is still a challenging task. Furthermore, existing deep learning approaches typically include only one model and as such can hardly extract representative features. In this paper, we aim to address this challenge by capitalizing on recent advances in hybrid models and transformers. In other words, we propose a hybrid framework based on the transformer architecture, a relatively new and revolutionary deep learning model. The proposed hybrid architecture, referred to as the Transformer for Hand Gesture Recognition (TraHGR), consists of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantages of each module. We evaluated the proposed TraHGR architecture on the commonly used second Ninapro dataset, referred to as DB2. The sEMG signals in the DB2 dataset are measured in real-life conditions from 40 healthy users, each performing 49 gestures. We have conducted an extensive set of experiments to test and validate the proposed TraHGR architecture and compare its achievable accuracy with several recently proposed HGR classification algorithms over the same dataset. We have also compared the results of the proposed TraHGR architecture with each individual path and demonstrated the distinguishing power of the proposed hybrid architecture. The recognition accuracies of the proposed TraHGR architecture for a window of size 200 ms and step size of 100 ms are 86.00%, 88.72%, 81.27%, and 93.74%, which are 2.30%, 4.93%, 8.65%, and 4.20% higher than the state-of-the-art performance for DB2 (49 gestures), DB2-B (17 gestures), DB2-C (23 gestures), and DB2-D (9 gestures), respectively.
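The 200 ms window / 100 ms step segmentation behind these accuracies can be sketched as follows; the 2 kHz sampling rate is my assumption about DB2, not stated above.

```python
import numpy as np

def sliding_windows(emg, fs, win_ms=200, step_ms=100):
    """Segment a (samples, channels) sEMG recording into overlapping
    analysis windows of win_ms length advanced by step_ms increments."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    starts = range(0, emg.shape[0] - win + 1, step)
    return np.stack([emg[s:s + win] for s in starts])

fs = 2000                          # assumed DB2 sampling rate (Hz)
emg = np.zeros((2 * fs, 12))       # 2 s of dummy 12-channel sEMG
windows = sliding_windows(emg, fs)
```

Each window is classified independently, so the 50% overlap yields one gesture decision every 100 ms, which is what makes such windowing compatible with real-time myoelectric control.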
11
Abstract
Development and implementation of neuroprosthetic hands is a multidisciplinary field at the interface between humans and artificial robotic systems, which aims to restore the sensorimotor function of upper-limb amputees as if the device were their own hand. Although prosthetic hand devices with myoelectric control date back more than 70 years, their applications with anthropomorphic robotic mechanisms and sensory feedback functions are still at a relatively preliminary, laboratory stage. Nevertheless, a recent series of proof-of-concept studies suggest that soft robotics technology may be promising and useful in alleviating the design complexity of dexterous mechanisms and the integration difficulty of multifunctional artificial skins, in particular in the context of personalized applications. Here, we review the evolution of neuroprosthetic hands with emerging and cutting-edge soft robotics, covering soft and anthropomorphic prosthetic hand design and the related bidirectional neural interactions with myoelectric control and sensory feedback. We further discuss future opportunities in revolutionized mechanisms, high-performance soft sensors, and compliant neural-interaction interfaces for the next generation of neuroprosthetic hands.
Affiliation(s)
- Guoying Gu
- Robotics Institute, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Meta Robotics Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Ningbin Zhang
- Robotics Institute, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Chen Chen
- Robotics Institute, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Haipeng Xu
- Robotics Institute, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiangyang Zhu
- Robotics Institute, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Meta Robotics Institute, Shanghai Jiao Tong University, Shanghai 200240, China
12
Yang S, Garg NP, Gao R, Yuan M, Noronha B, Ang WT, Accoto D. Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots. Sensors (Basel) 2023; 23:2998. [PMID: 36991709] [PMCID: PMC10056111] [DOI: 10.3390/s23062998]
Abstract
The lack of intuitive and active human-robot interaction makes it difficult to use upper-limb-assistive devices. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyography (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted and fed into traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for low-level position controllers. The results show that using the IMU sensor with the proposed prediction model is sufficient for motion intention detection, providing almost the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. This study's detailed analysis can improve the usability of assistive/rehabilitation robots.
Affiliation(s)
- Sibo Yang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Neha P. Garg
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
| | - Ruobin Gao
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Meng Yuan
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
| | - Bernardo Noronha
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Wei Tech Ang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University, Singapore 308232, Singapore
| | - Dino Accoto
- Department of Mechanical Engineering, Robotics, Automation and Mechatronics Division, KU Leuven, 3590 Diepenbeek, Belgium
| |
Collapse
13
Jiang N, Chen C, He J, Meng J, Pan L, Su S, Zhu X. Bio-robotics research for non-invasive myoelectric neural interfaces for upper-limb prosthetic control: a 10-year perspective review. Natl Sci Rev 2023; 10:nwad048. [PMID: 37056442 PMCID: PMC10089583 DOI: 10.1093/nsr/nwad048] [Received: 08/31/2022] [Revised: 01/01/2023] [Accepted: 02/07/2023] [Indexed: 04/05/2023]
Abstract
A decade ago, a group of researchers from academia and industry identified a dichotomy between the industrial and academic state of the art in upper-limb prosthesis control, a widely used bio-robotics application. They proposed that four key technical challenges, if addressed, could bridge this gap and translate academic research into clinically and commercially viable products. These challenges are unintuitive control schemes, lack of sensory feedback, poor robustness, and single sensor modality. Here, we provide a perspective review of the research effort of the last decade aimed at addressing these challenges. In addition, we discuss three research areas that have been essential to recent developments in upper-limb prosthetic control but were not envisioned in the review 10 years ago: deep learning methods, surface electromyogram decomposition, and open-source databases. To conclude the review, we provide an outlook on the near future of research and development in upper-limb prosthetic control and beyond.
Affiliation(s)
- Chen Chen: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiayuan He: National Clinical Research Center for Geriatrics, West China Hospital, and Med-X Center for Manufacturing, Sichuan University, Chengdu 610041, China
- Jianjun Meng: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Lizhi Pan: Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shiyong Su: Institute of Neuroscience, Université Catholique de Louvain, Brussels B-1348, Belgium
- Xiangyang Zhu: State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
14
Fu J, Choudhury R, Hosseini SM, Simpson R, Park JH. Myoelectric Control Systems for Upper Limb Wearable Robotic Exoskeletons and Exosuits-A Systematic Review. Sensors (Basel) 2022; 22:8134. [PMID: 36365832 PMCID: PMC9655258 DOI: 10.3390/s22218134] [Received: 09/13/2022] [Revised: 10/13/2022] [Accepted: 10/21/2022] [Indexed: 06/16/2023]
Abstract
In recent years, myoelectric control systems have emerged for upper-limb wearable robotic exoskeletons to provide movement assistance, to restore motor functions in people with motor disabilities, and to augment human performance in able-bodied individuals. In myoelectric control, electromyographic (EMG) signals from muscles are utilized to implement control strategies in exoskeletons and exosuits, improving adaptability and human-robot interaction during various motion tasks. This paper reviews the state-of-the-art myoelectric control systems designed for upper-limb wearable robotic exoskeletons and exosuits, and highlights the key focus areas for future research. Different modalities of existing myoelectric control systems are described in detail, and their advantages and disadvantages are summarized. Furthermore, key design aspects (i.e., supported degrees of freedom, portability, and intended application scenario) and the types of experiments conducted to validate the efficacy of the proposed myoelectric controllers are discussed. Finally, the challenges and limitations of current myoelectric control systems are analyzed, and future research directions are suggested.
Affiliation(s)
- Jirui Fu: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Renoa Choudhury: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Saba M. Hosseini: Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Rylan Simpson: Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Joon-Hyuk Park: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
15
Zhu B, Zhang D, Chu Y, Gu Y, Zhao X. SeNic: An Open Source Dataset for sEMG-Based Gesture Recognition in Non-ideal Conditions. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1252-1260. [PMID: 35533170 DOI: 10.1109/tnsre.2022.3173708] [Indexed: 11/09/2022]
Abstract
To reduce the gap between the laboratory environment and daily-life use of human-machine interaction based on surface electromyogram (sEMG) intent recognition, this paper presents SeNic, a benchmark dataset of sEMG in non-ideal conditions. The dataset mainly consists of 8-channel sEMG signals, with electrode shifts controlled by a 3D-printed annular ruler. A total of 36 subjects participated in data acquisition experiments covering 7 gestures under non-ideal conditions, in which five non-ideal factors were elaborately involved: 1) electrode shifts, 2) individual differences, 3) muscle fatigue, 4) inter-day differences, and 5) arm postures. The sEMG signals were first validated in the time and frequency domains. Gesture recognition results in ideal conditions indicate the high quality of the dataset, and the adverse impacts of non-ideal conditions are further revealed in the signal amplitudes and recognition accuracies. In conclusion, SeNic is a benchmark dataset that introduces several non-ideal factors which often degrade the robustness of sEMG-based systems. It can serve as a freely available dataset and a common platform for researchers in the sEMG-based recognition community. The SeNic dataset is available online.