1. Gowda HT, Miller LM. Topology of surface electromyogram signals: hand gesture decoding on Riemannian manifolds. J Neural Eng 2024;21:036047. [PMID: 38806038] [DOI: 10.1088/1741-2552/ad5107]
Abstract
Objective. Decoding gestures from the upper limb using noninvasive surface electromyogram (sEMG) signals is of keen interest for the rehabilitation of amputees, artificial supernumerary limb augmentation, gestural control of computers, and virtual/augmented realities. We show that sEMG signals recorded across an array of sensor electrodes in multiple spatial locations around the forearm evince a rich geometric pattern of global motor unit (MU) activity that can be leveraged to distinguish different hand gestures. Approach. We demonstrate a simple technique to analyze spatial patterns of muscle MU activity within a temporal window and show that distinct gestures can be classified in both supervised and unsupervised manners. Specifically, we construct symmetric positive definite covariance matrices to represent the spatial distribution of MU activity in a time window of interest, calculated as pairwise covariance of electrical signals measured across different electrodes. Main results. This allows us to understand and manipulate multivariate sEMG time series on a more natural subspace: the Riemannian manifold. Furthermore, it directly addresses signal variability across individuals and sessions, which remains a major challenge in the field. sEMG signals measured at a single electrode lack contextual information such as how various anatomical and physiological factors influence the signals and how their combined effect alters the evident interaction among neighboring muscles. Significance. As we show here, analyzing spatial patterns using covariance matrices on Riemannian manifolds allows us to robustly model complex interactions across spatially distributed MUs and provides a flexible and transparent framework to quantify differences in sEMG signals across individuals. The proposed method is novel in the study of sEMG signals and its performance exceeds the current benchmarks while being computationally efficient.
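The covariance-on-manifold idea this abstract describes can be illustrated in a few lines. The sketch below is not the authors' implementation: the window shape, the regularization term, and the choice of the affine-invariant Riemannian metric are all assumptions for illustration.

```python
import numpy as np

def spd_covariance(window, eps=1e-6):
    """Covariance of a (channels x samples) sEMG window, regularized to be SPD."""
    c = np.cov(window)
    return c + eps * np.eye(c.shape[0])

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between two SPD matrices:
    the norm of the log of the generalized eigenvalues of (a, b)."""
    w = np.linalg.eigvals(np.linalg.solve(b, a))
    return np.sqrt(np.sum(np.log(w.real) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256))   # 8 electrodes, 256 samples per window
y = rng.standard_normal((8, 256))
ca, cb = spd_covariance(x), spd_covariance(y)
print(airm_distance(ca, cb), airm_distance(ca, ca))  # self-distance is numerically zero
```

One appeal of this metric is its affine invariance: a fixed linear mixing of the electrodes (as might vary across sessions) leaves pairwise distances unchanged, which matches the abstract's claim about cross-session robustness.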
Affiliation(s)
- Harshavardhana T Gowda
- Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, United States of America
- Lee M Miller
- Center for Mind and Brain; Department of Neurobiology, Physiology, and Behavior; Department of Otolaryngology-Head and Neck Surgery, University of California, Davis, CA 95616, United States of America
2. Chen X, Yang H, Zhang D, Hu X, Xie P. Hand Gesture Recognition Based on High-Density Myoelectricity in Forearm Flexors in Humans. Sensors (Basel) 2024;24:3970. [PMID: 38931754] [PMCID: PMC11207234] [DOI: 10.3390/s24123970]
Abstract
Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving the accuracy of gesture recognition by increasing the complexity of network models. However, training a complex model necessitates a significant amount of data, thereby escalating both user burden and computational costs. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across different users, conventional machine learning approaches reliant on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to solve the problems of large computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis and the Pearson correlation coefficient (MPP). This method filters out the optimal subset of features that matches a specific user and, combined with an SVM classifier, accurately and efficiently recognizes the user's gesture movements. To validate the effectiveness of the above method, we designed an experiment including five gesture actions. The experimental results show that, compared to the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected feature subset as the input to any of the classifiers. This study provides an effective basis for user-specific fine hand movement decoding based on sEMG signals.
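As a rough illustration of filter-style selection in the spirit of MPP: the sketch below scores each feature by a histogram mutual-information estimate plus |Pearson r| with the label and keeps the top k. The scoring and combination rules are assumptions for illustration (and the PCA step is omitted), not the paper's exact procedure.

```python
import numpy as np

def hist_mutual_info(x, y, bins=8):
    """Mutual information between a continuous feature x and integer labels y,
    estimated from a joint histogram."""
    edges = np.histogram_bin_edges(x, bins=bins)
    xb = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    joint = np.zeros((bins, y.max() + 1))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def select_features(X, y, k):
    """Rank features by MI with the label plus |Pearson r|; keep the top k."""
    scores = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        scores.append(hist_mutual_info(X[:, j], y) + abs(r))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.standard_normal((300, 10))
X[:, 3] += 2.0 * y                  # make feature 3 informative about the label
print(select_features(X, y, 3))     # feature 3 should appear in the selection
```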
Affiliation(s)
- Xiaoling Chen
- Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China; (X.C.); (H.Y.); (D.Z.); (X.H.)
- Key Laboratory of Measurement Technology and Instrumentation of Hebei Province, Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China
- Huaigang Yang
- Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China; (X.C.); (H.Y.); (D.Z.); (X.H.)
- Dong Zhang
- Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China; (X.C.); (H.Y.); (D.Z.); (X.H.)
- Xinfeng Hu
- Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China; (X.C.); (H.Y.); (D.Z.); (X.H.)
- Ping Xie
- Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China; (X.C.); (H.Y.); (D.Z.); (X.H.)
- Key Laboratory of Measurement Technology and Instrumentation of Hebei Province, Institute of Electric Engineering, Yanshan University, Qinhuangdao 066004, China
3. Lee H, Jiang M, Yang J, Yang Z, Zhao Q. Unveiling EMG semantics: a prototype-learning approach to generalizable gesture classification. J Neural Eng 2024;21:036031. [PMID: 38754410] [DOI: 10.1088/1741-2552/ad4c98]
Abstract
Objective. Upper limb loss can profoundly impact an individual's quality of life, posing challenges to both physical capabilities and emotional well-being. To restore limb function by decoding electromyography (EMG) signals, in this paper, we present a novel deep prototype learning method for accurate and generalizable EMG-based gesture classification. Existing methods suffer from limitations in generalization across subjects due to the diverse nature of individual muscle responses, impeding seamless applicability in broader populations. Approach. By leveraging deep prototype learning, we introduce a method that goes beyond direct output prediction. Instead, it matches new EMG inputs to a set of learned prototypes and predicts the corresponding labels. Main results. This novel methodology significantly enhances the model's classification performance and generalizability by discriminating subtle differences between gestures, making it more reliable and precise in real-world applications. Our experiments on four Ninapro datasets suggest that our deep prototype learning classifier outperforms state-of-the-art methods in terms of intra-subject and inter-subject classification accuracy in gesture prediction. Significance. The results from our experiments validate the effectiveness of the proposed method and pave the way for future advancements in the field of EMG gesture classification for upper limb prosthetics.
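The core of prototype learning, matching an input to the nearest learned class prototype rather than predicting a label directly, can be sketched with simple class-mean prototypes. This is a deliberate simplification: the paper learns deep prototypes in an embedding space, whereas the toy version below works in raw feature space.

```python
import numpy as np

def fit_prototypes(X, y):
    """One prototype per class: the mean feature vector of that class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, protos):
    """Assign each sample the label of its nearest prototype (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
y = rng.integers(0, 3, 300)
X = centers[y] + 0.5 * rng.standard_normal((300, 2))   # 3 well-separated "gestures"
classes, protos = fit_prototypes(X, y)
acc = (predict(X, classes, protos) == y).mean()
print(acc)
```

Because prediction is a distance to stored prototypes, adapting to a new subject can amount to re-estimating prototypes rather than retraining the whole model, which is one intuition behind the generalization claim.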
Affiliation(s)
- Hunmin Lee
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Ming Jiang
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Jinhui Yang
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Zhi Yang
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
4. Hellara H, Barioul R, Sahnoun S, Fakhfakh A, Kanoun O. Comparative Study of sEMG Feature Evaluation Methods Based on the Hand Gesture Classification Performance. Sensors (Basel) 2024;24:3638. [PMID: 38894429] [PMCID: PMC11175337] [DOI: 10.3390/s24113638]
Abstract
Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
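RFE itself is simple to sketch: repeatedly refit a model and drop the feature with the weakest weight until the target subset size is reached. The ranking model below (ordinary least squares) is an illustrative assumption; the study's wrapper model may differ.

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly drop the feature with the
    smallest absolute weight in a least-squares fit of the labels."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y.astype(float), rcond=None)
        active.pop(int(np.argmin(np.abs(w))))   # w is aligned with `active`
    return active

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 8))
y = (X[:, 1] + X[:, 6] > 0).astype(int)   # only features 1 and 6 carry signal
print(sorted(rfe(X, y, 2)))
```

Each elimination round refits on the surviving columns only, which is what makes RFE a wrapper method rather than a one-shot filter.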
Affiliation(s)
- Hiba Hellara
- Professorship for Measurements and Sensor Technology, Chemnitz University of Technology, Rechenhainer Straße 70, 09126 Chemnitz, Germany; (H.H.); (R.B.)
- Laboratory of Signals, Systems, Artificial Intelligence and Networks, Digital Research Centre of Sfax, National School of Electronics and Telecommunications of Sfax, University of Sfax, Technopole of Sfax, Sfax 3021, Tunisia; (S.S.); (A.F.)
- Rim Barioul
- Professorship for Measurements and Sensor Technology, Chemnitz University of Technology, Rechenhainer Straße 70, 09126 Chemnitz, Germany; (H.H.); (R.B.)
- Salwa Sahnoun
- Laboratory of Signals, Systems, Artificial Intelligence and Networks, Digital Research Centre of Sfax, National School of Electronics and Telecommunications of Sfax, University of Sfax, Technopole of Sfax, Sfax 3021, Tunisia; (S.S.); (A.F.)
- Ahmed Fakhfakh
- Laboratory of Signals, Systems, Artificial Intelligence and Networks, Digital Research Centre of Sfax, National School of Electronics and Telecommunications of Sfax, University of Sfax, Technopole of Sfax, Sfax 3021, Tunisia; (S.S.); (A.F.)
- Olfa Kanoun
- Professorship for Measurements and Sensor Technology, Chemnitz University of Technology, Rechenhainer Straße 70, 09126 Chemnitz, Germany; (H.H.); (R.B.)
5. Chen B, Chen Z, Chen X, Mao S, Pan F, Li L, Liu W, Min H, Ding X, Fang B, Sun F, Wen L. Teleoperation of an Anthropomorphic Robot Hand with a Metamorphic Palm and Tunable-Stiffness Soft Fingers. Soft Robot 2024;11:508-518. [PMID: 38386776] [DOI: 10.1089/soro.2023.0062]
Abstract
Teleoperation in soft robotics can endow soft robots with the ability to perform complex tasks through human-robot interaction. In this study, we propose a teleoperated anthropomorphic soft robot hand with variable degrees of freedom (DOFs) and a metamorphic palm. The soft robot hand consists of four pneumatic-actuated fingers, which can be heated to tune stiffness. A metamorphic mechanism was actuated by servo motors to morph the hand palm. The human fingers' DOF, gesture, and muscle stiffness were collected and mapped to the soft robotic hand through sensory feedback from surface electromyography devices worn by the operator. The results show that the proposed soft robot hand can generate a variety of anthropomorphic configurations and can be remotely controlled to perform complex tasks such as basic operation of a cell phone and placing building blocks. We also show that the soft hand can grasp a target through a slit by varying the DOFs and stiffness in a trial.
Affiliation(s)
- Bohan Chen
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Ziming Chen
- Department of Robotics and Intelligent Systems, Wuhan University of Science and Technology, Wuhan, China
- Xingyu Chen
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Sizhe Mao
- Sino-French Engineer School, Beihang University, Beijing, China
- Fei Pan
- Department of Aeronautic Science and Engineering, Beihang University, Beijing, China
- Lei Li
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Wenbo Liu
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Huasong Min
- Department of Robotics and Intelligent Systems, Wuhan University of Science and Technology, Wuhan, China
- Xilun Ding
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Bin Fang
- Department of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Fuchun Sun
- Department of Computer Science, Tsinghua University, Beijing, China
- Li Wen
- Department of Mechanical Engineering and Automation, Beihang University, Beijing, China
6. Li W, Zhang X, Shi P, Li S, Li P, Yu H. Across Sessions and Subjects Domain Adaptation for Building Robust Myoelectric Interface. IEEE Trans Neural Syst Rehabil Eng 2024;32:2005-2015. [PMID: 38147425] [DOI: 10.1109/tnsre.2023.3347540]
Abstract
Gesture interaction via surface electromyography (sEMG) signal is a promising approach for advanced human-computer interaction systems. However, improving the performance of the myoelectric interface is challenging due to the domain shift caused by the signal's inherent variability. To enhance the interface's robustness, we propose a novel adaptive information fusion neural network (AIFNN) framework, which could effectively reduce the effects of multiple scenarios. Specifically, domain adversarial training is established to inhibit the shared network's weights from exploiting domain-specific representation, thus allowing for the extraction of domain-invariant features. In particular, classification loss, domain divergence loss and domain discrimination loss are jointly employed, improving classification performance while reducing distribution mismatches between the two domains. To simulate the application of the myoelectric interface, experiments were carried out involving three scenarios (intra-session, inter-session and inter-subject scenarios). Ten non-disabled subjects were recruited to perform sixteen gestures for ten consecutive days. The experimental results indicated that the performance of AIFNN was better than two other state-of-the-art transfer learning approaches, namely fine-tuning (FT) and domain adversarial network (DANN). This study demonstrates the capability of AIFNN to maintain robustness over time and generalize across users in practical myoelectric interface implementations. These findings could serve as a foundation for future deployments.
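The three losses the abstract names can be written as a single domain-adversarial objective. The weights and the exact loss definitions below are assumptions for illustration, not taken from the paper:

```latex
% Sketch of a combined domain-adversarial objective: classification term,
% divergence penalty, and an adversarial domain-discrimination term.
% The lambda weights are illustrative assumptions.
\[
\mathcal{L}_{\mathrm{total}}
  = \mathcal{L}_{\mathrm{cls}}
  + \lambda_{\mathrm{div}}\,\mathcal{L}_{\mathrm{div}}
  - \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{dom}}
\]
```

The minus sign (equivalently, a gradient-reversal layer) drives the shared encoder toward features the domain discriminator cannot separate, which is the standard mechanism for extracting domain-invariant representations.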
7. Eddy E, Campbell E, Bateman S, Scheme E. Understanding the influence of confounding factors in myoelectric control for discrete gesture recognition. J Neural Eng 2024;21:036015. [PMID: 38722304] [DOI: 10.1088/1741-2552/ad4915]
Abstract
Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and a newly identified confound faced by discrete systems, (3) gesture elicitation speed. Results from four different discrete myoelectric control architectures ((1) majority-vote LDA, (2) dynamic time warping, (3) an LSTM network trained with cross entropy, and (4) an LSTM network trained with contrastive learning) show that classification accuracy is significantly degraded (p < 0.05) as a result of each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
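Of the four architectures tested, dynamic time warping is the easiest to make concrete, and it shows why elicitation speed is an interesting confound: DTW explicitly warps time so that a slow and a fast rendition of the same gesture can still be aligned. A minimal 1-D sketch (synthetic sequences, not the study's data):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 60)
slow = np.sin(t)
fast = np.sin(np.linspace(0, 2 * np.pi, 40))   # same "gesture", elicited faster
other = np.cos(t)                               # a different template
print(dtw_distance(slow, fast), dtw_distance(slow, other))
```

The warped distance between the slow and fast renditions stays small, while a genuinely different template remains far away, which is the property a template-matching discrete recognizer relies on.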
Affiliation(s)
- Ethan Eddy
- University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Evan Campbell
- University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Scott Bateman
- University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Erik Scheme
- University of New Brunswick, Fredericton, NB E3B 5A3, Canada
8. Zheng Y, Zheng G, Zhang H, Zhao B, Sun P. Mapping Method of Human Arm Motion Based on Surface Electromyography Signals. Sensors (Basel) 2024;24:2827. [PMID: 38732933] [PMCID: PMC11086324] [DOI: 10.3390/s24092827]
Abstract
This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. First, signals were acquired and processed: data were collected for various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) with appropriate sensor placement. Interference was then removed by filtering, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with salient features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction model, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
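The rectify / moving-average / normalize preprocessing chain the abstract describes can be sketched as follows. The window length and the synthetic contraction burst are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def envelope(emg, win=32):
    """Rectify then moving-average one sEMG channel to get a smooth envelope."""
    rect = np.abs(emg - emg.mean())        # remove offset, full-wave rectify
    kernel = np.ones(win) / win            # simple moving-average filter
    return np.convolve(rect, kernel, mode="same")

def normalize(x):
    """Scale to zero mean, unit variance before feeding the network."""
    return (x - x.mean()) / (x.std() + 1e-12)

rng = np.random.default_rng(4)
burst = np.concatenate([0.1 * rng.standard_normal(500),
                        1.0 * rng.standard_normal(500),   # active contraction
                        0.1 * rng.standard_normal(500)])
env = normalize(envelope(burst))
print(env[500:1000].mean() > env[:500].mean())  # envelope rises during the burst
```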
Affiliation(s)
- Yuanyuan Zheng
- School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
- Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Ministry of Education and Zhejiang Province, Zhejiang University of Technology, Hangzhou 310023, China
- Gang Zheng
- School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
- Hanqi Zhang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Bochen Zhao
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Peng Sun
- Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Ministry of Education and Zhejiang Province, Zhejiang University of Technology, Hangzhou 310023, China
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
9. Hu Z, Wang S, Ou C, Ge A, Li X. Study on Gesture Recognition Method with Two-Stream Residual Network Fusing sEMG Signals and Acceleration Signals. Sensors (Basel) 2024;24:2702. [PMID: 38732808] [PMCID: PMC11085498] [DOI: 10.3390/s24092702]
Abstract
Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
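The attention step after global average pooling reads like a squeeze-and-excitation-style channel gate: pool each channel to a scalar, pass through two small dense layers, and rescale channels by the resulting sigmoid weights. The sketch below follows that assumption with illustrative layer sizes and random weights; it is not the paper's exact module.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Channel gate: global average pool per channel, two dense layers,
    sigmoid weights in (0, 1) that strengthen or weaken each channel."""
    squeeze = feat.mean(axis=(0, 1))                  # GAP -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # bottleneck + ReLU -> (C,)
    return feat * gate, gate

C, r = 16, 4                                 # channels, bottleneck reduction
feat = rng.standard_normal((8, 8, C))        # H x W x C feature map
w1 = rng.standard_normal((C // r, C)) * 0.5
w2 = rng.standard_normal((C, C // r)) * 0.5
out, gate = channel_attention(feat, w1, w2)
print(out.shape)
```

In the two-stream model, a gate like this sits after global average pooling in each branch, letting the network emphasize informative sEMG or acceleration channels before the deep features are fused.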
Affiliation(s)
- Zhigang Hu
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China; (Z.H.); (C.O.); (A.G.)
- Shen Wang
- School of Mechanical and Electrical Engineering, Henan University of Science and Technology, Luoyang 471003, China
- Cuisi Ou
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China; (Z.H.); (C.O.); (A.G.)
- Aoru Ge
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China; (Z.H.); (C.O.); (A.G.)
- Xiangpan Li
- School of Mechanical and Electrical Engineering, Henan University of Science and Technology, Luoyang 471003, China
10. Qamar HGM, Qureshi MF, Mushtaq Z, Zubariah Z, Rehman MZU, Samee NA, Mahmoud NF, Gu YH, Al-Masni MA. EMG gesture signal analysis towards diagnosis of upper limb using dual-pathway convolutional neural network. Math Biosci Eng 2024;21:5712-5734. [PMID: 38872555] [DOI: 10.3934/mbe.2024252]
Abstract
This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance in Log-Mel spectrogram image analysis derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance extends to comparisons with transfer learning models for image classification, reaffirming its efficacy. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.
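A Log-Mel spectrogram of a raw sEMG channel can be computed with a plain STFT followed by a triangular mel filterbank. The frame length, hop, band count, and sample rate below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filters mapping an FFT magnitude spectrum to mel bands."""
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = mel2hz(np.linspace(0, hz2mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising edge of triangle
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling edge
    return fb

def log_mel(signal, sr=2000, n_fft=256, hop=64, n_mels=24):
    """Windowed STFT magnitude -> mel filterbank -> log; one row per frame."""
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))   # (T, n_fft//2 + 1)
    fb = mel_filterbank(n_mels, n_fft, sr)
    return np.log(mag @ fb.T + 1e-10)                      # (T, n_mels)

rng = np.random.default_rng(6)
spec = log_mel(rng.standard_normal(2048))
print(spec.shape)
```

Each multichannel recording then yields a stack of such images, which is what a CNN like the DP-CNN can consume in place of raw time series.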
Affiliation(s)
- Muhammad Farrukh Qureshi
- Department of Electrical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Zohaib Mushtaq
- Department of Electrical, Electronics and Computer Systems, College of Engineering and Technology, University of Sargodha, Sargodha 40100, Pakistan
- Zubariah Zubariah
- Department of Physiotherapy, Isfandyar Bukhari Civil Hospital, District Headquarter Hospital, Attock 43600, Pakistan
- Muhammad Zia Ur Rehman
- Department of Biomedical Engineering, Riphah International University, Islamabad 44000, Pakistan
- Department of Health Science and Technology, Aalborg University, Aalborg 9220, Denmark
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Noha F Mahmoud
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Yeong Hyeon Gu
- Department of Artificial Intelligence Data Science, College of Software & Convergence Technology, Sejong University, Seoul 05006, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence Data Science, College of Software & Convergence Technology, Sejong University, Seoul 05006, Republic of Korea
11. Xu T, Zhao K, Hu Y, Li L, Wang W, Wang F, Zhou Y, Li J. Transferable non-invasive modal fusion-transformer (NIMFT) for end-to-end hand gesture recognition. J Neural Eng 2024;21:026034. [PMID: 38565124] [DOI: 10.1088/1741-2552/ad39a5]
Abstract
Objective. Recent studies have shown that integrating inertial measurement unit (IMU) signals with surface electromyographic (sEMG) signals can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject model generalization. To address these challenges, this study aims to develop an end-to-end and inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data. Approach. The proposed non-invasive modal fusion-transformer (NIMFT) model utilizes 1D-convolutional neural network-based patch embedding for local information extraction and employs a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The proposed architecture undergoes detailed ablation studies after hyperparameter tuning. Transfer learning is employed by fine-tuning a pre-trained model on new subjects, and a comparative analysis is performed between the fine-tuned and subject-specific models. Additionally, the performance of NIMFT is compared to state-of-the-art fusion models. Main results. The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. In comparison to subject-specific models, the fine-tuned model exhibited the highest average accuracy improvement of 2.26%, achieving a final accuracy of 96.13%. Moreover, the NIMFT model demonstrated superiority in terms of accuracy, recall, precision, and F1-score compared to the latest modal fusion models of similar model scale. Significance. The NIMFT is a novel end-to-end HGR model that utilizes a non-invasive MCA mechanism to integrate long-range intermodal information effectively. Compared to recent modal fusion models, it demonstrates superior performance in inter-subject experiments and offers higher training efficiency and accuracy levels through transfer learning than subject-specific approaches.
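The cross-attention fusion can be sketched single-headed: sEMG patch tokens act as queries over accelerometer keys/values, so each sEMG token is enriched with intermodal context without interleaving the raw signals. Token counts, dimensions, and weights below are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_src, kv_src, wq, wk, wv):
    """Single-head cross-attention: tokens of one modality (queries) attend
    to tokens of the other modality (keys/values)."""
    q, k, v = q_src @ wq, kv_src @ wk, kv_src @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (n_q, n_kv) weights
    return scores @ v                                  # one fused vector per query

rng = np.random.default_rng(7)
d = 32
emg_tokens = rng.standard_normal((10, d))   # 10 sEMG patch embeddings
acc_tokens = rng.standard_normal((6, d))    # 6 accelerometer patch embeddings
wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
fused = cross_attention(emg_tokens, acc_tokens, wq, wk, wv)
print(fused.shape)
```

A multi-head version would run several such maps in parallel on split dimensions and concatenate the results; the single head above is enough to show why the fusion is "non-invasive": each modality keeps its own token stream.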
Affiliation(s)
- Tianxiang Xu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Kunkun Zhao
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Yuxiang Hu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Liang Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Wei Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Fulin Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Nanjing PANDA Electronics Equipment Co., Ltd, Nanjing 210033, People's Republic of China
| | - Yuxuan Zhou
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| | - Jianqing Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
| |
12
Moslhi AM, Aly HH, ElMessiery M. The Impact of Feature Extraction on Classification Accuracy Examined by Employing a Signal Transformer to Classify Hand Gestures Using Surface Electromyography Signals. Sensors (Basel, Switzerland) 2024; 24:1259. [PMID: 38400416 PMCID: PMC10893156 DOI: 10.3390/s24041259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 02/01/2024] [Accepted: 02/09/2024] [Indexed: 02/25/2024]
Abstract
Interest in developing techniques for acquiring and decoding biological signals is on the rise in the research community. This interest spans various applications, with a particular focus on prosthetic control and rehabilitation, where achieving precise hand gesture recognition using surface electromyography signals is crucial due to the complexity and variability of surface electromyography data. Advanced signal processing and data analysis techniques are required to effectively extract meaningful information from these signals. In our study, we utilized three datasets: NinaPro Database 1, CapgMyo Database A, and CapgMyo Database B. These datasets were chosen for their open-source availability and established role in evaluating surface electromyography classifiers. Hand gesture recognition using surface electromyography signals draws inspiration from image classification algorithms, leading to the introduction and development of the Novel Signal Transformer. We systematically investigated two feature extraction techniques for surface electromyography signals: the Fast Fourier Transform and wavelet-based feature extraction. Our study demonstrated significant advancements in surface electromyography signal classification, particularly on NinaPro Database 1 and CapgMyo Database A, surpassing existing results in the literature. The newly introduced Signal Transformer outperformed traditional Convolutional Neural Networks by excelling in capturing structural details and incorporating global information from image-like signals through robust basis functions. Additionally, the inclusion of an attention mechanism within the Signal Transformer highlighted the significance of electrode readings, improving classification accuracy. These findings underscore the potential of the Signal Transformer as a powerful tool for precise and effective surface electromyography signal classification, promising applications in prosthetic control and rehabilitation.
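As a hedged sketch of the FFT-based feature-extraction step (the band edges and sampling rate below are arbitrary placeholders, not the paper's settings), per-channel band energies can be computed from the magnitude spectrum of a window:

```python
import numpy as np

def fft_band_features(window, fs, bands=((10, 50), (50, 150), (150, 450))):
    """Magnitude-spectrum energy in a few frequency bands for one
    sEMG channel window. Band edges here are illustrative placeholders."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])

fs = 1000                           # 1 kHz sampling rate (assumed)
t = np.arange(200) / fs             # a 200 ms window
sig = np.sin(2 * np.pi * 100 * t)   # 100 Hz test tone
feats = fft_band_features(sig, fs)
print(feats.argmax())  # 1 (the 50-150 Hz band dominates)
```

Stacking such band-energy vectors across electrodes yields the image-like input the abstract alludes to.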
Affiliation(s)
- Aly Medhat Moslhi, Hesham H. Aly: Faculty of Engineering, The Arab Academy for Science, Technology & Maritime Transport, Smart Village Campus, Giza P.O. Box 2033, Egypt
- Medhat ElMessiery: Faculty of Engineering, Cairo University, Giza P.O. Box 2033, Egypt
13
Emimal M, Hans WJ, Inbamalar TM, Lindsay NM. Classification of EMG signals with CNN features and voting ensemble classifier. Comput Methods Biomech Biomed Engin 2024:1-15. [PMID: 38317414 DOI: 10.1080/10255842.2024.2310726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Accepted: 01/20/2024] [Indexed: 02/07/2024]
Abstract
Electromyography (EMG) signals are primarily used to control prosthetic hands. Classifying hand gestures efficiently from EMG signals presents numerous challenges. In addition to overcoming these challenges, a successful combination of feature extraction and classification approaches will improve classification accuracy. In the current work, convolutional neural network (CNN) features are used to reduce the redundancy problems associated with time- and frequency-domain features and to improve classification accuracy. The features extracted from the EMG signal by a CNN are fed to k-nearest neighbor (KNN) classifiers with different numbers of neighbors (1NN, 3NN, 5NN, and 7NN). This results in an ensemble of classifiers that are combined using a hard-voting-based classifier. On the benchmark Ninapro DB4 and CapgMyo databases, the proposed framework obtained 91.3% classification accuracy on CapgMyo and 89.5% on Ninapro DB4.
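The voting scheme can be sketched with the standard library alone (the toy 2-D "features" below stand in for CNN feature vectors; none of this is the authors' code):

```python
from collections import Counter

def knn_predict(train, labels, x, k):
    """Plain k-nearest-neighbour vote using squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

def voting_ensemble(train, labels, x, ks=(1, 3, 5, 7)):
    """Hard (majority) vote over kNN classifiers with different k,
    mirroring the 1NN/3NN/5NN/7NN ensemble described above."""
    votes = [knn_predict(train, labels, x, k) for k in ks]
    return Counter(votes).most_common(1)[0][0]

# toy 2-class example standing in for CNN feature vectors
train = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
labels = ["rest", "rest", "rest", "grasp", "grasp", "grasp"]
print(voting_ensemble(train, labels, (0.95, 1.0)))  # grasp
```

Because each base classifier only differs in k, the ensemble is cheap: the sorted distance list could even be shared across all four votes.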
Affiliation(s)
- M Emimal, W Jino Hans: Department of ECE, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
- T M Inbamalar: Department of ECE, RMK College of Engineering and Technology, Chennai, Tamil Nadu, India
- N Mahiban Lindsay: Department of EEE, Hindustan Institute of Technology and Science, Chennai, Tamil Nadu, India
14
Emimal M, Hans WJ, Inbamalar TM, Lindsay NM. Multi-scale EMG classification with spatial-temporal attention for prosthetic hands. Comput Methods Biomech Biomed Engin 2023:1-16. [PMID: 38037332 DOI: 10.1080/10255842.2023.2287419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Accepted: 11/20/2023] [Indexed: 12/02/2023]
Abstract
A classification framework for hand gestures using Electromyography (EMG) signals in prosthetic hands is presented. Leveraging the multi-scale characteristics and temporal nature of EMG signals, a Convolutional Neural Network (CNN) is used to extract multi-scale features and classify them with spatial-temporal attention. A multi-scale coarse-grained layer introduced at the input of a one-dimensional CNN (1D-CNN) facilitates multi-scale feature extraction. The multi-scale features are fed into the attention layer and subsequently passed to the fully connected layer to perform classification. The proposed model achieves classification accuracies of 93.4%, 92.8%, 91.3%, and 94.1% for Ninapro DB1, DB2, DB5, and DB7, respectively, thereby enhancing the confidence of prosthetic hand users.
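One common reading of a multi-scale coarse-grained layer is non-overlapping window averaging at several scales (an assumption here, not necessarily the authors' exact definition); a minimal sketch:

```python
import numpy as np

def coarse_grain(signal, scale):
    """Non-overlapping window averaging at the given scale, as used in
    multi-scale signal analysis. Scale 1 returns the raw signal."""
    n = len(signal) // scale
    return signal[:n * scale].reshape(n, scale).mean(axis=1)

x = np.arange(12, dtype=float)  # stand-in for one EMG channel
multi_scale = {s: coarse_grain(x, s) for s in (1, 2, 3)}
print(multi_scale[2])                           # [ 0.5  2.5  4.5  6.5  8.5 10.5]
print([len(v) for v in multi_scale.values()])   # [12, 6, 4]
```

Each coarse-grained series would then be fed to its own 1D-CNN branch before the attention stage.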
Affiliation(s)
- Emimal M, W Jino Hans: Department of Electronics and Communication Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai, India
- Inbamalar T M: Department of Electronics and Communication Engineering, RMK College of Engineering and Technology, Puduvoyal, Chennai, India
- N Mahiban Lindsay: Department of Electrical and Electronics Engineering, Hindustan Institute of Technology and Science, Padur, Chennai, India
15
Xiong B, Chen W, Niu Y, Gan Z, Mao G, Xu Y. A Global and Local Feature fused CNN architecture for the sEMG-based hand gesture recognition. Comput Biol Med 2023; 166:107497. [PMID: 37783073 DOI: 10.1016/j.compbiomed.2023.107497] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 07/22/2023] [Accepted: 09/15/2023] [Indexed: 10/04/2023]
Abstract
Deep learning methods have been widely used for the classification of hand gestures using sEMG signals. Existing deep learning architectures capture only local spatial information and have limitations in extracting the global temporal dependency needed to enhance model performance. In this paper, we propose a Global and Local Feature fused CNN (GLF-CNN) model that extracts features both globally and locally from sEMG signals to enhance the performance of hand gesture classification. The model contains two independent branches, extracting local and global features respectively, and fuses them to learn more diversified features and effectively improve the stability of gesture recognition. It also exhibits lower computational cost compared to present approaches. We conduct experiments on five benchmark databases: NinaPro DB4, NinaPro DB5, BioPatRec DB1-DB3, and the Mendeley Data. The proposed model achieved the highest average accuracy of 88.34% on these databases, with a 9.96% average accuracy improvement and a 50% reduction in variance compared to models with the same number of parameters. Moreover, the classification accuracies for BioPatRec DB1, BioPatRec DB3, and the Mendeley Data are 91.4%, 91.0%, and 88.6% respectively, corresponding to improvements of 13.2%, 41.5%, and 12.2% over the respective state-of-the-art models. The experimental results demonstrate that the proposed model effectively enhances robustness, with improved gesture recognition performance and generalization ability, and offers a new approach to prosthetic control and human-machine interaction.
Affiliation(s)
- Baoping Xiong, Wensheng Chen, Yinxi Niu, Zhenhua Gan, Guojun Mao, Yong Xu: Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China
16
Fan L, Zhang Z, Zhu B, Zuo D, Yu X, Wang Y. Smart-Data-Glove-Based Gesture Recognition for Amphibious Communication. Micromachines 2023; 14:2050. [PMID: 38004907 PMCID: PMC10673220 DOI: 10.3390/mi14112050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 10/23/2023] [Accepted: 10/30/2023] [Indexed: 11/26/2023]
Abstract
This study designed and developed a smart data glove based on five-channel flexible capacitive stretch sensors and a six-axis inertial measurement unit (IMU) to recognize 25 static and 10 dynamic hand gestures for amphibious communication. The five-channel flexible capacitive sensors are fabricated on a glove to capture finger motion data for static hand gesture recognition and are integrated with six-axis IMU data to recognize dynamic gestures. This study also proposes a novel amphibious hierarchical gesture recognition (AHGR) model. This model can adaptively switch between a large complex model and a lightweight model based on environmental changes to ensure gesture recognition accuracy and effectiveness. The large complex model, based on the proposed SqueezeNet-BiLSTM algorithm and specially designed for land environments, uses all the sensory data captured by the smart data glove to recognize dynamic gestures, achieving a recognition accuracy of 98.21%. The lightweight stochastic singular value decomposition (SVD)-optimized spectral clustering gesture recognition algorithm for underwater environments, which performs direct inference on the glove end, reaches an accuracy of 98.35%. This study also proposes a domain separation network (DSN)-based gesture recognition transfer model that ensures 94% recognition accuracy for new users and new glove devices.
Affiliation(s)
- Liufeng Fan, Zhan Zhang, Decheng Zuo, Xintong Yu, Yiwei Wang: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Biao Zhu: Department of Electronic and Information Science, University of Science and Technology of China, Hefei 230052, China
17
Zabihi S, Rahimian E, Asif A, Mohammadi A. TraHGR: Transformer for Hand Gesture Recognition via Electromyography. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4211-4224. [PMID: 37831560 DOI: 10.1109/tnsre.2023.3324252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2023]
Abstract
Deep learning-based Hand Gesture Recognition (HGR) via surface Electromyogram (sEMG) signals has recently shown considerable potential for the development of advanced myoelectric-controlled prostheses. Although deep learning techniques can improve HGR accuracy compared to their classical counterparts, classifying hand movements based on sparse multichannel sEMG signals is still a challenging task. Furthermore, existing deep learning approaches typically include only one model and as such can hardly extract representative features. In this paper, we aim to address this challenge by capitalizing on recent advances in hybrid models and transformers. In other words, we propose a hybrid framework based on the transformer architecture, a relatively new and revolutionary deep learning model. The proposed hybrid architecture, referred to as the Transformer for Hand Gesture Recognition (TraHGR), consists of two parallel paths followed by a linear layer that acts as a fusion center to integrate the advantages of each module. We evaluated the proposed TraHGR architecture on the commonly used second Ninapro dataset, referred to as DB2. The sEMG signals in the DB2 dataset are measured in real-life conditions from 40 healthy users, each performing 49 gestures. We have conducted an extensive set of experiments to test and validate the proposed TraHGR architecture and compare its achievable accuracy with several recently proposed HGR classification algorithms on the same dataset. We have also compared the results of the proposed TraHGR architecture with each individual path and demonstrated the distinguishing power of the proposed hybrid architecture.
The recognition accuracies of the proposed TraHGR architecture for a window size of 200 ms and step size of 100 ms are 86.00%, 88.72%, 81.27%, and 93.74%, which are 2.30%, 4.93%, 8.65%, and 4.20% higher than the state-of-the-art performance for DB2 (49 gestures), DB2-B (17 gestures), DB2-C (23 gestures), and DB2-D (9 gestures), respectively.
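The 200 ms window / 100 ms step segmentation is straightforward to reproduce; a sketch (assuming Ninapro DB2's 2 kHz sampling rate, which is not stated in the abstract itself):

```python
def sliding_windows(n_samples, fs, win_ms=200, step_ms=100):
    """Start/end sample indices for fixed-length windows with a given step,
    matching the 200 ms / 100 ms scheme described above."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    return [(s, s + win) for s in range(0, n_samples - win + 1, step)]

# at 2 kHz, a 200 ms window is 400 samples and a 100 ms step is 200 samples
print(sliding_windows(1600, fs=2000)[:3])   # [(0, 400), (200, 600), (400, 800)]
print(len(sliding_windows(1600, fs=2000)))  # 7
```

The 50% overlap means each sample (except near the edges) contributes to two windows, which smooths the stream of predictions at inference time.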
18
Xia M, Chen C, Xu Y, Li Y, Sheng X, Ding H. Extracting Individual Muscle Drive and Activity From High-Density Surface Electromyography Signals Based on the Center of Gravity of Motor Unit. IEEE Trans Biomed Eng 2023; 70:2852-2862. [PMID: 37043313 DOI: 10.1109/tbme.2023.3266575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/13/2023]
Abstract
Neural interfacing has played an essential role in advancing our understanding of fundamental movement neurophysiology and the development of human-machine interfaces. However, direct neural interfaces based on brain and nerve recording are currently limited in clinical settings by their invasiveness and high selectivity. Here, we applied the surface electromyogram (EMG) to studying the neural control of movement and proposed a new non-invasive way of extracting the neural drive to individual muscles. Sixteen subjects performed isometric contractions to complete six hand tasks. High-density surface EMG signals (256 channels in total) recorded from the forearm muscles were decomposed into motor unit firing trains. The location of each decomposed motor unit was represented by its center of gravity and clustered into distinct muscle regions. All the motor units in the same cluster served as a muscle-specific motor pool from which the individual muscle drive could be extracted directly. Moreover, we cross-validated the self-clustered muscle regions against magnetic resonance imaging (MRI) of the subjects' forearms; all motor units falling within the corresponding MRI region were considered correctly clustered. We achieved a clustering accuracy of 95.72% ± 4.01% across all subjects. We thereby provide a new framework for collecting experimental muscle-specific drives and generalize surface electrode placement without prior knowledge of the target muscle architecture.
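As a toy illustration of the center-of-gravity idea (grid size and amplitudes below are invented, not the paper's data), a motor unit's location is the amplitude-weighted mean of the electrode coordinates:

```python
import numpy as np

def motor_unit_cog(amplitude_map, coords):
    """Center of gravity of a motor unit over an electrode grid:
    amplitude-weighted mean of electrode coordinates."""
    w = amplitude_map / amplitude_map.sum()
    return w @ coords  # (2,) -> (row, col)

# 4x4 toy grid; MU activity concentrated near electrode (1, 2)
coords = np.array([(r, c) for r in range(4) for c in range(4)], float)
amp = np.zeros(16)
amp[1 * 4 + 2] = 3.0  # electrode (1, 2)
amp[2 * 4 + 2] = 1.0  # electrode (2, 2)
print(motor_unit_cog(amp, coords))  # [1.25 2.  ]
```

Clustering these (row, col) points, e.g. with k-means, would then group motor units into candidate muscle regions as the abstract describes.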
19
Zhang J, Matsuda Y, Fujimoto M, Suwa H, Yasumoto K. Movement recognition via channel-activation-wise sEMG attention. Methods 2023; 218:39-47. [PMID: 37479003 DOI: 10.1016/j.ymeth.2023.06.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 04/06/2023] [Accepted: 06/28/2023] [Indexed: 07/23/2023] Open
Abstract
CONTEXT Surface electromyography (sEMG) signals contain rich information recorded from muscle movements and therefore reflect the user's intention. sEMG has found wide application in rehabilitation, clinical diagnosis, and human engineering. However, current feature extraction methods for sEMG signals are seriously limited by the signals' stochasticity, transiency, and non-stationarity. OBJECTIVE Our objective is to combat the difficulties induced by the aforementioned downsides of sEMG and thereby extract representative features for various downstream movement recognition tasks. METHOD We propose a novel 3-axis view of sEMG features composed of temporal, spatial, and channel-wise summaries. We leverage the state-of-the-art Transformer architecture to enable efficient parallel search and to overcome limitations of previous work in gesture classification. The Transformer model is built on an attention-based module, which allows for the extraction of global contextual relevance among channels and the use of this relevance for sEMG recognition. RESULTS We compared the proposed method against existing methods on two Ninapro datasets consisting of data from both healthy people and amputees. Experimental results show the proposed method attains state-of-the-art (SOTA) accuracy on both datasets. We further show that the proposed method enjoys strong generalization ability: a new SOTA is achieved by pretraining the model on a different dataset followed by fine-tuning it on the target dataset.
Affiliation(s)
- Jiaxuan Zhang, Yuki Matsuda, Hirohiko Suwa, Keiichi Yasumoto: Nara Institute of Science and Technology (NAIST), Ikoma, Nara 630-0192, Japan
20
Dai Q, Wong Y, Kankanhali M, Li X, Geng W. Improved Network and Training Scheme for Cross-Trial Surface Electromyography (sEMG)-Based Gesture Recognition. Bioengineering (Basel) 2023; 10:1101. [PMID: 37760203 PMCID: PMC10525369 DOI: 10.3390/bioengineering10091101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Revised: 09/14/2023] [Accepted: 09/18/2023] [Indexed: 09/29/2023] Open
Abstract
To enhance the performance of surface electromyography (sEMG)-based gesture recognition, we propose a novel network-agnostic two-stage training scheme, called sEMGPoseMIM, that produces trial-invariant representations to be aligned with corresponding hand movements via cross-modal knowledge distillation. In the first stage, an sEMG encoder is trained via cross-trial mutual information maximization using the sEMG sequences sampled from the same time step but different trials in a contrastive learning manner. In the second stage, the learned sEMG encoder is fine-tuned with the supervision of gesture and hand movements in a knowledge-distillation manner. In addition, we propose a novel network called sEMGXCM as the sEMG encoder. Comprehensive experiments on seven sparse multichannel sEMG databases are conducted to demonstrate the effectiveness of the training scheme sEMGPoseMIM and the network sEMGXCM, which achieves an average improvement of +1.3% on the sparse multichannel sEMG databases compared to the existing methods. Furthermore, the comparison between training sEMGXCM and other existing networks from scratch shows that sEMGXCM outperforms the others by an average of +1.5%.
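The paper's exact objective is not given here; as an illustrative stdlib-only sketch of the cross-trial mutual-information-maximization idea (function name and similarity values are made up), an InfoNCE-style loss rewards an anchor for being most similar to the same time step drawn from a different trial:

```python
import math

def info_nce(sim_row, pos_idx, temperature=0.1):
    """InfoNCE-style loss for one anchor: `sim_row` holds similarities to
    candidate embeddings; the positive is the same time step from a
    different trial, the rest are negatives."""
    logits = [s / temperature for s in sim_row]
    m = max(logits)  # subtract max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[pos_idx]

# anchor already most similar to its cross-trial positive (index 0):
loss = info_nce([0.9, 0.1, -0.3], pos_idx=0)
print(loss)  # small loss: the anchor matches its positive
```

Minimizing this loss over many anchors pushes embeddings of the same time step across trials together, yielding the trial-invariant representations the abstract describes.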
Affiliation(s)
- Qingfeng Dai, Xiangdong Li: College of Computer Science and Technology, Faculty of Computer, Zhejiang University, Hangzhou 310058, China
- Yongkang Wong, Mohan Kankanhali: School of Computing, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore 119077, Singapore
21
van Dellen F, Vazquez CG, Labruyere R. 1D-Convolutional Neural Networks can Quantify Therapy Content of Children and Adolescents Walking in a Robot-Assisted Gait Trainer. IEEE Int Conf Rehabil Robot 2023; 2023:1-6. [PMID: 37941229 DOI: 10.1109/icorr58425.2023.10304726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2023]
Abstract
Therapy content, consisting of device parameter settings and therapy instructions, is crucial for an effective robot-assisted gait therapy program. Settings and instructions depend on the therapy goals of the individual patient. While device parameters can be recorded by the robot, therapeutic instructions and associated patient responses are currently difficult to capture. This limits the transferability of successful therapeutic approaches between clinics. Here, we propose that 1D-convolutional neural networks can be used to relate patient behavior during individual steps to the instructions given as a surrogate for the patient's intent. Our model takes the surface electromyography patterns of two leg muscles as input and predicts the given instruction as output. We tested this approach with data from 20 healthy children walking in a robot-assisted gait trainer with 5 different instructions. Our model performs well, with a classification accuracy of almost 90%, when the instruction targets specific aspects of gait, such as step length. This shows that 1D-convolutional neural networks are a viable tool for quantifying therapy content. Thus, they could help compare therapy approaches and identify effective strategies.
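As a minimal illustration of the 1D-convolution building block such a network stacks (the kernel below is a hand-picked toy, not a learned filter):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# a moving-difference kernel highlights onsets/offsets of an EMG burst
print(conv1d([0, 0, 1, 1, 1, 0, 0], [1, -1]))  # [0, -1, 0, 0, 1, 0]
```

A trained 1D-CNN learns many such kernels per layer and pools their responses, letting step-level sEMG patterns map to the instruction classes described above.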
22
Chen J, Wang C, Chen J, Yin B. Manipulator Control System Based on Flexible Sensor Technology. Micromachines 2023; 14:1697. [PMID: 37763860 PMCID: PMC10535772 DOI: 10.3390/mi14091697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 08/12/2023] [Accepted: 08/28/2023] [Indexed: 09/29/2023]
Abstract
Research on the remote control of manipulators based on flexible sensor technology is becoming increasingly extensive. To achieve stable, accurate, and efficient control of a manipulator, the sensor structure must be reasonably designed for excellent tensile strength and flexibility. The acquisition of hand-motion information by high-performance sensors is the basis of manipulator control. This paper starts with the materials used to fabricate flexible sensors for manipulators, introduces the substrate, sensing, and flexible electrode materials in turn, and summarizes the performance of different flexible sensors. From the manufacturing perspective, it introduces their basic principles and compares their advantages and disadvantages. Then, according to the different wearing modes, the two control methods of data-glove control and surface EMG control are introduced; their principles, control processes, and detection accuracies are summarized; and open problems such as material microstructure, cost reduction, and circuit design optimization are emphasized. Finally, commercial applications in this field are described, and future research directions are proposed in two areas: ensuring real-time control and better receiving feedback signals from the manipulator.
Affiliation(s)
- Binfeng Yin: School of Mechanical Engineering, Yangzhou University, Huayangxi Road No. 196, Yangzhou 225127, China
23
Fan J, Vargas L, Kamper DG, Hu X. Robust neural decoding for dexterous control of robotic hand kinematics. Comput Biol Med 2023; 162:107139. [PMID: 37301095 DOI: 10.1016/j.compbiomed.2023.107139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 05/22/2023] [Accepted: 06/04/2023] [Indexed: 06/12/2023]
Abstract
BACKGROUND Manual dexterity is a fundamental motor skill that allows us to perform complex daily tasks. Neuromuscular injuries, however, can lead to the loss of hand dexterity. Although numerous advanced assistive robotic hands have been developed, we still lack dexterous and continuous control of multiple degrees of freedom in real-time. In this study, we developed an efficient and robust neural decoding approach that can continuously decode intended finger dynamic movements for real-time control of a prosthetic hand. METHODS High-density electromyogram (HD-EMG) signals were obtained from the extrinsic finger flexor and extensor muscles, while participants performed either single-finger or multi-finger flexion-extension movements. We implemented a deep learning-based neural network approach to learn the mapping from HD-EMG features to finger-specific population motoneuron firing frequency (i.e., neural-drive signals). The neural-drive signals reflected motor commands specific to individual fingers. The predicted neural-drive signals were then used to continuously control the fingers (index, middle, and ring) of a prosthetic hand in real-time. RESULTS Our developed neural-drive decoder could consistently and accurately predict joint angles with significantly lower prediction errors across single-finger and multi-finger tasks, compared with a deep learning model directly trained on finger force signals and the conventional EMG-amplitude estimate. The decoder performance was stable over time and was robust to variations of the EMG signals. The decoder also demonstrated a substantially better finger separation with minimal predicted error of joint angle in the unintended fingers. CONCLUSIONS This neural decoding technique offers a novel and efficient neural-machine interface that can consistently predict robotic finger kinematics with high accuracy, which can enable dexterous control of assistive robotic hands.
Affiliation(s)
- Jiahao Fan: Department of Mechanical Engineering, Pennsylvania State University, University Park, USA
- Luis Vargas, Derek G Kamper: Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, USA
- Xiaogang Hu: Department of Mechanical Engineering, Pennsylvania State University, University Park, USA; Department of Kinesiology, Pennsylvania State University, University Park, USA; Department of Physical Medicine & Rehabilitation, Pennsylvania State Hershey College of Medicine, USA; Huck Institutes of the Life Sciences, Pennsylvania State University, University Park, USA; Center for Neural Engineering, Pennsylvania State University, University Park, USA
24
Montazerin M, Rahimian E, Naderkhani F, Atashzar SF, Yanushkevich S, Mohammadi A. Transformer-based hand gesture recognition from instantaneous to fused neural decomposition of high-density EMG signals. Sci Rep 2023; 13:11000. [PMID: 37419881 PMCID: PMC10329032 DOI: 10.1038/s41598-023-36490-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 06/05/2023] [Indexed: 07/09/2023] Open
Abstract
Designing efficient and labor-saving prosthetic hands requires powerful hand gesture recognition algorithms that achieve high accuracy with limited complexity and latency. In this context, the paper proposes a Compact Transformer-based Hand Gesture Recognition framework, referred to as [Formula: see text], which employs a vision transformer network to conduct hand gesture recognition using high-density surface EMG (HD-sEMG) signals. By taking advantage of the attention mechanism incorporated into transformer architectures, the proposed [Formula: see text] framework overcomes major constraints of most existing deep learning models, such as high model complexity, the need for feature engineering, the inability to consider both temporal and spatial information of HD-sEMG signals, and the need for a large number of training samples. The attention mechanism in the proposed model identifies similarities among different data segments, offers greater capacity for parallel computation, and addresses memory limitations when dealing with inputs of large sequence lengths. [Formula: see text] can be trained from scratch without any need for transfer learning and can simultaneously extract both temporal and spatial features of HD-sEMG data. Additionally, the [Formula: see text] framework can perform instantaneous recognition using an sEMG image spatially composed from HD-sEMG signals. A variant of [Formula: see text] is also designed to incorporate microscopic neural drive information in the form of Motor Unit Spike Trains (MUSTs) extracted from HD-sEMG signals using Blind Source Separation (BSS). This variant is combined with its baseline version in a hybrid architecture to evaluate the potential of fusing macroscopic and microscopic neural drive information. The HD-sEMG dataset used involves 128 electrodes recording 65 isometric hand gestures from 20 subjects.
The proposed [Formula: see text] framework is applied to window sizes of 31.25, 62.5, 125, and 250 ms of this dataset, utilizing 32, 64, and 128 electrode channels. Results are obtained via 5-fold cross-validation by first applying the framework to each subject's data separately and then averaging the accuracies over all subjects. The average accuracy using 32 electrodes and a window size of 31.25 ms is 86.23%, which increases gradually to 91.98% for 128 electrodes and a window size of 250 ms. [Formula: see text] achieves an accuracy of 89.13% for instantaneous recognition based on a single frame of the HD-sEMG image. The proposed model is statistically compared with a 3D Convolutional Neural Network (CNN) and two variants each of Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) models. The accuracy results for each model are paired with precision, recall, F1 score, required memory, and train/test times. The results corroborate the effectiveness of the proposed [Formula: see text] framework compared to its counterparts.
Affiliation(s)
- Mansooreh Montazerin
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Elahe Rahimian
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Farnoosh Naderkhani
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- S Farokh Atashzar
- Departments of Electrical and Computer Engineering, Mechanical and Aerospace Engineering, New York University (NYU), New York, 10003, NY, USA
- NYU Center for Urban Science and Progress (CUSP), NYU WIRELESS, New York University (NYU), New York, 10003, NY, USA
- Svetlana Yanushkevich
- Biometric Technologies Laboratory, Department of Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Arash Mohammadi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada.
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada.
25
Gomez-Correa M, Ballesteros M, Salgado I, Cruz-Ortiz D. Forearm sEMG data from young healthy humans during the execution of hand movements. Sci Data 2023; 10:310. [PMID: 37210582] [DOI: 10.1038/s41597-023-02223-x]
Abstract
This work provides a complete dataset of surface electromyography (sEMG) signals acquired from the forearm at a sampling frequency of 1000 Hz. The dataset, named WyoFlex sEMG Hand Gesture, records data from 28 participants between 18 and 37 years old without neuromuscular diseases or cardiovascular problems. The test protocol consisted of acquiring sEMG signals for ten wrist and grasping movements (extension, flexion, ulnar deviation, radial deviation, hook grip, power grip, spherical grip, precision grip, lateral grip, and pinch grip), with three repetitions of each gesture. The dataset also contains general information such as anthropometric measures of the upper limb, gender, age, laterality, and physical condition. The acquisition system is a portable armband with four sEMG channels distributed equidistantly around each forearm. The database could be used for hand gesture recognition, evaluating patient progress in rehabilitation, control of upper limb orthoses or prostheses, and biomechanical analysis of the forearm.
Affiliation(s)
- Manuela Gomez-Correa
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Z.C, 07700, Mexico City, Mexico
- Mariana Ballesteros
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Z.C, 07700, Mexico City, Mexico
- Medical Robotics and Biosignals Laboratory, Unidad Profesional Interdisciplinaria de Biotecnología, Instituto Politécnico Nacional, Z.C, 07340, Mexico City, Mexico
- Ivan Salgado
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Z.C, 07700, Mexico City, Mexico
- David Cruz-Ortiz
- Medical Robotics and Biosignals Laboratory, Unidad Profesional Interdisciplinaria de Biotecnología, Instituto Politécnico Nacional, Z.C, 07340, Mexico City, Mexico.
26
Maksymenko K, Clarke AK, Mendez Guerra I, Deslauriers-Gauthier S, Farina D. A myoelectric digital twin for fast and realistic modelling in deep learning. Nat Commun 2023; 14:1600. [PMID: 36959193] [PMCID: PMC10036636] [DOI: 10.1038/s41467-023-37238-w]
Abstract
Muscle electrophysiology has emerged as a powerful tool to drive human-machine interfaces, with many recent applications outside the traditional clinical domains, such as robotics and virtual reality. However, more sophisticated, functional, and robust decoding algorithms are required to meet the fine control requirements of these applications. Deep learning has shown high potential for meeting these demands, but requires large amounts of high-quality annotated data, which are expensive and time-consuming to acquire. Data augmentation using simulations, a strategy applied in other deep learning applications, had never been attempted in electromyography due to the absence of computationally efficient models. We introduce the concept of a Myoelectric Digital Twin: a highly realistic and fast computational model tailored for training deep learning algorithms. It enables the simulation of arbitrarily large and perfectly annotated datasets of realistic electromyography signals, allowing new approaches to muscular signal decoding and accelerating the development of human-machine interfaces.
Affiliation(s)
- Dario Farina
- Department of Bioengineering, Imperial College London, London, UK.
27
Fan J, Wen J, Lai Z. Myoelectric Pattern Recognition Using Gramian Angular Field and Convolutional Neural Networks for Muscle-Computer Interface. Sensors (Basel) 2023; 23:2715. [PMID: 36904918] [PMCID: PMC10007307] [DOI: 10.3390/s23052715]
Abstract
In the field of the muscle-computer interface, the most challenging task is extracting patterns from complex surface electromyography (sEMG) signals to improve the performance of myoelectric pattern recognition. To address this problem, a two-stage architecture, consisting of Gramian angular field (GAF)-based 2D representation and convolutional neural network (CNN)-based classification (GAF-CNN), is proposed. To explore discriminant channel features from sEMG signals, an sEMG-GAF transformation is proposed for time-sequence signal representation and feature modeling, in which the instantaneous values of multichannel sEMG signals are encoded in image form. A deep CNN model is introduced to extract high-level semantic features from these image-form time-sequence signals for classification. An insight analysis explains the rationale behind the advantages of the proposed method. Extensive experiments conducted on publicly available benchmark sEMG datasets, i.e., NinaPro and CapgMyo, validate that the proposed GAF-CNN method is comparable to the state-of-the-art CNN-based methods reported in previous work.
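The GAF encoding at the heart of the first stage maps a 1-D signal window onto a polar-coordinate image. A minimal NumPy sketch of the Gramian Angular Summation Field variant (the cited paper's exact preprocessing may differ):

```python
import numpy as np

def gasf(x):
    """Gramian Angular Summation Field of a 1-D signal.

    Rescales x to [-1, 1], maps each sample to a polar angle
    phi = arccos(x), and returns the image cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    x_tilde = 2 * (x - x_min) / (x_max - x_min) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x_tilde, -1.0, 1.0))      # polar encoding
    return np.cos(phi[:, None] + phi[None, :])        # GASF image

# An 8-sample window becomes an 8x8 "image" for the CNN stage.
img = gasf(np.sin(np.linspace(0, np.pi, 8)))
```

Stacking one such image per sEMG channel yields the multichannel image-form input the abstract describes.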
Affiliation(s)
- Junjun Fan
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Guangdong Laboratory of Artificial-Intelligence and Cyber-Economics (SZ), Shenzhen University, Shenzhen 518060, China
- Jiajun Wen
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Guangdong Laboratory of Artificial-Intelligence and Cyber-Economics (SZ), Shenzhen University, Shenzhen 518060, China
- Zhihui Lai
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
- Guangdong Laboratory of Artificial-Intelligence and Cyber-Economics (SZ), Shenzhen University, Shenzhen 518060, China
28
Jiang N, Chen C, He J, Meng J, Pan L, Su S, Zhu X. Bio-robotics research for non-invasive myoelectric neural interfaces for upper-limb prosthetic control: a 10-year perspective review. Natl Sci Rev 2023; 10:nwad048. [PMID: 37056442] [PMCID: PMC10089583] [DOI: 10.1093/nsr/nwad048]
Abstract
A decade ago, a group of researchers from academia and industry identified a dichotomy between the industrial and academic state of the art in upper-limb prosthesis control, a widely used bio-robotics application. They proposed that four key technical challenges, if addressed, could bridge this gap and translate academic research into clinically and commercially viable products: unintuitive control schemes, lack of sensory feedback, poor robustness, and reliance on a single sensor modality. Here, we provide a perspective review of the research effort of the last decade aimed at addressing these challenges. In addition, we discuss three research areas essential to recent developments in upper-limb prosthetic control that were not envisioned in the review 10 years ago: deep learning methods, surface electromyogram decomposition, and open-source databases. To conclude, we provide an outlook on the near future of research and development in upper-limb prosthetic control and beyond.
Affiliation(s)
- Chen Chen
- State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiayuan He
- National Clinical Research Center for Geriatrics, West China Hospital, and Med-X Center for Manufacturing, Sichuan University, Chengdu 610041, China
- Jianjun Meng
- State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
- Lizhi Pan
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shiyong Su
- Institute of Neuroscience, Université Catholique Louvain, Brussel B-1348, Belgium
- Xiangyang Zhu
- State Key Laboratory of Mechanical System and Vibration, and Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
29
sEMG signal-based lower limb movements recognition using tunable Q-factor wavelet transform and Kraskov entropy. Ing Rech Biomed 2023. [DOI: 10.1016/j.irbm.2023.100773]
30
Tian F, Yang J, Zhao S, Sawan M. NeuroCARE: A generic neuromorphic edge computing framework for healthcare applications. Front Neurosci 2023; 17:1093865. [PMID: 36755733] [PMCID: PMC9900119] [DOI: 10.3389/fnins.2023.1093865]
Abstract
Highly accurate classification methods for multi-task biomedical signal processing have been reported, including neural networks. However, these approaches are computationally expensive and power-hungry, making them hard to deploy on edge platforms such as mobile and wearable devices. Motivated by the good performance and high energy efficiency of spiking neural networks (SNNs), a generic neuromorphic framework for edge healthcare and biomedical applications is proposed and evaluated on various tasks, including electroencephalography (EEG)-based epileptic seizure prediction, electrocardiography (ECG)-based arrhythmia detection, and electromyography (EMG)-based hand gesture recognition. This approach, NeuroCARE, uses a unique sparse spike encoder to generate spike sequences from raw biomedical signals and makes classifications using a spike-based computing engine that combines the advantages of both CNNs and SNNs. An adaptive weight mapping method co-designed with the spike encoder can efficiently convert a CNN to an SNN without performance deterioration. The evaluation shows overall performance (classification accuracy, sensitivity, and F1 score) of 92.7, 96.7, and 85.7% for seizure prediction, arrhythmia detection, and hand gesture recognition, respectively. Compared with CNN topologies, computation complexity is reduced by over 80.7%, while energy consumption and area occupation are reduced by over 80% and 64.8%, respectively, indicating that the proposed neuromorphic computing approach is energy- and area-efficient and highly precise, paving the way for deployment on edge platforms.
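Threshold-based (delta) spike encoding is one common way to turn a continuous biosignal into the sparse event stream an SNN consumes. A generic sketch of that idea; NeuroCARE's actual sparse spike encoder is more elaborate than this:

```python
import numpy as np

def delta_spike_encode(signal, threshold=0.1):
    """Delta-modulation spike encoding.

    Emits a +1 (or -1) event whenever the signal rises (or falls)
    more than `threshold` away from the last emitted level, producing
    a sparse spike train from a dense waveform.
    """
    spikes = np.zeros(len(signal), dtype=int)
    level = signal[0]
    for i, s in enumerate(signal):
        if s - level >= threshold:
            spikes[i] = 1       # upward event
            level = s
        elif level - s >= threshold:
            spikes[i] = -1      # downward event
            level = s
    return spikes

sp = delta_spike_encode(np.array([0.0, 0.05, 0.2, 0.2, 0.05, 0.0]))
# sp is [0, 0, 1, 0, -1, 0]: events only where the signal moved enough.
```

The sparsity of the resulting train is what lets the spike-based engine skip computation on silent inputs.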
Affiliation(s)
- Fengshi Tian
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- The Hong Kong University of Science and Technology (HKUST), New Territories, Hong Kong SAR, China
- Jie Yang
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Shiqi Zhao
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Mohamad Sawan
- CenBRAIN Neurotech, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
31
Liu X, Wang J, Liang T, Lou C, Wang H, Liu X. SE-TCN network for continuous estimation of upper limb joint angles. Math Biosci Eng 2023; 20:3237-3260. [PMID: 36899579] [DOI: 10.3934/mbe.2023152]
Abstract
The maturity of human-computer interaction technology has made it possible to use surface electromyographic signals (sEMG) to control exoskeleton robots and intelligent prostheses. However, available sEMG-controlled upper limb rehabilitation robots suffer from inflexible joints. This paper proposes a method based on a temporal convolutional network (TCN) to predict upper limb joint angles from sEMG. The depth of the raw TCN was expanded to extract temporal features while preserving the original information. Because the timing characteristics of the muscle groups that dominate upper limb movement are not obvious, joint angle estimation accuracy is low; this study therefore uses squeeze-and-excitation networks (SE-Net) to improve the TCN model. Finally, seven upper limb movements were performed by ten subjects, and elbow angle (EA), shoulder vertical angle (SVA), and shoulder horizontal angle (SHA) values were recorded during the movements. The designed experiment compared the proposed SE-TCN model with backpropagation (BP) and long short-term memory (LSTM) networks. The proposed SE-TCN systematically outperformed the BP network and LSTM model in mean RMSE: by 25.0 and 36.8% for EA, 38.6 and 43.6% for SHA, and 45.6 and 49.5% for SVA, respectively. Consequently, its R2 values exceeded those of BP and LSTM by 13.6 and 39.20% for EA, 19.01 and 31.72% for SHA, and 29.22 and 31.89% for SVA, respectively. This indicates that the proposed SE-TCN model has good accuracy and can be used to estimate joint angles for upper limb rehabilitation robots in the future.
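The squeeze-and-excitation recalibration that SE-Net adds to a convolutional stage can be sketched in a few lines of NumPy. Layer shapes and weights below are illustrative, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-excitation recalibration for features shaped (channels, time).

    w1/w2 are the two fully connected layers of the excitation
    bottleneck; each channel is rescaled by a learned gate in (0, 1).
    """
    s = x.mean(axis=1)                 # squeeze: global average pool over time
    z = np.maximum(w1 @ s, 0.0)        # excitation FC1 + ReLU (reduction)
    gate = sigmoid(w2 @ z)             # excitation FC2 + sigmoid, one gate per channel
    return x * gate[:, None]           # channel-wise rescaling

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))       # 4 channels, 16 time steps
w1 = rng.standard_normal((2, 4))       # reduction ratio 2
w2 = rng.standard_normal((4, 2))
y = se_block(x, w1, w2)
```

Since each gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize the muscle channels that matter for a given movement.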
Affiliation(s)
- Xiaoguang Liu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
- Jiawei Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
- Tie Liang
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
- Cunguang Lou
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
- Hongrui Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
- Xiuling Liu
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, Hebei, China
32
Wang T, Zhao Y, Wang Q. A Flexible Iontronic Capacitive Sensing Array for Hand Gesture Recognition Using Deep Convolutional Neural Networks. Soft Robot 2022. [DOI: 10.1089/soro.2021.0209]
Affiliation(s)
- Tiantong Wang
- Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China
- Yunbiao Zhao
- Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China
- Qining Wang
- Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China
- Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Beijing Institute for General Artificial Intelligence, Beijing, China
33
A virtual surgical prototype system based on gesture recognition for virtual surgical training in maxillofacial surgery. Int J Comput Assist Radiol Surg 2022; 18:909-919. [PMID: 36418763] [PMCID: PMC10113313] [DOI: 10.1007/s11548-022-02790-1]
Abstract
Background
Virtual reality (VR) technology is an ideal alternative for operation training and surgical teaching. However, virtual surgery is usually carried out using a mouse or data gloves, which limits the authenticity of the virtual operation. A virtual surgery system with gesture recognition and real-time image feedback was explored to achieve more authentic immersion.
Method
A gesture recognition approach with an efficient, high-fidelity, real-time algorithm was explored. Recognition of the hand contour, palm, and fingertips was first realized by hand data extraction. Then, a Support Vector Machine classifier was used to classify and recognize common gestures after feature extraction. The collision detection algorithm used an Axis-Aligned Bounding Box (AABB) binary tree to build hand and scalpel collision models, and the nominal radius theorem (NRT) and separating axis theorem (SAT) were applied to speed up collision detection. Based on the maxillofacial virtual surgical system we proposed previously, the feasibility of integrating the above technologies into this prototype system was evaluated.
Results
Ten static signal gestures were designed to test the gesture recognition algorithms. Recognition accuracy exceeded 80% for all gestures and 90% for some. With NRT and SAT, the generation speed of the collision detection model met the software requirements. The response time of gesture recognition was less than 40 ms; that is, the hand gesture recognition rate exceeded 25 Hz. With hand gesture recognition integrated, typical virtual surgical procedures, including grabbing a scalpel, puncture site selection, virtual puncture, and incision, were carried out with real-time image feedback.
Conclusion
Building on our previous maxillofacial virtual surgical system, which combined VR, triangular mesh collision detection, and maxillofacial biomechanical model construction, integrating hand gesture recognition proved a feasible way to improve the interactivity and immersion of virtual surgical operation training.
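The test evaluated at each node of an AABB binary tree is a simple per-axis interval overlap check. A minimal sketch of that primitive (the cited system additionally accelerates traversal with NRT and SAT):

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box intersection test in 3-D.

    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)). Two AABBs
    intersect iff their intervals overlap on every axis; a single
    separated axis proves non-collision.
    """
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[k] <= bmax[k] and bmin[k] <= amax[k] for k in range(3))

# Overlapping hand/scalpel boxes vs. clearly separated ones.
hit = aabb_overlap(((0, 0, 0), (1, 1, 1)), ((0.5, 0.5, 0.5), (2, 2, 2)))
miss = aabb_overlap(((0, 0, 0), (1, 1, 1)), ((2, 2, 2), (3, 3, 3)))
```

Because most box pairs fail on the first separated axis, the tree prunes almost all triangle-level tests, which is what makes real-time response feasible.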
34
Wang H, Zuo S, Cerezo-Sánchez M, Arekhloo NG, Nazarpour K, Heidari H. Wearable super-resolution muscle-machine interfacing. Front Neurosci 2022; 16:1020546. [PMID: 36466163] [PMCID: PMC9714306] [DOI: 10.3389/fnins.2022.1020546]
Abstract
Muscles are the actuators of all human actions, from daily work and life to communication and the expression of emotions. Myography records the signals of muscle activity as an interface between machine hardware and human wetware, granting direct and natural control of our electronic peripherals. Despite significant recent progress, conventional myographic sensors are still incapable of achieving the desired high-resolution, non-invasive recording. This paper presents a critical review of state-of-the-art wearable sensing technologies that measure deeper muscle activity with high spatial resolution, so-called super-resolution. It classifies these myographic sensors according to the signal types (i.e., biomechanical, biochemical, and bioelectrical) they record during muscle activity. By describing the characteristics and current developments of each myographic sensor, with advantages and limitations, their capabilities as super-resolution myography techniques are investigated with respect to: (i) non-invasive, high-density designs of the sensing units and their vulnerability to interference, and (ii) the limit of detection needed to register the activity of deep muscles. Finally, the paper concludes with new opportunities in this fast-growing super-resolution myography field and proposes promising future research directions. These advances will enable next-generation muscle-machine interfaces to meet practical design needs in real-life healthcare technologies, assistive/rehabilitation robotics, and human augmentation with extended reality.
Affiliation(s)
- Huxi Wang
- Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom
- Neuranics Ltd., Glasgow, United Kingdom
- Siming Zuo
- Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom
- Neuranics Ltd., Glasgow, United Kingdom
- María Cerezo-Sánchez
- Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom
- Neuranics Ltd., Glasgow, United Kingdom
- Negin Ghahremani Arekhloo
- Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom
- Neuranics Ltd., Glasgow, United Kingdom
- Kianoush Nazarpour
- Neuranics Ltd., Glasgow, United Kingdom
- School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
- Hadi Heidari
- Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom
- Neuranics Ltd., Glasgow, United Kingdom
35
MSFF-Net: Multi-Stream Feature Fusion Network for surface electromyography gesture recognition. PLoS One 2022; 17:e0276436. [PMCID: PMC9639816] [DOI: 10.1371/journal.pone.0276436]
Abstract
In the field of surface electromyography (sEMG) gesture recognition, improving recognition accuracy has been a research hotspot, and the rapid development of deep learning provides a new solution to this problem. At present, deep learning for sEMG gesture feature extraction relies mainly either on convolutional neural network (CNN) structures to capture the spatial morphological information of multichannel sEMG, or on long short-term memory networks (LSTM) to extract the time-dependent information of single-channel sEMG. Few methods, however, comprehensively consider the spatial distribution of the sEMG acquisition electrodes together with the arrangement of the signal's morphological features. In this paper, a novel multi-stream feature fusion network (MSFF-Net) model is proposed for sEMG gesture recognition. The model adopts a divide-and-conquer strategy to learn the relationship between different muscle regions and specific gestures. First, a multi-stream convolutional neural network (Multi-stream CNN) and a convolutional block attention module integrated with a residual block (ResCBAM) extract multi-dimensional spatial features from signal morphology, electrode space, and feature-map space. The learned multi-view depth features are then fused by a view aggregation network consisting of an early fusion network and a late fusion network. Validation experiments across all subjects and gesture movements, on sEMG acquired from the 12 sensors of NinaPro's DB2 and DB4 sub-databases, show that the proposed model achieves better gesture recognition accuracy than existing models.
36
Masood F, Sharma M, Mand D, Nesathurai S, Simmons HA, Brunner K, Schalk DR, Sledge JB, Abdullah HA. A Novel Application of Deep Learning (Convolutional Neural Network) for Traumatic Spinal Cord Injury Classification Using Automatically Learned Features of EMG Signal. Sensors (Basel) 2022; 22:8455. [PMID: 36366153] [PMCID: PMC9657335] [DOI: 10.3390/s22218455]
Abstract
In this study, a traumatic spinal cord injury (TSCI) classification system is proposed using a convolutional neural network (CNN) with automatically learned features from electromyography (EMG) signals in a non-human primate (NHP) model, and compared with a classical classification method (k-nearest neighbors, kNN). Developing such an NHP model with a suitable assessment tool (i.e., classifier) is a crucial step in detecting the effect of TSCI using EMG, which is expected to be essential in evaluating the efficacy of new TSCI treatments. Intramuscular EMG data were collected from an agonist/antagonist tail muscle pair before and after spinal cord lesion in five Macaca fascicularis monkeys. The proposed classifier is based on a CNN using filtered, segmented EMG signals from the pre- and post-lesion periods as inputs, while the kNN uses four hand-crafted EMG features. The results suggest that the CNN provides a promising classification technique for TSCI compared with conventional machine learning classification. The kNN with hand-crafted EMG features classified the pre- and post-lesion EMG data with an F-measure of 89.7% and 92.7% for the left- and right-side muscles, respectively, while the CNN with EMG segments classified the data with an F-measure of 89.8% and 96.9%, respectively. The proposed deep learning model (CNN), with its ability to learn high-level features from EMG segments, shows high potential as a TSCI classification system. Future studies can confirm this finding with more subjects.
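The abstract does not name the four hand-crafted features used for the kNN baseline; the standard choices in EMG work are mean absolute value, RMS, zero crossings, and waveform length, sketched here as an assumption:

```python
import numpy as np

def emg_features(seg, zc_thresh=0.01):
    """Four classic hand-crafted time-domain EMG features for one segment.

    Assumed feature set (the cited study's exact choice is not stated):
    mean absolute value (MAV), root-mean-square (RMS), zero crossings
    above a noise threshold (ZC), and waveform length (WL).
    """
    mav = np.mean(np.abs(seg))
    rms = np.sqrt(np.mean(seg ** 2))
    zc = np.sum((seg[:-1] * seg[1:] < 0) &
                (np.abs(seg[:-1] - seg[1:]) > zc_thresh))
    wl = np.sum(np.abs(np.diff(seg)))
    return np.array([mav, rms, zc, wl])

f = emg_features(np.array([0.1, -0.1, 0.1, -0.1]))
```

Feeding one such 4-vector per segment to a kNN reproduces the shape of the baseline pipeline, against which the CNN's automatically learned features are compared.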
Affiliation(s)
- Farah Masood
- School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- The Department of Biomedical Engineering, Al-Khwarizmi College of Engineering, Baghdad University, Baghdad 10071, Iraq
- Milan Sharma
- School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- Davleen Mand
- School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada
- Shanker Nesathurai
- The Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, WI 53715, USA
- The Division of Physical Medicine and Rehabilitation, Department of Medicine, McMaster University, Hamilton, ON L8S 4L8, Canada
- The Department of Physical Medicine and Rehabilitation, Hamilton Health Sciences, St Joseph’s Hamilton Healthcare, Hamilton, ON L8N 4A6, Canada
- Heather A. Simmons
- The Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, WI 53715, USA
- Kevin Brunner
- The Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, WI 53715, USA
- Dane R. Schalk
- The Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, WI 53715, USA
- John B. Sledge
- The Lafayette Bone and Joint Clinic, Lafayette, LA 70508, USA
37
Xu B, Zhang K, Yang X, Liu D, Hu C, Li H, Song A. Natural grasping movement recognition and force estimation using electromyography. Front Neurosci 2022; 16:1020086. DOI: 10.3389/fnins.2022.1020086.
Abstract
Electromyography (EMG) generated by human hand movements is usually used to decode different action types with high accuracy. However, gesture classification rarely considers the impact of force, and estimating the grasp force of natural grasping movements has so far been overlooked. Decoding natural grasping movements and estimating the force they generate can help patients improve the accuracy of prosthesis control. This study focused on two aspects: the classification of four natural grasping movements and the estimation of the force they produce. For this purpose, we designed an experimental platform where subjects performed four natural grasping movements common in daily life, including pinch, palmar, twist, and plug grasp, to complete target profiles. On the one hand, the results showed that, for natural grasping movements at three force levels (20, 50, and 80%), the average accuracy ranged from 91.43 to 97.33% under five classification schemes. On the other hand, the feasibility of force estimation for natural grasping movements was demonstrated. Furthermore, regression performance was best for the plug grasp, with an average R2 of 0.9082. We also found that the regression results were affected by the speed of force application. These findings contribute to the natural control of myoelectric prostheses and EMG-based rehabilitation training systems, improving user experience and acceptance.
38
Xie B, Meng J, Li B, Harland A. Biosignal-based transferable attention Bi-ConvGRU deep network for hand-gesture recognition towards online upper-limb prosthesis control. Comput Methods Programs Biomed 2022; 224:106999. PMID: 35841852; DOI: 10.1016/j.cmpb.2022.106999.
Abstract
BACKGROUND AND OBJECTIVE Upper-limb amputation can significantly affect a person's capabilities, with a dramatic impact on quality of life. As a biological signal, the surface electromyogram (sEMG) provides a non-invasive means to measure the underlying muscle activation patterns corresponding to specific hand gestures. This project aims to develop a real-time deep-learning-based recognition model to automatically and reliably recognise these complex signals for a wide range of daily hand gestures from amputees and non-amputees. METHODS This paper proposes an attention bidirectional Convolutional Gated Recurrent Unit (Bi-ConvGRU) deep neural network for hand-gesture recognition. By training on sEMG data from both amputees and non-amputees, the model learns to recognise a group of fine-grained hand movements. This is a significantly more challenging and underexplored problem than existing studies on coarse control of lower limbs. One-dimensional CNNs are initially used to extract intra-channel features. The novel use of a bidirectional sequential GRU (Bi-GRU) deep neural network allows the correlation of muscle activation among multi-channel sEMG signals to be explored in both the forward and backward time directions. Importantly, an attention mechanism is employed after the Bi-GRU layers. This enables the model to learn the vital parts and feature weights, increasing robustness to bio-data noise and irregularity. Finally, we introduce a first-of-its-kind transfer-learning approach, demonstrating that a baseline model pre-trained with non-amputee data can be effectively refined with amputee data to build a personalised model for amputees. RESULTS The attention Bi-ConvGRU was evaluated on the benchmark Ninapro database and achieved an average accuracy of 88.7% on 18-gesture recognition, outperforming the state of the art by 6.7%.
CONCLUSIONS To our knowledge, the developed end-to-end deep learning model is the first of its kind to enable reliable predictive decision making in short time windows (160 ms). This latency is short enough to fall below physiological awareness, enabling the potential for real-time, online, and thus more intuitive bio-control of prosthetic devices for amputees.
Affiliation(s)
- Baao Xie
- School of Electrical and Information Engineering, Tianjin University, China; Eastern Institute of Advanced Study, China
- James Meng
- Lancashire Teaching Hospitals, NHS Foundation Trust, PR2 9HT, UK
- Baihua Li
- Department of Computer Science, Loughborough University, LE11 3TU, UK
- Andy Harland
- School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, LE11 3TU, UK
39
Fang B, Wang C, Sun F, Chen Z, Shan J, Liu H, Ding W, Liang W. Simultaneous sEMG Recognition of Gestures and Force Levels for Interaction With Prosthetic Hand. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2426-2436. PMID: 35981072; DOI: 10.1109/tnsre.2022.3199809.
Abstract
Natural interaction between a prosthetic hand and an upper-limb amputee is important and directly affects the rehabilitation effect and operating ability. Most previous studies focused only on the interaction of gestures and ignored force levels. This paper proposes a method for the simultaneous recognition of gestures and force levels for interaction with a prosthetic hand. A multitask classification algorithm based on a convolutional neural network (CNN) is designed to improve recognition efficiency while ensuring recognition accuracy. Offline experimental results show that the proposed algorithm outperforms other methods in both training speed and accuracy. To prove the effectiveness of the proposed method, a myoelectric prosthetic hand integrated with tactile sensors was developed, and surface electromyography (sEMG) datasets of healthy persons and amputees were built. Online experimental results show that an amputee can control the prosthetic hand to continuously make gestures at different force levels, and the effect of hand coordination on the amputee's hand perception is explored. The results show that gesture operation tasks at different force levels can be accurately recognized from sEMG signals, allowing comfortable real-time interaction with the prosthetic hand. This improves amputees' operating ability and relieves their muscle fatigue.
40
Xue B, Wu L, Liu A, Zhang X, Chen X, Chen X. Detecting the universal adversarial perturbations on high-density sEMG signals. Comput Biol Med 2022; 149:105978. PMID: 36037630; DOI: 10.1016/j.compbiomed.2022.105978.
Abstract
Myoelectric pattern recognition is a promising approach for upper limb neuroprosthetic control. Convolutional neural networks (CNN) are increasingly used for electromyography (EMG) signals collected by high-density electrodes because of their capacity to take full advantage of spatial information about muscle activity. However, CNN models have been found to be very vulnerable to well-designed, tiny perturbations, such as the universal adversarial perturbation (UAP). As shown in this work, a CNN-based myoelectric pattern recognition method can achieve a classification accuracy of more than 90%, but less than 20% after the attack. This type of attack poses a serious security concern for prosthetic control. To the best of our knowledge, there is no prior study on detecting adversarial attacks on myoelectric control systems. In this paper, a correlation feature based on the Chebyshev distance between adjacent channels is proposed to detect attacks on EMG signals, serving as an early warning and defense against adversarial attacks. The performance of the detection framework is assessed on two high-density EMG datasets. The results show that our method achieves detection rates of 91.39% and 93.87% on the two datasets with a latency of no more than 2 ms, which will facilitate the security of muscle-computer interfaces.
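The adjacent-channel intuition behind this detector can be illustrated with a minimal sketch (synthetic data; the channel count, exact feature construction, and detection threshold are illustrative assumptions, not the paper's implementation): clean high-density sEMG is spatially smooth across neighbouring electrodes, so a broadband adversarial perturbation inflates the Chebyshev (L-infinity) distance between adjacent channels.

```python
import numpy as np

def chebyshev_adjacent(window: np.ndarray) -> np.ndarray:
    """Chebyshev (L-infinity) distance between each pair of adjacent
    channels in a (channels x samples) sEMG window."""
    return np.max(np.abs(window[:-1] - window[1:]), axis=1)

rng = np.random.default_rng(0)
base = rng.normal(size=512)                            # shared muscle activity
clean = base + 0.05 * rng.normal(size=(8, 512))        # 8 spatially smooth channels
perturbed = clean + rng.uniform(-0.5, 0.5, (8, 512))   # toy UAP-like perturbation

d_clean = chebyshev_adjacent(clean)      # small: neighbours nearly identical
d_pert = chebyshev_adjacent(perturbed)   # large: perturbation breaks smoothness
print(d_clean.mean() < d_pert.mean())    # → True
```

A real detector would threshold these distances (or a correlation statistic derived from them) per window, flagging inputs whose adjacent-channel distances exceed values seen on clean data.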
Affiliation(s)
- Bo Xue
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Le Wu
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Aiping Liu
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Xu Zhang
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Xiang Chen
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
- Xun Chen
- The School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
41
Yang X, Liu Y, Yin Z, Wang P, Deng P, Zhao Z, Liu H. Simultaneous Prediction of Wrist and Hand Motions via Wearable Ultrasound Sensing for Natural Control of Hand Prostheses. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2517-2527. PMID: 35947561; DOI: 10.1109/tnsre.2022.3197875.
Abstract
Simultaneous prediction of wrist and hand motions is essential for the natural interaction with hand prostheses. In this paper, we propose a novel multi-out Gaussian process (MOGP) model and a multi-task deep learning (MTDL) algorithm to achieve simultaneous prediction of wrist rotation (pronation/supination) and finger gestures for transradial amputees via a wearable ultrasound array. We target six finger gestures with concurrent wrist rotation in four transradial amputees. Results show that MOGP outperforms previously reported subclass discriminant analysis for both predictions of discrete finger gestures and continuous wrist rotation. Moreover, we find that MTDL has the potential to improve the accuracy of finger gesture prediction compared to MOGP and classification-specific deep learning, albeit at the expense of reducing the accuracy of wrist rotation prediction. Extended comparative analysis shows the superiority of ultrasound over surface electromyography. This paper prioritizes exploring the performance of wearable ultrasound on the simultaneous prediction of wrist and hand motions for transradial amputees, demonstrating the potential of ultrasound in future prosthetic control. Our ultrasound-based adaptive prosthetic control dataset (UltraPro) will be released to promote the development of the prosthetic community.
42
Ozdemir MA, Kisa DH, Guren O, Akan A. Hand gesture classification using time–frequency images and transfer learning based on CNN. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103787.
|
43
|
Malesevic N, Björkman A, Andersson GS, Cipriani C, Antfolk C. Evaluation of Simple Algorithms for Proportional Control of Prosthetic Hands Using Intramuscular Electromyography. Sensors (Basel) 2022; 22:5054. PMID: 35808549; PMCID: PMC9269860; DOI: 10.3390/s22135054.
Abstract
Although seemingly effortless, the control of the human hand is backed by an elaborate neuro-muscular mechanism. The end result is typically a smooth action with the precise positioning of the joints of the hand and an exerted force that can be modulated to enable precise interaction with the surroundings. Unfortunately, even the most sophisticated technology cannot replace such a comprehensive role but can offer only basic hand functionalities. This issue arises from the drawbacks of the prosthetic hand control strategies that commonly rely on surface EMG signals that contain a high level of noise, thus limiting accurate and robust multi-joint movement estimation. The use of intramuscular EMG results in higher quality signals which, in turn, lead to an improvement in prosthetic control performance. Here, we present the evaluation of fourteen common/well-known algorithms (mean absolute value, variance, slope sign change, zero crossing, Willison amplitude, waveform length, signal envelope, total signal energy, Teager energy in the time domain, Teager energy in the frequency domain, modified Teager energy, mean of signal frequencies, median of signal frequencies, and firing rate) for the direct and proportional control of a prosthetic hand. The method involves the estimation of the forces generated in the hand by using different algorithms applied to iEMG signals from our recently published database, and comparing them to the measured forces (ground truth). The results presented in this paper are intended to be used as a baseline performance metric for more advanced algorithms that will be made and tested using the same database.
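Several of the time-domain algorithms named above have standard textbook definitions; the following is a minimal sketch of four of them (mean absolute value, waveform length, zero crossing, slope sign change) using those standard definitions, not the authors' exact implementation or thresholds:

```python
import numpy as np

def mav(x):  # mean absolute value of the window
    return np.mean(np.abs(x))

def wl(x):   # waveform length: cumulative length of the signal path
    return np.sum(np.abs(np.diff(x)))

def zc(x, thresh=0.0):  # zero crossings whose amplitude step exceeds a noise threshold
    return np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > thresh))

def ssc(x, thresh=0.0):  # slope sign changes around each interior sample
    d1, d2 = x[1:-1] - x[:-2], x[1:-1] - x[2:]
    return np.sum((d1 * d2 > 0) & (np.maximum(np.abs(d1), np.abs(d2)) > thresh))

x = np.array([0.0, 1.0, -1.0, 2.0, -2.0, 1.0])  # toy iEMG window
print(mav(x), wl(x), zc(x), ssc(x))
```

For proportional control, each feature would be computed over sliding windows of the iEMG signal and mapped (e.g., linearly) to an estimated force, then compared against the measured ground-truth force as described in the abstract.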
Affiliation(s)
- Nebojsa Malesevic
- Department of Biomedical Engineering, Faculty of Engineering, Lund University, 223 63 Lund, Sweden
- Anders Björkman
- Department of Hand Surgery, Institute of Clinical Sciences, Sahlgrenska Academy, Sahlgrenska University Hospital, University of Gothenburg, 402 33 Gothenburg, Sweden
- Gert S Andersson
- Department of Clinical Neurophysiology, Skåne University Hospital, 223 63 Lund, Sweden
- Department of Clinical Sciences in Lund-Neurophysiology, Lund University, 223 63 Lund, Sweden
- Christian Cipriani
- The BioRobotics Institute, Scuola Superiore Sant'Anna, 56025 Pisa, Italy
- Christian Antfolk
- Department of Biomedical Engineering, Faculty of Engineering, Lund University, 223 63 Lund, Sweden
44
Rapid Detection of Cardiac Pathologies by Neural Networks Using ECG Signals (1D) and sECG Images (3D). Computation 2022. DOI: 10.3390/computation10070112.
Abstract
Cardiac pathologies are usually detected using one-dimensional electrocardiogram (ECG) signals or two-dimensional images. One-dimensional ECG signals can be represented in the time and frequency domains. However, diagnosis in practice faces difficulties such as the high cost of private health services or the time the public health system takes to refer a patient to a cardiologist. In addition, the variety of cardiac pathologies (more than 20 types) complicates diagnosis. Surface electrocardiography (sECG), by contrast, is a little-explored technique for this purpose; sECGs are three-dimensional images (two dimensions in space and one in time). In this study, the signals were first analyzed in one-dimensional format using neural networks and then, after transformation from one-dimensional to three-dimensional signals, analyzed in the same way. Two models based on LSTM and ResNet34 neural networks were developed, which showed high accuracy: 98.71% and 93.64%, respectively. This study aims to lay the basis for developing Decision Support Software (DSS) based on machine learning models.
45
Sandoval-Espino JA, Zamudio-Lara A, Marbán-Salgado JA, Escobedo-Alatorre JJ, Palillero-Sandoval O, Velásquez-Aguilar JG. Selection of the Best Set of Features for sEMG-Based Hand Gesture Recognition Applying a CNN Architecture. Sensors (Basel) 2022; 22:4972. PMID: 35808467; PMCID: PMC9269838; DOI: 10.3390/s22134972.
Abstract
The classification of surface myoelectric signals (sEMG) remains a great challenge for implementation in an electromechanical hand prosthesis, due to their nonlinear and stochastic nature and the great difference between models applied offline and online. In this work, we present the selection of the feature set that gave the best results for classifying this type of signal. To compare results, the NinaPro DB2 and DB3 databases were used, which contain information on 50 different movements performed by 40 healthy subjects and 11 amputated subjects, respectively. The sEMG of each subject was acquired through 12 channels in a bipolar configuration. Classification was carried out with a convolutional neural network (CNN), comparing four sets of features extracted in the time domain: three that have shown good performance in previous works and one used for the first time to train this type of network. Set one comprises six time-domain features (TD1); set two has 10 time-domain features (TD2), including the autoregression model (AR); the third set has two time-domain features derived from spectral moments (TD-PSD1); and the fourth set of five features also captures the power spectrum of the signal, obtained in the time domain (TD-PSD2). The selected features in each set were organized in four different ways to form the training images. The results show that the TD-PSD2 feature set performed best in all cases. With the proposed feature set and image formation, model accuracies increased by 8.16% and 8.56% for the DB2 and DB3 databases, respectively, compared to the current state of the art using these databases.
Affiliation(s)
- Jorge Arturo Sandoval-Espino
- Centro de Investigación en Ingeniería y Ciencias Aplicadas (CIICAp), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
- Alvaro Zamudio-Lara
- Centro de Investigación en Ingeniería y Ciencias Aplicadas (CIICAp), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
- José Antonio Marbán-Salgado
- Centro de Investigación en Ingeniería y Ciencias Aplicadas (CIICAp), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
- J. Jesús Escobedo-Alatorre
- Centro de Investigación en Ingeniería y Ciencias Aplicadas (CIICAp), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
- Omar Palillero-Sandoval
- Centro de Investigación en Ingeniería y Ciencias Aplicadas (CIICAp), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
- J. Guadalupe Velásquez-Aguilar
- Facultad de Ciencias Químicas e Ingeniería (FCQeI), Universidad Autónoma del Estado de Morelos, Cuernavaca 62209, Morelos, Mexico
46
Hu X, Song A, Wang J, Zeng H, Wei W. Finger Movement Recognition via High-Density Electromyography of Intrinsic and Extrinsic Hand Muscles. Sci Data 2022; 9:373. PMID: 35768439; PMCID: PMC9243097; DOI: 10.1038/s41597-022-01484-2.
Abstract
Surface electromyography (sEMG) is commonly used to observe motor neuronal activity within muscle fibers. However, decoding dexterous body movements from sEMG signals remains quite challenging. In this paper, we present a high-density sEMG (HD-sEMG) signal database comprising simultaneously recorded sEMG signals of intrinsic and extrinsic hand muscles. Specifically, twenty able-bodied participants performed 12 finger movements at two paces and in three arm postures. HD-sEMG signals were recorded with a 64-channel high-density grid placed on the back of the hand and an 8-channel armband around the forearm. A data glove was also used to record the finger joint angles. Synchronisation and reproducibility of the data collection from the HD-sEMG and glove sensors were ensured. The collected data samples were further employed for automated recognition of dexterous finger movements. The introduced dataset offers a new perspective on the synergy between the intrinsic and extrinsic hand muscles during dynamic finger movements. As this dataset was collected from multiple participants, it also provides a resource for exploring generalized models for finger movement decoding.
Affiliation(s)
- Xuhui Hu
- State Key Laboratory of Bioelectronics, Nanjing, China; Jiangsu Key Laboratory of Remote Measurement and Control, Nanjing, China; School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Aiguo Song
- State Key Laboratory of Bioelectronics, Nanjing, China; Jiangsu Key Laboratory of Remote Measurement and Control, Nanjing, China; School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Jianzhi Wang
- State Key Laboratory of Bioelectronics, Nanjing, China; Jiangsu Key Laboratory of Remote Measurement and Control, Nanjing, China; School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Hong Zeng
- State Key Laboratory of Bioelectronics, Nanjing, China; Jiangsu Key Laboratory of Remote Measurement and Control, Nanjing, China; School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Wentao Wei
- School of Design Arts and Media, Nanjing University of Science and Technology, Nanjing, China
47
Surface Electromyography Signal Recognition Based on Deep Learning for Human-Robot Interaction and Collaboration. J Intell Robot Syst 2022. DOI: 10.1007/s10846-022-01666-5.
48
Wang S, Huang L, Jiang D, Sun Y, Jiang G, Li J, Zou C, Fan H, Xie Y, Xiong H, Chen B. Improved Multi-Stream Convolutional Block Attention Module for sEMG-Based Gesture Recognition. Front Bioeng Biotechnol 2022; 10:909023. PMID: 35747495; PMCID: PMC9209772; DOI: 10.3389/fbioe.2022.909023.
Abstract
As a key technology for non-invasive human-machine interfaces that has received much attention in industry and academia, surface EMG (sEMG) signals display great potential and advantages in the field of human-machine collaboration. Currently, gesture recognition based on sEMG signals suffers from inadequate feature extraction, difficulty in distinguishing similar gestures, and low accuracy of multi-gesture recognition. To solve these problems, a new sEMG gesture recognition network called Multi-stream Convolutional Block Attention Module-Gate Recurrent Unit (MCBAM-GRU) is proposed. The network is a multi-stream attention network formed by embedding a GRU module based on CBAM. Fusing sEMG and accelerometer (ACC) signals further improves the accuracy of gesture recognition. Experimental results show that the proposed method performs excellently on the dataset collected in this paper, with a recognition accuracy of 94.1%, and achieves an accuracy of 89.7% on the Ninapro DB1 dataset. The system classifies 52 different gestures with high accuracy and a delay of less than 300 ms, showing excellent performance in terms of real-time human-computer interaction and flexibility of manipulator control.
Affiliation(s)
- Shudi Wang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Li Huang
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan, China
- Du Jiang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Ying Sun
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Guozhang Jiang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Jun Li
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Cejing Zou
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Hanwen Fan
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Yuanmin Xie
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Hegen Xiong
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Baojia Chen
- Hubei Key Laboratory of Hydroelectric Machinery Design and Maintenance, China Three Gorges University, Yichang, China
49
Fatayer A, Gao W, Fu Y. sEMG-based Gesture Recognition using Deep Learning from Noisy Labels. IEEE J Biomed Health Inform 2022; 26:4462-4473. PMID: 35653452; DOI: 10.1109/jbhi.2022.3179630.
Abstract
Gesture recognition for myoelectric prosthesis control using sparse multichannel surface electromyography (sEMG) is a challenging task, and from a Muscle-Computer Interface (MCI) standpoint, performance is still far from optimal. The design of a well-performing sEMG recognition system depends on the flexibility of the input-output function and the quality of the dataset. To improve MCI performance, we propose a novel gesture recognition framework that (i) enriches the spectral information of the sparse sEMG signals by constructing a fused map image (denoted sEMG-Map) that integrates a multiresolution decomposition (by means of orthogonal wavelets) of the raw signals, then relies on the capacity of a Convolutional Neural Network (CNN) to exploit the composite hierarchies in the constructed sEMG-Map input; and (ii) handles label noise with a data-centric method (denoted ALR-CNN) that synchronously refines falsely labeled samples and optimizes the CNN model based on two assumptions: first, that the accuracy of the deep model improves as training progresses; and second, that a set of successive learnable max-activated outputs of a well-performing deep model is a reliable estimator for motion detection in the muscle activation pattern. The proposed framework is evaluated on three large-scale public databases. The average classification accuracy is 95.50%, 95.85%, and 85.58% for NinaPro DB2, NinaPro DB7, and NinaPro DB3, respectively. The experimental results verify the effectiveness of the proposed method and show high accuracy.
|
50
|
Fu YL, Liang KC, Song W, Huang J. A hybrid approach to product prototype usability testing based on surface EMG images and convolutional neural network classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106870. [PMID: 35636360 DOI: 10.1016/j.cmpb.2022.106870] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 05/07/2022] [Accepted: 05/09/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVE It is common for employees to complain of muscle fatigue when resting in a reclined position in an office chair. To investigate the physical factors that influence resting comfort in a supine position, a newly designed product was used as the basis for a prototype experiment that tested its efficacy in use. Subjective questionnaires were combined with surface EMG measurements, and deep learning algorithms were used to identify body-part comfort, creating a hybrid approach to product usability testing. METHODS To facilitate the use of sEMG-based CNNs in human factors engineering, a subjective user assessment was first conducted, combining body mapping with an impact comfort scale to screen which body parts have a significant effect on comfort when using the prototype. A control group (did not use the prototype) and an experimental group (used the prototype) were then created, and the body parts with the most significant effects were measured using sEMG. After pre-processing the sEMG signal, sEMG feature maps were obtained from the mean power frequency (MPF), and linear regression was used to analyze the comfort effect. Finally, a CNN model was constructed, and the sEMG feature maps were used for training and testing. RESULTS The subjective assessment showed that 10 body parts had a significant effect on comfort, with the right and left sides of the neck having the highest effect (4.78). sEMG measurements were then performed on the left and right sternocleidomastoid (SCM) muscles. Linear analysis of the measurements showed that the control group had higher SCM fatigue than the experimental group, which also indicates that the experimental group had better comfort. The final CNN model was able to classify the four datasets with an accuracy of 0.99.
CONCLUSION The results show that the method is effective for studying physical comfort in the supine sitting position and that it can be used to validate the comfort of similar products and to inform design iterations of the prototype.
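The MPF-plus-linear-regression step described above can be sketched in a few lines: MPF is the power-weighted mean of the spectrum of an sEMG window, and its downward drift over successive windows is a standard fatigue indicator. This is a generic sketch, not the paper's exact processing chain; the function names (`mean_power_frequency`, `fatigue_slope`) and the non-overlapping-window scheme are assumptions made here for illustration.

```python
import numpy as np

def mean_power_frequency(window, fs):
    """Mean power frequency (MPF) of one sEMG window: the
    power-weighted average frequency of its spectrum."""
    power = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)

def fatigue_slope(signal, fs, win_len):
    """MPF per non-overlapping window, plus the linear-regression
    slope of MPF over time; a negative slope suggests developing
    fatigue, since MPF shifts downward as a muscle tires."""
    n = len(signal) // win_len
    mpf = np.array([mean_power_frequency(signal[i * win_len:(i + 1) * win_len], fs)
                    for i in range(n)])
    slope = np.polyfit(np.arange(n), mpf, 1)[0]  # Hz per window
    return mpf, slope
```

A sanity check: a pure 50 Hz sine sampled at 1 kHz has an MPF of 50 Hz, and a signal whose dominant frequency falls from window to window produces a negative slope.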
Affiliation(s)
- You-Lei Fu
- Fine Art and Design College, Quanzhou Normal University, Quanzhou 362000, China; Nanchang Institute of Technology, Nanchang 330044, China; Department of Design, National Taiwan Normal University, Taipei 106, Taiwan
| | - Kuei-Chia Liang
- Department of Design, National Taiwan Normal University, Taipei 106, Taiwan
| | - Wu Song
- College of Mechanical Engineering and Automation, Huaqiao University, Xiamen 361021, China.
| | - Jianlong Huang
- Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China; Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China; Key Laboratory of Intelligent Computing and Information Processing, Fujian Province University, Quanzhou 362000, China.
|