1
Liu W, Bai J. Meta-analysis of the quantitative assessment of lower extremity motor function in elderly individuals based on objective detection. J Neuroeng Rehabil 2024; 21:111. PMID: 38926890; PMCID: PMC11202321; DOI: 10.1186/s12984-024-01409-7.
Abstract
OBJECTIVE To avoid the bias introduced by traditional scale-based assessment, this study examined the accuracy, advantages, and disadvantages of different objective detection methods for evaluating lower extremity motor function in elderly individuals. METHODS PubMed, Web of Science, the Cochrane Library, and EMBASE were searched for studies on lower extremity motor function assessment in elderly individuals published in the past five years. The methodological quality of the included trials was assessed, and statistical analyses were performed, using RevMan 5.4.1 and Stata. RESULTS In total, 19 randomized controlled trials with 2626 participants were included. The meta-analysis showed that inertial measurement units (IMUs), motion sensors, 3D motion capture systems, and observational gait analysis detected statistically significant changes in step velocity and step length in elderly individuals (P < 0.00001), supporting their use as a standardized basis for assessing motor function in this population. Subgroup analysis showed significant heterogeneity in the assessment of step velocity [SMD = -0.98, 95% CI (-1.23, -0.72), I2 = 91.3%, P < 0.00001] and step length [SMD = -1.40, 95% CI (-1.77, -1.02), I2 = 86.4%, P < 0.00001]. However, the motion sensors (I2 = 9%, I2 = 0%) and 3D motion capture systems (I2 = 0%) showed low heterogeneity for step velocity and step length. Sensitivity analysis and the publication bias test indicated that the results were stable and reliable. CONCLUSION Observational gait analysis, motion sensors, 3D motion capture systems, and IMUs all play a role in evaluating step velocity and step length as characteristic parameters of lower extremity motor function in elderly individuals, with good accuracy and clinical value for preventing motor injury.
However, the high heterogeneity of observational gait analysis and IMUs suggests that different evaluation methods use different calculation formulas and indicators, so standardized indicators cannot yet be obtained in clinical applications. Multimodal quantitative evaluation should therefore be integrated.
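As an illustrative aside (not from the paper): the pooled effects reported above (SMD with 95% CI and I2) are typical outputs of a random-effects meta-analysis. A minimal DerSimonian-Laird pooling sketch might look like the following, where the per-study SMDs and variances are invented for illustration:

```python
import numpy as np

def dersimonian_laird(smd, var):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model; returns pooled SMD, 95% CI, and I^2 (%)."""
    smd, var = np.asarray(smd, float), np.asarray(var, float)
    w = 1.0 / var                          # fixed-effect weights
    fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - fixed) ** 2)     # Cochran's Q statistic
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = 1.0 / (var + tau2)              # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study SMDs and variances (not the review's data).
pooled, ci, i2 = dersimonian_laird([-1.1, -0.8, -1.0], [0.05, 0.04, 0.06])
print(pooled, ci, i2)
```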
Affiliation(s)
- Wen Liu
- Rehabilitation Medicine Center, The Second Affiliated Hospital and Yuying Children's Hospital, Wenzhou Medical University, Wenzhou, China
- Department of Spine and Spinal Cord Surgery, Beijing Boai Hospital, China Rehabilitation Research Centre, Beijing, China
- Jinzhu Bai
- Rehabilitation Medicine Center, The Second Affiliated Hospital and Yuying Children's Hospital, Wenzhou Medical University, Wenzhou, China
- Department of Spine and Spinal Cord Surgery, Beijing Boai Hospital, China Rehabilitation Research Centre, Beijing, China
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China
2
Lee H, Jiang M, Yang J, Yang Z, Zhao Q. Unveiling EMG semantics: a prototype-learning approach to generalizable gesture classification. J Neural Eng 2024; 21:036031. PMID: 38754410; DOI: 10.1088/1741-2552/ad4c98.
Abstract
Objective. Upper limb loss can profoundly impact an individual's quality of life, posing challenges to both physical capabilities and emotional well-being. To restore limb function by decoding electromyography (EMG) signals, we present a novel deep prototype learning method for accurate and generalizable EMG-based gesture classification. Existing methods suffer from limited generalization across subjects due to the diverse nature of individual muscle responses, impeding seamless applicability in broader populations. Approach. Leveraging deep prototype learning, we introduce a method that goes beyond direct output prediction: it matches new EMG inputs to a set of learned prototypes and predicts the corresponding labels. Main results. This methodology significantly enhances the model's classification performance and generalizability by discriminating subtle differences between gestures, making it more reliable and precise in real-world applications. Experiments on four Ninapro datasets suggest that our deep prototype learning classifier outperforms state-of-the-art methods in intra-subject and inter-subject gesture classification accuracy. Significance. These results validate the effectiveness of the proposed method and pave the way for future advancements in EMG gesture classification for upper limb prosthetics.
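As an illustrative aside (not the authors' code): the matching step described above labels a new input by its nearest class prototype. In the paper the prototypes are learned embeddings from a deep network; the toy sketch below uses hand-picked prototypes and raw feature vectors only to show the mechanism:

```python
import numpy as np

def prototype_predict(x, prototypes, labels):
    """Nearest-prototype classification: compare a feature vector x to
    one prototype per class (Euclidean distance) and return the label
    of the closest prototype."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(d))]

# Toy example: two hypothetical class prototypes in a 3-D feature space.
protos = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
labels = ["rest", "fist"]
print(prototype_predict(np.array([0.9, 1.1, 0.8]), protos, labels))  # fist
```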
Affiliation(s)
- Hunmin Lee
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Ming Jiang
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Jinhui Yang
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Zhi Yang
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, MN, United States of America
- Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota, Twin Cities, MN, United States of America
3
Wei Y, Lee C, Han S, Kim A. Enhancing visual communication through representation learning. Front Neurosci 2024; 18:1368733. PMID: 38859924; PMCID: PMC11163107; DOI: 10.3389/fnins.2024.1368733.
Abstract
Introduction. This research addresses challenges in model construction for the Extended Mind for the Design of the Human Environment. Specifically, we employ ResNet-50, LSTM, and object tracking algorithms to achieve collaborative construction of high-quality virtual assets, image optimization, and intelligent agents, providing users with a virtual universe experience in the context of visual communication. Methods. First, we use ResNet-50 as a convolutional neural network model for generating virtual assets, including objects, characters, and environments. By training and fine-tuning ResNet-50, we can generate virtual elements with high realism and rich diversity. Next, we use an LSTM (long short-term memory) network for image processing and analysis of the generated virtual assets. LSTM can capture contextual information in image sequences and refine the details and appearance of the images, further enhancing the quality and realism of the generated assets. Finally, we adopt object tracking algorithms to track and analyze the movement and behavior of virtual entities within the virtual environment, accurately tracking the positions and trajectories of objects, characters, and other elements to allow realistic interactions and dynamic responses. Results and discussion. By integrating ResNet-50, LSTM, and object tracking, we can generate realistic virtual assets, optimize image details, track and analyze virtual entities, and train intelligent agents, providing users with a more immersive and interactive visual-communication-driven metaverse experience. These solutions have important applications in the Extended Mind for the Design of the Human Environment, enabling the creation of more realistic and interactive virtual worlds.
Affiliation(s)
- YuHan Wei
- Dankook University, Yongin-si, Gyeonggi-do, Republic of Korea
4
Li W, Zhang X, Shi P, Li S, Li P, Yu H. Across Sessions and Subjects Domain Adaptation for Building Robust Myoelectric Interface. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2005-2015. PMID: 38147425; DOI: 10.1109/tnsre.2023.3347540.
Abstract
Gesture interaction via surface electromyography (sEMG) signals is a promising approach for advanced human-computer interaction systems. However, improving the performance of the myoelectric interface is challenging due to the domain shift caused by the signal's inherent variability. To enhance the interface's robustness, we propose a novel adaptive information fusion neural network (AIFNN) framework, which can effectively reduce the effects of multiple scenarios. Specifically, domain adversarial training is established to inhibit the shared network's weights from exploiting domain-specific representations, thus allowing the extraction of domain-invariant features. Classification loss, domain divergence loss, and domain discrimination loss are employed together, improving classification performance while reducing distribution mismatches between the two domains. To simulate practical use of a myoelectric interface, experiments were carried out in three scenarios (intra-session, inter-session, and inter-subject). Ten non-disabled subjects were recruited to perform sixteen gestures on ten consecutive days. The experimental results indicated that AIFNN outperformed two other state-of-the-art transfer learning approaches, namely fine-tuning (FT) and the domain adversarial neural network (DANN). This study demonstrates the capability of AIFNN to maintain robustness over time and generalize across users in practical myoelectric interface implementations; these findings could serve as a foundation for future deployments.
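As an illustrative aside (not the authors' implementation): domain adversarial training of the kind described above is commonly realized with a gradient reversal layer, which passes features through unchanged on the forward pass but flips the sign of the domain-discriminator gradient flowing back into the shared encoder. A minimal numpy sketch of that mechanism, with the lambda weighting as an assumed hyperparameter:

```python
import numpy as np

class GradReverse:
    """Gradient-reversal layer: identity in the forward pass; in the
    backward pass the gradient is multiplied by -lam, so the shared
    encoder is pushed to make the domains indistinguishable."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad       # reversed gradient to the encoder

grl = GradReverse(lam=0.5)
print(grl.backward(np.array([0.2, -0.4])))  # [-0.1  0.2]
```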
5
Lin C, Zhang X. Fusion inception and transformer network for continuous estimation of finger kinematics from surface electromyography. Front Neurorobot 2024; 18:1305605. PMID: 38765870; PMCID: PMC11100415; DOI: 10.3389/fnbot.2024.1305605.
Abstract
Decoding surface electromyography (sEMG) to recognize human movement intentions enables stable, natural, and consistent control in human-computer interaction (HCI). In this paper, we present a novel deep learning (DL) model, the fusion inception and transformer network (FIT), which effectively models both local and global information in sequence data by fully leveraging the capabilities of Inception and Transformer networks. From the publicly available Ninapro dataset, we selected surface EMG signals for six typical hand grasping maneuvers in 10 subjects and predicted the values of the 10 most important joint angles in the hand. Our model's performance, assessed by Pearson's correlation coefficient (PCC), root mean square error (RMSE), and R-squared (R2), was compared with a temporal convolutional network (TCN), a long short-term memory network (LSTM), and a bidirectional encoder representations from transformers (BERT) model. We also report the training and inference times of the models. The results show that FIT is the most performant, with excellent estimation accuracy and low computational cost. Our model contributes to the development of HCI technology and has significant practical value.
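As an illustrative aside (not from the paper): the three evaluation metrics named above are standard and can be computed for a predicted joint-angle trajectory as follows:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """PCC, RMSE and R^2 for a predicted joint-angle trajectory."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    pcc = np.corrcoef(y_true, y_pred)[0, 1]          # Pearson correlation
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root mean square error
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return pcc, rmse, 1.0 - ss_res / ss_tot          # R-squared
```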
Affiliation(s)
- Chuang Lin
- School of Information Science and Technology, Dalian Maritime University, Dalian, China
6
Gao RZ, Lee PS, Ravi A, Ren CL, Dickerson CR, Tung JY. Hybrid Soft-Rigid Active Prosthetics Laboratory Exercise for Hands-On Biomechanical and Biomedical Engineering Education. J Biomech Eng 2024; 146:051007. PMID: 38456810; DOI: 10.1115/1.4065008.
Abstract
This paper introduces a hands-on laboratory exercise focused on assembling and testing a hybrid soft-rigid active finger prosthesis for biomechanical and biomedical engineering (BME) education. The activity focuses on the design of a myoelectric finger prosthesis, integrating mechanical, electrical, sensor (i.e., inertial measurement unit (IMU), electromyography (EMG)), pneumatic, and embedded software concepts. We expose students to a hybrid soft-rigid robotic system, offering a flexible, modifiable lab activity that can be tailored to instructors' needs and curriculum requirements. All necessary files are made available in an open-access format for implementation. The off-the-shelf components are all purchasable through global vendors (e.g., DigiKey Electronics, McMaster-Carr, Amazon), cost approximately USD 100 per kit, and are largely reusable. We piloted this lab with 40 undergraduate engineering students in an upper-year neural and rehabilitation engineering elective course and received very positive feedback. Rooted in real-world applications, the lab is an engaging pedagogical platform, as students are eager to learn about systems with tangible impacts. Extensions to the lab, such as follow-up clinical (e.g., prosthetist) and/or technical (e.g., user-device interface design) discussions, are a natural means to deepen and promote interdisciplinary hands-on learning. In conclusion, the lab session provides an engaging journey through the lifecycle of the prosthetic finger research and design process, from conceptualization and creation to the final assembly and testing phases.
Affiliation(s)
- Run Ze Gao
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W., E5-3008, Waterloo, ON N2L 3G1, Canada
- Peter S Lee
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W., E5-3008, Waterloo, ON N2L 3G1, Canada
- Aravind Ravi
- Department of Systems Design Engineering, University of Waterloo, 200 University Ave W., E7-3443, Waterloo, ON N2L 3G1, Canada
- Carolyn L Ren
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Ave W., E3-4105, Waterloo, ON N2L 3G1, Canada
- Clark R Dickerson
- Department of Kinesiology and Health Sciences, University of Waterloo, 200 University Ave W., EXP 2684, Waterloo, ON N2L 3G1, Canada
- James Y Tung
- Department of Systems Design Engineering, University of Waterloo, 200 University Ave W., E7-3428, Waterloo, ON N2L 3G1, Canada
7
Hu Z, Wang S, Ou C, Ge A, Li X. Study on Gesture Recognition Method with Two-Stream Residual Network Fusing sEMG Signals and Acceleration Signals. Sensors (Basel) 2024; 24:2702. PMID: 38732808; PMCID: PMC11085498; DOI: 10.3390/s24092702.
Abstract
Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
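As an illustrative aside (not the authors' code): an attention mechanism placed after global average pooling, as described above, typically resembles squeeze-and-excitation-style channel attention. The sketch below uses random weights and invented dimensions purely to show the mechanism of strengthening relevant channels and weakening irrelevant ones:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention: global average
    pooling over time, a two-layer bottleneck, then a sigmoid gate that
    rescales each channel of a (time, channels) feature map."""
    squeeze = features.mean(axis=0)               # GAP over time -> (C,)
    hidden = np.maximum(0.0, squeeze @ w1)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate in (0, 1)
    return features * gate                        # reweight each channel

# Hypothetical feature map and bottleneck weights (C=8, bottleneck 4).
rng = np.random.default_rng(0)
feats = rng.standard_normal((128, 8))
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
out = channel_attention(feats, w1, w2)
```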
Affiliation(s)
- Zhigang Hu
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China
- Shen Wang
- School of Mechanical and Electrical Engineering, Henan University of Science and Technology, Luoyang 471003, China
- Cuisi Ou
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China
- Aoru Ge
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang 471023, China
- Xiangpan Li
- School of Mechanical and Electrical Engineering, Henan University of Science and Technology, Luoyang 471003, China
8
Xu T, Zhao K, Hu Y, Li L, Wang W, Wang F, Zhou Y, Li J. Transferable non-invasive modal fusion-transformer (NIMFT) for end-to-end hand gesture recognition. J Neural Eng 2024; 21:026034. PMID: 38565124; DOI: 10.1088/1741-2552/ad39a5.
Abstract
Objective. Recent studies have shown that integrating inertial measurement unit (IMU) signals with surface electromyography (sEMG) can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject generalization. To address these challenges, this study aims to develop an end-to-end, inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data. Approach. The proposed non-invasive modal fusion-transformer (NIMFT) model uses 1D convolutional neural network-based patch embedding for local information extraction and a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The architecture undergoes detailed ablation studies after hyperparameter tuning. Transfer learning is employed by fine-tuning a pre-trained model on a new subject, and the fine-tuned model is compared with subject-specific models. Additionally, the performance of NIMFT is compared to state-of-the-art fusion models. Main results. The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. Compared with subject-specific models, the fine-tuned model exhibited the highest average accuracy improvement, 2.26%, reaching a final accuracy of 96.13%. Moreover, the NIMFT model outperformed the latest modal fusion models of similar scale in accuracy, recall, precision, and F1-score. Significance. NIMFT is a novel end-to-end HGR model that uses a non-invasive MCA mechanism to integrate long-range intermodal information effectively. It demonstrates superior performance in inter-subject experiments and, through transfer learning, offers higher training efficiency and accuracy than subject-specific approaches.
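As an illustrative aside (not the paper's architecture): the core of a cross-attention fusion stage like the MCA described above is scaled dot-product attention in which one modality's tokens form the queries and the other modality's tokens form the keys and values. A single-head numpy sketch, with token counts and dimensions invented for illustration (the paper uses multiple heads):

```python
import numpy as np

def cross_attention(q_src, kv_src, wq, wk, wv):
    """Single-head cross-attention: tokens from one modality (q_src,
    e.g. sEMG patches) attend to tokens from another (kv_src, e.g. ACC
    patches), fusing the modalities without concatenating raw signals."""
    q, k, v = q_src @ wq, kv_src @ wk, kv_src @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])            # scaled dot products
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)           # row-wise softmax
    return attn @ v                                    # fused features

# Hypothetical patch embeddings: 10 sEMG tokens attend to 6 ACC tokens.
rng = np.random.default_rng(1)
emg_tokens = rng.standard_normal((10, 16))
acc_tokens = rng.standard_normal((6, 16))
wq, wk, wv = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
fused = cross_attention(emg_tokens, acc_tokens, wq, wk, wv)
```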
Affiliation(s)
- Tianxiang Xu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Kunkun Zhao
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Yuxiang Hu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Liang Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Wei Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Fulin Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Nanjing PANDA Electronics Equipment Co., Ltd, Nanjing 210033, People's Republic of China
- Yuxuan Zhou
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Jianqing Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
9
Shaw HO, Devin KM, Tang J, Jiang L. Evaluation of Hand Action Classification Performance Using Machine Learning Based on Signals from Two sEMG Electrodes. Sensors (Basel) 2024; 24:2383. PMID: 38676000; PMCID: PMC11054923; DOI: 10.3390/s24082383.
Abstract
Classification-based myoelectric control has attracted significant interest in recent years, leading to prosthetic hands with advanced functionality, such as multi-grip hands. Thus far, high classification accuracies have been achieved by increasing the number of surface electromyography (sEMG) electrodes or adding other sensing mechanisms. While many prescribed myoelectric hands still adopt two-electrode sEMG systems, detailed studies on signal processing and classification performance are still lacking. In this study, nine able-bodied participants were recruited to perform six typical hand actions, from which sEMG signals from two electrodes were acquired using a Delsys Trigno Research+ acquisition system. Signal processing and machine learning algorithms, specifically, linear discriminant analysis (LDA), k-nearest neighbors (KNN), and support vector machines (SVM), were used to study classification accuracies. Overall classification accuracy of 93 ± 2%, action-specific accuracy of 97 ± 2%, and F1-score of 87 ± 7% were achieved, which are comparable with those reported from multi-electrode systems. The highest accuracies were achieved using SVM algorithm compared to LDA and KNN algorithms. A logarithmic relationship between classification accuracy and number of features was revealed, which plateaued at five features. These comprehensive findings may potentially contribute to signal processing and machine learning strategies for commonly prescribed myoelectric hand systems with two sEMG electrodes to further improve functionality.
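As an illustrative aside (not the study's pipeline): comparing LDA, KNN, and SVM on windowed sEMG features, as described above, can be sketched with scikit-learn. The synthetic features, hyperparameters (k = 5, RBF kernel), and train/test split below are assumptions for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y, seed=0):
    """Train LDA, KNN and SVM on per-window feature vectors and report
    held-out classification accuracy for each model."""
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    models = {"LDA": LinearDiscriminantAnalysis(),
              "KNN": KNeighborsClassifier(n_neighbors=5),
              "SVM": SVC(kernel="rbf")}
    return {name: m.fit(Xtr, ytr).score(Xte, yte)
            for name, m in models.items()}

# Synthetic stand-in for per-window features (e.g. RMS/MAV per electrode).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(2, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
print(compare_classifiers(X, y))
```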
Affiliation(s)
- Hope O. Shaw
- School of Engineering, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton SO17 1BJ, UK
- Liudi Jiang
- School of Engineering, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton SO17 1BJ, UK
10
Moslhi AM, Aly HH, ElMessiery M. The Impact of Feature Extraction on Classification Accuracy Examined by Employing a Signal Transformer to Classify Hand Gestures Using Surface Electromyography Signals. Sensors (Basel) 2024; 24:1259. PMID: 38400416; PMCID: PMC10893156; DOI: 10.3390/s24041259.
Abstract
Interest in developing techniques for acquiring and decoding biological signals is on the rise in the research community. This interest spans various applications, with a particular focus on prosthetic control and rehabilitation, where achieving precise hand gesture recognition from surface electromyography signals is crucial due to the complexity and variability of surface electromyography data. Advanced signal processing and data analysis techniques are required to effectively extract meaningful information from these signals. In our study, we utilized three datasets, NinaPro Database 1, CapgMyo Database A, and CapgMyo Database B, chosen for their open-source availability and established role in evaluating surface electromyography classifiers. Hand gesture recognition from surface electromyography signals draws inspiration from image classification algorithms, motivating the introduction and development of our novel Signal Transformer. We systematically investigated two feature extraction techniques for surface electromyography signals: the fast Fourier transform and wavelet-based feature extraction. Our study demonstrated significant advancements in surface electromyography signal classification, particularly on NinaPro Database 1 and CapgMyo Database A, surpassing existing results in the literature. The Signal Transformer outperformed traditional convolutional neural networks by capturing structural details and incorporating global information from image-like signals through robust basis functions. Additionally, an attention mechanism within the Signal Transformer highlighted the significance of individual electrode readings, improving classification accuracy. These findings underscore the potential of the Signal Transformer as a powerful tool for precise and effective surface electromyography signal classification, with promising applications in prosthetic control and rehabilitation.
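As an illustrative aside (not the paper's feature pipeline): FFT-based feature extraction for one sEMG window, of the kind compared above, can be as simple as band-averaged spectral magnitudes. The sampling rate, window length, and band count below are assumptions for illustration:

```python
import numpy as np

def fft_features(window, n_bands=8):
    """Band-averaged FFT magnitudes for one sEMG window: compute the
    one-sided magnitude spectrum, split it into n_bands equal-ish bands,
    and average each band into a single feature."""
    spec = np.abs(np.fft.rfft(window))
    return np.array([b.mean() for b in np.array_split(spec, n_bands)])

# Synthetic 0.25 s window at 1 kHz with a dominant 80 Hz component.
t = np.arange(0, 0.25, 1 / 1000.0)
emg = np.sin(2 * np.pi * 80 * t)
features = fft_features(emg)
print(features.round(2))
```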
Affiliation(s)
- Aly Medhat Moslhi
- Faculty of Engineering, The Arab Academy for Science, Technology & Maritime Transport, Smart Village Campus, Giza P.O. Box 2033, Egypt
- Hesham H. Aly
- Faculty of Engineering, The Arab Academy for Science, Technology & Maritime Transport, Smart Village Campus, Giza P.O. Box 2033, Egypt
- Medhat ElMessiery
- Faculty of Engineering, Cairo University, Giza P.O. Box 2033, Egypt
11
Yu G, Deng Z, Bao Z, Zhang Y, He B. Gesture Classification in Electromyography Signals for Real-Time Prosthetic Hand Control Using a Convolutional Neural Network-Enhanced Channel Attention Model. Bioengineering (Basel) 2023; 10:1324. PMID: 38002448; PMCID: PMC10669079; DOI: 10.3390/bioengineering10111324.
Abstract
Accurate, real-time gesture recognition is required for the autonomous operation of prosthetic hand devices. This study employs a convolutional neural network-enhanced channel attention (CNN-ECA) model as a unique approach to surface electromyography (sEMG) gesture recognition. The ECA module improves the model's capacity to extract features and focus on critical information in the sEMG data, equipping sEMG-controlled prosthetic hand systems with both accurate gesture detection and real-time control. Furthermore, we suggest a preprocessing strategy for extracting envelope signals that incorporates Butterworth low-pass filtering and the fast Hilbert transform (FHT), which can successfully reduce noise interference and capture essential physiological information. Finally, a majority-voting window technique is adopted to smooth the prediction results, further improving the accuracy and stability of the model. Overall, our multi-layered convolutional neural network model, in conjunction with envelope signal extraction and attention mechanisms, offers a promising and innovative approach for real-time prosthetic hand control, allowing precise fine motor actions.
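As an illustrative aside (not the authors' exact preprocessing): the two ingredients named above, Hilbert-based envelope extraction with Butterworth low-pass filtering and majority voting over window predictions, can be sketched with scipy. The ordering (Hilbert amplitude first, then low-pass), filter order, and cutoff are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def emg_envelope(raw, fs=2000, cutoff=5):
    """Envelope extraction: Hilbert analytic amplitude followed by a
    4th-order zero-phase Butterworth low-pass at `cutoff` Hz."""
    amplitude = np.abs(hilbert(raw))               # instantaneous amplitude
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, amplitude)               # smooth, zero-phase

def majority_vote(window_preds):
    """Smooth a run of per-window gesture predictions by majority vote."""
    vals, counts = np.unique(window_preds, return_counts=True)
    return vals[np.argmax(counts)]

# Synthetic 1 s, 100 Hz oscillation standing in for an sEMG burst.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
env = emg_envelope(np.sin(2 * np.pi * 100 * t), fs=fs)
```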
Collapse
Affiliation(s)
- Guangjie Yu
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Ziting Deng
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Zhenchen Bao
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Yue Zhang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou 350108, China
- Bingwei He
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou 350108, China
12
Große Sundrup J, Mombaur K. On the Distribution of Muscle Signals: A Method for Distance-Based Classification of Human Gestures. Sensors (Basel) 2023; 23:7441. [PMID: 37687896] [PMCID: PMC10490578] [DOI: 10.3390/s23177441]
Abstract
We investigate the distribution of muscle signatures of human hand gestures under Dynamic Time Warping. For this, we present a k-nearest-neighbors classifier that uses Dynamic Time Warping as the distance estimate. To understand the resulting classification performance, we investigate the distribution of the recorded samples and derive a method for assessing the separability of a set of gestures. In addition, we present and evaluate two approaches with reduced real-time computational cost with regard to their effectiveness and the mechanics behind them. We further investigate the impact of different parameters on practical usability and background rejection, allowing fine-tuning of the induced classification procedure.
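The kNN-DTW classifier the abstract describes can be sketched in a few lines: a classic dynamic-programming DTW distance and a 1-nearest-neighbour decision rule. This is a toy illustration with made-up template sequences, not the authors' code.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn_dtw_predict(train, query):
    """train: list of (sequence, label); return the label of the DTW-nearest sequence."""
    return min(train, key=lambda s: dtw_distance(s[0], query))[1]

# Hypothetical gesture templates; the query is a time-stretched "open" pattern
templates = [([0, 1, 2, 3, 2, 1, 0], "open"),
             ([0, 3, 0, 3, 0, 3, 0], "pinch")]
print(knn_dtw_predict(templates, [0, 1, 1, 2, 3, 3, 2, 1, 0]))  # → open
```

DTW's warping makes the match robust to differences in gesture speed, which is why it suits muscle-signal templates of varying duration.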
Collapse
Affiliation(s)
- Jonas Große Sundrup
- Canada Excellence Research Chair Human-Centred Robotics and Machine Intelligence, Systems Design Engineering & Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Katja Mombaur
- Canada Excellence Research Chair Human-Centred Robotics and Machine Intelligence, Systems Design Engineering & Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Optimization and Biomechanics for Human-Centred Robotics, Institute of Anthropomatics and Robotics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
13
Montazerin M, Rahimian E, Naderkhani F, Atashzar SF, Yanushkevich S, Mohammadi A. Transformer-based hand gesture recognition from instantaneous to fused neural decomposition of high-density EMG signals. Sci Rep 2023; 13:11000. [PMID: 37419881] [PMCID: PMC10329032] [DOI: 10.1038/s41598-023-36490-w]
Abstract
Designing efficient and labor-saving prosthetic hands requires powerful hand gesture recognition algorithms that achieve high accuracy with limited complexity and latency. In this context, the paper proposes a Compact Transformer-based Hand Gesture Recognition framework, referred to as [Formula: see text], which employs a vision transformer network to conduct hand gesture recognition using high-density surface EMG (HD-sEMG) signals. Taking advantage of the attention mechanism incorporated into transformer architectures, the proposed [Formula: see text] framework overcomes major constraints of most existing deep learning models, such as high model complexity, the need for feature engineering, the inability to consider both temporal and spatial information in HD-sEMG signals, and the need for large numbers of training samples. The attention mechanism identifies similarities among different data segments with a greater capacity for parallel computation and addresses memory limitations when dealing with inputs of long sequence lengths. [Formula: see text] can be trained from scratch without any need for transfer learning and can simultaneously extract both temporal and spatial features of HD-sEMG data. Additionally, the [Formula: see text] framework can perform instantaneous recognition using an sEMG image spatially composed from HD-sEMG signals. A variant of [Formula: see text] is also designed to incorporate microscopic neural-drive information in the form of motor unit spike trains (MUSTs) extracted from HD-sEMG signals using blind source separation (BSS). This variant is combined with its baseline version via a hybrid architecture to evaluate the potential of fusing macroscopic and microscopic neural-drive information. The utilized HD-sEMG dataset involves 128 electrodes that collect signals related to 65 isometric hand gestures of 20 subjects.
The proposed [Formula: see text] framework is applied to 31.25, 62.5, 125, and 250 ms window sizes of this dataset, utilizing 32, 64, and 128 electrode channels. Results are obtained via 5-fold cross-validation by first applying the framework to each subject's data separately and then averaging the accuracies over all subjects. The average accuracy using 32 electrodes and a 31.25 ms window is 86.23%, increasing gradually to 91.98% for 128 electrodes and a 250 ms window. [Formula: see text] achieves an accuracy of 89.13% for instantaneous recognition based on a single frame of the HD-sEMG image. The proposed model is statistically compared with a 3D convolutional neural network (CNN) and two different variants of support vector machine (SVM) and linear discriminant analysis (LDA) models. The accuracy results for each model are paired with precision, recall, F1 score, required memory, and train/test times. The results corroborate the effectiveness of the proposed [Formula: see text] framework compared to its counterparts.
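The window sizes quoted above imply slicing the continuous HD-sEMG stream into fixed-length segments before classification. A minimal sketch follows; the 2048 Hz sampling rate and the `segment` helper are assumptions, chosen so that a 31.25 ms window corresponds to 64 samples.

```python
import numpy as np

def segment(x, fs, win_ms):
    """Split (samples, channels) data into non-overlapping windows of win_ms milliseconds."""
    win = int(round(fs * win_ms / 1000.0))      # window length in samples
    n_win = x.shape[0] // win                   # drop the incomplete tail window
    return x[: n_win * win].reshape(n_win, win, x.shape[1])

fs = 2048.0                                     # assumed sampling rate
x = np.random.randn(int(fs), 32)                # 1 s of synthetic 32-channel data
w = segment(x, fs, 31.25)
print(w.shape)                                  # → (32, 64, 32): windows, samples, channels
```

Each (samples, channels) window can then be treated as one sEMG "image" for a vision-transformer-style classifier.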
Affiliation(s)
- Mansooreh Montazerin
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Elahe Rahimian
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- Farnoosh Naderkhani
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
- S Farokh Atashzar
- Departments of Electrical and Computer Engineering, Mechanical and Aerospace Engineering, New York University (NYU), New York, 10003, NY, USA
- NYU Center for Urban Science and Progress (CUSP), NYU WIRELESS, New York University (NYU), New York, 10003, NY, USA
- Svetlana Yanushkevich
- Biometric Technologies Laboratory, Department of Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Arash Mohammadi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada
14
Wei W, Tan F, Zhang H, Mao H, Fu M, Samuel OW, Li G. Surface electromyogram, kinematic, and kinetic dataset of lower limb walking for movement intent recognition. Sci Data 2023; 10:358. [PMID: 37280249] [DOI: 10.1038/s41597-023-02263-3]
Abstract
Surface electromyogram (sEMG) offers a rich set of motor information for decoding limb motion intention, serving as a control input to intelligent human-machine synergy systems (IHMSS). Despite growing interest in IHMSS, the publicly available datasets are limited and can hardly meet the growing demands of researchers. This study presents a novel lower limb motion dataset (designated SIAT-LLMD), comprising sEMG, kinematic, and kinetic data with corresponding labels acquired from 40 healthy humans during 16 movements. The kinematic and kinetic data were collected using a motion capture system and six-dimensional force platforms and processed with OpenSim software. The sEMG data were recorded using nine wireless sensors placed on the thigh and calf muscles of the subjects' left limb. In addition, SIAT-LLMD provides labels that classify the different movements and gait phases. Analysis of the dataset verified its synchronization and reproducibility, and code for effective data processing is provided. The proposed dataset can serve as a new resource for exploring novel algorithms and models for characterizing lower limb movements.
Affiliation(s)
- Wenhao Wei
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- Fangning Tan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- Hang Zhang
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- He Mao
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- Menglong Fu
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- Oluwarotimi Williams Samuel
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, 518055, China
- School of Computing and Engineering, University of Derby, Derby, DE22 3AW, UK
- Data Science Research Center, University of Derby, Derby, DE22 3AW, UK
- Guanglin Li
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS), and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
15
Liang Z, Wang X, Guo J, Ye Y, Zhang H, Xie L, Tao K, Zeng W, Yin E, Ji B. A Wireless, High-Quality, Soft and Portable Wrist-Worn System for sEMG Signal Detection. Micromachines (Basel) 2023; 14:1085. [PMID: 37241708] [DOI: 10.3390/mi14051085]
Abstract
The study of wearable systems based on surface electromyography (sEMG) signals has attracted widespread attention and plays an important role in human-computer interaction, physiological state monitoring, and other fields. Traditional sEMG acquisition systems primarily target body parts that do not fit daily wearing habits, such as the arms, legs, and face. In addition, some systems rely on wired connections, which limits their flexibility and user-friendliness. This paper presents a novel wrist-worn system with four sEMG acquisition channels and a high common-mode rejection ratio (CMRR) greater than 120 dB. The circuit has an overall gain of 2492 V/V and a bandwidth of 15-500 Hz. It is fabricated using flexible circuit technologies and encapsulated in a soft, skin-friendly silicone gel. The system acquires sEMG signals at a sampling rate above 2000 Hz with 16-bit resolution and transmits data to a smart device via low-power Bluetooth. Muscle fatigue detection and four-class gesture recognition experiments (accuracy greater than 95%) were conducted to validate its practicality. The system has potential applications in natural and intuitive human-computer interaction and physiological state monitoring.
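The quoted specs (16-bit resolution, 2492 V/V gain) determine the input-referred resolution of the front end. A back-of-envelope sketch follows; the 3.3 V ADC full-scale range is an assumption for illustration, not a figure from the paper.

```python
v_ref = 3.3                        # assumed ADC full-scale range in volts
bits = 16                          # resolution reported in the abstract
gain = 2492.0                      # overall analogue gain reported in the abstract

lsb_at_adc = v_ref / (2 ** bits)   # smallest voltage step at the ADC input
lsb_at_skin = lsb_at_adc / gain    # referred back through the amplifier gain
print(f"{lsb_at_skin * 1e9:.1f} nV per count")   # → 20.2 nV per count
```

Tens of nanovolts per count is far below typical sEMG amplitudes (tens of microvolts to millivolts), which is consistent with the system's claim of high-quality acquisition.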
Affiliation(s)
- Zekai Liang
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Innovation Center NPU Chongqing, Northwestern Polytechnical University, Chongqing 400000, China
- Xuanqi Wang
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Innovation Center NPU Chongqing, Northwestern Polytechnical University, Chongqing 400000, China
- Jun Guo
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Innovation Center NPU Chongqing, Northwestern Polytechnical University, Chongqing 400000, China
- Yuanming Ye
- Queen Mary University of London Engineering School, Northwestern Polytechnical University, Xi'an 710072, China
- Haoyang Zhang
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, China
- Liang Xie
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, China
- Kai Tao
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Wen Zeng
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Erwei Yin
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, China
- Bowen Ji
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
- Ministry of Education Key Laboratory of Micro and Nano Systems for Aerospace, School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Innovation Center NPU Chongqing, Northwestern Polytechnical University, Chongqing 400000, China
16
A novel neuroevolution model for EMG-based hand gesture classification. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08253-1]
17
Thiamchoo N, Phukpattaranont P. Evaluation of feature projection techniques in object grasp classification using electromyogram signals from different limb positions. PeerJ Comput Sci 2022; 8:e949. [PMID: 35634122] [PMCID: PMC9138131] [DOI: 10.7717/peerj-cs.949]
Abstract
A myoelectric prosthesis is manipulated using electromyogram (EMG) signals from the remaining muscles to perform activities of daily living. A feature vector formed by concatenating data from many EMG channels may result in a high-dimensional space, which can cause prolonged computation time, redundancy, and irrelevant information. We evaluated feature projection techniques, namely principal component analysis (PCA), linear discriminant analysis (LDA), t-distributed stochastic neighbor embedding (t-SNE), and spectral regression extreme learning machine (SRELM), applied to object grasp classification. These feature projections cover combinations of linear or nonlinear, and supervised or unsupervised, types. All pairs of the four feature projections and seven classifier types were evaluated, with data from six EMG channels and an IMU sensor for nine upper limb positions in the transverse plane. The results showed that SRELM outperformed LDA among supervised feature projections, and t-SNE was superior to PCA among unsupervised feature projections. The classification errors from SRELM and t-SNE paired with the seven classifiers ranged from 1.50% to 2.65% and from 1.27% to 17.15%, respectively. A one-way ANOVA test revealed no statistically significant difference by classifier type when using the SRELM projection, a nonlinear supervised feature projection (p = 0.334). On the other hand, a classifier must be chosen carefully for use with t-SNE, a nonlinear unsupervised feature projection. We achieved the lowest classification error, 1.27%, using t-SNE paired with a k-nearest neighbors classifier; for SRELM, the lowest classification error, 1.50%, was obtained when paired with a neural network classifier.
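Of the projections compared above, PCA is the simplest to sketch: centre the features and project onto the leading singular vectors. The implementation below uses a plain SVD; the toy feature dimensions are assumptions, not the study's EMG features.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X (samples x features) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))                    # 100 samples, 12 toy EMG features
Z = pca_project(X, 2)
print(Z.shape)                                    # → (100, 2)
```

Because SVD orders singular values, the first projected coordinate always carries at least as much variance as the second, which is the property the paper's unsupervised projections exploit for dimensionality reduction.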
18
Wu L, Chen X, Chen X, Zhang X. Rejecting Novel Motions in High-Density Myoelectric Pattern Recognition Using Hybrid Neural Networks. Front Neurorobot 2022; 16:862193. [PMID: 35418847] [PMCID: PMC8996371] [DOI: 10.3389/fnbot.2022.862193]
Abstract
The objective of this study is to develop a method for alleviating novel-pattern interference toward a robust myoelectric pattern-recognition control system. To this end, a framework is presented for surface electromyogram (sEMG) pattern classification and novelty detection using hybrid neural networks, i.e., a convolutional neural network (CNN) and autoencoder networks. In the framework, the CNN first extracts the spatio-temporal information conveyed in sEMG data recorded via high-density (HD) two-dimensional electrode arrays. Given the target motion patterns well characterized by the CNN, autoencoder networks learn the variable correlations in the spatio-temporal information, where samples from any novel pattern appear significantly different from those of the target patterns. It is therefore straightforward to discriminate, and then reject, novel motion interference identified as untargeted and unlearned patterns. The performance of the proposed method was evaluated with HD-sEMG data recorded by two 8 × 6 electrode arrays placed over the forearm extensors and flexors of 9 subjects performing seven target motion tasks and six novel motion tasks. The proposed method achieved accuracies over 95% for identifying and rejecting novel motion tasks, and it outperformed conventional methods with statistical significance (p < 0.05). The proposed method is a promising solution for rejecting novel motion interference, which is ubiquitous in myoelectric control, and will enhance the robustness of myoelectric control systems against novelty interference.
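The rejection principle can be sketched with a linear stand-in for the autoencoder: learn a low-dimensional subspace from target-motion samples, reconstruct new samples from it, and reject those whose reconstruction error is too large. The PCA encoder, the 3-D bottleneck, and the 95th-percentile threshold below are assumptions for illustration, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "target motion" features: small variations around an offset pattern
target = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8)) * 0.1
target[:, 0] += 5.0

mu = target.mean(axis=0)
_, _, Vt = np.linalg.svd(target - mu, full_matrices=False)
W = Vt[:3]                                   # 3-D "bottleneck" learned from targets

def recon_error(x):
    z = (x - mu) @ W.T                       # encode into the learned subspace
    return float(np.sum(((z @ W) + mu - x) ** 2))   # decode and compare

# Threshold set so ~95% of target samples are accepted
threshold = np.percentile([recon_error(t) for t in target], 95)

novel = np.full(8, 10.0)                     # a pattern far from the target subspace
print(recon_error(novel) > threshold)        # → True: rejected as a novel motion
```

The same accept/reject rule carries over when the linear encoder is replaced by a trained autoencoder, as in the paper.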
19
Bao T, Xie SQ, Yang P, Zhou P, Zhang ZQ. Towards Robust, Adaptive and Reliable Upper-Limb Motion Estimation Using Machine Learning and Deep Learning - A Survey in Myoelectric Control. IEEE J Biomed Health Inform 2022; 26:3822-3835. [PMID: 35294368] [DOI: 10.1109/jbhi.2022.3159792]
Abstract
To develop multi-functional human-machine interfaces that can help disabled people reconstruct lost functions of the upper limbs, machine learning (ML) and deep learning (DL) techniques have been widely implemented to decode human movement intentions from surface electromyography (sEMG) signals. However, due to the high complexity of upper-limb movements and the inherent non-stationary characteristics of sEMG, the usability of ML/DL-based control schemes is still greatly limited in practical scenarios. To this end, tremendous efforts have been made to improve model robustness, adaptation, and reliability. In this article, we provide a systematic review of recent achievements in three main categories: multi-modal sensing fusion to gain additional information about the user, transfer learning (TL) methods to eliminate the impact of domain shift on estimation models, and post-processing approaches to obtain more reliable outcomes. Special attention is given to fusion strategies, deep TL frameworks, and confidence estimation. Research challenges and emerging opportunities, with respect to hardware development, public resources, and decoding strategies, are also analysed to provide perspectives for future developments.
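The post-processing category the survey highlights, confidence estimation, can be illustrated with a minimal thresholding rule: accept a decoded motion only when the classifier's softmax confidence is high enough, otherwise output no action. The 0.7 threshold and the label set are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))      # shift for numerical stability
    return e / e.sum()

def confident_decision(logits, labels, threshold=0.7):
    """Return the top label if its softmax probability clears the threshold."""
    p = softmax(np.asarray(logits, dtype=float))
    i = int(np.argmax(p))
    return labels[i] if p[i] >= threshold else "no action"

labels = ["rest", "grasp", "pinch"]
print(confident_decision([0.2, 3.0, 0.1], labels))   # → grasp
print(confident_decision([1.0, 1.1, 0.9], labels))   # → no action
```

Suppressing low-confidence outputs trades a little responsiveness for fewer spurious commands, which is exactly the reliability concern the survey's post-processing section addresses.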
20
Yang Z, Jiang D, Sun Y, Tao B, Tong X, Jiang G, Xu M, Yun J, Liu Y, Chen B, Kong J. Dynamic Gesture Recognition Using Surface EMG Signals Based on Multi-Stream Residual Network. Front Bioeng Biotechnol 2021; 9:779353. [PMID: 34746114] [PMCID: PMC8569623] [DOI: 10.3389/fbioe.2021.779353]
Abstract
Gesture recognition technology is widely used in the flexible and precise control of manipulators in the assisted medical field. Much current gesture recognition research using sEMG has focused on static gestures, and recognition accuracy depends on the extraction and selection of features. However, static gesture research cannot meet the requirements of natural human-computer interaction and dexterous control of manipulators. Therefore, a multi-stream residual network (MResLSTM) is proposed for dynamic hand movement recognition. This study aims to improve the accuracy and stability of dynamic gesture recognition and to advance research on smooth manipulator control. We combine the residual model and the convolutional long short-term memory model into a unified framework. The architecture extracts spatiotemporal features from two aspects, global and deep, and uses feature fusion to retain essential information. Pointwise group convolution and channel shuffle are used to reduce the number of network calculations. A dataset containing six dynamic gestures was constructed for model training. The experimental results show that, with the same recognition model, fusing the sEMG signal with the acceleration signal yields better gesture recognition than using the sEMG signal alone. The proposed approach obtains competitive performance on our dataset, with a recognition accuracy of 93.52%, and achieves state-of-the-art performance with 89.65% precision on the Ninapro DB1 dataset. The decoded sEMG results are applied to the controller, which improves the fluency of artificial hand control and enables continuous human-computer interaction and flexible manipulator control.
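The channel-shuffle operation the abstract pairs with pointwise group convolution can be sketched directly: after grouped convolutions, channels are interleaved across groups so information can mix between them. The toy channel/time sizes below are assumptions for illustration.

```python
import numpy as np

def channel_shuffle(x, groups):
    """x: (channels, time). Interleave channels across the given number of groups."""
    c, t = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    # reshape to (groups, channels_per_group, time), swap the first two axes, flatten
    return x.reshape(groups, c // groups, t).transpose(1, 0, 2).reshape(c, t)

x = np.arange(6)[:, None] * np.ones((6, 4))   # 6 channels labelled 0..5, 4 time steps
y = channel_shuffle(x, groups=2)
print(y[:, 0])                                # → [0. 3. 1. 4. 2. 5.]
```

The interleaved order (0, 3, 1, 4, 2, 5) shows each output position drawing alternately from the two groups, which is what lets cheap grouped convolutions still share information network-wide.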
Affiliation(s)
- Zhiwen Yang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Du Jiang
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Ying Sun
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Bo Tao
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Xiliang Tong
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Guozhang Jiang
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Manman Xu
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Juntong Yun
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Ying Liu
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
- Baojia Chen
- Hubei Key Laboratory of Hydroelectric Machinery Design and Maintenance, Three Gorges University, Yichang, China
- Jianyi Kong
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan, China
- Institute of Precision Manufacturing, Wuhan University of Science and Technology, Wuhan, China
21
Esposito D, Centracchio J, Andreozzi E, Gargiulo GD, Naik GR, Bifulco P. Biosignal-Based Human-Machine Interfaces for Assistance and Rehabilitation: A Survey. Sensors (Basel) 2021; 21:6863. [PMID: 34696076] [PMCID: PMC8540117] [DOI: 10.3390/s21206863]
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades regarding biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were used to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application in six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade; studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.
Affiliation(s)
- Daniele Esposito
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Jessica Centracchio
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Gaetano D. Gargiulo
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The MARCS Institute, Western Sydney University, Penrith, NSW 2751, Australia
- Ganesh R. Naik
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia
- The Adelaide Institute for Sleep Health, Flinders University, Bedford Park, SA 5042, Australia
- Paolo Bifulco
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy