1
Zandigohar M, Han M, Sharif M, Günay SY, Furmanek MP, Yarossi M, Bonato P, Onal C, Padır T, Erdoğmuş D, Schirner G. Multimodal fusion of EMG and vision for human grasp intent inference in prosthetic hand control. Front Robot AI 2024; 11:1312554. PMID: 38476118; PMCID: PMC10927746; DOI: 10.3389/frobt.2024.1312554.
Abstract
Objective: For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to poor inference outcomes due to motion artifacts, muscle fatigue, and other confounds. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is susceptible to its own artifacts, most often object occlusion and lighting changes. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach given the complementary strengths of these modalities. Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and forearm EMG processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we also developed novel data processing and augmentation techniques to train the neural network components. Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8% relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) respectively, yielding an overall fusion accuracy of 95.3%. Conclusion: Our experimental analyses demonstrate that EMG and visual evidence have complementary strengths, and consequently the fusion of multimodal evidence can outperform each individual modality at any given time.
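The fusion step described in this abstract — combining per-modality class posteriors under a conditional-independence assumption — can be sketched as a naive-Bayes combination. This is an illustrative sketch, not the authors' exact model; the function name, the uniform prior default, and the probability values are assumptions:

```python
import numpy as np

def fuse_posteriors(p_emg, p_vis, prior=None):
    """Naive-Bayes fusion of per-modality class posteriors.

    Assumes EMG and vision evidence are conditionally independent
    given the grasp class; p_emg and p_vis are class-posterior
    vectors that each sum to 1.
    """
    p_emg = np.asarray(p_emg, dtype=float)
    p_vis = np.asarray(p_vis, dtype=float)
    if prior is None:  # assume a uniform class prior
        prior = np.full_like(p_emg, 1.0 / p_emg.size)
    # Each posterior already contains one factor of the prior,
    # so divide it out once before taking the product.
    fused = p_emg * p_vis / prior
    return fused / fused.sum()

# EMG is unsure between classes 0 and 1; vision rules out class 1,
# so the fused posterior concentrates on class 0.
print(fuse_posteriors([0.45, 0.45, 0.10], [0.50, 0.05, 0.45]))
```

The product rule rewards agreement between modalities, which is one way a fused estimate can beat either modality alone, as the reported accuracies suggest.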
Affiliation(s)
- Mehrshad Zandigohar
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Mo Han
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Mohammadreza Sharif
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Sezen Yağmur Günay
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Mariusz P. Furmanek
- Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA, United States
- Institute of Sport Sciences, Academy of Physical Education in Katowice, Katowice, Poland
- Mathew Yarossi
- Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA, United States
- Paolo Bonato
- Motion Analysis Lab, Spaulding Rehabilitation Hospital, Charlestown, MA, United States
- Cagdas Onal
- Soft Robotics Lab, Worcester Polytechnic Institute, Worcester, MA, United States
- Taşkın Padır
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Deniz Erdoğmuş
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
- Gunar Schirner
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States
2
Savithri CN, Priya E, Rajasekar K. A machine learning approach to identify hand actions from single-channel sEMG signals. Biomed Eng-Biomed Tech 2022; 67:89-103. PMID: 35191277; DOI: 10.1515/bmt-2021-0072.
Abstract
Surface electromyographic (sEMG) signals are a prime source of information for activating a prosthetic hand so that it can restore a few basic hand actions of an amputee, making it suitable for rehabilitation. In this work, a non-invasive single-channel sEMG amplifier is developed that captures the sEMG signal for three typical hand actions from the lower-elbow muscles of able-bodied subjects and amputees. The recorded sEMG signal contains trends and frequency components outside the active band. Empirical Mode Decomposition with Detrended Fluctuation Analysis (EMD-DFA) is applied to denoise the sEMG signal. A feature vector is formed by extracting eight features in the time domain and seven features each in the spectral and wavelet domains. Prominent features are selected by a Fuzzy Entropy Measure (FEM) to ease computational complexity and reduce classification time. Classification of the different hand actions is performed with a multi-class approach, Partial Least Squares Discriminant Analysis (PLS-DA), to control the prosthetic hand. Accuracies of 89.72% and 84% are observed for the pointing action, 81.2% and 79.54% for the closed fist, and 80.6% and 76% for the spherical grasp, for able-bodied subjects and amputees respectively. Compared with Linear Discriminant Analysis (LDA), the classifier shows an improvement of 5% in mean accuracy for both groups, and the mean accuracy over the three hand actions (83.84% and 80.18%) is significantly higher than with LDA. The proposed framework provides a fair mean accuracy in classifying the hand actions of amputees and thus appears useful for actuating the prosthetic hand.
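The time-domain part of a feature vector like the one described above can be illustrated with a short sketch. This is a representative subset of common sEMG time-domain features, not the paper's exact eight; the zero-crossing noise threshold is an assumption:

```python
import numpy as np

def time_domain_features(x, thresh=0.01):
    """Four standard time-domain sEMG features from one window."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))            # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))      # root mean square
    wl = np.sum(np.abs(np.diff(x)))     # waveform length
    # zero crossings, counted only when the jump exceeds a
    # noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > thresh))
    return np.array([mav, rms, wl, zc])

rng = np.random.default_rng(0)
window = rng.standard_normal(256) * 0.1  # stand-in for one sEMG window
print(time_domain_features(window))
```

Spectral and wavelet features would be computed per window in the same way and concatenated before the FEM-based selection step.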
Affiliation(s)
- Chanda Nagarajan Savithri
- Department of Electronics and Communication Engineering, Sri Sai Ram Engineering College, West Tambaram, Chennai, India
- Ebenezer Priya
- Department of Electronics and Communication Engineering, Sri Sai Ram Engineering College, West Tambaram, Chennai, India
- Kevin Rajasekar
- Rosenheim Technical University of Applied Sciences, Rosenheim, Germany
3
Qin P, Shi X. Evaluation of Feature Extraction and Classification for Lower Limb Motion Based on sEMG Signal. Entropy 2020; 22:e22080852. PMID: 33286623; PMCID: PMC7517453; DOI: 10.3390/e22080852.
Abstract
Real-time, accurate motion classification plays an essential role for elderly or frail people in daily activities. This study aims to determine the optimal feature extraction and classification method for activities of daily living (ADL). In the experiment, we collected surface electromyography (sEMG) signals from the semitendinosus, the lateral thigh muscles, and the gastrocnemius of the lower limbs to classify horizontal walking, crossing obstacles, standing up, going down stairs, and going up stairs. First, we analyzed 11 feature extraction methods spanning the time domain, frequency domain, time-frequency domain, and entropy. Additionally, a feature evaluation method was proposed, and the separability of the 11 feature extraction algorithms was calculated. Then, combined with the 11 feature algorithms, the classification accuracy and time of 55 classification methods were calculated. The results showed that Gaussian Kernel Linear Discriminant Analysis (GK-LDA) with WAMP had the highest classification accuracy (96%), with a computation time below 80 ms. This quantitative comparison of feature extraction and classification methods benefits the application of wearable sEMG sensor systems in ADL.
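The abstract above does not spell out the proposed separability criterion, but a common way to score a scalar feature's class separability is the Fisher ratio of between-class to within-class scatter. The sketch below is that generic proxy, not the paper's method; the function name and the synthetic data are assumptions:

```python
import numpy as np

def fisher_separability(feature, labels):
    """Between-class scatter over within-class scatter for a
    scalar feature; larger means the classes separate better."""
    feature = np.asarray(feature, dtype=float)
    labels = np.asarray(labels)
    overall = feature.mean()
    between = within = 0.0
    for c in np.unique(labels):
        xc = feature[labels == c]
        between += xc.size * (xc.mean() - overall) ** 2
        within += np.sum((xc - xc.mean()) ** 2)
    return between / within

rng = np.random.default_rng(1)
# Two synthetic features over two classes: one well separated,
# one heavily overlapping.
well_sep = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
overlap = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.5, 1, 100)])
y = np.repeat([0, 1], 100)
print(fisher_separability(well_sep, y), fisher_separability(overlap, y))
```

Ranking each of the 11 candidate features by such a score is one way to decide which feature-classifier pairings are worth the computation budget.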
Affiliation(s)
- Xin Shi
- Correspondence: (P.Q.); (X.S.)
4
She H, Zhu J, Tian Y, Wang Y, Yokoi H, Huang Q. SEMG Feature Extraction Based on Stockwell Transform Improves Hand Movement Recognition Accuracy. Sensors 2019; 19:s19204457. PMID: 31615162; PMCID: PMC6832976; DOI: 10.3390/s19204457.
Abstract
Feature extraction, as an important method for extracting useful information from surface electromyography (SEMG), can significantly improve pattern recognition accuracy. Time and frequency analysis methods have been widely used for feature extraction, but these methods analyze SEMG signals only from the time or frequency domain. Recent studies have shown that feature extraction based on time-frequency analysis methods can extract more useful information from SEMG signals. This paper proposes a novel time-frequency analysis method based on the Stockwell transform (S-transform) to improve hand movement recognition accuracy from forearm SEMG signals. First, the time-frequency analysis method, S-transform, is used for extracting a feature vector from forearm SEMG signals. Second, to reduce the amount of calculations and improve the running speed of the classifier, principal component analysis (PCA) is used for dimensionality reduction of the feature vector. Finally, an artificial neural network (ANN)-based multilayer perceptron (MLP) is used for recognizing hand movements. Experimental results show that the proposed feature extraction based on the S-transform analysis method can improve the class separability and hand movement recognition accuracy compared with wavelet transform and power spectral density methods.
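A minimal discrete S-transform can be computed efficiently in the frequency domain: shift the signal's FFT by each analysis frequency, apply a frequency-scaled Gaussian window, and inverse-transform each row. The sketch below is an illustrative implementation of that textbook algorithm, not the authors' code; the sampling rate and test signal are assumptions:

```python
import numpy as np

def stockwell(x):
    """Discrete Stockwell transform via FFT.

    Returns an (N//2, N) complex time-frequency matrix covering
    the positive frequency bins (DC row left at zero).
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N  # integer frequency offsets
    S = np.zeros((N // 2, N), dtype=complex)
    for n in range(1, N // 2):
        # Gaussian window in frequency; its width scales with n,
        # which gives the S-transform its multiresolution behavior
        W = np.exp(-2 * np.pi ** 2 * m ** 2 / n ** 2)
        S[n] = np.fft.ifft(np.roll(X, -n) * W)
    return S

fs = 1000
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 50 * t)  # 50 Hz tone as a stand-in signal
S = stockwell(sig)
peak_bin = np.abs(S).sum(axis=1).argmax()
print(peak_bin * fs / sig.size)   # energy concentrates near 50 Hz
```

In the paper's pipeline, the magnitude of such a matrix would then be flattened into a feature vector, reduced with PCA, and fed to the MLP classifier.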
Affiliation(s)
- Haotian She
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China
- Jinying Zhu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Beijing Advanced Innovation Center for Intelligent Robot and System, Beijing 100081, China
- Ye Tian
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China
- Yanchao Wang
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China
- Hiroshi Yokoi
- Beijing Advanced Innovation Center for Intelligent Robot and System, Beijing 100081, China
- School of Informatics and Engineering, University of Electro-Communications, Tokyo 163-8001, Japan
- Qiang Huang
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China