1
Eddy E, Campbell E, Bateman S, Scheme E. Understanding the influence of confounding factors in myoelectric control for discrete gesture recognition. J Neural Eng 2024; 21:036015. PMID: 38722304. DOI: 10.1088/1741-2552/ad4915.
Abstract
Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but only in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control and are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) gesture elicitation speed, a newly identified confound faced by discrete systems. Results from four different discrete myoelectric control architectures, (1) Majority Vote LDA, (2) Dynamic Time Warping, (3) an LSTM network trained with cross-entropy loss, and (4) an LSTM network trained with contrastive learning, show that classification accuracy is significantly degraded (p < 0.05) by each of these confounds.
This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
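Of the four architectures compared above, Dynamic Time Warping is the easiest to illustrate, and it also shows why elicitation speed matters less to a DTW matcher than to a fixed-length classifier. The sketch below is illustrative only: the gesture names, one-template-per-gesture setup, and scalar "features" are assumptions, not the paper's configuration.

```python
# Minimal Dynamic Time Warping (DTW) template matcher for discrete
# gesture recognition. Templates and queries are sequences of
# per-window EMG feature vectors (lists of floats).

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with Euclidean local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] vs b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(query, templates):
    """Return the label of the nearest template under DTW distance."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))

# Toy example: one template per gesture, 1-D "feature" sequences.
templates = {
    "wrist_flick": [[0.1], [0.9], [0.8], [0.1]],
    "double_tap":  [[0.5], [0.1], [0.5], [0.1]],
}
# A slower elicitation of the same wrist flick (the speed confound):
slow_flick = [[0.1], [0.1], [0.9], [0.9], [0.8], [0.8], [0.1]]
print(classify(slow_flick, templates))  # prints "wrist_flick"
```

Because DTW warps the time axis before comparing, a stretched (slower) rendition of a template still aligns with low cost, whereas a classifier operating on fixed-length windows sees a different input entirely.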
Affiliation(s)
- Ethan Eddy
  - University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Evan Campbell
  - University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Scott Bateman
  - University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Erik Scheme
  - University of New Brunswick, Fredericton, NB E3B 5A3, Canada
2
Gao H, An H, Lin W, Yu X, Qiu J. Trajectory Tracking of Variable Centroid Objects Based on Fusion of Vision and Force Perception. IEEE Transactions on Cybernetics 2023; 53:7957-7965. PMID: 37027564. DOI: 10.1109/TCYB.2023.3240502.
Abstract
Compared with the dynamic throwing and catching of traditional rigid objects by a robot, the in-flight trajectory of thrown nonrigid objects (particularly variable centroid objects) is more challenging to predict and track. This article proposes a variable centroid trajectory tracking network (VCTTN) that fuses vision and force information by introducing force data from the throw process into the vision neural network. A model-free robot control system based on VCTTN is developed to perform highly precise prediction and tracking using only part of the in-flight vision. A dataset of flight trajectories of variable centroid objects, generated by the robot arm, is collected to train VCTTN. The experimental results show that trajectory prediction and tracking with the vision-force VCTTN is superior to that achieved with traditional vision-only perception and delivers excellent tracking performance.
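The fusion idea can be pictured at the data level: per-timestep force readings from the throw are concatenated with the visual state to form the network input. The sketch below is a hypothetical illustration; the field layouts and the constant-velocity fallback predictor are placeholders for the learned VCTTN, not the paper's architecture.

```python
# Data-level sketch of vision-force fusion for trajectory prediction.
# Each timestep pairs a visual observation of the object's position
# with a force reading recorded during the throw.

def fuse(vision_seq, force_seq):
    """Concatenate per-timestep vision (x, y, z) and force (fx, fy, fz)
    samples into one fused feature vector per timestep."""
    assert len(vision_seq) == len(force_seq)
    return [list(v) + list(f) for v, f in zip(vision_seq, force_seq)]

def predict_next(vision_seq):
    """Constant-velocity extrapolation from the last two visual samples,
    standing in for the network's learned trajectory prediction."""
    (x0, y0, z0), (x1, y1, z1) = vision_seq[-2], vision_seq[-1]
    return (2 * x1 - x0, 2 * y1 - y0, 2 * z1 - z0)

vision = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0)]
force = [(0.5, 0.0, 9.8), (0.4, 0.0, 9.7)]
fused = fuse(vision, force)   # the kind of input a fusion network sees
print(predict_next(vision))   # (2.0, 0.0, 3.0)
```

The point of the fused representation is that a pure vision model sees only where the object is, while the force channel carries information about how it was released, which is what makes a shifting centroid predictable at all.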
3
Syed AU, Sattar NY, Ganiyu I, Sanjay C, Alkhatib S, Salah B. Deep learning-based framework for real-time upper limb motion intention classification using combined bio-signals. Front Neurorobot 2023; 17:1174613. PMID: 37575360. PMCID: PMC10413572. DOI: 10.3389/fnbot.2023.1174613.
Abstract
This research study proposes a unique framework that takes surface electromyogram (sEMG) and functional near-infrared spectroscopy (fNIRS) bio-signals as input and classifies them using convolutional neural networks (CNNs). The framework provides a real-time neuro-machine interface to decode the human intention of upper limb motions. The bio-signals from the two modalities are recorded simultaneously for eight movements, targeting prosthetic arm functions for trans-humeral amputees. The fNIRS signals are acquired from the human motor cortex, while sEMG is recorded from the human bicep muscles. The features selected for classification and command generation are the peak, minimum, and mean ΔHbO and ΔHbR values within a 2-s moving window. For sEMG, waveform length, peak, and mean were extracted with a 150-ms moving window. This scheme recognizes the eight motions with an enhanced average accuracy of 94.5%. The obtained results validate the adopted research methodology and the potential of future real-time neural-machine interfaces to control prosthetic arms.
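The feature set described above is concrete enough to sketch. The following is a minimal illustration of the per-window statistics; the signal values, window sizes in samples, and channel handling are placeholders (the paper's sampling rates and channel counts are not reproduced here).

```python
# Per-window features as described: peak, minimum, and mean for the
# fNIRS hemodynamic signals; waveform length, peak, and mean for sEMG.

def fnirs_features(window):
    """Peak, minimum, and mean of a window of ΔHbO or ΔHbR samples."""
    return {"peak": max(window), "min": min(window),
            "mean": sum(window) / len(window)}

def waveform_length(window):
    """Sum of absolute sample-to-sample differences (the sEMG WL feature)."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def semg_features(window):
    """Waveform length, peak, and mean of a window of sEMG samples."""
    return {"wl": waveform_length(window), "peak": max(window),
            "mean": sum(window) / len(window)}

def sliding_windows(signal, size, step):
    """Yield overlapping windows, i.e. a moving window over the signal."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

emg = [0.0, 1.0, -1.0, 0.5, 0.0, 0.25]
for w in sliding_windows(emg, size=4, step=2):
    print(semg_features(w))
```

In a real pipeline each window's feature dictionaries, computed per channel, would be flattened into one vector and fed to the CNN classifier.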
Affiliation(s)
- A. Usama Syed
  - Department of Industrial Engineering, University of Trento, Trento, Italy
  - Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Neelum Y. Sattar
  - Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Ismaila Ganiyu
  - Industrial Engineering Department, College of Engineering, King Saud University, Riyadh, Saudi Arabia
- Chintakindi Sanjay
  - Industrial Engineering Department, College of Engineering, King Saud University, Riyadh, Saudi Arabia
- Soliman Alkhatib
  - Engineering Mathematics and Physics Department, Faculty of Engineering and Technology, Future University in Egypt, New Cairo, Egypt
- Bashir Salah
  - Industrial Engineering Department, College of Engineering, King Saud University, Riyadh, Saudi Arabia
4
Guo K, Orban M, Lu J, Al-Quraishi MS, Yang H, Elsamanty M. Empowering Hand Rehabilitation with AI-Powered Gesture Recognition: A Study of an sEMG-Based System. Bioengineering (Basel) 2023; 10:557. PMID: 37237627. DOI: 10.3390/bioengineering10050557.
Abstract
Stroke is one of the most prevalent health issues that people face today, causing long-term complications such as paresis, hemiparesis, and aphasia. These conditions significantly impact a patient's physical abilities and cause financial and social hardships. To address these challenges, this paper presents a wearable rehabilitation glove. This motorized glove is designed to provide comfortable and effective rehabilitation for patients with paresis. Its soft materials and compact size make it easy to use in clinical settings and at home. The glove can train each finger individually and all fingers together, using assistive force generated by advanced linear integrated actuators controlled by sEMG signals. The glove is also durable and long-lasting, with 4-5 h of battery life. It is worn on the affected hand to provide assistive force during rehabilitation training. The key to the glove's effectiveness is its ability to perform the hand gestures classified from the non-affected hand, achieved by integrating four sEMG sensors with deep learning algorithms (a 1D-CNN and the InceptionTime algorithm). The InceptionTime algorithm classified the sEMG signals of ten hand gestures with an accuracy of 91.60% on the training set and 90.09% on the verification set, for an overall accuracy of 90.89%, showing potential as a tool for developing effective hand gesture recognition systems. The classified hand gestures serve as control commands for the motorized glove on the affected hand, allowing it to mimic the movements of the non-affected hand. This technology performs rehabilitation exercises based on the theory of mirror therapy and task-oriented therapy.
Overall, this wearable rehabilitation glove represents a significant step forward in stroke rehabilitation, offering a practical and effective solution to help patients recover from the physical, financial, and social impact of stroke.
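The mirror-therapy control loop described above can be sketched as a simple pipeline: classify sEMG from the non-affected hand, then map the recognized gesture to an actuator command for the glove on the affected hand. In the sketch below, the classifier is a stub standing in for the trained 1D-CNN / InceptionTime model, and the gesture and command names are invented for illustration.

```python
# Control-flow sketch of the mirror-therapy loop: gestures recognized
# from the non-affected hand drive the motorized glove on the affected
# hand, so the affected hand mirrors the healthy one.

GESTURE_TO_COMMAND = {
    "fist":      "flex_all_fingers",
    "open_hand": "extend_all_fingers",
}

def classify_gesture(semg_window):
    """Stub for the deep-learning classifier (1D-CNN / InceptionTime).
    Placeholder rule: overall muscle activity decides the label."""
    activity = sum(abs(x) for x in semg_window) / len(semg_window)
    return "fist" if activity > 0.5 else "open_hand"

def glove_command(semg_window):
    """Map the recognized gesture to an actuator command for the glove."""
    return GESTURE_TO_COMMAND[classify_gesture(semg_window)]

print(glove_command([0.9, -0.8, 0.7, -0.9]))  # high activity -> flex command
```

Swapping the stub for the real model leaves the rest of the loop unchanged, which is what makes the gesture labels a clean interface between the recognition and actuation sides of the system.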
Affiliation(s)
- Kai Guo
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Mostafa Orban
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
  - Mechanical Department, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Jingxin Lu
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
  - School of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130001, China
- Hongbo Yang
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
  - School of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130001, China
- Mahmoud Elsamanty
  - Mechanical Department, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
  - Mechatronics and Robotics Department, School of Innovative Design Engineering, Egypt-Japan University of Science and Technology, Alexandria 21934, Egypt