1. Sgambato BG, Hasbani MH, Barsakcioglu DY, Ibanez J, Jakob A, Fournelle M, Tang MX, Farina D. High Performance Wearable Ultrasound as a Human-Machine Interface for Wrist and Hand Kinematic Tracking. IEEE Trans Biomed Eng 2024; 71:484-493. PMID: 37610892. DOI: 10.1109/tbme.2023.3307952.
Abstract
OBJECTIVE Non-invasive human-machine interfaces (HMIs) have high potential in medical, entertainment, and industrial applications. Traditionally, surface electromyography (sEMG) has been used to track muscular activity and infer motor intention. Ultrasound (US) has received increasing attention as an alternative to sEMG-based HMIs. Here, we developed a portable US armband system with 24 channels and a multiple-receiver approach, and compared it with existing sEMG- and US-based HMIs on movement intention decoding. METHODS US and motion capture data were recorded while participants performed wrist and hand movements spanning four degrees of freedom (DoFs) and their combinations. A linear regression model was used to predict hand kinematics offline from the US (or sEMG, for comparison) features. The method was further validated in real time on a 3-DoF target-reaching task. RESULTS In the offline analysis, the wearable US system achieved an average R² of 0.94 in the prediction of four DoFs of the wrist and hand, while sEMG reached a performance of R² = 0.60. In online control, the participants achieved an average 93% target completion rate. CONCLUSION When tailored for HMIs, the proposed A-mode US system and processing pipeline can successfully regress hand kinematics in both offline and online settings, with performance comparable or superior to previously published interfaces. SIGNIFICANCE Wearable US technology may provide a new generation of HMIs that use muscular deformation to estimate limb movements. The wearable US system allowed robust proportional and simultaneous control over multiple DoFs in both offline and online settings.
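As a rough illustration of the offline decoding step described in this abstract (not the authors' actual pipeline or data), a linear map from multichannel ultrasound features to joint kinematics can be fitted and scored with R² on synthetic stand-in data; all array sizes and the linear ground truth here are assumptions for the sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins (NOT the paper's data): 24 US-channel features per
# frame and 4 joint-angle DoFs, linearly related plus noise.
n_frames, n_channels, n_dofs = 600, 24, 4
W_true = rng.normal(size=(n_channels, n_dofs))
X = rng.normal(size=(n_frames, n_channels))                  # US features
Y = X @ W_true + 0.1 * rng.normal(size=(n_frames, n_dofs))   # kinematics

# Offline evaluation: train on the first 80% of frames, test on the rest.
split = int(0.8 * n_frames)
model = LinearRegression().fit(X[:split], Y[:split])
r2 = r2_score(Y[split:], model.predict(X[split:]))           # averaged over DoFs
print(f"average R^2 across DoFs: {r2:.3f}")
```

On real data the features would come from the US echo envelopes and the targets from motion capture; the scoring convention (R² averaged over DoFs) is the same.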
2. Wei S, Zhang Y, Liu H. A Multimodal Multilevel Converged Attention Network for Hand Gesture Recognition With Hybrid sEMG and A-Mode Ultrasound Sensing. IEEE Trans Cybern 2023; 53:7723-7734. PMID: 36149990. DOI: 10.1109/tcyb.2022.3204343.
Abstract
Gesture recognition based on surface electromyography (sEMG) has been widely used in the field of human-machine interaction (HMI). However, sEMG has limitations, such as a low signal-to-noise ratio and insensitivity to fine finger movements, so we consider adding A-mode ultrasound (AUS) to improve recognition performance. To explore the influence of multisource sensing data on gesture recognition and to better integrate features from the different modalities, we propose a multimodal multilevel converged attention network (MMCANet) model for multisource signals composed of sEMG and AUS. The proposed model extracts the hidden features of the AUS signal with a convolutional neural network (CNN). Meanwhile, a hybrid CNN-LSTM (long short-term memory) structure extracts spatial-temporal features from the sEMG signal. The two types of CNN features from AUS and sEMG are then concatenated and passed to a transformer encoder, which fuses the information and interacts with the sEMG features to produce hybrid features. Finally, the classification results are output by fully connected layers, with attention mechanisms adjusting the weights of the feature channels. We compared MMCANet's feature extraction and classification performance with that of manually extracted sEMG-AUS features fed to four traditional machine-learning (ML) algorithms: recognition accuracy increased by at least 5.15%. In addition, we tried deep learning (DL) methods with a CNN on single modalities. The experimental results showed that the proposed model improved accuracy by 14.31% and 3.80% over the CNN method with single sEMG and AUS, respectively. Compared with several state-of-the-art fusion techniques, our method also achieved better results.
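The simplest baseline against which fusion networks like the one above are compared is feature-level (concatenation) fusion of the two modalities. A minimal sketch on synthetic two-modality data (all shapes, class means, and the SVM classifier are assumptions of the sketch, not the MMCANet architecture):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic two-modality features for 4 gesture classes (illustrative only):
# each class shifts the sEMG and AUS feature means differently.
n_per_class, n_semg, n_aus, n_classes = 50, 8, 16, 4
X_semg, X_aus, y = [], [], []
for c in range(n_classes):
    X_semg.append(rng.normal(loc=c * 0.8, size=(n_per_class, n_semg)))
    X_aus.append(rng.normal(loc=-c * 0.5, size=(n_per_class, n_aus)))
    y += [c] * n_per_class
X_fused = np.hstack([np.vstack(X_semg), np.vstack(X_aus)])  # concatenation fusion
y = np.array(y)

Xtr, Xte, ytr, yte = train_test_split(
    X_fused, y, test_size=0.25, random_state=0, stratify=y)
acc = SVC().fit(Xtr, ytr).score(Xte, yte)
print(f"fused-feature accuracy: {acc:.2f}")
```

Attention-based fusion replaces the plain `np.hstack` with learned, weighted interactions between the two feature streams.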
3. André AD, Martins P. Exo Supportive Devices: Summary of Technical Aspects. Bioengineering (Basel) 2023; 10:1328. PMID: 38002452. PMCID: PMC10669745. DOI: 10.3390/bioengineering10111328.
Abstract
Human societies have been trying to mitigate the suffering of individuals with physical impairments, with a special effort in the last century. In the 1950s, a new concept arose, inspired by similarities with animal exoskeletons and aimed at medically aiding human movement (for rehabilitation applications). Several studies have since explored the use of exosuits for this purpose. The current review therefore offers a critical perspective and a detailed analysis of the steps and key decisions involved in the conception of an exoskeleton. Choices such as design aspects, base materials (structure), actuators (force and motion), energy sources (actuation), and control systems are discussed, pointing out their advantages and disadvantages. Moreover, examples of exosuits (full-body, upper-body, and lower-body devices) are presented and described, including their use cases and outcomes. The future of exoskeletons as possible assisted-movement solutions is discussed, pointing to the best options for rehabilitation.
Affiliations:
- António Diogo André: Associated Laboratory of Energy, Transports and Aeronautics (LAETA), Biomechanic and Health Unity (UBS), Institute of Science and Innovation in Mechanical and Industrial Engineering (INEGI), 4200-465 Porto, Portugal; Faculty of Engineering, University of Porto (FEUP), 4200-465 Porto, Portugal
- Pedro Martins: Associated Laboratory of Energy, Transports and Aeronautics (LAETA), Biomechanic and Health Unity (UBS), Institute of Science and Innovation in Mechanical and Industrial Engineering (INEGI), 4200-465 Porto, Portugal; Aragon Institute for Engineering Research (i3A), Universidad de Zaragoza, 50018 Zaragoza, Spain
4. Nazari V, Zheng YP. Controlling Upper Limb Prostheses Using Sonomyography (SMG): A Review. Sensors (Basel) 2023; 23:1885. PMID: 36850483. PMCID: PMC9959820. DOI: 10.3390/s23041885.
Abstract
This paper presents a critical review and comparison of the results of recently published studies in the field of human-machine interfaces and the use of sonomyography (SMG) for the control of upper limb prostheses. For this review, a combination of the keywords "Human Machine Interface", "Sonomyography", "Ultrasound", "Upper Limb Prosthesis", "Artificial Intelligence", and "Non-Invasive Sensors" was used to search for articles on Google Scholar and PubMed. Sixty-one articles were found, of which fifty-nine were used in this review; sixteen of these were used to compare the different ultrasound modes, feature extraction methods, and machine learning algorithms. The article reviews the various ultrasound modes used for prosthetic control, the machine learning algorithms used to classify different hand gestures, and the feature extraction methods used to increase the accuracy of the artificial intelligence in the controlling systems. The results show that ultrasound sensing has the potential to serve as a viable human-machine interface for controlling bionic hands with multiple degrees of freedom. Moreover, different hand gestures can be classified by machine learning algorithms trained on features extracted from the collected data with an accuracy of around 95%.
Affiliations:
- Vaheh Nazari: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yong-Ping Zheng: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China; Research Institute for Smart Ageing, The Hong Kong Polytechnic University, Hong Kong SAR, China
5. Chen W, Feng L, Lu J, Wu B. An Extended Spatial Transformer Convolutional Neural Network for Gesture Recognition and Self-Calibration Based on Sparse sEMG Electrodes. IEEE Trans Biomed Circuits Syst 2022; 16:1204-1215. PMID: 36378801. DOI: 10.1109/tbcas.2022.3222196.
Abstract
sEMG-based gesture recognition is widely applied in human-machine interaction systems owing to its unique advantages. However, recognition accuracy drops significantly as the electrodes shift. Besides, in applications such as VR, virtual hands should be shown in a reasonable posture through self-calibration. We propose an armband fusing sEMG and an IMU with autonomously adjustable gain, and an extended spatial transformer convolutional neural network (EST-CNN) with feature-enhanced pretreatment (FEP) to accomplish both gesture recognition and self-calibration in one-shot processing. Unlike anthropogenic calibration methods, the spatial transformer layers (STL) in EST-CNN automatically learn the transformation relation and explicitly express the rotational angle for coarse correction. Because the feature pattern changes shape under rotational shift, we design a fine-tuning layer (FTL) that can regulate the rotational angle within 45°. By combining the STL, FTL, and IMU-based posture, EST-CNN can calculate a non-discretized angle and achieves high-resolution posture estimation based on sparse sEMG electrodes. To evaluate EST-CNN, experiments collected three frequently used gestures from four subjects at equidistant angles. Under electrode shift, the results show a gesture recognition accuracy of 97.06%, which is 5.81% higher than a plain CNN, and a fitness between the estimated and true rotational angle of 99.44%.
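The core calibration problem here, an armband rotated by an unknown number of electrode positions, can be illustrated in its simplest classical form: estimating the rotation as a circular shift of the per-channel activation pattern via circular cross-correlation. This is a hand-crafted stand-in for intuition only, not the paper's learned STL/FTL layers:

```python
import numpy as np

def estimate_rotation(reference, shifted):
    """Estimate an armband rotation as the circular channel shift that best
    aligns a per-channel activation pattern with a reference pattern."""
    n = len(reference)
    scores = [np.dot(np.roll(shifted, -k), reference) for k in range(n)]
    return int(np.argmax(scores))  # shift with maximal circular correlation

# 8-channel activation pattern and a copy rotated by 3 electrode positions.
reference = np.array([5.0, 4.0, 2.0, 0.5, 0.2, 0.5, 2.0, 4.0])
shifted = np.roll(reference, 3)
k_hat = estimate_rotation(reference, shifted)
print("estimated shift:", k_hat)
```

The learned approach generalizes this idea to non-discretized angles and to feature patterns whose shape changes with rotation, which is what the fine-tuning layer addresses.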
6. Wang H, Zuo S, Cerezo-Sánchez M, Arekhloo NG, Nazarpour K, Heidari H. Wearable super-resolution muscle-machine interfacing. Front Neurosci 2022; 16:1020546. PMID: 36466163. PMCID: PMC9714306. DOI: 10.3389/fnins.2022.1020546.
Abstract
Muscles are the actuators of all human actions, from daily work and life to communication and expression of emotions. Myography records the signals from muscle activity as an interface between machine hardware and human wetware, granting direct and natural control of our electronic peripherals. Despite significant recent progress, conventional myographic sensors are still incapable of achieving the desired high-resolution and non-invasive recording. This paper presents a critical review of state-of-the-art wearable sensing technologies that measure deeper muscle activity with high spatial resolution, so-called super-resolution. It classifies these myographic sensors according to the signal types (i.e., biomechanical, biochemical, and bioelectrical) they record during muscle activity. By describing the characteristics and current developments of each myographic sensor, with their advantages and limitations, their capabilities as super-resolution myography techniques are investigated, including: (i) non-invasive and high-density designs of the sensing units and their vulnerability to interference, and (ii) the limit of detection needed to register the activity of deep muscles. Finally, the paper concludes with new opportunities in this fast-growing super-resolution myography field and proposes promising future research directions. These advances will enable next-generation muscle-machine interfaces to meet practical design needs in real life for healthcare technologies, assistive/rehabilitation robotics, and human augmentation with extended reality.
Affiliations:
- Huxi Wang: Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom; Neuranics Ltd., Glasgow, United Kingdom
- Siming Zuo: Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom; Neuranics Ltd., Glasgow, United Kingdom
- María Cerezo-Sánchez: Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom; Neuranics Ltd., Glasgow, United Kingdom
- Negin Ghahremani Arekhloo: Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom; Neuranics Ltd., Glasgow, United Kingdom
- Kianoush Nazarpour: Neuranics Ltd., Glasgow, United Kingdom; School of Informatics, The University of Edinburgh, Edinburgh, United Kingdom
- Hadi Heidari: Microelectronics Lab, James Watt School of Engineering, The University of Glasgow, Glasgow, United Kingdom; Neuranics Ltd., Glasgow, United Kingdom
7. Cisotto G, Capuzzo M, Guglielmi AV, Zanella A. Feature stability and setup minimization for EEG-EMG-enabled monitoring systems. EURASIP J Adv Signal Process 2022; 2022:103. PMID: 36320592. PMCID: PMC9612609. DOI: 10.1186/s13634-022-00939-3.
Abstract
Delivering health care at home has emerged as a key advancement to reduce healthcare costs and infection risks, as during the SARS-CoV-2 pandemic. In particular, in motor training applications, wearable and portable devices can be employed for movement recognition and monitoring of the associated brain signals. This is one of the contexts where it is essential to minimize the monitoring setup and the amount of data to collect, process, and share. In this paper, we address this challenge for a monitoring system that includes high-dimensional EEG and EMG data for the classification of a specific type of hand movement. We fuse EEG and EMG into the magnitude squared coherence (MSC) signal, from which we extract features using different algorithms (one from the authors) to solve binary classification problems. Finally, we propose a mapping-and-aggregation strategy to increase the interpretability of the machine learning results. The proposed approach provides very low misclassification errors (<0.1), with very few and stable MSC features (<10% of the initial set of available features). Furthermore, we identified a common pattern across algorithms and classification problems, i.e., the activation of the centro-parietal brain areas and arm muscles in the 8-80 Hz frequency band, in line with previous literature. Thus, this study represents a step forward in minimizing a reliable EEG-EMG setup to enable gesture recognition.
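The magnitude squared coherence used here as the fusion signal is a standard spectral measure available in SciPy. A minimal sketch on toy signals (the 20 Hz shared component, sampling rate, and noise levels are assumptions of the example, not the paper's data):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs, n = 256, 4096
t = np.arange(n) / fs

# Toy EEG/EMG pair sharing a common 20 Hz drive plus independent noise.
drive = np.sin(2 * np.pi * 20 * t)
eeg = drive + 0.5 * rng.normal(size=n)
emg = drive + 0.5 * rng.normal(size=n)

# MSC via Welch's method: |S_xy|^2 / (S_xx * S_yy), in [0, 1] per frequency.
f, Cxy = coherence(eeg, emg, fs=fs, nperseg=256)
peak = f[np.argmax(Cxy)]
print(f"peak coherence at {peak:.1f} Hz")
```

Features for classification would then be extracted from the `Cxy` values in bands of interest (here, 8-80 Hz), which is how EEG and EMG are fused into a single signal before feature selection.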
Affiliations:
- Giulia Cisotto: Department of Information Engineering, University of Padova, Via Gradenigo 6, 35121 Padova, Italy; Inter-University Consortium for Telecommunications (CNIT), Padova, Italy; Department of Informatics, Systems and Communications, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
- Martina Capuzzo: Department of Information Engineering, University of Padova, Via Gradenigo 6, 35121 Padova, Italy; Human Inspired Technologies Research Center, University of Padova, Via Luzzatti 4, 35121 Padova, Italy
- Anna Valeria Guglielmi: Department of Information Engineering, University of Padova, Via Gradenigo 6, 35121 Padova, Italy
- Andrea Zanella: Department of Information Engineering, University of Padova, Via Gradenigo 6, 35121 Padova, Italy; Inter-University Consortium for Telecommunications (CNIT), Padova, Italy; Human Inspired Technologies Research Center, University of Padova, Via Luzzatti 4, 35121 Padova, Italy
8. Lu Z, Cai S, Chen B, Liu Z, Guo L, Yao L. Wearable Real-Time Gesture Recognition Scheme Based on A-Mode Ultrasound. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2623-2629. PMID: 36074871. DOI: 10.1109/tnsre.2022.3205026.
Abstract
A-mode ultrasound has the advantages of high resolution, easy computation, and low cost in predicting dexterous gestures. To accelerate the adoption of A-mode ultrasound gesture recognition technology, we designed a human-machine interface that can interact with the user in real time. Data processing includes Gaussian filtering, feature extraction, and PCA dimensionality reduction. Naive Bayes (NB), linear discriminant analysis (LDA), and support vector machine (SVM) algorithms were selected to train machine learning models, and the whole process was written in C++ to classify gestures in real time. This paper conducts offline and real-time experiments based on HMI-A (a human-machine interface based on A-mode ultrasound), with ten subjects and ten common gestures. To demonstrate the effectiveness of HMI-A and avoid accidental interference, the offline experiment collected ten rounds of gestures for each subject for ten-fold cross-validation. The results show an offline recognition accuracy of 96.92% ± 1.92%. The real-time experiment was evaluated by four online performance metrics: action selection time, action completion time, action completion rate, and real-time recognition accuracy. The results show an action completion rate of 96.0% ± 3.6% and a real-time recognition accuracy of 83.8% ± 6.9%. This study verifies the great potential of wearable A-mode ultrasound technology and provides a wider range of application scenarios for gesture recognition.
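The processing chain named in this abstract (Gaussian filtering, feature extraction, PCA, then a classifier with ten-fold cross-validation) can be sketched end to end. Everything below is a toy stand-in: the synthetic echo lines, reflector depths, and peak amplitudes are assumptions, and the paper's system is written in C++, not Python:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Toy A-mode echo frames for 3 "gestures": each gesture places a reflector
# at a different depth along a 200-sample echo line (illustrative only).
n_per_class, depth = 40, 200
frames, labels = [], []
for c, centre in enumerate((50, 100, 150)):
    for _ in range(n_per_class):
        echo = 0.05 * rng.normal(size=depth)
        echo[centre + rng.integers(-3, 4)] += 1.0              # echo peak
        frames.append(gaussian_filter1d(np.abs(echo), sigma=2))  # smoothing
        labels.append(c)
X, y = np.array(frames), np.array(labels)

# Gaussian-smoothed frames -> standardize -> PCA -> SVM, ten-fold CV.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean CV accuracy: {scores.mean():.3f}")
```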
9. Zheng Z, Wang Q, Deng D, Wang Q, Huang W. CG-Recognizer: A biosignal-based continuous gesture recognition system. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103995.
10. Pancholi S, Joshi AM. Advanced Energy Kernel-Based Feature Extraction Scheme for Improved EMG-PR-Based Prosthesis Control Against Force Variation. IEEE Trans Cybern 2022; 52:3819-3828. PMID: 32946409. DOI: 10.1109/tcyb.2020.3016595.
Abstract
The EMG signal is a widely studied, clinically viable, and reliable source for controlling bionic and prosthetic devices with the aid of machine-learning algorithms. The decisive step in an EMG pattern recognition (EMG-PR)-based control scheme is to extract the features with minimum loss of neural information. This article proposes a novel feature extraction method based on advanced energy kernel-based features (AEKFs). The proposed method is evaluated on a scientific dataset containing six types of upper limb motion with three different force variations. Furthermore, EMG signals for eight upper limb gestures were acquired for testing the algorithm on a DSP processor. The efficiency of the proposed feature set has been investigated using classification accuracy (CA), Davies-Bouldin (DB) index-based separability measurement, and time complexity as performance metrics. Moreover, the proposed AEKF features, along with an LDA classifier, have been implemented on a DSP processor (ARM Cortex-M4) for real-time viability. Offline comparison with existing approaches shows that AEKF features exhibit lower time complexity along with a higher CA of 97.33%; on the DSP processor, a CA of ≈92% is reported. All offline analyses were run in MATLAB 2015a on a 3.40-GHz Intel Core i7 machine.
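The paper's AEKF formulation is not spelled out in the abstract, so as a related illustration only, the classic Teager-Kaiser energy operator (TKEO) is the best-known energy-kernel feature for EMG and shows what "energy kernel" means operationally; it is not the authors' AEKF method:

```python
import numpy as np

def tkeo(x):
    """Teager-Kaiser energy operator, a classic energy-kernel EMG feature:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*sin(w*n), TKEO equals A^2 * sin(w)^2 exactly,
# i.e., it tracks both amplitude and frequency ("energy") of the signal.
A, w = 2.0, 0.3
n = np.arange(1000)
psi = tkeo(A * np.sin(w * n))
print(f"mean TKEO: {psi.mean():.4f} (expected {A**2 * np.sin(w)**2:.4f})")
```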
11. Classifying Muscle States with One-Dimensional Radio-Frequency Signals from Single Element Ultrasound Transducers. Sensors (Basel) 2022; 22:2789. PMID: 35408403. PMCID: PMC9002976. DOI: 10.3390/s22072789.
Abstract
The reliable assessment of muscle states, such as contracted vs. non-contracted or relaxed vs. fatigued muscles, is crucial in many sports and rehabilitation scenarios, such as the assessment of therapeutic measures. The goal of this work was to deploy machine learning (ML) models based on one-dimensional (1-D) sonomyography (SMG) signals to facilitate low-cost and wearable ultrasound devices. One-dimensional SMG is a non-invasive technique that uses 1-D ultrasound radio-frequency signals to measure muscle states and has the advantage of acquiring information from deep soft tissue layers. To mimic real-life scenarios, we did not emphasize the acquisition of particularly distinct signals. The ML models exploited muscle contraction signals of eight volunteers and muscle fatigue signals of 21 volunteers. We evaluated them with different schemes on a variety of data types, such as unprocessed or processed raw signals, and found that comparatively simple ML models, such as support vector machines or logistic regression, yielded the best performance with respect to accuracy and evaluation time. We conclude that our framework for muscle contraction and muscle fatigue classification is well suited to facilitate low-cost, wearable devices based on ML models using 1-D SMG.
12. Wrist and finger motion recognition via M-mode ultrasound signal: A feasibility study. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103112.
13. Zeng J, Zhou Y, Yang Y, Yan J, Liu H. Fatigue-sensitivity Comparison of sEMG and A-mode Ultrasound based Hand Gesture Recognition. IEEE J Biomed Health Inform 2021; 26:1718-1725. PMID: 34699373. DOI: 10.1109/jbhi.2021.3122277.
Abstract
Although physiological-signal-based human-machine interfaces (HMIs) have recently developed rapidly, their practical use is restricted by many real-world environmental factors, one of which is muscle fatigue. This paper explores the sensitivity of surface electromyography (sEMG) and A-mode ultrasound (AUS) sensing modalities to muscle fatigue in the context of hand gesture recognition tasks. Two metrics, mean classification accuracy (mCA) and decline rate (DR), are proposed to evaluate the accuracy and muscle fatigue sensitivity of sEMG- and AUS-based HMIs. A muscle-fatigue-inducing experiment was designed, and eight subjects were recruited to participate. The gesture recognition accuracies of sEMG and AUS under non-fatigue and fatigue states are compared using a Mahalanobis-distance-based linear discriminant analysis (LDA) classifier. In addition, Mahalanobis-distance-based metrics, the repeatability index (RI) and separability index (SI), are introduced to evaluate the changes in the feature distribution during muscle fatigue and to reveal the cause of the difference in fatigue sensitivity between sEMG and AUS signals. The experimental results demonstrate that the fatigue sensitivity of the AUS signal is better than that of the sEMG signal. Specifically, with an LDA classifier trained in the non-fatigue state, the testing accuracy of the sEMG signal is 94.96% in the non-fatigue state but drops to 68.26% in the fatigue state, while the testing accuracy of the AUS signal in the corresponding states is 99.68% and 91.24%. The AUS signal attains a higher mCA and lower DR, indicating that it has advantages over the sEMG signal in terms of both accuracy and muscle fatigue sensitivity. In addition, the RI and SI analyses reveal that, before and after muscle fatigue, the consistency of the AUS feature distribution is better than that of sEMG. These outcomes validate that AUS is more tolerant than sEMG to the feature migration caused by muscle fatigue.
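The Mahalanobis distance underlying the RI/SI-style metrics above measures how far two feature distributions have drifted apart, normalized by their covariance. A minimal sketch (the exact RI/SI definitions are the paper's own; the pooled-covariance form and the toy data below are assumptions of this example):

```python
import numpy as np

def mahalanobis_between_sets(A, B):
    """Mahalanobis distance between the means of two feature-sample sets,
    using their pooled covariance."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    pooled = (np.cov(A, rowvar=False) + np.cov(B, rowvar=False)) / 2.0
    d = mu_a - mu_b
    return float(np.sqrt(d @ np.linalg.inv(pooled) @ d))

rng = np.random.default_rng(4)
# Two toy feature clouds: the same "class" before and after a mean shift,
# mimicking feature migration caused by fatigue.
before = rng.normal(loc=0.0, size=(200, 5))
after = rng.normal(loc=1.0, size=(200, 5))
dist = mahalanobis_between_sets(before, after)
print(f"Mahalanobis distance: {dist:.2f}")
```

A small before/after distance for one class (high repeatability) and a large distance between classes (high separability) is the regime in which a fixed classifier stays accurate despite fatigue.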
14. Zhang Q, Iyer A, Sun Z, Kim K, Sharma N. A Dual-Modal Approach Using Electromyography and Sonomyography Improves Prediction of Dynamic Ankle Movement: A Case Study. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1944-1954. PMID: 34428143. DOI: 10.1109/tnsre.2021.3106900.
Abstract
For decades, surface electromyography (sEMG) has been a popular non-invasive bio-sensing technology for predicting human joint motion. However, cross-talk, interference from adjacent muscles, and its inability to measure deeply located muscles limit its performance in predicting joint motion. Recently, ultrasound (US) imaging has been proposed as an alternative non-invasive technology to predict joint movement due to its high signal-to-noise ratio, direct visualization of targeted tissue, and ability to access deep-seated muscles. This paper proposes a dual-modal approach that combines US imaging and sEMG for predicting volitional dynamic ankle dorsiflexion movement. Three feature sets: 1) a uni-modal set with four sEMG features, 2) a uni-modal set with four US imaging features, and 3) a dual-modal set with four dominant sEMG and US imaging features, together with measured ankle dorsiflexion angles, were used to train multiple machine learning regression models. The experimental results from a seated posture and five walking trials at different speeds, ranging from 0.50 m/s to 1.50 m/s, showed that the dual-modal set significantly reduced the prediction root mean square errors (RMSEs). Compared to the uni-modal sEMG feature set, the dual-modal set reduced RMSEs by up to 47.84% for the seated posture and up to 77.72% for the walking trials. Similarly, when compared to the US imaging feature set, the dual-modal set reduced RMSEs by up to 53.95% for the seated posture and up to 58.39% for the walking trials. The findings show that potentially the dual-modal sensing approach can be used as a superior sensing modality to predict human intent of a continuous motion and implemented for volitional control of clinical rehabilitative and assistive devices.
15. Pancholi S, Joshi AM. Intelligent Upper-Limb Prosthetic Control (iULP) with Novel Feature Extraction Method for Pattern Recognition Using EMG. J Mech Med Biol 2021. DOI: 10.1142/s0219519421500433.
Abstract
EMG-signal-based pattern recognition (EMG-PR) techniques have attracted considerable attention for developing myoelectric prostheses. The performance of prosthesis-control applications mainly depends on the extraction of salient features with minimum loss of neural information. Machine learning algorithms have a significant role to play in the development of intelligent upper-limb prosthetic control (iULP) using the EMG signal. This paper proposes a new feature extraction technique, advanced time derivative moments (ATDM), for effective pattern recognition in amputees. Four heterogeneous datasets were used for testing and validation of the proposed technique: three taken from the standard NinaPro database and a fourth comprising data collected from three amputees. The efficiency of ATDM features is examined with the help of the Davies-Bouldin (DB) index for separability, classification accuracy, and computational complexity. Compared with similar work, ATDM features show an excellent classification accuracy of 98.32% with relatively lower time complexity, and the low DB values demonstrate good separation of the features belonging to the various classes. The results were obtained on a 2.6 GHz Intel Core i7 processor with MATLAB 2015a.
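The Davies-Bouldin index used here to judge feature separability is available directly in scikit-learn (lower is better: it compares within-class scatter to between-class distance). A small sketch on synthetic feature clouds, which are assumptions of the example rather than EMG data:

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(5)

# Same labeling over two feature sets: well-separated vs overlapping classes.
labels = np.repeat([0, 1, 2], 100)
tight = np.vstack([rng.normal(loc=5.0 * c, scale=0.5, size=(100, 4))
                   for c in range(3)])
loose = np.vstack([rng.normal(loc=0.5 * c, scale=1.0, size=(100, 4))
                   for c in range(3)])

db_tight = davies_bouldin_score(tight, labels)   # low: good separability
db_loose = davies_bouldin_score(loose, labels)   # high: poor separability
print(f"DB separated: {db_tight:.2f}, DB overlapping: {db_loose:.2f}")
```

A feature extractor that drives the DB index down, as ATDM is reported to do, makes the downstream classifier's job easier regardless of which classifier is used.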
Affiliations:
- Sidharth Pancholi: Department of Electronics & Communication, MNIT, Jaipur 302017, Rajasthan, India
- Amit M. Joshi: Department of Electronics & Communication, MNIT, Jaipur 302017, Rajasthan, India
16. Rosati G, Cisotto G, Sili D, Compagnucci L, De Giorgi C, Pavone EF, Paccagnella A, Betti V. Inkjet-printed fully customizable and low-cost electrodes matrix for gesture recognition. Sci Rep 2021; 11:14938. PMID: 34294822. PMCID: PMC8298403. DOI: 10.1038/s41598-021-94526-5.
Abstract
The use of surface electromyography (sEMG) is rapidly spreading, from robotic prostheses and muscle computer interfaces to rehabilitation devices controlled by residual muscular activities. In this context, sEMG-based gesture recognition plays an enabling role in controlling prosthetics and devices in real-life settings. Our work aimed at developing a low-cost, print-and-play platform to acquire and analyse sEMG signals that can be arranged in a fully customized way, depending on the application and the users' needs. We produced 8-channel sEMG matrices to measure the muscular activity of the forearm using innovative nanoparticle-based inks to print the sensors embedded into each matrix using a commercial inkjet printer. Then, we acquired the multi-channel sEMG data from 12 participants while repeatedly performing twelve standard finger movements (six extensions and six flexions). Our results showed that inkjet printing-based sEMG signals ensured significant similarity values across repetitions in every participant, a large enough difference between movements (dissimilarity index above 0.2), and an overall classification accuracy of 93-95% for flexion and extension, respectively.
Collapse
Affiliation(s)
- Giulio Rosati
  - Department of Information Engineering, University of Padova, via G. Gradenigo 6b, 35131, Padova, Italy
- Giulia Cisotto
  - Department of Information Engineering, University of Padova, via G. Gradenigo 6b, 35131, Padova, Italy
  - NCNP, National Centre of Neurology and Psychiatry, Tokyo, Japan
  - CNIT, the National, Inter-University Consortium for Telecommunications, Rome, Italy
- Daniele Sili
  - Department of Psychology, University of Rome "La Sapienza", Piazzale Aldo Moro 5, 00185, Rome, Italy
  - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306/354, 00179, Rome, Italy
- Luca Compagnucci
  - Department of Psychology, University of Rome "La Sapienza", Piazzale Aldo Moro 5, 00185, Rome, Italy
  - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306/354, 00179, Rome, Italy
- Chiara De Giorgi
  - Department of Psychology, University of Rome "La Sapienza", Piazzale Aldo Moro 5, 00185, Rome, Italy
  - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306/354, 00179, Rome, Italy
- Alessandro Paccagnella
  - Department of Information Engineering, University of Padova, via G. Gradenigo 6b, 35131, Padova, Italy
- Viviana Betti
  - Department of Psychology, University of Rome "La Sapienza", Piazzale Aldo Moro 5, 00185, Rome, Italy
  - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306/354, 00179, Rome, Italy
|
17
|
Zhao N, Yang X, Zhang Z, Khan MB. Circulating Nurse Assistant: Non-Contact Body Centric Gesture Recognition Towards Reducing Iatrogenic Contamination. IEEE J Biomed Health Inform 2021; 25:2305-2316. [PMID: 33290234 DOI: 10.1109/jbhi.2020.3042998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Iatrogenic contamination causes serious health threats to both patients and healthcare staff. Contact operation is an important transmission route for nosocomial infection. Reducing direct contact during medical treatment can reduce nosocomial infection quickly and effectively. Scientific and technological progress in the 5G era brings new solutions to the problem of iatrogenic contamination. We conducted experiments at 27 GHz and 37 GHz to achieve contactless gesture recognition through the fingerprint of the body centric channel. The original channel S-parameters can achieve 82% (27 GHz) and 89% (37 GHz) basic recognition accuracy through simple statistical analysis. Basic switch recognition and multi-gesture selection recognition can meet the common operation requirements of circulating nurses, greatly reducing contact operations and the probability of cross-contamination. Fully physically isolated body centric channel gesture sensing provides a new entry point for reducing iatrogenic contamination.
|
18
|
Jiang S, Kang P, Song X, Lo B, Shull P. Emerging Wearable Interfaces and Algorithms for Hand Gesture Recognition: A Survey. IEEE Rev Biomed Eng 2021; 15:85-102. [PMID: 33961564 DOI: 10.1109/rbme.2021.3078190] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Hands are vital in a wide range of fundamental daily activities, and neurological diseases that impede hand function can significantly affect quality of life. Wearable hand gesture interfaces hold promise to restore and assist hand function and to enhance human-human and human-computer communication. The purpose of this review is to synthesize current novel sensing interfaces and algorithms for hand gesture recognition, and the scope of applications covers rehabilitation, prosthesis control, sign language recognition, and human-computer interaction. Results showed that electrical, dynamic, acoustical/vibratory, and optical sensing were the primary input modalities in gesture recognition interfaces. Two categories of algorithms were identified: 1) classification algorithms for predefined, fixed hand poses and 2) regression algorithms for continuous finger and wrist joint angles. Conventional machine learning algorithms, including linear discriminant analysis, support vector machines, random forests, and non-negative matrix factorization, have been widely used for a variety of gesture recognition applications, and deep learning algorithms have more recently been applied to model the complex relationship between sensor signals and multi-articulated hand postures. Future research should focus on increasing recognition accuracy with larger hand gesture datasets, improving reliability and robustness for daily use outside of the laboratory, and developing softer, less obtrusive interfaces.
|
19
|
Rabe KG, Jahanandish MH, Boehm JR, Majewicz Fey A, Hoyt K, Fey NP. Ultrasound Sensing Can Improve Continuous Classification of Discrete Ambulation Modes Compared to Surface Electromyography. IEEE Trans Biomed Eng 2020; 68:1379-1388. [PMID: 33085612 DOI: 10.1109/tbme.2020.3032077] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Clinical translation of "intelligent" lower-limb assistive technologies relies on robust control interfaces capable of accurately detecting user intent. To date, mechanical sensors and surface electromyography (EMG) have been the primary sensing modalities used to classify ambulation. Ultrasound (US) imaging can be used to detect user intent by characterizing structural changes of muscle. Our study evaluates wearable US imaging as a new sensing modality for continuous classification of five discrete ambulation modes: level, incline, decline, stair ascent, and stair descent, and benchmarks performance relative to EMG sensing. Ten able-bodied subjects were equipped with a wearable US scanner and eight unilateral EMG sensors. Time-intensity features were recorded from US images of three thigh muscles. Features from sliding windows of EMG signals were analyzed in two configurations: one including 5 EMG sensors on muscles around the thigh, and another with 3 additional sensors placed on the shank. Linear discriminant analysis was implemented to continuously classify these phase-dependent features of each sensing modality as one of five ambulation modes. US-based sensing statistically improved mean classification accuracy to 99.8% (99.5-100% CI) compared to 8-EMG sensors (85.8%; 84.0-87.6% CI) and 5-EMG sensors (75.3%; 74.5-76.1% CI). Further, separability analyses show the importance of superficial and deep US information for stair classification relative to other modes. These results are the first to demonstrate the ability of US-based sensing to classify discrete ambulation modes, highlighting the potential for improved assistive device control using less widespread, less superficial and higher resolution sensing of skeletal muscle.
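Linear discriminant analysis, the classifier this abstract applies to windowed ultrasound and EMG features, can be implemented compactly. The sketch below is a generic multi-class LDA with a pooled within-class covariance; the regularization term and all data shapes are illustrative assumptions, not details from the study.

```python
# Minimal multi-class linear discriminant analysis (LDA) sketch: each class
# is summarized by its mean, a pooled within-class covariance is shared, and
# samples are assigned to the class with the highest linear score.
import numpy as np

class SimpleLDA:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Pooled within-class covariance, lightly regularized for stability.
        cov = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in self.classes_)
        cov = cov / (len(X) - len(self.classes_))
        self.icov_ = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
        return self

    def predict(self, X):
        # Linear score per class: x . Sigma^-1 mu_c - 0.5 mu_c . Sigma^-1 mu_c
        scores = X @ self.icov_ @ self.means_.T \
            - 0.5 * np.sum(self.means_ @ self.icov_ * self.means_, axis=1)
        return self.classes_[np.argmax(scores, axis=1)]
```

In the study's setting, `X` would hold phase-dependent time-intensity (US) or windowed EMG features and `y` the five ambulation modes.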
|
20
|
Jahanandish MH, Rabe KG, Fey NP, Hoyt K. Ultrasound Features of Skeletal Muscle Can Predict Kinematics of Upcoming Lower-Limb Motion. Ann Biomed Eng 2020; 49:822-833. [PMID: 32959134 DOI: 10.1007/s10439-020-02617-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 09/10/2020] [Indexed: 10/23/2022]
Abstract
Seamless integration of lower-limb assistive devices with the human body requires an intuitive human-machine interface, which would benefit from predicting the intent of individuals in advance of the upcoming motion. Ultrasound imaging was recently introduced as an intuitive sensing interface. The objective of the present study was to investigate the predictability of joint kinematics using ultrasound features of the rectus femoris muscle during a non-weight-bearing knee extension/flexion. Motion prediction accuracy was evaluated in 67 ms increments, up to 600 ms in time. Statistical analysis was used to evaluate the feasibility of motion prediction, and the linear mixed-effects model was used to determine a prediction time window where the joint angle prediction error is barely perceivable by the sample population, hence clinically reliable. Surprisingly, statistical tests revealed that the prediction accuracy of the joint angle was more sensitive to temporal shifts than the accuracy of the joint angular velocity prediction. Overall, predictability of the upcoming joint kinematics using ultrasound features of skeletal muscle was confirmed, and a time window for a statistically and clinically reliable prediction was found between 133 and 142 ms. A reliable prediction of user intent may provide the time needed for processing, control planning, and actuation of the assistive devices at critical points during ambulation, contributing to the intuitive behavior of lower-limb assistive devices.
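The look-ahead idea in this abstract, predicting the joint angle some fixed lag into the future from current muscle features, reduces to fitting a regression between time-shifted series. The sketch below uses a plain least-squares map; the synthetic sinusoidal signal and the feature definition are illustrative assumptions, not the study's ultrasound features.

```python
# Sketch of lagged kinematic prediction: learn a least-squares map from
# features at time t to the joint angle at time t + lag, mirroring the
# abstract's incremental (e.g. 67 ms-step) look-ahead evaluation.
import numpy as np

def fit_lookahead(features, angle, lag):
    """Least-squares weights mapping features[t] -> angle[t + lag]."""
    X = np.hstack([features[:-lag], np.ones((len(features) - lag, 1))])
    w, *_ = np.linalg.lstsq(X, angle[lag:], rcond=None)
    return w

def predict_lookahead(features, w):
    """Apply learned weights (with bias) to a feature matrix."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w
```

Sweeping `lag` and scoring the prediction error against a perceivability threshold would reproduce the kind of analysis used to find the reliable 133-142 ms window.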
Affiliation(s)
- M Hassan Jahanandish
  - Department of Bioengineering, The University of Texas at Dallas, Richardson, TX, 75080, USA
- Kaitlin G Rabe
  - Department of Bioengineering, The University of Texas at Dallas, Richardson, TX, 75080, USA
- Nicholas P Fey
  - Department of Bioengineering, The University of Texas at Dallas, Richardson, TX, 75080, USA
  - Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX, USA
  - Department of Physical Medicine and Rehabilitation, UT Southwestern Medical Center, Dallas, TX, USA
- Kenneth Hoyt
  - Department of Bioengineering, The University of Texas at Dallas, Richardson, TX, 75080, USA
  - Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
|
21
|
Yang X, Yan J, Fang Y, Zhou D, Liu H. Simultaneous Prediction of Wrist/Hand Motion via Wearable Ultrasound Sensing. IEEE Trans Neural Syst Rehabil Eng 2020; 28:970-977. [PMID: 32142449 DOI: 10.1109/tnsre.2020.2977908] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The ability to predict wrist and hand motions simultaneously is essential for natural control of hand prostheses. In this paper, we propose a novel method that includes subclass discriminant analysis (SDA) and principal component analysis for the simultaneous prediction of wrist rotation (pronation/supination) and finger gestures using wearable ultrasound. We tested the method on eight finger gestures with concurrent wrist rotations. Results showed that SDA was able to achieve accurate classification of both finger gestures and wrist rotations under dynamic wrist rotations. When grouping the wrist rotations into three subclasses, about 99.2 ± 1.2% of finger gestures and 92.8 ± 1.4% of wrist rotations can be accurately classified. Moreover, we found that the first principal component (PC1) of the selected ultrasound features was linear to the wrist rotation angle regardless of finger gestures. We further used PC1 in an online tracking task for continuous wrist control and demonstrated that a wrist tracking precision (R2) of 0.954 ± 0.012 and a finger gesture classification accuracy of 96.5 ± 1.7% can be simultaneously achieved, with only two minutes of user training. Our proposed simultaneous wrist/hand control scheme is training-efficient and robust, paving the way for musculature-driven artificial hand control and rehabilitation treatment.
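The PC1 observation above, that the first principal component of the ultrasound features tracks wrist angle almost linearly, can be reproduced in a few lines: extract PC1 by SVD, then fit a linear score-to-angle map. The synthetic features below are an illustrative assumption, not the paper's data or exact pipeline.

```python
# Sketch of the PC1-based continuous wrist control idea: project ultrasound
# features onto the first principal component and calibrate a linear map
# from the PC1 score to the wrist rotation angle.
import numpy as np

def first_pc(X):
    """Return the mean and first principal axis of feature matrix X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[0]

def pc1_scores(X, mu, axis):
    """Project centered features onto the first principal axis."""
    return (X - mu) @ axis

def fit_angle_map(scores, angles):
    """Least-squares slope/intercept mapping PC1 score -> wrist angle."""
    A = np.column_stack([scores, np.ones_like(scores)])
    coef, *_ = np.linalg.lstsq(A, angles, rcond=None)
    return coef  # (slope, intercept)
```

A brief calibration of this map would correspond to the two minutes of user training reported for the online tracking task.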
|
22
|
Yang X, Yan J, Liu H. Comparative Analysis of Wearable A-Mode Ultrasound and sEMG for Muscle-Computer Interface. IEEE Trans Biomed Eng 2020; 67:2434-2442. [PMID: 31899410 DOI: 10.1109/tbme.2019.2962499] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE While surface electromyography (sEMG) is still dominant in the field of muscle-computer interface, ultrasound (US) sensing has been regarded as a promising alternative to sEMG, owing to its ability to precisely monitor muscle deformations. Among different US modalities, A-mode US is more compact and cost-effective for wearable applications against its cumbersome B-mode counterpart. In this article, we conduct a comprehensive comparison of wearable A-mode US and sEMG on gesture recognition and isometric muscle contraction force estimation. METHODS We experimented with eight types of gesture, with a range of 0-60% maximum voluntary contraction for each motion. RESULTS Results show that A-mode US outperforms sEMG on gesture recognition accuracy, robustness, and discrete force estimation accuracy, while sEMG is superior to US on continuous force estimation accuracy and ease of use in force estimation. Moreover, an extended online experiment demonstrates that the complementary advantages of US and sEMG on gesture recognition and continuous force estimation can be combined for the achievement of multi-class proportional gesture control. SIGNIFICANCE This article demonstrates the potential of A-mode US in automated gesture recognition, and the prospect of sEMG/US fusion for proportional gesture interaction.
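A typical front-end for A-mode US sensing of the kind this comparison relies on is to turn each transducer's RF A-scan into a compact depth-profile feature vector. The sketch below uses rectification plus moving-average smoothing and equal-depth segment means; these specific steps and parameters are assumptions for illustration, not the paper's exact feature extraction.

```python
# Sketch of A-mode ultrasound feature extraction: rectify an RF A-scan,
# smooth it into a crude envelope, then average the envelope over fixed
# depth segments to get one compact feature vector per transducer.
import numpy as np

def envelope(rf, smooth=15):
    """Crude envelope: rectification followed by moving-average smoothing."""
    kernel = np.ones(smooth) / smooth
    return np.convolve(np.abs(rf), kernel, mode="same")

def segment_features(env, n_segments=8):
    """Mean envelope intensity within each equal-depth segment."""
    segs = np.array_split(env, n_segments)
    return np.array([s.mean() for s in segs])
```

Concatenating such segment features across transducers yields the kind of vector that gesture classifiers or force regressors could consume.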
|