1
Koch P, Mohammad-Zadeh K, Maass M, Dreier M, Thomsen O, Parbs TJ, Phan H, Mertins A. sEMG-Based Hand Movement Regression by Prediction of Joint Angles With Recurrent Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6519-6523. [PMID: 34892603] [DOI: 10.1109/embc46164.2021.9630042]
Abstract
This work takes a step towards better biosignal-based hand gesture recognition by investigating strategies for a reliable prediction of hand joint angles. Such strategies are especially important for medical applications, e.g. to achieve good acceptance of hand prostheses among amputees. A recurrent neural network with a small footprint is deployed to estimate the joint positions from surface electromyography data measured at the forearm. As the predictions are expected not to be smooth, different post-processing methods and a regularisation term for the objective function of the network are proposed. The experiments were conducted on publicly available databases. The results reveal that both the post-processing strategies and the regularisation have a positive impact on the results, with a maximal relative improvement of 6.13%. On the one hand, the post-processing strategies introduce an additional delay; consequently, the improvement is analysed in the context of the caused delay. On the other hand, the regularisation strategy does not cause a delay and can easily be adjusted to cope with different ground truths or to compensate for certain problems in the hand tracking.
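The abstract does not specify the regularisation term; a minimal sketch of one plausible choice, a penalty on frame-to-frame jumps in the predicted joint-angle trajectory, could look as follows (the function names and the weight `lam` are illustrative, not the paper's formulation):

```python
import numpy as np

def smoothness_penalty(pred, lam=0.1):
    """Hypothetical regulariser: penalise large frame-to-frame changes
    in the predicted joint-angle trajectory (pred: [T, n_joints])."""
    diffs = np.diff(pred, axis=0)          # first differences over time
    return lam * float(np.mean(diffs ** 2))

def regularised_loss(pred, target, lam=0.1):
    """Standard MSE regression loss plus the smoothness term."""
    mse = float(np.mean((pred - target) ** 2))
    return mse + smoothness_penalty(pred, lam)
```

Unlike post-processing filters, such a term acts only at training time and therefore adds no inference delay, which matches the delay argument made in the abstract.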
2
Phan H, Chen OY, Koch P, Lu Z, McLoughlin I, Mertins A, De Vos M. Towards More Accurate Automatic Sleep Staging via Deep Transfer Learning. IEEE Trans Biomed Eng 2021; 68:1787-1798. [PMID: 32866092] [DOI: 10.1109/tbme.2020.3020381]
Abstract
BACKGROUND Despite recent significant progress in the development of automatic sleep staging methods, building a good model remains a big challenge for sleep studies with a small cohort due to data-variability and data-inefficiency issues. This work presents a deep transfer learning approach to overcome these issues and enable transferring knowledge from a large dataset to a small cohort for automatic sleep staging. METHODS We start from a generic end-to-end deep learning framework for sequence-to-sequence sleep staging and derive two networks as the means for transfer learning. The networks are first trained in the source domain (i.e. the large database). The pretrained networks are then finetuned in the target domain (i.e. the small cohort) to complete the knowledge transfer. We employ the Montreal Archive of Sleep Studies (MASS) database, consisting of 200 subjects, as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette subset and the Sleep Telemetry subset of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database. The target domains are purposely adopted to cover different degrees of data mismatch to the source domain. RESULTS Our experimental results show significant performance improvement on automatic sleep staging on the target domains achieved with the proposed deep transfer learning approach. CONCLUSIONS These results suggest the efficacy of the proposed approach in addressing the above-mentioned data-variability and data-inefficiency issues. SIGNIFICANCE As a consequence, it would enable one to improve the quality of automatic sleep staging models when the amount of data is relatively small. (The source code and the pretrained models are published at https://github.com/pquochuy/sleep_transfer_learning.)
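The transfer recipe itself (pretrain on the large source domain, then finetune on the small target cohort) can be illustrated with a deliberately tiny linear-regression stand-in; the data, model, and step counts below are assumptions for illustration, not the paper's networks:

```python
import numpy as np

def train(X, y, w0, lr=0.01, steps=200):
    """Gradient descent on mean-squared error, starting from w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_src = np.array([1.0, -2.0, 0.5])            # "source-domain" model
w_tgt = w_src + 0.1                           # slightly mismatched target
Xs, Xt = rng.normal(size=(1000, 3)), rng.normal(size=(20, 3))
ys, yt = Xs @ w_src, Xt @ w_tgt               # large source, small target

w_pre = train(Xs, ys, np.zeros(3))                  # pretrain on source
w_ft = train(Xt, yt, w_pre, steps=20)               # finetune on target
w_scratch = train(Xt, yt, np.zeros(3), steps=20)    # no transfer
```

With only 20 target samples and few update steps, starting from the pretrained weights lands much closer to the target solution than training from scratch, which is the data-inefficiency argument above in miniature.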
3
Koch P, Dreier M, Larsen A, Parbs TJ, Maass M, Phan H, Mertins A. Regression of Hand Movements from sEMG Data with Recurrent Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:3783-3787. [PMID: 33018825] [DOI: 10.1109/embc44109.2020.9176278]
Abstract
Most wearable human-machine interfaces concerning hand movements focus only on classifying a limited number of hand gestures. With the introduction of deep learning, surface-electromyography-based hand gesture classification systems have improved drastically. It is therefore worth investigating whether the classification can be replaced by a movement regression of all the different movable hand parts. As approaches based on recurrent neural networks have proven their ability to solve the classification problem, we also choose them for the regression problem. Experiments were conducted with multiple different network architectures on several databases. Furthermore, due to the lack of a reliable measure for comparing different gesture regression approaches, we propose an interpretable and reproducible new error measure that can even handle noisy ground-truth data. The results reveal the general possibility of regressing detailed hand movements. Even with the relatively simple networks, the hand gestures can be regressed quite accurately.
4
Jahanandish MH, Rabe KG, Fey NP, Hoyt K. Ultrasound Features of Skeletal Muscle Can Predict Kinematics of Upcoming Lower-Limb Motion. Ann Biomed Eng 2020; 49:822-833. [PMID: 32959134] [DOI: 10.1007/s10439-020-02617-7]
Abstract
Seamless integration of lower-limb assistive devices with the human body requires an intuitive human-machine interface, which would benefit from predicting the intent of individuals in advance of the upcoming motion. Ultrasound imaging was recently introduced as an intuitive sensing interface. The objective of the present study was to investigate the predictability of joint kinematics using ultrasound features of the rectus femoris muscle during non-weight-bearing knee extension/flexion. Motion prediction accuracy was evaluated in 67 ms increments, up to 600 ms ahead in time. Statistical analysis was used to evaluate the feasibility of motion prediction, and a linear mixed-effects model was used to determine a prediction time window in which the joint angle prediction error is barely perceivable by the sample population and hence clinically reliable. Surprisingly, statistical tests revealed that the prediction accuracy of the joint angle was more sensitive to temporal shifts than the accuracy of the joint angular velocity prediction. Overall, the predictability of upcoming joint kinematics using ultrasound features of skeletal muscle was confirmed, and a time window for a statistically and clinically reliable prediction was found between 133 and 142 ms. A reliable prediction of user intent may provide the time needed for processing, control planning, and actuation of assistive devices at critical points during ambulation, contributing to the intuitive behavior of lower-limb assistive devices.
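Evaluating prediction accuracy at increasing look-ahead, as described above, amounts to comparing the prediction at time t with the ground truth k samples later; a hedged sketch (the error metric and the sampling rate, chosen so one sample is roughly 67 ms, are assumptions, not the study's exact protocol):

```python
import numpy as np

def lookahead_rmse(pred, truth, max_shift=9):
    """RMSE between the prediction at time t and the ground truth at
    t + k samples, for k = 0..max_shift. At ~15 Hz one sample is
    ~67 ms, so k indexes 0, 67, 133, ... ms of look-ahead."""
    errs = []
    for k in range(max_shift + 1):
        e = truth[k:] - pred[: len(truth) - k]
        errs.append(float(np.sqrt(np.mean(e ** 2))))
    return errs
```

Scanning the resulting error curve for the largest k at which the error stays below a perceivability threshold is one way to arrive at a reliable-prediction time window like the 133-142 ms reported above.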
Collapse
Affiliation(s)
- M Hassan Jahanandish
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX 75080, USA
- Kaitlin G Rabe
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX 75080, USA
- Nicholas P Fey
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX 75080, USA; Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX, USA; Department of Physical Medicine and Rehabilitation, UT Southwestern Medical Center, Dallas, TX, USA
- Kenneth Hoyt
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX 75080, USA; Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
5
Phan H, Chen OY, Koch P, Mertins A, De Vos M. Fusion of End-to-End Deep Learning Models for Sequence-to-Sequence Sleep Staging. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:1829-1833. [PMID: 31946253] [DOI: 10.1109/embc.2019.8857348]
Abstract
Sleep staging, the process of identifying the sleep stages associated with polysomnography (PSG) epochs, plays an important role in sleep monitoring and in diagnosing sleep disorders. We present in this work a model fusion approach to automate this task. The fusion model is composed of two base sleep-stage classifiers, SeqSleepNet and DeepSleepNet, both of which are state-of-the-art end-to-end deep learning models complying with the sequence-to-sequence sleep staging scheme. In addition, in light of ensemble methods, we reason and demonstrate that these two networks form a good ensemble of models due to their high diversity. Experiments show that the fusion approach is able to preserve the strength of the base networks in the fusion model, leading to consistent performance gains over the two base networks. The fusion model obtains the best modelling results we have observed so far on the Montreal Archive of Sleep Studies (MASS) dataset with 200 subjects, achieving an overall accuracy of 88.0%, a macro F1-score of 84.3%, and a Cohen's kappa of 0.828.
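Assuming each base network outputs per-epoch class posteriors, one simple way to realise such a fusion is multiplicative combination with renormalisation; the paper's exact fusion rule may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def fuse_posteriors(p_a, p_b):
    """Combine two models' per-epoch sleep-stage posteriors
    (each [T, n_stages]) multiplicatively and renormalise each row
    so it sums to 1 again."""
    fused = p_a * p_b
    return fused / fused.sum(axis=1, keepdims=True)
```

Multiplicative fusion rewards stages on which both models agree, which is exactly where the diversity of two unlike base networks pays off.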
6
Koch P, Dreier M, Maass M, Bohme M, Phan H, Mertins A. A Recurrent Neural Network for Hand Gesture Recognition based on Accelerometer Data. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:5088-5091. [PMID: 31947003] [DOI: 10.1109/embc.2019.8856844]
Abstract
For many applications, hand gesture recognition systems that rely exclusively on biosignal data are mandatory. Usually, these systems have to be affordable, reliable, and mobile. The hand is moved by muscle contractions that cause motions of the forearm skin. These motions can be captured with cheap and reliable accelerometers placed around the forearm. Since accelerometers can also easily be integrated into mobile systems, the possibility of robust hand gesture recognition based on accelerometer signals is evaluated in this work. To this end, a neural network architecture consisting of two different kinds of recurrent neural network (RNN) cells is proposed. Experiments on three databases reveal that this relatively small network outperforms by far state-of-the-art hand gesture recognition approaches that rely on multi-modal data. The combination of accelerometer data and an RNN forms a robust hand gesture classification system, i.e., the performance of the network does not vary much between subjects, and it is outstanding for amputees. Furthermore, the proposed network uses windows of only 5 ms to classify the hand gestures. Consequently, this approach allows for a quick, and potentially delay-free, hand gesture detection.
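Framing the raw accelerometer stream into the very short windows mentioned above is straightforward; the sampling rate here is an assumption for illustration, not a value given in the abstract:

```python
import numpy as np

def frame_windows(signal, fs=2000, win_ms=5):
    """Slice a multi-channel accelerometer recording ([T, n_channels])
    into non-overlapping windows of win_ms milliseconds; each window
    would then be classified individually by the RNN."""
    win = int(fs * win_ms / 1000)                 # samples per window
    n = len(signal) // win
    return signal[: n * win].reshape(n, win, signal.shape[1])
```

Because each decision is made from such a short window, the classifier can react almost as soon as the gesture starts, which is what makes the low-delay claim above plausible.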
7
Phan H, Andreotti F, Cooray N, Chén OY, De Vos M. Joint Classification and Prediction CNN Framework for Automatic Sleep Stage Classification. IEEE Trans Biomed Eng 2019; 66:1285-1296. [PMID: 30346277] [PMCID: PMC6487915] [DOI: 10.1109/tbme.2018.2872652]
Abstract
Correctly identifying sleep stages is important in diagnosing and treating sleep disorders. This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging and, subsequently, introduces a simple yet efficient CNN architecture to power the framework. Given a single input epoch, the novel framework jointly determines its label (classification) and its neighboring epochs' labels (prediction) in the contextual output. While the proposed framework is orthogonal to the widely adopted classification schemes, which take one or multiple epochs as contextual inputs and produce a single classification decision on the target epoch, we demonstrate its advantages in several ways. First, it leverages the dependency among consecutive sleep epochs while avoiding the problems experienced with the common classification schemes. Second, even with a single model, the framework has the capacity to produce multiple decisions, which are essential for obtaining a good performance, as in ensemble-of-models methods, with very little induced computational overhead. Probabilistic aggregation techniques are then proposed to leverage the availability of multiple decisions. To illustrate the efficacy of the proposed framework, we conducted experiments on two public datasets: Sleep-EDF Expanded (Sleep-EDF), which consists of 20 subjects, and the Montreal Archive of Sleep Studies (MASS) dataset, which consists of 200 subjects. The proposed framework yields an overall classification accuracy of 82.3% and 83.6%, respectively. We also show that the proposed framework not only is superior to the baselines based on the common classification schemes but also outperforms existing deep-learning approaches. To our knowledge, this is the first work to go beyond standard single-output classification and consider multitask neural networks for automatic sleep staging. This framework provides avenues for further studies of different neural-network architectures for automatic sleep staging.
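Because the framework emits a decision for the target epoch and its neighbours, each epoch ends up with several overlapping decisions; one plausible probabilistic aggregation is geometric (log-domain) averaging, sketched here with illustrative shapes rather than the paper's exact scheme:

```python
import numpy as np

def aggregate_decisions(outputs, context=1):
    """outputs[t] holds the posteriors ([2*context+1, n_classes]) that
    the model produced for epochs t-context .. t+context when epoch t
    was the input. Combine all decisions that refer to the same epoch
    by averaging in the log domain, then renormalise."""
    T, K, C = outputs.shape
    log_sum = np.zeros((T, C))
    count = np.zeros((T, 1))
    for t in range(T):
        for j in range(K):
            ref = t + j - context          # epoch this decision is for
            if 0 <= ref < T:
                log_sum[ref] += np.log(outputs[t, j] + 1e-12)
                count[ref, 0] += 1
    probs = np.exp(log_sum / count)
    return probs / probs.sum(axis=1, keepdims=True)
```

This is how a single model yields the ensemble-like benefit described above: each epoch's final label rests on several decisions, at the cost of one bookkeeping pass rather than several trained models.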
Affiliation(s)
- Huy Phan
- Institute of Biomedical Engineering, University of Oxford, Oxford OX3 7DQ, U.K.
- Navin Cooray
- Institute of Biomedical Engineering, University of Oxford
8
Phan H. SeqSleepNet: End-to-End Hierarchical Recurrent Neural Network for Sequence-to-Sequence Automatic Sleep Staging. IEEE Trans Neural Syst Rehabil Eng 2019; 27:400-410. [PMID: 30716040] [PMCID: PMC6481557] [DOI: 10.1109/tnsre.2019.2896659]
Abstract
Automatic sleep staging has often been treated as a simple classification problem that aims at determining the label of individual target polysomnography epochs one at a time. In this paper, we tackle the task as a sequence-to-sequence classification problem that receives a sequence of multiple epochs as input and classifies all of their labels at once. For this purpose, we propose a hierarchical recurrent neural network named SeqSleepNet (source code is available at http://github.com/pquochuy/SeqSleepNet). At the epoch processing level, the network consists of a filterbank layer tailored to learn frequency-domain filters for preprocessing and an attention-based recurrent layer designed for short-term sequential modeling. At the sequence processing level, a recurrent layer is placed on top of the learned epoch-wise features for long-term modeling of sequential epochs. The classification is then carried out on the output vectors at every time step of the top recurrent layer to produce the sequence of output labels. Despite being hierarchical, the network can be trained in an end-to-end fashion with a strategy we present. We show that the proposed network outperforms the state-of-the-art approaches, achieving an overall accuracy, macro F1-score, and Cohen's kappa of 87.1%, 83.3%, and 0.815 on a publicly available dataset with 200 subjects.
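The two-level hierarchy can be summarised schematically: an epoch-level encoder turns each PSG epoch into a vector, a sequence-level layer models the sequence of vectors, and a classifier labels every time step. The three callables below are placeholders standing in for the learned layers, not SeqSleepNet's actual implementations:

```python
import numpy as np

def seq_to_seq_staging(epochs, epoch_encoder, seq_layer, classifier):
    """Classify a whole sequence of epochs at once: encode each epoch,
    model the epoch sequence, emit one label per time step."""
    z = np.stack([epoch_encoder(e) for e in epochs])   # [L, d] epoch vectors
    h = seq_layer(z)                                   # [L, d] sequence features
    return [classifier(ht) for ht in h]                # L output labels
```

Producing one label per time step, rather than a single label per forward pass, is what distinguishes this sequence-to-sequence scheme from the one-epoch-at-a-time classification the abstract argues against.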
Affiliation(s)
- Huy Phan
- School of Computing, University of Kent, Chatham Maritime, Kent ME4 4AG, United Kingdom, and the Institute of Biomedical Engineering, University of Oxford, Oxford OX3 7DQ, United Kingdom