1. Alsuradi H, Hong J, Mazi H, Eid M. Neuro-motor controlled wearable augmentations: current research and emerging trends. Front Neurorobot 2024;18:1443010. PMID: 39544848; PMCID: PMC11560910; DOI: 10.3389/fnbot.2024.1443010.
Abstract
Wearable augmentations (WAs) designed for movement and manipulation, such as exoskeletons and supernumerary robotic limbs, are used to enhance the physical abilities of healthy individuals and to substitute or restore lost functionality for impaired individuals. Non-invasive neuro-motor (NM) technologies, including electroencephalography (EEG) and surface electromyography (sEMG), promise direct and intuitive communication between the brain and the WA. After presenting a historical perspective, this review proposes a conceptual model for NM-controlled WAs and analyzes key design aspects, such as hardware design, mounting methods, control paradigms, and sensory feedback, that have direct implications for the user experience and, in the long term, for the embodiment of WAs. The literature is surveyed and categorized into three main areas: hand WAs, upper body WAs, and lower body WAs. The review concludes by highlighting the primary findings, challenges, and trends in NM-controlled WAs, motivating researchers and practitioners to further explore and evaluate the development of WAs toward a better quality of life.
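In their simplest form, the sEMG control paradigms covered by reviews of this kind map a processed muscle-activity envelope proportionally onto an actuator command. The sketch below illustrates only that baseline idea; the window length, calibration levels, and stand-in signal are illustrative assumptions, not details taken from the review.

import numpy as np

def emg_envelope(raw, fs=1000, win_ms=200):
    """Rectify and smooth a raw sEMG window (moving-average envelope)."""
    rect = np.abs(raw - np.mean(raw))   # remove DC offset, then rectify
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode="same")

def proportional_command(envelope, rest_level, mvc_level):
    """Map envelope amplitude to a normalized actuator command in [0, 1]."""
    act = (np.mean(envelope) - rest_level) / (mvc_level - rest_level)
    return float(np.clip(act, 0.0, 1.0))

# Hypothetical calibration values (rest and maximum voluntary contraction).
rest, mvc = 0.02, 0.80
raw_window = np.random.default_rng(0).normal(0, 0.3, 1000)  # stand-in signal
cmd = proportional_command(emg_envelope(raw_window), rest, mvc)
print(f"actuator command: {cmd:.2f}")
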
Affiliation(s)
- Haneen Alsuradi: Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Joseph Hong: Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Helin Mazi: Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Mohamad Eid: Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates

2. Senadheera I, Hettiarachchi P, Haslam B, Nawaratne R, Sheehan J, Lockwood KJ, Alahakoon D, Carey LM. AI Applications in Adult Stroke Recovery and Rehabilitation: A Scoping Review Using AI. Sensors (Basel) 2024;24:6585. PMID: 39460066; PMCID: PMC11511449; DOI: 10.3390/s24206585.
Abstract
Stroke is a leading cause of long-term disability worldwide. With the advancements in sensor technologies and data availability, artificial intelligence (AI) holds the promise of improving the amount, quality and efficiency of care and enhancing the precision of stroke rehabilitation. We aimed to identify and characterize the existing research on AI applications in stroke recovery and rehabilitation of adults, including categories of application and progression of technologies over time. Data were collected from peer-reviewed articles across various electronic databases up to January 2024. Insights were extracted using AI-enhanced multi-method, data-driven techniques, including clustering of themes and topics. This scoping review summarizes outcomes from 704 studies. Four common themes (impairment, assisted intervention, prediction and imaging, and neuroscience) were identified, in which time-linked patterns emerged. The impairment theme revealed a focus on motor function, gait and mobility, while the assisted intervention theme included applications of robotic and brain-computer interface (BCI) techniques. AI applications progressed over time, starting from conceptualization and then expanding to a broader range of techniques in supervised learning, artificial neural networks (ANN), natural language processing (NLP) and more. Applications focused on upper limb rehabilitation were reviewed in more detail, with machine learning (ML), deep learning techniques and sensors such as inertial measurement units (IMU) used for upper limb and functional movement analysis. AI applications have the potential to facilitate tailored therapeutic delivery, thereby contributing to the optimization of rehabilitation outcomes and promoting sustained recovery from rehabilitation to real-world settings.
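The abstract does not specify which "AI-enhanced clustering of themes and topics" was used; a common baseline for such thematic grouping is TF-IDF vectorization followed by k-means, sketched below under that assumption. The toy abstracts and cluster count are placeholders, not the review's corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [  # stand-ins for the 704 study abstracts
    "robotic therapy for upper limb motor recovery after stroke",
    "brain-computer interface training improves gait and mobility",
    "deep learning on MRI predicts stroke outcome",
    "IMU sensors quantify functional movement in rehabilitation",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # 2 themes for the toy corpus
for label, text in zip(km.labels_, abstracts):
    print(label, text[:50])
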
Affiliation(s)
- Isuru Senadheera: Centre for Data Analytics and Cognition, La Trobe Business School, La Trobe University, Melbourne, VIC 3086, Australia; Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
- Prasad Hettiarachchi: Centre for Data Analytics and Cognition, La Trobe Business School, La Trobe University, Melbourne, VIC 3086, Australia; Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
- Brendon Haslam: Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia; Neurorehabilitation and Recovery, The Florey, Melbourne, VIC 3086, Australia
- Rashmika Nawaratne: Centre for Data Analytics and Cognition, La Trobe Business School, La Trobe University, Melbourne, VIC 3086, Australia
- Jacinta Sheehan: Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
- Kylee J. Lockwood: Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia
- Damminda Alahakoon: Centre for Data Analytics and Cognition, La Trobe Business School, La Trobe University, Melbourne, VIC 3086, Australia
- Leeanne M. Carey: Occupational Therapy, School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC 3086, Australia; Neurorehabilitation and Recovery, The Florey, Melbourne, VIC 3086, Australia

3. Jing H, Zheng T, Zhang Q, Liu B, Sun K, Li L, Zhao J, Zhu Y. A Mouth and Tongue Interactive Device to Control Wearable Robotic Limbs in Tasks where Human Limbs Are Occupied. Biosensors (Basel) 2024;14:213. PMID: 38785687; PMCID: PMC11118463; DOI: 10.3390/bios14050213.
Abstract
The Wearable Robotic Limb (WRL) is a type of robotic arm worn on the human body that aims to enhance the wearer's operational capabilities. However, controlling and perceiving the WRL when the wearer's own limbs are heavily occupied with a primary task remains a challenge. Existing interactive methods, such as voice, gaze, and electromyography (EMG), have limitations in control precision and convenience. To address this, we have developed an interactive device that utilizes the mouth and tongue. This device is lightweight and compact, allowing wearers to achieve continuous motion and contact force control of the WRL. By using a tongue controller and a mouth gas pressure sensor, wearers can control the WRL while also receiving sensitive contact feedback through changes in mouth pressure. To facilitate bidirectional interaction between the wearer and the WRL, we have devised an algorithm that divides WRL control into motion and force-position hybrid modes. To evaluate the performance of the device, we conducted an experiment in which ten participants completed a pin-hole assembly task with the assistance of the WRL system. The results show that the device enables continuous control of the position and contact force of the WRL, with users perceiving feedback through mouth airflow resistance. However, the experiment also revealed some shortcomings of the device, including user fatigue and its impact on breathing; follow-up testing showed that fatigue levels decrease with training, and the remaining limitations appear addressable through structural enhancements. Overall, our mouth and tongue interactive device shows promising potential for controlling the WRL during tasks where human limbs are occupied.
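The abstract describes the motion and force-position hybrid modes only at a high level; the sketch below shows one plausible reading, in which mouth pressure selects the mode and scales the force setpoint. All thresholds, gains, and units are invented for illustration, not taken from the paper.

from dataclasses import dataclass

@dataclass
class WRLCommand:
    mode: str        # "motion" or "force_position"
    velocity: float  # normalized velocity command
    force: float     # contact-force setpoint in newtons

# Illustrative thresholds: light pressure steers, firm pressure regulates force.
CONTACT_PRESSURE = 2.0   # kPa, assumed mode-switch point
MAX_PRESSURE = 8.0       # kPa, assumed comfortable maximum
MAX_FORCE = 10.0         # N, assumed force ceiling

def mouth_to_command(tongue_xy, pressure_kpa):
    """Map tongue position and mouth pressure to a hypothetical WRL command."""
    if pressure_kpa < CONTACT_PRESSURE:
        # Free-space motion: tongue displacement drives end-effector velocity.
        return WRLCommand("motion", velocity=tongue_xy[0], force=0.0)
    # In contact: pressure above the switch point scales the force setpoint.
    span = MAX_PRESSURE - CONTACT_PRESSURE
    f = MAX_FORCE * min(1.0, (pressure_kpa - CONTACT_PRESSURE) / span)
    return WRLCommand("force_position", velocity=0.0, force=f)

print(mouth_to_command((0.4, 0.0), 1.0))   # motion mode
print(mouth_to_command((0.0, 0.0), 5.0))   # force-position mode
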
Affiliation(s)
- Yanhe Zhu (and co-authors H.J., T.Z., Q.Z., B.L., K.S., L.L., J.Z.): State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China

4. Liang G, Cao D, Wang J, Zhang Z, Wu Y. EISATC-Fusion: Inception Self-Attention Temporal Convolutional Network Fusion for Motor Imagery EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2024;32:1535-1545. PMID: 38536681; DOI: 10.1109/TNSRE.2024.3382226.
Abstract
The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interface paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, which restricts the application of the MI-BCI. In this paper, we propose the EISATC-Fusion model for MI EEG decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency band information, and a new cnnCosMSA module based on CNN and cos attention to resolve attention collapse and improve the interpretability of the model. The TCN module is improved with depthwise separable convolution to reduce the number of model parameters. Layer fusion consists of feature fusion and decision fusion, which fully utilizes the features output by the model and enhances its robustness. We also improve the two-stage training strategy: early stopping is used to prevent overfitting, with the accuracy and loss of the validation set as the stopping indicators. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively, and cross-subject classification accuracies of 67.42% and 71.23% (by transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through weight visualization.
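Two of the building blocks named here, depthwise separable temporal convolution and multi-head self-attention over the time axis, can be sketched compactly in PyTorch. This is not the authors' EISATC-Fusion code: the channel counts are arbitrary, and standard dot-product attention stands in for their cos-attention variant.

import torch
import torch.nn as nn

class DSTemporalConv(nn.Module):
    """Depthwise separable 1-D convolution, as used to slim down the TCN."""
    def __init__(self, ch, k=15):
        super().__init__()
        self.depthwise = nn.Conv1d(ch, ch, k, padding=k // 2, groups=ch)
        self.pointwise = nn.Conv1d(ch, ch, 1)  # mixes channels cheaply
    def forward(self, x):            # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))

class AttnTemporalBlock(nn.Module):
    """MSA over time followed by a DS temporal conv, roughly in the paper's spirit."""
    def __init__(self, ch=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.conv = DSTemporalConv(ch)
    def forward(self, x):            # x: (batch, channels, time)
        t = x.transpose(1, 2)        # (batch, time, channels) for attention
        a, _ = self.attn(t, t, t)
        return self.conv(a.transpose(1, 2)) + x   # residual connection

x = torch.randn(8, 32, 250)          # e.g. 8 trials, 32 feature maps, 1 s at 250 Hz
print(AttnTemporalBlock()(x).shape)  # torch.Size([8, 32, 250])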

5. Li T, Sun G, Yu L, Zhou K. HRBUST-LLPED: A Benchmark Dataset for Wearable Low-Light Pedestrian Detection. Micromachines (Basel) 2023;14:2164. PMID: 38138333; PMCID: PMC10745713; DOI: 10.3390/mi14122164.
Abstract
Detecting pedestrians in low-light conditions is challenging, especially on wearable platforms. Infrared cameras have been employed to enhance detection capabilities, whereas low-light cameras capture more intricate pedestrian features. With this in mind, we introduce a low-light pedestrian detection dataset (HRBUST-LLPED) by capturing pedestrian data on campus using wearable low-light cameras. Most of the data were gathered under starlight-level illumination. Our dataset annotates 32,148 pedestrian instances in 4269 keyframes, with a high pedestrian density of more than seven people per image. We provide four lightweight low-light pedestrian detection models based on the advanced YOLOv5 and YOLOv8 architectures. By pretraining the models on public datasets and fine-tuning them on HRBUST-LLPED, our best model achieves 69.90% AP@0.5:0.95 with an inference time of 1.6 ms. The experiments demonstrate that our work can help advance pedestrian detection research using low-light cameras in wearable devices.
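Fine-tuning a pretrained detector on such a dataset follows a standard recipe; below is a minimal sketch using the ultralytics package that distributes YOLOv8. The dataset YAML path and training settings are placeholders, not the paper's actual configuration.

from ultralytics import YOLO

# Start from COCO-pretrained weights, then fine-tune on the low-light set.
model = YOLO("yolov8n.pt")
model.train(
    data="llped.yaml",   # hypothetical dataset config (paths + 'pedestrian' class)
    epochs=50,           # placeholder schedule
    imgsz=640,
)
metrics = model.val()    # reports mAP@0.5:0.95 among other metrics
print(metrics.box.map)
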
Affiliation(s)
- Guanglu Sun: School of Computer Science and Technology, Harbin University of Science and Technology, No. 52 Xuefu Road, Nangang District, Harbin 150080, China

6. Jing H, Zheng T, Zhang Q, Sun K, Li L, Lai M, Zhao J, Zhu Y. Human Operation Augmentation through Wearable Robotic Limb Integrated with Mixed Reality Device. Biomimetics (Basel) 2023;8:479. PMID: 37887610; PMCID: PMC10604667; DOI: 10.3390/biomimetics8060479.
Abstract
Mixed reality technology can give humans an intuitive visual experience and, combined with multi-source information from the human body, can provide a comfortable human-robot interaction experience. This paper applies a mixed reality device (HoloLens 2) to provide interactive communication between the wearer and a wearable robotic limb (supernumerary robotic limb, SRL). HoloLens 2 can capture human body information, including eye gaze, hand gestures, and voice input, and can return feedback to the wearer through augmented reality and audio output, serving as the communication bridge needed in human-robot interaction. A wearable robotic arm integrated with HoloLens 2 is proposed to augment the wearer's capabilities. Taking two typical practical tasks in aircraft manufacturing, cable installation and electrical connector soldering, as examples, the task models and interaction scheme are designed. Finally, human augmentation is evaluated in terms of task completion time statistics.
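The abstract names the input channels (eye gaze, hand gestures, voice) but not their mapping to SRL actions; the dispatch sketch below is one hypothetical arrangement, with all event names and commands invented for illustration.

def dispatch(event):
    """Map a HoloLens 2 interaction event to a hypothetical SRL command."""
    etype, payload = event
    if etype == "gaze":          # gaze ray fixes the work point for the SRL
        return ("move_to", payload)
    if etype == "gesture" and payload == "pinch":
        return ("grip", None)    # pinch closes the gripper
    if etype == "voice":
        return {"hold": ("hold_position", None),
                "release": ("open_gripper", None)}.get(payload, ("noop", None))
    return ("noop", None)

# A short interaction trace: look at the cable, pinch to grip, say "hold".
for ev in [("gaze", (0.31, 0.12, 0.88)), ("gesture", "pinch"), ("voice", "hold")]:
    print(ev, "->", dispatch(ev))
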
Affiliation(s)
- Hongwei Jing: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Tianjiao Zheng: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Qinghua Zhang: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Kerui Sun: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Lele Li: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Mingzhu Lai: School of Mathematics and Statistics, Hainan Normal University, Haikou 571158, China
- Jie Zhao: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Yanhe Zhu: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China

7. Autonomous grasping of 3-D objects by a vision-actuated robot arm using Brain–Computer Interface. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104765.

8. Feng J, Li Y, Jiang C, Liu Y, Li M, Hu Q. Classification of motor imagery electroencephalogram signals by using adaptive cross-subject transfer learning. Front Hum Neurosci 2022;16:1068165. PMID: 36618992; PMCID: PMC9811670; DOI: 10.3389/fnhum.2022.1068165.
Abstract
Introduction: Electroencephalogram (EEG)-based motor imagery (MI) classification is an important aspect of brain-computer interfaces (BCIs), which bridge the neural system and computer devices by decoding brain signals into recognizable machine commands. However, due to the small number of MI electroencephalogram (MI-EEG) training samples for a single subject and the large individual differences in MI-EEG among subjects, the generalization and accuracy of a model on a specific MI task may be poor.
Methods: To solve these problems, an adaptive cross-subject transfer learning algorithm is proposed, based on kernel mean matching (KMM) and the transfer learning adaptive boosting (TrAdaBoost) method. First, the common spatial pattern (CSP) is used to extract spatial features. Then, to make the feature distributions more similar across subjects, the KMM algorithm computes a sample weight matrix that aligns the means of the source and target domains and reduces distribution differences among subjects. Finally, this weight matrix is used as the initialization weight of TrAdaBoost, which then adaptively selects source-domain samples closer to the target task distribution to assist in building the classification model.
Results: To verify the effectiveness and feasibility of the proposed method, the algorithm was applied to the BCI Competition IV datasets and to in-house datasets. The average classification accuracy of the proposed method is 89.1% on the public datasets and 80.4% on the in-house datasets.
Discussion: Compared with existing methods, the proposed method effectively improves the classification accuracy of MI-EEG signals. The application to the in-house dataset further verifies the effectiveness of the algorithm, and the results have clinical guiding significance for brain rehabilitation.
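The distinctive part of this pipeline is the TrAdaBoost re-weighting loop, sketched below on stand-in CSP features with a shallow tree as a hypothetical base learner. The KMM step (a kernel-based quadratic program) is omitted, with uniform initial weights standing in for its weight matrix; binary labels in {0, 1} are assumed.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(Xs, ys, Xt, yt, rounds=10):
    """Core TrAdaBoost loop: shrink weights of unhelpful source trials,
    grow weights of misclassified target trials."""
    n, m = len(ys), len(yt)
    X = np.vstack([Xs, Xt]); y = np.concatenate([ys, yt])
    w = np.ones(n + m)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / rounds))
    learners, betas = [], []
    for _ in range(rounds):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=p)
        miss = np.abs(h.predict(X) - y)                  # 0 if right, 1 if wrong
        err = np.sum(w[n:] * miss[n:]) / np.sum(w[n:])   # error on target only
        err = np.clip(err, 1e-6, 0.499)
        beta_t = err / (1.0 - err)
        w[:n] *= beta_src ** miss[:n]                    # down-weight bad source
        w[n:] *= beta_t ** -miss[n:]                     # up-weight hard target
        learners.append(h); betas.append(beta_t)
    def predict(Xq):  # weighted vote of the later half of the learners
        half = rounds // 2
        votes = sum(np.log(1.0 / b) * l.predict(Xq)
                    for l, b in zip(learners[half:], betas[half:]))
        thresh = 0.5 * sum(np.log(1.0 / b) for b in betas[half:])
        return (votes >= thresh).astype(int)
    return predict

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0, 1, (100, 6)), rng.integers(0, 2, 100)   # stand-in CSP features
Xt, yt = rng.normal(0.3, 1, (30, 6)), rng.integers(0, 2, 30)
print(tradaboost(Xs, ys, Xt, yt)(Xt[:5]))
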
Affiliation(s)
- Jin Feng: Department of Student Affairs, Guilin Normal College, Guilin, Guangxi, China
- Yunde Li: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Chengliang Jiang: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Yu Liu (corresponding author): School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Mingxin Li: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi, China
- Qinghui Hu: School of Computer Science and Engineering, Guilin University of Aerospace Technology, Guilin, Guangxi, China

9. Hand Exoskeleton Design and Human–Machine Interaction Strategies for Rehabilitation. Bioengineering (Basel) 2022;9:682. DOI: 10.3390/bioengineering9110682.
Abstract
Stroke and related complications such as hemiplegia and disability create huge burdens for human society in the 21st century, creating a great need for rehabilitation and daily life assistance. To address this issue, continuous efforts are devoted to human–machine interaction (HMI) technology, which aims to capture and recognize users' intentions and fulfil their needs via physical response. Based on the physiological structure of the human hand, a dimension-adjustable linkage-driven hand exoskeleton with 10 active degrees of freedom (DoFs) and 3 passive DoFs is proposed in this study, granting high-level synergy with the human hand. To account for the weight of the adopted linkage design, the hand exoskeleton can be mounted on an existing upper-limb exoskeleton system, which greatly diminishes the burden on users. Three rehabilitation/daily life assistance modes are developed (namely, robot-in-charge, therapist-in-charge, and patient-in-charge modes) to meet specific personal needs. To realize HMI, a thin-film force sensor matrix and inertial measurement units (IMUs) are installed in both the hand exoskeleton and the corresponding controller. Outstanding sensor–machine synergy is confirmed by trigger rate evaluation, kernel density estimation (KDE), and a confusion matrix. To recognize user intention, a genetic algorithm (GA) is applied to search for the optimal hyperparameters of a 1D convolutional neural network (CNN), and the average intention-recognition accuracy for the eight actions/gestures examined reaches 97.1% (based on K-fold cross-validation). The hand exoskeleton system gives people with limited motor ability the possibility to conduct self-rehabilitation and complex daily activities.
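The GA-over-CNN-hyperparameters step can be illustrated generically. The sketch below evolves a small population over (filters, kernel size, learning rate); a synthetic fitness function stands in for the K-fold accuracy of the 1D CNN, which would be too slow to inline here, and the search ranges are invented.

import random

SEARCH = {  # illustrative hyperparameter ranges, not the paper's
    "filters": [16, 32, 64, 128],
    "kernel": [3, 5, 7, 9],
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
}

def fitness(ind):
    """Stand-in for cross-validated accuracy of a 1D CNN with these settings."""
    return -abs(ind["filters"] - 64) / 64 - abs(ind["kernel"] - 5) - abs(ind["lr"] - 3e-4) * 100

def random_ind():
    return {k: random.choice(v) for k, v in SEARCH.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH.items()}

def evolve(pop_size=12, generations=15):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
print(evolve())   # best (filters, kernel, lr) found for the stand-in fitness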

10. Jaipriya D, Sriharipriya KC. A comparative analysis of masking empirical mode decomposition and a neural network with feed-forward and back propagation along with masking empirical mode decomposition to improve the classification performance for a reliable brain-computer interface. Front Comput Neurosci 2022;16:1010770. PMID: 36405787; PMCID: PMC9672820; DOI: 10.3389/fncom.2022.1010770.
Abstract
In general, feature extraction and classification are used in various fields such as image processing, pattern recognition, and signal processing. Extracting effective characteristics from raw electroencephalogram (EEG) signals plays a crucial role in motor imagery (MI) brain-computer interfaces. Recently, there has been a great deal of focus on motor imagery in EEG signals, since they encode a person's intent to perform an action. Researchers have been using MI signals to assist paralyzed people and even to let them move on their own with certain equipment, such as wheelchairs. As a result, proper decoding is an important step required for the interconnection of the brain and the computer. EEG decoding is challenging because of poor SNR, signal complexity, and other factors. Choosing an appropriate feature extraction method to improve motor imagery recognition performance remains a research hotspot. To extract the features of the EEG signal for the classification task, this paper proposes a Masking Empirical Mode Decomposition (MEMD)-based Feed-Forward Back-Propagation Neural Network (MEMD-FFBPNN). The dataset consists of EEG signals that are first normalized using the min-max method, passed to the MEMD to extract features, and then given to the FFBPNN to classify the tasks. The accuracy of the proposed MEMD-FFBPNN has been measured using the confusion matrix and mean square error and has been recorded at up to 99.9%. Thus, the proposed method gives better accuracy than other conventional methods.
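The pipeline (min-max normalization, EMD-based features, feed-forward network) can be sketched as below. Plain EMD from the PyEMD package stands in for the masking variant, and the energy-per-IMF feature is an assumption, since the abstract does not state which statistics are fed to the network; the dataset is synthetic.

import numpy as np
from PyEMD import EMD                     # pip install EMD-signal
from sklearn.neural_network import MLPClassifier

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def imf_energies(signal, n_imfs=4):
    """Decompose one EEG channel and use per-IMF energy as the feature vector.
    Plain EMD stands in here for the paper's masking EMD."""
    imfs = EMD().emd(minmax(signal))
    feats = [np.sum(imf ** 2) for imf in imfs[:n_imfs]]
    feats += [0.0] * (n_imfs - len(feats))  # pad if fewer IMFs were found
    return np.array(feats)

rng = np.random.default_rng(0)
# Stand-in dataset: 40 single-channel trials, two MI classes.
X = np.array([imf_energies(rng.normal(0, 1, 512) + cls * np.sin(np.linspace(0, 60, 512)))
              for cls in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
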
Affiliation(s)
- K. C. Sriharipriya: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India