1
Alsuradi H, Hong J, Mazi H, Eid M. Neuro-motor controlled wearable augmentations: current research and emerging trends. Front Neurorobot 2024;18:1443010. [PMID: 39544848] [PMCID: PMC11560910] [DOI: 10.3389/fnbot.2024.1443010]
Abstract
Wearable augmentations (WAs) designed for movement and manipulation, such as exoskeletons and supernumerary robotic limbs, are used to enhance the physical abilities of healthy individuals and to substitute or restore lost functionality for impaired individuals. Non-invasive neuro-motor (NM) technologies, including electroencephalography (EEG) and surface electromyography (sEMG), promise direct and intuitive communication between the brain and the WA. After presenting a historical perspective, this review proposes a conceptual model for NM-controlled WAs and analyzes key design aspects, such as hardware design, mounting methods, control paradigms, and sensory feedback, that have direct implications for the user experience and, in the long term, for the embodiment of WAs. The literature is surveyed and categorized into three main areas: hand WAs, upper-body WAs, and lower-body WAs. The review concludes by highlighting the primary findings, challenges, and trends in NM-controlled WAs, motivating researchers and practitioners to further explore and evaluate the development of WAs in support of a better quality of life.
Affiliation(s)
- Haneen Alsuradi
  - Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
  - Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Joseph Hong
  - Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Helin Mazi
  - Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Mohamad Eid
  - Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
  - Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
2
Lee BH, Cho JH, Kwon BH, Lee M, Lee SW. Iteratively Calibratable Network for Reliable EEG-Based Robotic Arm Control. IEEE Trans Neural Syst Rehabil Eng 2024;32:2793-2804. [PMID: 39074028] [DOI: 10.1109/tnsre.2024.3434983]
Abstract
Robotic arms are increasingly being utilized in shared workspaces, which necessitates the accurate interpretation of human intentions for both efficiency and safety. Electroencephalogram (EEG) signals, commonly employed to measure brain activity, offer a direct communication channel between humans and robotic arms. However, the ambiguous and unstable characteristics of EEG signals, coupled with their widespread distribution, make it challenging to collect sufficient data and hinder calibration performance for new signals, thereby reducing the reliability of EEG-based applications. To address these issues, this study proposes an iteratively calibratable network aimed at enhancing the reliability and efficiency of EEG-based robotic arm control systems. The proposed method integrates feature inputs with network expansion techniques, allowing a network trained on an extensive initial dataset to adapt effectively to new users during calibration. Additionally, the approach combines motor imagery and speech imagery datasets to increase both its intuitiveness and the number of command classes. The evaluation is conducted in a pseudo-online manner, with a robotic arm operating in real time to collect data, which are then analyzed offline. The results showed that the proposed method outperformed the comparison group across 10 sessions and achieved competitive results when the two paradigms were combined. This confirms that the network can be calibrated and personalized using only new data from new users.
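The core calibration idea in this entry — start from a model fitted to a large initial dataset, then adapt it with each new user's small calibration batches — can be sketched schematically. The nearest-centroid classifier and blending rule below are illustrative stand-ins, not the paper's network-expansion method:

```python
# Schematic of iterative calibration: class prototypes learned from a large
# initial dataset are nudged toward each new user's calibration data.

def mean(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class IterativeCalibrator:
    def __init__(self, initial_data):
        # initial_data: {class_label: [feature_vector, ...]} from the base dataset
        self.prototypes = {c: mean(vs) for c, vs in initial_data.items()}

    def calibrate(self, new_data, rate=0.5):
        # Blend each prototype toward the new user's class mean.
        for c, vs in new_data.items():
            user_mean = mean(vs)
            p = self.prototypes[c]
            self.prototypes[c] = [(1 - rate) * p[i] + rate * user_mean[i]
                                  for i in range(len(p))]

    def predict(self, x):
        # Assign the class whose prototype is nearest (squared Euclidean).
        def dist(p):
            return sum((p[i] - x[i]) ** 2 for i in range(len(x)))
        return min(self.prototypes, key=lambda c: dist(self.prototypes[c]))
```

Calling `calibrate` repeatedly with successive sessions of a new user's data personalizes the model without retraining on the base dataset, mirroring the iterative-calibration workflow.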
3
Jia T, Sun J, McGeady C, Ji L, Li C. Enhancing Brain-Computer Interface Performance by Incorporating Brain-to-Brain Coupling. Cyborg Bionic Syst 2024;5:0116. [PMID: 38680535] [PMCID: PMC11052607] [DOI: 10.34133/cbsystems.0116]
Abstract
Human cooperation relies on key features of social interaction in order to reach desirable outcomes. Similarly, human-robot interaction may benefit from integration with human-human interaction factors. In this paper, we aim to investigate brain-to-brain coupling during motor imagery (MI)-based brain-computer interface (BCI) training using eye-contact and hand-touch interaction. Twelve pairs of friends (experimental group) and 10 pairs of strangers (control group) were recruited for MI-based BCI tests concurrent with electroencephalography (EEG) hyperscanning. Event-related desynchronization (ERD) was estimated to measure cortical activation, and interbrain functional connectivity was assessed using multilevel statistical analysis. Furthermore, we compared BCI classification performance under different social interaction conditions. In the experimental group, greater ERD was found around the contralateral sensorimotor cortex under social interaction conditions compared with MI without any social interaction. Notably, EEG channels with decreased power were mainly distributed around the frontal, central, and occipital regions. A significant increase in interbrain coupling was also found under social interaction conditions. BCI decoding accuracies were significantly improved in the eye contact condition and eye and hand contact condition compared with the no-interaction condition. However, for the strangers' group, no positive effects were observed in comparisons of cortical activations between interaction and no-interaction conditions. These findings indicate that social interaction can improve the neural synchronization between familiar partners with enhanced brain activations and brain-to-brain coupling. This study may provide a novel method for enhancing MI-based BCI performance in conjunction with neural synchronization between users.
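Event-related desynchronization (ERD), the cortical-activation measure used in this study, is conventionally computed as the percentage band-power decrease during the task relative to a baseline interval: ERD% = (P_baseline − P_task) / P_baseline × 100. A minimal sketch (plain DFT band power; the 8–13 Hz mu band and window handling are illustrative choices, not the study's exact parameters):

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Mean squared DFT magnitude over [f_lo, f_hi] Hz (plain DFT, no windowing)."""
    n = len(samples)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
            count += 1
    return power / max(count, 1)

def erd_percent(baseline, task, fs, f_lo=8.0, f_hi=13.0):
    """ERD% = (baseline - task) / baseline * 100; positive = desynchronization."""
    p_base = band_power(baseline, fs, f_lo, f_hi)
    p_task = band_power(task, fs, f_lo, f_hi)
    return (p_base - p_task) / p_base * 100.0
```

A task-period mu rhythm at half the baseline amplitude yields an ERD of about 75%, since band power scales with the square of amplitude.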
Affiliation(s)
- Tianyu Jia
  - Lab of Intelligent and Biomimetic Machinery, Department of Mechanical Engineering, Tsinghua University, Beijing, China
  - Department of Bioengineering, Imperial College London, London, UK
- Jingyao Sun
  - Lab of Intelligent and Biomimetic Machinery, Department of Mechanical Engineering, Tsinghua University, Beijing, China
- Ciarán McGeady
  - Department of Bioengineering, Imperial College London, London, UK
- Linhong Ji
  - Lab of Intelligent and Biomimetic Machinery, Department of Mechanical Engineering, Tsinghua University, Beijing, China
- Chong Li
  - Lab of Intelligent and Biomimetic Machinery, Department of Mechanical Engineering, Tsinghua University, Beijing, China
  - School of Clinical Medicine, Tsinghua University, Beijing, China
  - Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
4
Jing H, Zheng T, Zhang Q, Liu B, Sun K, Li L, Zhao J, Zhu Y. A Mouth and Tongue Interactive Device to Control Wearable Robotic Limbs in Tasks where Human Limbs Are Occupied. Biosensors (Basel) 2024;14:213. [PMID: 38785687] [PMCID: PMC11118463] [DOI: 10.3390/bios14050213]
Abstract
The Wearable Robotic Limb (WRL) is a type of robotic arm worn on the human body that aims to enhance the wearer's operational capabilities. However, providing additional methods to control and perceive the WRL when the human limbs are heavily occupied with primary tasks remains a challenge. Existing interactive methods, such as voice, gaze, and electromyography (EMG), have limitations in control precision and convenience. To address this, we developed an interactive device that utilizes the mouth and tongue. The device is lightweight and compact, allowing wearers to achieve continuous motion and contact-force control of the WRL. Using a tongue controller and a mouth gas-pressure sensor, wearers can control the WRL while receiving sensitive contact feedback through changes in mouth pressure. To facilitate bidirectional interaction between the wearer and the WRL, we devised an algorithm that divides WRL control into motion and force-position hybrid modes. To evaluate the performance of the device, we conducted an experiment in which ten participants completed a pin-hole assembly task with the assistance of the WRL system. The results show that the device enables continuous control of the position and contact force of the WRL, with users perceiving feedback through mouth airflow resistance. The experiment also revealed some shortcomings of the device, including user fatigue and its impact on breathing; follow-up testing showed that fatigue levels decreased with training, and the remaining limitations appear addressable through structural enhancements. Overall, our mouth and tongue interactive device shows promising potential for controlling the WRL during tasks in which the human limbs are occupied.
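The division into a motion mode and a force-position hybrid mode can be sketched as a simple single-axis control law: track the commanded position until the measured contact force crosses a threshold, then regulate toward a target force. The threshold, gains, and one-dimensional simplification below are illustrative assumptions, not the paper's controller:

```python
def hybrid_step(x, x_target, f_measured, f_target, f_contact=0.5,
                kp_pos=0.8, kp_force=0.05):
    """One control step for a single axis of the robotic limb.

    Returns the commanded position increment. Below the contact threshold
    the limb tracks the wearer's position command; once contact is detected,
    the increment instead regulates force toward f_target (admittance-style).
    """
    if f_measured < f_contact:            # motion mode: free-space tracking
        return kp_pos * (x_target - x)
    # force-position hybrid mode: advance/retreat to hold the target force
    return kp_force * (f_target - f_measured)
```

In free space the step moves the limb toward the target; on contact, an excessive measured force produces a small retreating increment until the target force is held.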
Affiliation(s)
- Yanhe Zhu
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China (shared by H.J., T.Z., Q.Z., B.L., K.S., L.L., and J.Z.)
5
Dominijanni G, Pinheiro DL, Pollina L, Orset B, Gini M, Anselmino E, Pierella C, Olivier J, Shokur S, Micera S. Human motor augmentation with an extra robotic arm without functional interference. Sci Robot 2023;8:eadh1438. [PMID: 38091424] [DOI: 10.1126/scirobotics.adh1438]
Abstract
Extra robotic arms (XRAs) are gaining interest in neuroscience and robotics, offering potential tools for daily activities. However, this compelling opportunity poses new challenges for sensorimotor control strategies and human-machine interfaces (HMIs). A key unsolved challenge is allowing users to proficiently control XRAs without hindering their existing functions. To address this, we propose a pipeline to identify suitable HMIs given a defined task to accomplish with the XRA. Following such a scheme, we assessed a multimodal motor HMI based on gaze detection and diaphragmatic respiration in a purposely designed modular neurorobotic platform integrating virtual reality and a bilateral upper limb exoskeleton. Our results show that the proposed HMI does not interfere with speaking or visual exploration and that it can be used to control an extra virtual arm independently from the biological ones or in coordination with them. Participants showed significant improvements in performance with daily training and retention of learning, with no further improvements when artificial haptic feedback was provided. As a final proof of concept, naïve and experienced participants used a simplified version of the HMI to control a wearable XRA. Our analysis indicates how the presented HMI can be effectively used to control XRAs. The observation that experienced users achieved a success rate 22.2% higher than that of naïve users, combined with the result that naïve users showed average success rates of 74% when they first engaged with the system, endorses the viability of both the virtual reality-based testing and training and the proposed pipeline.
Affiliation(s)
- Giulia Dominijanni
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Daniel Leal Pinheiro
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Department of Neurology and Neurosurgery, Division of Neuroscience, Universidade Federal de São Paulo, São Paulo, Brazil
- Leonardo Pollina
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Bastien Orset
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Martina Gini
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
  - Neuroelectronic Interfaces, Faculty of Electrical Engineering and IT, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen 52074, Germany
- Eugenio Anselmino
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Camilla Pierella
  - Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, and Maternal and Children's Sciences (DINOGMI), University of Genoa, Genoa, Italy
- Jérémy Olivier
  - Institute for Industrial Sciences and Technologies, Haute Ecole du Paysage, d'Ingénierie et d'Architecture (HEPIA), HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
- Solaiman Shokur
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Silvestro Micera
  - Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
6
Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE Trans Cybern 2023;53:7469-7482. [PMID: 36251899] [DOI: 10.1109/tcyb.2022.3211694]
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been utilized to help patients regain motor function and have recently been validated for use by healthy people because of their ability to directly decipher human intentions. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication tool between humans and machines. In this study, neural languages based on speech imagery were decoded directly from brain activity using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural languages. We successfully demonstrated a BMI system that supports a variety of scenarios, such as essential activity, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.
7
Miao M, Yang Z, Zeng H, Zhang W, Xu B, Hu W. Explainable cross-task adaptive transfer learning for motor imagery EEG classification. J Neural Eng 2023;20:066021. [PMID: 37963394] [DOI: 10.1088/1741-2552/ad0c61]
Abstract
Objective. In the field of motor imagery (MI) electroencephalography (EEG)-based brain-computer interfaces, deep transfer learning (TL) has proven to be an effective tool for solving the problem of limited availability of subject-specific data for the training of robust deep learning (DL) models. Although considerable progress has been made in the cross-subject/session and cross-device scenarios, the more challenging problem of cross-task deep TL remains largely unexplored. Approach. We propose a novel explainable cross-task adaptive TL method for MI EEG decoding. First, similarity analysis and data alignment are performed for EEG data of motor execution (ME) and MI tasks. Afterwards, the MI EEG decoding model is obtained via pre-training with extensive ME EEG data and fine-tuning with partial MI EEG data. Finally, expected-gradient-based post-hoc explainability analysis is conducted for the visualization of important temporal-spatial features. Main results. Extensive experiments are conducted on one large ME EEG dataset (High-Gamma) and two large MI EEG datasets (OpenBMI and GIST). The best average classification accuracy of our method reaches 80.00% and 72.73% for OpenBMI and GIST, respectively, outperforming several state-of-the-art algorithms. In addition, the results of the explainability analysis further validate the correlation between ME and MI EEG data and the effectiveness of ME/MI cross-task adaptation. Significance. This paper confirms that the decoding of MI EEG can be well facilitated by pre-existing ME EEG data, which largely relaxes the constraint on training samples for MI EEG decoding and is important in a practical sense.
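The pre-train/fine-tune scheme at the heart of this method — train on plentiful ME data, then adapt with a small amount of MI data at a reduced learning rate — can be sketched with logistic regression standing in for the deep model. The toy feature vectors, learning rates, and epoch counts below are illustrative, not the paper's settings:

```python
import math

def train(data, weights=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; warm-starts from `weights` if given."""
    dim = len(data[0][0])
    w = list(weights) if weights else [0.0] * (dim + 1)  # last entry is the bias
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(dim):
                w[i] -= lr * (p - y) * x[i]   # logistic-loss gradient step
            w[-1] -= lr * (p - y)
    return w

def predict(w, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1 if z > 0 else 0

# Cross-task adaptation sketch: pre-train on "ME" data, fine-tune on "MI" data.
me_data = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
mi_data = [([0.8, 0.2], 1), ([0.2, 0.8], 0)]             # small target-task set
w_pre = train(me_data)                                    # pre-training
w_ft = train(mi_data, weights=w_pre, lr=0.01, epochs=50)  # low-lr fine-tuning
```

The warm start carries the decision boundary learned from the source (ME) task into the target (MI) task, and the reduced learning rate keeps the fine-tuning from overwriting it.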
Affiliation(s)
- Minmin Miao
  - School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
  - Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
- Zhong Yang
  - School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Hong Zeng
  - School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenbin Zhang
  - College of Computer and Information, Hohai University, Nanjing, People's Republic of China
- Baoguo Xu
  - School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenjun Hu
  - School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
  - Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
8
Lian J, Qiao X, Zhao Y, Li S, Wang C, Zhou J. EEG-Based Target Detection Using an RSVP Paradigm under Five Levels of Weak Hidden Conditions. Brain Sci 2023;13:1583. [PMID: 38002543] [PMCID: PMC10670035] [DOI: 10.3390/brainsci13111583]
Abstract
Although target detection based on electroencephalogram (EEG) signals has been extensively investigated recently, EEG-based target detection under weak hidden conditions remains a problem. In this paper, we proposed a rapid serial visual presentation (RSVP) paradigm for target detection under five levels of weak hidden conditions, quantified based on the RGB color space. Eighteen subjects participated in the experiment, and the neural signatures, including P300 amplitude and latency, were investigated. Detection performance was evaluated under the five levels of weak hidden conditions using linear discriminant analysis and support vector machine classifiers on different channel sets. The experimental results showed that, compared with the benchmark condition, (1) the P300 amplitude significantly decreased (8.92 ± 1.24 μV versus 7.84 ± 1.40 μV, p = 0.021) and latency was significantly prolonged (582.39 ± 25.02 ms versus 643.83 ± 26.16 ms, p = 0.028) only under the weakest hidden condition, and (2) the detection accuracy decreased by less than 2% (75.04 ± 3.24% versus 73.35 ± 3.15%, p = 0.029) with a more than 90% reduction in channel number (62 channels versus 6 channels), determined using the proposed channel selection method under the weakest hidden condition. Our study can provide new insights into target detection under weak hidden conditions based on EEG signals with a rapid serial visual presentation paradigm. In addition, it may expand the application of brain-computer interfaces in EEG-based target detection areas.
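The P300 signatures analyzed here — peak amplitude and latency — are typically read off a baseline-corrected averaged epoch as the maximum within a post-stimulus search window. A minimal sketch (the 250–800 ms window and optional baseline interval are illustrative choices, not necessarily the study's parameters):

```python
def p300_features(epoch, fs, t0=0.25, t1=0.8, baseline_samples=None):
    """Return (amplitude_uV, latency_s) of the P300 peak in an averaged epoch.

    epoch: list of amplitudes (uV), sample 0 at stimulus onset.
    fs: sampling rate in Hz. The peak is searched in [t0, t1] seconds.
    If baseline_samples is given, its mean is subtracted first.
    """
    if baseline_samples:
        offset = sum(baseline_samples) / len(baseline_samples)
        epoch = [v - offset for v in epoch]
    lo, hi = int(t0 * fs), min(int(t1 * fs) + 1, len(epoch))
    window = epoch[lo:hi]
    peak_idx = max(range(len(window)), key=lambda i: window[i])
    return window[peak_idx], (lo + peak_idx) / fs
```

Applied to an averaged target-trial epoch, this returns the two quantities compared across hidden conditions in the study: a smaller returned amplitude and a larger returned latency correspond to the attenuation and prolongation reported under the weakest hidden condition.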
Affiliation(s)
- Jinling Lian
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
- Xin Qiao
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
- Yuwei Zhao
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
- Siwei Li
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
- Changyong Wang
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
- Jin Zhou
  - Department of Neural Engineering and Biological Interdisciplinary Studies, Beijing Institute of Basic Medical Sciences, 27 Taiping Rd., Beijing 100850, China
  - Chinese Institute for Brain Research, Zhongguancun Life Science Park, Changping District, Beijing 102206, China
9
Mang J, Xu Z, Qi Y, Zhang T. Favoring the cognitive-motor process in the closed-loop of BCI mediated post stroke motor function recovery: challenges and approaches. Front Neurorobot 2023;17:1271967. [PMID: 37881517] [PMCID: PMC10595019] [DOI: 10.3389/fnbot.2023.1271967]
Abstract
Brain-computer interface (BCI)-mediated rehabilitation is emerging as a solution to restore motor skills in paretic patients after stroke. In the human brain, cortical motor neurons not only fire when actions are carried out but are also activated, in a hard-wired manner, through many movement-related cognitive processes such as imagining, perceiving, and observing actions. Moreover, the recruitment of motor cortices can usually be regulated by environmental conditions, forming a closed loop through neurofeedback. However, this cognitive-motor control loop is often interrupted by the impairment caused by stroke. The need to bridge the stroke-induced gap in the motor control loop is promoting the evolution of BCI-based motor rehabilitation systems and, notably, poses many challenges regarding the disease-specific process of post-stroke motor function recovery. This review aimed to map the current literature on progress in BCI-mediated post-stroke motor function recovery involving cognitive aspects, particularly how the neural circuits of motor control are refired and rewired through motor learning within the BCI-centric closed loop.
Affiliation(s)
- Jing Mang
  - Department of Neurology, China-Japan Union Hospital of Jilin University, Changchun, China
- Zhuo Xu
  - Department of Rehabilitation, China-Japan Union Hospital of Jilin University, Changchun, China
- YingBin Qi
  - Department of Neurology, Jilin Province People's Hospital, Changchun, China
- Ting Zhang
  - Rehabilitation Therapeutics, School of Nursing, Jilin University, Changchun, China
10
Jing H, Zheng T, Zhang Q, Sun K, Li L, Lai M, Zhao J, Zhu Y. Human Operation Augmentation through Wearable Robotic Limb Integrated with Mixed Reality Device. Biomimetics (Basel) 2023;8:479. [PMID: 37887610] [PMCID: PMC10604667] [DOI: 10.3390/biomimetics8060479]
Abstract
Mixed reality technology can give humans an intuitive visual experience and, combined with multi-source information from the human body, can provide a comfortable human-robot interaction experience. This paper applies a mixed reality device (HoloLens 2) to provide interactive communication between the wearer and a wearable robotic limb (supernumerary robotic limb, SRL). HoloLens 2 can obtain human body information, including eye gaze, hand gestures, and voice input, and can also provide feedback to the wearer through augmented reality and audio output, serving as the communication bridge needed in human-robot interaction. We propose a wearable robotic limb system integrated with HoloLens 2 to augment the wearer's capabilities. Taking two typical practical tasks in aircraft manufacturing, cable installation and electrical connector soldering, as examples, the task models and interaction scheme are designed. Finally, the human augmentation is evaluated in terms of task completion time statistics.
Affiliation(s)
- Hongwei Jing
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Tianjiao Zheng
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Qinghua Zhang
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Kerui Sun
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Lele Li
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Mingzhu Lai
  - School of Mathematics and Statistics, Hainan Normal University, Haikou 571158, China
- Jie Zhao
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
- Yanhe Zhu
  - State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
11
Liu X, Wang K, Liu F, Zhao W, Liu J. 3D Convolution neural network with multiscale spatial and temporal cues for motor imagery EEG classification. Cogn Neurodyn 2023;17:1357-1380. [PMID: 37786651] [PMCID: PMC10542086] [DOI: 10.1007/s11571-022-09906-y]
Abstract
Recently, deep learning-based methods have achieved meaningful results in motor imagery electroencephalogram (MI EEG) classification. However, because of the low signal-to-noise ratio and the varying characteristics of brain activity among subjects, these methods lack a subject-adaptive feature extraction mechanism. Another issue is that they neglect important spatial topological information and the global temporal variation trend of MI EEG signals. These issues limit classification accuracy. Here, we propose an end-to-end 3D CNN that extracts multiscale spatial and temporal dependent features to improve the accuracy of 4-class MI EEG classification. The proposed method adaptively assigns higher weights to motor-related spatial channels and temporal sampling cues than to motor-unrelated ones across all brain regions, which can prevent influences caused by biological and environmental artifacts. Experimental evaluation reveals that the proposed method achieved average classification accuracies of 93.06% and 97.05% on two commonly used datasets, demonstrating excellent performance and robustness across subjects compared with other state-of-the-art methods. To verify real-time performance in actual applications, the proposed method is applied to control a robot based on MI EEG signals. The proposed approach effectively addresses the issues of existing methods, improves classification accuracy and BCI system performance, and has great application prospects.
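The adaptive channel-weighting idea — assigning larger weights to motor-related channels than to motor-unrelated ones before classification — can be illustrated with a softmax attention over per-channel relevance scores. In the paper these weights are learned inside the 3D CNN; in the sketch below the scores are supplied explicitly for illustration:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def weight_channels(trial, channel_scores):
    """Scale each channel's samples by its softmax attention weight.

    trial: list of channels, each a list of samples.
    channel_scores: one relevance score per channel (higher = more motor-related).
    """
    weights = softmax(channel_scores)
    return [[w * s for s in channel] for w, channel in zip(weights, trial)]
```

Channels with higher relevance scores dominate the weighted representation, so artifact-prone, motor-unrelated channels contribute less to the downstream classifier.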
Affiliation(s)
- Xiuling Liu
  - College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding 071002, China
- Kaidong Wang
  - College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding 071002, China
- Fengshuang Liu
  - College of Electronic and Information Engineering, Hebei University, Baoding 071002, China
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding 071002, China
- Wei Zhao
  - College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China
- Jing Liu
  - College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China
12
Luo J, Wang Y, Xia S, Lu N, Ren X, Shi Z, Hei X. A shallow mirror transformer for subject-independent motor imagery BCI. Comput Biol Med 2023;164:107254. [PMID: 37499295] [DOI: 10.1016/j.compbiomed.2023.107254]
Abstract
OBJECTIVE Motor imagery BCI plays an increasingly important role in the rehabilitation of motor disorders. However, the position and duration of the discriminative segment in an EEG trial vary from subject to subject and even from trial to trial, which leads to poor performance in subject-independent motor imagery classification. Determining how to detect and utilize the discriminative signal segments is therefore crucial for improving the performance of subject-independent motor imagery BCI. APPROACH In this paper, a shallow mirror transformer is proposed for subject-independent motor imagery EEG classification. Specifically, a multihead self-attention layer with a global receptive field is employed to detect and utilize the discriminative segment within the entire input EEG trial. Furthermore, a mirror EEG signal and a mirror network structure are constructed to improve classification precision through ensemble learning. Finally, a subject-independent setup was used to evaluate the shallow mirror transformer on motor imagery EEG signals from subjects present in the training set and from new subjects. MAIN RESULTS Experimental results on BCI Competition IV datasets 2a and 2b and the OpenBMI dataset demonstrated the promising effectiveness of the proposed shallow mirror transformer, which obtained average accuracies of 74.48% for new subjects and 76.1% for existing subjects, the highest among the compared state-of-the-art methods. In addition, visualization of the attention scores showed the model's ability to detect discriminative EEG segments. This paper demonstrates that multihead self-attention is effective in capturing global EEG signal information in motor imagery classification. SIGNIFICANCE This study provides an effective model based on a multihead self-attention layer for subject-independent motor imagery-based BCIs. To the best of our knowledge, this is the shallowest transformer model available, and its small number of parameters improves performance on the small-sample problem of motor imagery EEG classification.
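The abstract does not spell out the mirror construction, but the underlying idea can be sketched: for a left/right-symmetric electrode montage, swapping homologous channels turns a left-hand motor imagery trial into a plausible right-hand trial, which both augments the training data and supports an ensemble at test time. A minimal sketch, in which the channel pairing and the probability function are hypothetical rather than taken from the paper:

```python
import numpy as np

# Hypothetical symmetric montage: index pairs of homologous electrodes
# (e.g. C3 <-> C4); midline channels map to themselves and are untouched.
MIRROR_PAIRS = [(0, 2)]   # channels 0 and 2 swap; channel 1 (midline) stays

def mirror_trial(eeg, label):
    """Return the left/right-mirrored EEG trial and the flipped MI label.

    eeg   : (n_channels, n_samples) array
    label : 0 = left hand, 1 = right hand
    """
    mirrored = eeg.copy()
    for a, b in MIRROR_PAIRS:
        mirrored[[a, b]] = mirrored[[b, a]]   # swap homologous channels
    return mirrored, 1 - label                # mirrored left-hand imagery
                                              # resembles right-hand imagery

def ensemble_predict(proba_fn, eeg):
    """Average class probabilities over a trial and its mirror,
    flipping the mirror's probabilities back before averaging."""
    p = proba_fn(eeg)                         # [p_left, p_right]
    m, _ = mirror_trial(eeg, 0)
    pm = proba_fn(m)[::-1]                    # undo the label flip
    return (np.asarray(p) + np.asarray(pm)) / 2
```

The ensemble step mirrors the paper's use of two network branches whose outputs are combined; here a single probability function stands in for both.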
Affiliation(s)
- Jing Luo: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China
- Yaojie Wang: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China
- Shuxiang Xia: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China
- Na Lu: State Key Laboratory for Manufacturing Systems Engineering, Systems Engineering Institute, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Xiaoyong Ren: Department of Otolaryngology Head and Neck Surgery & Center of Sleep Medicine, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Zhenghao Shi: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China
- Xinhong Hei: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China
13
Zhou Y, Yu T, Gao W, Huang W, Lu Z, Huang Q, Li Y. Shared Three-Dimensional Robotic Arm Control Based on Asynchronous BCI and Computer Vision. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3163-3175. [PMID: 37498753 DOI: 10.1109/tnsre.2023.3299350] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
OBJECTIVE A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. APPROACH In this study, a shared robotic arm control system based on hybrid asynchronous BCI and computer vision was presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and reach targets with a wide movement range, while computer vision can identify objects and assist a robotic arm in completing more precise tasks, such as grasping a target automatically. RESULTS Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and a high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer error commands and shorter completion time than with direct BCI control. SIGNIFICANCE Our results demonstrated the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging hybrid asynchronous BCI and computer vision-based recognition.
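The asynchronous hybrid scheme can be illustrated with a toy dispatcher in which SSVEP detection selects the robot action and a deliberate blink event toggles engagement; the frequencies, action names, and idle rule below are illustrative stand-ins, not the paper's actual fifteen-command set:

```python
# Toy dispatcher for a hybrid asynchronous BCI: SSVEP picks the command,
# blink-related EOG toggles whether the system is "engaged" at all.
# Frequencies and action names are illustrative, not from the paper.
SSVEP_COMMANDS = {
    8.0: "move_up", 9.0: "move_down", 10.0: "move_left",
    11.0: "move_right", 12.0: "grasp",
}

class HybridController:
    def __init__(self):
        self.engaged = False          # asynchronous idle state

    def step(self, ssvep_freq=None, double_blink=False):
        """One decoding cycle; returns an action string or None."""
        if double_blink:              # EOG event toggles engagement
            self.engaged = not self.engaged
            return None
        if not self.engaged or ssvep_freq is None:
            return None               # idle: suppress spurious SSVEP output
        return SSVEP_COMMANDS.get(ssvep_freq)

ctrl = HybridController()
assert ctrl.step(ssvep_freq=8.0) is None      # not engaged yet
ctrl.step(double_blink=True)                   # engage
assert ctrl.step(ssvep_freq=8.0) == "move_up"
```

The idle state is what makes the control asynchronous: the user can look at the stimulation display without every detected frequency being turned into a robot command.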
14
Lei Y, Wang D, Wang W, Qu H, Wang J, Shi B. Improving single-hand open/close motor imagery classification by error-related potentials correction. Heliyon 2023; 9:e18452. [PMID: 37520987 PMCID: PMC10382287 DOI: 10.1016/j.heliyon.2023.e18452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 07/15/2023] [Accepted: 07/18/2023] [Indexed: 08/01/2023] Open
Abstract
Objective The ability of a brain-computer interface (BCI) to classify brain activity in electroencephalogram (EEG) signals during motor imagery (MI) tasks is an important performance indicator. Because the cortical regions that drive single-hand open and close tasks overlap, it is difficult to classify the EEG signals recorded while executing the two tasks. Approach Adding complementary EEG features can improve the accuracy of classifying single-hand open and close tasks. In this work, we designed a hybrid BCI paradigm based on error-related potentials (ErrP) and motor imagery (MI) and proposed a strategy that corrects the classification results of MI using ErrP information. The ErrP and MI features of EEG data from 11 subjects were combined. Main results The correction strategy improved the classification accuracy of single-hand open/close MI tasks from 52.3% to 73.7%, an increase of approximately 21 percentage points. Significance Our hybrid BCI paradigm improves the classification accuracy of single-hand MI by adding ErrP information, which provides a new approach for improving the classification performance of BCIs.
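The correction strategy can be sketched as a simple post-hoc rule, with a hypothetical threshold and probability interface: after the MI classifier's output is presented as feedback, an ErrP detector scores the subsequent EEG, and a confident error detection flips the MI label:

```python
def correct_with_errp(mi_label, errp_prob, threshold=0.5):
    """Flip a binary MI prediction (0 = open, 1 = close) when the
    post-feedback EEG is classified as containing an error-related
    potential with probability above `threshold`.

    Threshold and interface are illustrative, not the paper's values.
    """
    if errp_prob > threshold:
        return 1 - mi_label      # the user's brain "disagreed" with feedback
    return mi_label

# If the MI classifier said "open" (0) but an ErrP was detected with
# high confidence, the corrected output becomes "close" (1).
assert correct_with_errp(0, errp_prob=0.9) == 1
assert correct_with_errp(0, errp_prob=0.2) == 0
```

The gain reported in the abstract comes from exactly this kind of second opinion: the ErrP detector only needs to beat chance for the combined pipeline to outperform the MI classifier alone.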
Affiliation(s)
- Yanghao Lei: Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; Research Institute of NRR-Neurorehabilitation Robot, Xi'an Jiaotong University, Xi'an, 710049, China
- Dong Wang: Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; Research Institute of NRR-Neurorehabilitation Robot, Xi'an Jiaotong University, Xi'an, 710049, China
- Weizhen Wang: Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; Research Institute of NRR-Neurorehabilitation Robot, Xi'an Jiaotong University, Xi'an, 710049, China
- Hao Qu: Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; Research Institute of NRR-Neurorehabilitation Robot, Xi'an Jiaotong University, Xi'an, 710049, China
- Jing Wang: Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; Research Institute of NRR-Neurorehabilitation Robot, Xi'an Jiaotong University, Xi'an, 710049, China
- Bin Shi: PLA Rocket Force University of Engineering, Xi'an, 710025, China
15
Wang Z, Shi N, Zhang Y, Zheng N, Li H, Jiao Y, Cheng J, Wang Y, Zhang X, Chen Y, Chen Y, Wang H, Xie T, Wang Y, Ma Y, Gao X, Feng X. Conformal in-ear bioelectronics for visual and auditory brain-computer interfaces. Nat Commun 2023; 14:4213. [PMID: 37452047 PMCID: PMC10349124 DOI: 10.1038/s41467-023-39814-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 06/28/2023] [Indexed: 07/18/2023] Open
Abstract
Brain-computer interfaces (BCIs) have attracted considerable attention in motor and language rehabilitation. Most devices use cap-based non-invasive approaches, headband-based commercial products or microneedle-based invasive approaches, which are constrained by inconvenience, limited applicability, inflammation risks and even irreversible damage to soft tissues. Here, we propose in-ear visual and auditory BCIs based on in-ear bioelectronics, named SpiralE, which can adaptively expand and spiral along the auditory meatus under electrothermal actuation to ensure conformal contact. Participants achieved offline accuracies of 95% in 9-target steady-state visual evoked potential (SSVEP) BCI classification and successfully typed target phrases in a calibration-free 40-target online SSVEP speller experiment. Interestingly, in-ear SSVEPs exhibit significant 2nd-harmonic tendencies, indicating that in-ear sensing may be complementary for studying harmonic spatial distributions in SSVEP studies. Moreover, natural-speech auditory classification accuracy reached 84% in cocktail party experiments. The SpiralE provides innovative concepts for designing 3D flexible bioelectronics and supports the development of biomedical engineering and neural monitoring.
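The observed 2nd-harmonic dominance matters for decoding, because frequency-recognition methods for SSVEP build reference signals from each stimulus frequency and its harmonics. A minimal sketch using plain correlation in place of the canonical correlation analysis commonly used in practice; the sampling rate and stimulus frequencies are illustrative:

```python
import numpy as np

FS = 250.0                       # sampling rate in Hz, illustrative

def reference(freq, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at freq and its harmonics."""
    t = np.arange(n_samples) / FS
    rows = []
    for h in range(1, n_harmonics + 1):
        rows += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.stack(rows)

def score(signal, freq):
    """Max absolute correlation between a single-channel trial and the
    harmonic references; a stand-in for the canonical correlation used
    in full CCA-based SSVEP decoders."""
    refs = reference(freq, signal.size)
    sig = signal - signal.mean()
    return max(abs(np.corrcoef(sig, r)[0, 1]) for r in refs)

def classify(signal, freqs):
    return max(freqs, key=lambda f: score(signal, f))

# A synthetic trial dominated by the 2nd harmonic of a 10 Hz stimulus is
# still assigned to 10 Hz because the reference set includes 20 Hz.
t = np.arange(int(FS)) / FS
trial = 0.2 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)
assert classify(trial, [8.0, 10.0, 12.0]) == 10.0
```

Dropping the harmonic terms from `reference` would make this trial nearly undecodable, which is why the harmonic content of in-ear recordings is worth characterizing.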
Affiliation(s)
- Zhouheng Wang: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Nanlin Shi: Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Yingchao Zhang: AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Ning Zheng: State Key Laboratory of Chemical Engineering, College of Chemical and Biological Engineering, Zhejiang University, Hangzhou, 310027, China
- Haicheng Li: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Yang Jiao: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Jiahui Cheng: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Yutong Wang: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Xiaoqing Zhang: Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Ying Chen: Institute of Flexible Electronics Technology of THU, Jiaxing, Zhejiang, 314000, China
- Yihao Chen: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Heling Wang: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Tao Xie: State Key Laboratory of Chemical Engineering, College of Chemical and Biological Engineering, Zhejiang University, Hangzhou, 310027, China
- Yijun Wang: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Yinji Ma: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
- Xiaorong Gao: Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Xue Feng: Laboratory of Flexible Electronics Technology, Tsinghua University, Beijing, 100084, China; AML, Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
16
Pinardi M, Noccaro A, Raiano L, Formica D, Di Pino G. Comparing end-effector position and joint angle feedback for online robotic limb tracking. PLoS One 2023; 18:e0286566. [PMID: 37289675 PMCID: PMC10249844 DOI: 10.1371/journal.pone.0286566] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 05/18/2023] [Indexed: 06/10/2023] Open
Abstract
Somatosensation greatly increases our ability to control our natural body. This suggests that supplementing vision with haptic sensory feedback would also help a user control a robotic arm proficiently. However, whether the position of the robot and its continuous update should be coded in an extrinsic or an intrinsic reference frame is not known. Here we compared two supplementary feedback contents concerning the status of a robotic limb in a 2-DoF configuration: one encoding the Cartesian coordinates of the end-effector of the robotic arm (i.e., Task-space feedback) and another encoding the robot's joint angles (i.e., Joint-space feedback). Feedback was delivered to blindfolded participants through vibrotactile stimulation applied to the leg. After a 1.5-hour training with both feedback types, participants were significantly more accurate with Task-space than with Joint-space feedback, as shown by lower position and aiming errors, albeit not faster (i.e., similar onset delay). However, the learning index during training was significantly higher with Joint-space than with Task-space feedback. These results suggest that Task-space feedback is probably more intuitive and better suited to activities that require short training sessions, while Joint-space feedback showed potential for long-term improvement. We speculate that the latter, despite performing worse in the present work, might ultimately be better suited for applications requiring long training, such as the control of supernumerary robotic limbs for surgical robotics, heavy industrial manufacturing or, more generally, human movement augmentation.
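The two feedback contents encode the same 2-DoF posture in different coordinates, related by planar forward kinematics. A sketch with hypothetical link lengths:

```python
import math

L1, L2 = 0.3, 0.25    # link lengths in metres, illustrative values

def joint_space_feedback(q1, q2):
    """Joint-space content: the two joint angles themselves (radians)."""
    return (q1, q2)

def task_space_feedback(q1, q2):
    """Task-space content: Cartesian (x, y) of the end effector,
    obtained from planar 2-link forward kinematics."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return (x, y)

# Fully extended arm along x: the two encodings of the same posture.
assert joint_space_feedback(0.0, 0.0) == (0.0, 0.0)
x, y = task_space_feedback(0.0, 0.0)
assert abs(x - 0.55) < 1e-9 and abs(y) < 1e-9
```

The comparison in the study is effectively about which of these two tuples is easier for users to decode from vibrotactile stimulation: task-space values relate directly to where the hand is, while joint-space values require the user to internalize the kinematic chain.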
Affiliation(s)
- Mattia Pinardi: NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Alessia Noccaro: Neurorobotics Group, Newcastle University, Newcastle, United Kingdom
- Luigi Raiano: NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Domenico Formica: Neurorobotics Group, Newcastle University, Newcastle, United Kingdom
- Giovanni Di Pino: NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
17
Murphy RR. Sci-fi imagines how good brain-machine interfaces will amplify bad choices. Sci Robot 2023; 8:eadi2192. [PMID: 37196071 DOI: 10.1126/scirobotics.adi2192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Machine Man and The Andromeda Evolution explore personal and societal ramifications of brain-machine interfaces.
Affiliation(s)
- Robin R Murphy: Computer Science and Engineering, Texas A&M University, College Station, TX 77843, USA
18
Tian Q, Zhao H, Wang X, Jiang Y, Zhu M, Yelemulati H, Xie R, Li Q, Su R, Cao Z, Jiang N, Huang J, Li G, Chen S, Chen X, Liu Z. Hairy-Skin-Adaptive Viscoelastic Dry Electrodes for Long-Term Electrophysiological Monitoring. Adv Mater 2023:e2211236. [PMID: 37072159 DOI: 10.1002/adma.202211236] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 03/31/2023] [Indexed: 06/11/2023]
Abstract
Long-term epidermal electrophysiological (EP) monitoring is crucial for disease diagnosis and human-machine synergy. Human skin is covered with hair that grows at an average rate of 0.3 mm per day. This impedes stable contact between the skin and dry epidermal electrodes, resulting in motion artifacts during ultralong-term EP monitoring, so accurate, high-quality EP signal detection remains challenging. To address this issue, a new solution is reported: the hairy-skin-adaptive viscoelastic dry electrode (VDE). This technology bypasses hair and fills skin wrinkles, yielding long-lasting and stable interface impedance, which the VDE maintains over a remarkable 48 days and 100 cycles. The VDE is highly effective in shielding against hair disturbances in electrocardiography (ECG) monitoring, even during intense chest expansion, and in electromyography (EMG) monitoring during large strain. Furthermore, the VDE attaches easily to the skull without requiring an electroencephalogram (EEG) cap or bandage, making it an ideal solution for EEG monitoring. This work represents a substantial breakthrough in EP monitoring, solving the previously challenging problem of monitoring human EP signals on hairy skin.
Affiliation(s)
- Qiong Tian: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hang Zhao: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Wang: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
- Ying Jiang: School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Mingxing Zhu: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Huoerhute Yelemulati: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Ruijie Xie: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Qingsong Li: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Rui Su: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhengshuai Cao: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Naifu Jiang: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jianping Huang: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Guanglin Li: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shixiong Chen: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaodong Chen: School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Zhiyuan Liu: Neural Engineering Centre, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
19
Functional Two-Dimensional Materials for Bioelectronic Neural Interfacing. J Funct Biomater 2023; 14:jfb14010035. [PMID: 36662082 PMCID: PMC9863167 DOI: 10.3390/jfb14010035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 12/26/2022] [Accepted: 01/03/2023] [Indexed: 01/11/2023] Open
Abstract
Understanding neurological information processing by analyzing the complex data-transfer behavior of neuronal populations and individual neurons is one of the fast-growing fields of neuroscience and bioelectronic technology. This field is anticipated to cover a wide range of advanced applications, including monitoring of neural dynamics, understanding neurological disorders, human brain-machine communication and even ambitious mind-controlled prosthetic implant systems. To fulfill the requirements of recording neural activities with high spatial and temporal resolution, electrical, optical and biosensing technologies are combined to develop multifunctional bioelectronic and neuro-signal probes. Advanced two-dimensional (2D) layered materials such as graphene, graphene oxide, transition metal dichalcogenides and MXenes offer bio-stimulation and multiple sensing properties along with atomic-layer thickness. These characteristics favor the development of ultrathin-film electrodes for flexible neural interfacing, with minimally invasive chronic interfaces to brain cells and the cortex. This combination of properties places 2D nanostructures in a unique position as the materials of choice for multifunctional reception of neural activities. The current review highlights recent achievements in 2D-based bioelectronic systems for monitoring biophysiological indicators and biosignals at neural interfaces.
20
Zhang R, Chen Y, Xu Z, Zhang L, Hu Y, Chen M. Recognition of single upper limb motor imagery tasks from EEG using multi-branch fusion convolutional neural network. Front Neurosci 2023; 17:1129049. [PMID: 36908782 PMCID: PMC9992961 DOI: 10.3389/fnins.2023.1129049] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 02/03/2023] [Indexed: 02/24/2023] Open
Abstract
Motor imagery-based brain-computer interfaces (MI-BCIs) have important applications in neurorehabilitation and robot control. At present, MI-BCIs mostly use bilateral upper-limb motor tasks, and there are relatively few studies on single upper-limb MI tasks. In this work, we studied the recognition of motor imagery EEG signals of the right upper limb and proposed a multi-branch fusion convolutional neural network (MF-CNN) that simultaneously learns features from the raw EEG signals and from their two-dimensional time-frequency maps. The dataset used in this study contained three types of motor imagery tasks, extending the arm, rotating the wrist and grasping an object, and included 25 subjects. In the binary classification experiment between the object-grasping and arm-extending tasks, MF-CNN achieved an average classification accuracy of 78.52% and a kappa value of 0.57. When all three tasks were used for classification, the accuracy and kappa value were 57.06% and 0.36, respectively. The comparison results showed that the classification performance of MF-CNN is higher than that of single-branch CNN algorithms in both binary and three-class classification. In conclusion, MF-CNN makes full use of the time-domain and frequency-domain features of EEG, improves the decoding accuracy of single-limb motor imagery tasks, and contributes to the application of MI-BCI in motor function rehabilitation training after stroke.
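One branch of such a network consumes the raw EEG while the other consumes a two-dimensional time-frequency map. A minimal sketch of producing that map with a short-time Fourier transform; the window and hop sizes are illustrative, and the paper's exact transform may differ:

```python
import numpy as np

def time_frequency_map(signal, win=64, hop=32):
    """Magnitude STFT of a single-channel EEG trial: a (freq, time) map
    suitable as the 2D input branch of a multi-branch CNN."""
    window = np.hanning(win)
    frames = []
    for start in range(0, signal.size - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.stack(frames, axis=1)      # shape: (win // 2 + 1, n_frames)

# A 1-second trial at 250 Hz yields a small map; an MF-CNN would receive
# the raw trial in one branch and this map in the other before fusion.
rng = np.random.default_rng(0)
trial = rng.standard_normal(250)
tf = time_frequency_map(trial)
assert tf.shape == (33, 6)
```

Feeding both representations lets the convolutional branches specialize: 1D filters on the raw trial capture temporal waveform shape, while 2D filters on the map capture band-power dynamics.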
Affiliation(s)
- Rui Zhang: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Yadi Chen: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Zongxin Xu: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Lipeng Zhang: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Yuxia Hu: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Mingming Chen: Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
21
Cho JH, Jeong JH, Lee SW. NeuroGrasp: Real-Time EEG Classification of High-Level Motor Imagery Tasks Using a Dual-Stage Deep Learning Framework. IEEE Trans Cybern 2022; 52:13279-13292. [PMID: 34748509 DOI: 10.1109/tcyb.2021.3122969] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Brain-computer interfaces (BCIs) have been widely employed to identify and estimate a user's intention to trigger a robotic device by decoding motor imagery (MI) from an electroencephalogram (EEG). However, developing a BCI system driven by MI related to natural hand-grasp tasks is challenging due to its high complexity. Although numerous BCI studies have successfully decoded movement intentions of large body parts, such as both hands, the arms, or the legs, research on MI decoding of high-level behaviors such as hand grasping is essential to further expand the versatility of MI-based BCIs. In this study, we propose NeuroGrasp, a dual-stage deep learning framework that decodes multiple hand-grasp types from EEG signals under the MI paradigm. The proposed method combines EEG- and electromyography (EMG)-based learning so that EEG-only inference becomes possible at test time. The EMG guidance during model training allows the BCI to predict hand-grasp types accurately from EEG signals alone. Consequently, NeuroGrasp improved classification performance offline and demonstrated stable classification performance online. Across 12 subjects, we obtained an average offline classification accuracy of 0.68 (±0.09) in four-grasp-type classification and 0.86 (±0.04) in two-grasp-category classification. In addition, we obtained average online classification accuracies of 0.65 (±0.09) and 0.79 (±0.09) across six high-performance subjects. Because the proposed method has demonstrated stable classification performance both online and offline, we expect that it could contribute to various BCI applications, including robotic hands and neuroprosthetics for handling everyday objects.
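The framework's key property, EMG available only during training with EEG-only inference at test time, can be sketched with linear stand-ins for the paper's deep networks; all data below are synthetic and the models are deliberately simplistic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: learn a mapping from EEG features to EMG features via least
# squares. EMG is available only at training time.
def fit_eeg_to_emg(eeg_feats, emg_feats):
    W, *_ = np.linalg.lstsq(eeg_feats, emg_feats, rcond=None)
    return W

# Stage 2: a classifier trained on (predicted) EMG features; at test time
# the chain is EEG -> predicted EMG -> grasp class, with no EMG recorded.
def nearest_centroid_fit(feats, labels):
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid_predict(centroids, feats):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(feats - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Synthetic data: two grasp classes whose EMG features are a linear
# function of class-dependent EEG features plus noise.
labels = rng.integers(0, 2, size=200)
eeg = rng.standard_normal((200, 8))
eeg[:, 0] += 8.0 * labels                 # class-dependent EEG component
true_W = rng.standard_normal((8, 4))
emg = eeg @ true_W + 0.1 * rng.standard_normal((200, 4))

W = fit_eeg_to_emg(eeg, emg)              # trained with EMG guidance
centroids = nearest_centroid_fit(eeg @ W, labels)
pred = nearest_centroid_predict(centroids, eeg @ W)   # EEG-only inference
accuracy = (pred == labels).mean()
```

The point of the sketch is the data flow, not the models: once `W` is fitted, the EMG recordings are never needed again, mirroring how NeuroGrasp uses EMG only as a training-time teacher.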
22
Cornelio P, Haggard P, Hornbaek K, Georgiou O, Bergström J, Subramanian S, Obrist M. The sense of agency in emerging technologies for human–computer integration: A review. Front Neurosci 2022; 16:949138. [PMID: 36172040 PMCID: PMC9511170 DOI: 10.3389/fnins.2022.949138] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 08/05/2022] [Indexed: 11/13/2022] Open
Abstract
Human–computer integration is an emerging area in which the boundary between humans and technology is blurred as users and computers work collaboratively and share agency to execute tasks. The sense of agency (SoA) is an experience that arises from the combination of a voluntary motor action and sensory evidence as to whether the corresponding body movements have somehow influenced the course of external events. The SoA is a key part not only of our everyday experience but also of our interaction with technology, as it gives us the feeling of “I did that” as opposed to “the system did that,” thus supporting a feeling of being in control. This feeling becomes critical with human–computer integration, wherein emerging technology directly influences people’s bodies, their actions, and the resulting outcomes. In this review, we analyse and classify current integration technologies based on what is currently known about agency in the literature, and propose a distinction between body augmentation, action augmentation, and outcome augmentation. For each category, we describe agency considerations and markers of differentiation that illustrate a relationship between assistance level (low, high), agency delegation (human, technology), and integration type (fusion, symbiosis). We conclude with a reflection on the opportunities and challenges of integrating humans with computers, and finalise with an expanded definition of human–computer integration that includes the agency aspects we consider particularly relevant. The aim of this review is to provide researchers and practitioners with guidelines to situate their work within the integration research agenda and to consider the implications of any technology for SoA, and thus for the overall user experience, when designing future technology.
Affiliation(s)
- Patricia Cornelio (correspondence): Ultraleap Ltd., Bristol, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
- Patrick Haggard: Department of Computer Science, University College London, London, United Kingdom
- Kasper Hornbaek: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Joanna Bergström: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Sriram Subramanian: Department of Computer Science, University College London, London, United Kingdom
- Marianna Obrist: Department of Computer Science, University College London, London, United Kingdom
| |
Collapse
|
23
|
Yao L, Jiang N, Mrachacz-Kersting N, Zhu X, Farina D, Wang Y. Performance Variation of a Somatosensory BCI Based on Imagined Sensation: A Large Population Study. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2486-2493. [PMID: 35969546 DOI: 10.1109/tnsre.2022.3198970] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
A proportion of users cannot achieve adequate brain-computer interface (BCI) control. The diversity of BCI modalities provides a way to address this emerging issue. Here, we investigate the accuracy of a somatosensory BCI based on sensory imagery (SI). During the SI tasks, subjects were instructed to imagine a tactile sensation and to maintain attention on the corresponding hand, as if there were a tactile stimulus on the skin of the wrist. The performance across 106 healthy subjects in left- and right-hand SI discrimination was 78.9±13.2%, and in 70.7% of the subjects the performance was above 70%. The SI task induced a contralateral cortical activation, and high-density EEG source localization showed that real tactile stimulation and imagined tactile stimulation shared similar cortical activations within the somatosensory cortex. The somatosensory BCI based on SI provides a new signal modality for independent BCI development. Moreover, a combination of SI and other BCI modalities, such as motor imagery, may provide new avenues for further improving BCI usage and applicability, especially in those subjects unable to attain adequate BCI control with conventional BCI modalities.
Collapse
|
24
|
Rulik I, Sunny MSH, Sanjuan De Caro JD, Zarif MII, Brahmi B, Ahamed SI, Schultz K, Wang I, Leheng T, Longxiang JP, Rahman MH. Control of a Wheelchair-Mounted 6DOF Assistive Robot With Chin and Finger Joysticks. Front Robot AI 2022; 9:885610. [PMID: 35937617 PMCID: PMC9354078 DOI: 10.3389/frobt.2022.885610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 06/15/2022] [Indexed: 11/13/2022] Open
Abstract
Throughout the last decade, many assistive robots for people with disabilities have been developed; however, researchers have not fully utilized these robotic technologies to entirely create independent living conditions for people with disabilities, particularly in relation to activities of daily living (ADLs). An assistive system can help satisfy the demands of regular ADLs for people with disabilities. With an increasing shortage of caregivers and a growing number of individuals with impairments and the elderly, assistive robots can help meet future healthcare demands. One of the critical aspects of designing these assistive devices is to improve functional independence while providing an excellent human–machine interface. People with limited upper limb function due to stroke, spinal cord injury, cerebral palsy, amyotrophic lateral sclerosis, and other conditions find the controls of assistive devices such as power wheelchairs difficult to use. Thus, the objective of this research was to design a multimodal control method for robotic self-assistance that could assist individuals with disabilities in performing self-care tasks on a daily basis. In this research, a control framework for two interchangeable operating modes with a finger joystick and a chin joystick is developed, where the joysticks seamlessly control a wheelchair and a wheelchair-mounted robotic arm. Custom circuitry was developed to complete the control architecture. A user study was conducted to test the robotic system. Ten healthy individuals agreed to perform three tasks using both (chin and finger) joysticks for a total of six tasks with 10 repetitions each. The control method has been tested rigorously, maneuvering the robot at different velocities and under varying payload (1–3.5 lb) conditions. The absolute position accuracy was experimentally found to be approximately 5 mm. The round-trip delay we observed between the commands while controlling the xArm was 4 ms. Tests performed showed that the proposed control system allowed individuals to perform some ADLs such as picking up and placing items with a completion time of less than 1 min for each task and 100% success.
Collapse
Affiliation(s)
- Ivan Rulik
- Department of Computer Sciences, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
- *Correspondence: Ivan Rulik,
| | - Md Samiul Haque Sunny
- Department of Computer Sciences, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
| | | | | | - Brahim Brahmi
- Electrical Engineering Department, Collège Ahuntsic, Montreal, QC, Canada
| | | | - Katie Schultz
- Assistive Technology Program, Clement J. Zablocki VA Medical Center, Milwaukee, WI, United States
| | - Inga Wang
- Department of Rehabilitation Sciences & Technology, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
| | - Tony Leheng
- UFACTORY Technology Co., Ltd., Shenzhen, China
| | | | - Mohammad H. Rahman
- Department of Mechanical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI, United States
| |
Collapse
|
25
|
Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 International brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. [PMID: 35937679 PMCID: PMC9354666 DOI: 10.3389/fnhum.2022.898300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 07/01/2022] [Indexed: 11/16/2022] Open
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, openly accessible neuroscientific data that could be used to evaluate the current degree of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG(+ear-EEG) detection in an ambulatory environment. Not only did scientists from the BCI field compete, but scholars with a broad variety of backgrounds and nationalities participated in the competition to address these challenges. Each dataset was prepared and split into three parts, released to the competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating some trends of interest to BCI researchers.
Collapse
Affiliation(s)
- Ji-Hoon Jeong
- School of Computer Science, Chungbuk National University, Cheongju, South Korea
| | - Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Young-Eun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Seo-Hyun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Gi-Hwan Shin
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Young-Seok Kweon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - José del R. Millán
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
| | - Klaus-Robert Müller
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany
- Max Planck Institute for Informatics, Saarbrucken, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
| | - Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
| |
Collapse
|
26
|
Clawson WP, Levin M. Endless forms most beautiful 2.0: teleonomy and the bioengineering of chimaeric and synthetic organisms. Biol J Linn Soc Lond 2022. [DOI: 10.1093/biolinnean/blac073] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
The rich variety of biological forms and behaviours results from one evolutionary history on Earth, via frozen accidents and selection in specific environments. This ubiquitous baggage in natural, familiar model species obscures the plasticity and swarm intelligence of cellular collectives. Significant gaps exist in our understanding of the origin of anatomical novelty, of the relationship between genome and form, and of strategies for control of large-scale structure and function in regenerative medicine and bioengineering. Analysis of living forms that have never existed before is necessary to reveal deep design principles of life as it can be. We briefly review existing examples of chimaeras, cyborgs, hybrots and other beings along the spectrum containing evolved and designed systems. To drive experimental progress in multicellular synthetic morphology, we propose teleonomic (goal-seeking, problem-solving) behaviour in diverse problem spaces as a powerful invariant across possible beings regardless of composition or origin. Cybernetic perspectives on chimaeric morphogenesis erase artificial distinctions established by past limitations of technology and imagination. We suggest that a multi-scale competency architecture facilitates evolution of robust problem-solving, living machines. Creation and analysis of novel living forms will be an essential testbed for the emerging field of diverse intelligence, with numerous implications across regenerative medicine, robotics and ethics.
Collapse
Affiliation(s)
| | - Michael Levin
- Allen Discovery Center at Tufts University , Medford, MA , USA
- Wyss Institute for Biologically Inspired Engineering at Harvard University , Boston, MA , USA
| |
Collapse
|
27
|
Ma W, Gong Y, Xue H, Liu Y, Lin X, Zhou G, Li Y. A lightweight and accurate double-branch neural network for four-class motor imagery classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103582] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
28
|
Shishkin SL. Active Brain-Computer Interfacing for Healthy Users. Front Neurosci 2022; 16:859887. [PMID: 35546879 PMCID: PMC9083451 DOI: 10.3389/fnins.2022.859887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 03/30/2022] [Indexed: 11/13/2022] Open
|
29
|
Heng W, Solomon S, Gao W. Flexible Electronics and Devices as Human-Machine Interfaces for Medical Robotics. Adv Mater 2022; 34:e2107902. [PMID: 34897836 PMCID: PMC9035141 DOI: 10.1002/adma.202107902] [Citation(s) in RCA: 131] [Impact Index Per Article: 65.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Revised: 12/08/2021] [Indexed: 05/02/2023]
Abstract
Medical robots are invaluable players in the non-pharmaceutical treatment of disabilities. In particular, prosthetic and rehabilitation devices with human-machine interfaces can greatly improve the quality of life of impaired patients. In recent years, flexible electronic interfaces and soft robotics have attracted tremendous attention in this field due to their high biocompatibility, functionality, conformability, and low cost. Flexible human-machine interfaces on soft robotics offer a promising alternative to conventional rigid devices, and can potentially revolutionize the paradigm and future direction of medical robotics in terms of rehabilitation feedback and user experience. In this review, the fundamental components of the materials, structures, and mechanisms in flexible human-machine interfaces are summarized through recent and renowned applications in five primary areas: physical and chemical sensing, physiological recording, information processing and communication, soft robotic actuation, and feedback stimulation. The review concludes by discussing the outlook and current challenges of these technologies as human-machine interfaces in medical robotics.
Collapse
Affiliation(s)
- Wenzheng Heng
- Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
| | - Samuel Solomon
- Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
| | - Wei Gao
- Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
| |
Collapse
|
30
|
Eden J, Bräcklein M, Ibáñez J, Barsakcioglu DY, Di Pino G, Farina D, Burdet E, Mehring C. Principles of human movement augmentation and the challenges in making it a reality. Nat Commun 2022; 13:1345. [PMID: 35292665 PMCID: PMC8924218 DOI: 10.1038/s41467-022-28725-7] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Accepted: 02/04/2022] [Indexed: 12/23/2022] Open
Abstract
Augmenting the body with artificial limbs controlled concurrently to one's natural limbs has long appeared in science fiction, but recent technological and neuroscientific advances have begun to make this possible. By allowing individuals to achieve otherwise impossible actions, movement augmentation could revolutionize medical and industrial applications and profoundly change the way humans interact with the environment. Here, we construct a movement augmentation taxonomy through what is augmented and how it is achieved. With this framework, we analyze augmentation that extends the number of degrees-of-freedom, discuss critical features of effective augmentation such as physiological control signals, sensory feedback and learning as well as application scenarios, and propose a vision for the field.
Collapse
Affiliation(s)
- Jonathan Eden
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
| | - Mario Bräcklein
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
| | - Jaime Ibáñez
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
- BSICoS, IIS Aragón, Universidad de Zaragoza, Zaragoza, Spain
- Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London, UK
| | | | - Giovanni Di Pino
- NEXT: Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
| | - Dario Farina
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK
| | - Etienne Burdet
- Department of Bioengineering, Imperial College of Science, Technology and Medicine, London, UK.
| | - Carsten Mehring
- Bernstein Center Freiburg, University of Freiburg, Freiburg im Breisgau, 79104, Germany
- Faculty of Biology, University of Freiburg, Freiburg im Breisgau, 79104, Germany
| |
Collapse
|
31
|
Liu Y, Huang S, Wang Z, Ji F, Ming D. Functional Reorganization After Four-Week Brain-Computer Interface-Controlled Supernumerary Robotic Finger Training: A Pilot Study of Longitudinal Resting-State fMRI. Front Neurosci 2022; 15:766648. [PMID: 35221886 PMCID: PMC8873384 DOI: 10.3389/fnins.2021.766648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Accepted: 12/21/2021] [Indexed: 11/13/2022] Open
Abstract
Humans have long been fascinated by the opportunities for motor augmentation afforded by supernumerary robotic fingers (SRFs) and limbs (SRLs). However, the neuroplasticity mechanisms induced by motor-augmentation equipment still need further investigation. This study examined resting-state brain functional reorganization during longitudinal brain-computer interface (BCI)-controlled SRF training using the fractional amplitude of low-frequency fluctuation (fALFF), regional homogeneity (ReHo), and degree centrality (DC) metrics. Ten right-handed subjects were enrolled for 4 weeks of BCI-controlled SRF training. Behavioral data and neurological changes were recorded at baseline, after 2 weeks of training, immediately after 4 weeks of training, and 2 weeks after the end of training. One-way repeated-measures ANOVA was used to investigate long-term motor improvement [F(2.805,25.24) = 43.94, p < 0.0001] and neurological changes. The fALFF values were significantly modulated in Cerebelum_6_R and correlated with motor function improvement (r = 0.6887, p < 0.0402) from t0 to t2. In addition, Cerebelum_9_R and Vermis_3 were significantly modulated and showed different trends over longitudinal SRF training on the ReHo metric, and the change in ReHo from t0 to t1 in Vermis_3 was significantly correlated with motor function improvement (r = 0.7038, p < 0.0344). We conclude that compensation and suppression mechanisms of the cerebellum operate during BCI-controlled SRF training, and these results provide evidence for the neuroplasticity mechanisms induced by BCI-controlled motor-augmentation devices.
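The fALFF metric central to this study reduces to a simple ratio: FFT amplitude summed over a low-frequency band divided by amplitude summed over the full spectrum. The sketch below computes it on a synthetic time series; the repetition time (TR), band edges, and series length are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# fALFF: fractional amplitude of low-frequency fluctuation -- the ratio of
# FFT amplitude summed over a low band (here 0.01-0.08 Hz) to the amplitude
# summed over the whole spectrum (DC bin excluded).
def falff(ts, tr=2.0, low=0.01, high=0.08):
    amp = np.abs(np.fft.rfft(ts - ts.mean()))
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    band = (freqs >= low) & (freqs <= high)
    return amp[band].sum() / amp[1:].sum()   # exclude the DC bin

# A slow in-band oscillation should score near 1; broadband noise much lower.
t = np.arange(200) * 2.0                      # 200 volumes, TR = 2 s
slow = np.sin(2 * np.pi * 0.04 * t)           # 0.04 Hz, inside the band
noise = rng.standard_normal(200)
print(f"fALFF slow: {falff(slow + 0.1 * noise):.2f}  noise: {falff(noise):.2f}")
```

The same ratio, applied voxel-wise to resting-state fMRI, yields the maps the study compares across training time points.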
Collapse
Affiliation(s)
| | | | | | | | - Dong Ming
- Academy of Medical Engineering and Translational Medicine (AMT), Tianjin University, Tianjin, China
| |
Collapse
|
32
|
Lee DY, Jeong JH, Lee BH, Lee SW. Motor Imagery Classification Using Inter-Task Transfer Learning via A Channel-Wise Variational Autoencoder-based Convolutional Neural Network. IEEE Trans Neural Syst Rehabil Eng 2022; 30:226-237. [PMID: 35041605 DOI: 10.1109/tnsre.2022.3143836] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Highly sophisticated control based on a brain-computer interface (BCI) requires decoding kinematic information from brain signals. The forearm is a region of the upper limb that is often used in everyday life, but intuitive movements within the same limb have rarely been investigated in previous BCI studies. In this study, we focused on decoding various forearm movements from electroencephalography (EEG) signals using a small number of samples. Ten healthy participants took part in an experiment and performed motor execution (ME) and motor imagery (MI) of intuitive movement tasks (Dataset I). We propose a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning. The approach is that training on reconstructed ME-EEG signals together with only a small amount of MI-EEG signals can still achieve sufficient classification performance. The proposed CVNet was validated on our own Dataset I and on a public dataset, BNCI Horizon 2020 (Dataset II). The classification accuracies for the various movements were 0.83 (±0.04) and 0.69 (±0.04) for Datasets I and II, respectively. The results show that the proposed method improves performance by approximately 0.09~0.27 and 0.08~0.24 over conventional models for Datasets I and II, respectively. The outcomes suggest that a model for decoding imagined movements can be trained using ME data and only a small number of MI samples, demonstrating the feasibility of BCI learning strategies in which deep models are trained with little calibration data and time while maintaining stable performance.
Collapse
|
33
|
Liu Y, Wang Z, Huang S, Wang W, Ming D. EEG characteristic investigation of the sixth-finger motor imagery and optimal channel selection for classification. J Neural Eng 2022; 19. [PMID: 35008079 DOI: 10.1088/1741-2552/ac49a6] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 01/10/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Supernumerary robotic limbs (SRLs) are body-augmentation robotic devices that add extra limbs or fingers to the human body, in contrast to traditional wearable robotic devices such as prostheses and exoskeletons. We propose a novel motor imagery (MI)-based BCI paradigm based on a sixth finger, in which users imagine controlling the movements of the extra finger. The goal of this work is to investigate the EEG characteristics and the application potential of MI-based BCI systems built on this new imagination paradigm (sixth-finger MI). APPROACH Fourteen subjects participated in the experiment, which involved sixth-finger MI tasks and a rest state. Event-related spectral perturbation (ERSP) was adopted to analyse EEG spatial features and key-channel time-frequency features. Common spatial patterns (CSP) were used for feature extraction, and classification was implemented with a support vector machine (SVM). A genetic algorithm (GA) was used to select the combinations of EEG channels that maximized classification accuracy and to verify the EEG patterns of sixth-finger MI, and we conducted a longitudinal 4-week EEG control experiment based on the new paradigm. MAIN RESULTS Event-related desynchronization (ERD) was found in the supplementary motor area (SMA) and primary motor area (M1) with a faint contralateral dominance. Unlike traditional MI based on the human hand, ERD was also found in the frontal lobe. GA results showed that the distribution of the optimal 8 channels resembles the EEG topographical distributions, near the parietal and frontal lobes. Classification accuracy based on the optimal 8 channels (highest accuracy of 80%, mean accuracy of 70%) was significantly better than that based on 8 random channels (p<0.01). SIGNIFICANCE This work provides a new paradigm for MI-based BCI systems, verifies its feasibility, and widens the control bandwidth of the BCI system.
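The CSP-plus-classifier pipeline named in the APPROACH can be sketched compactly. The data below are synthetic, the dimensions are arbitrary, and a nearest-centroid rule stands in for the paper's SVM; this is an illustration of the technique under those assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG epochs (trials x channels x samples) for two classes,
# e.g. sixth-finger MI vs. rest; data and dimensions are illustrative.
def make_epochs(n_trials=40, n_ch=8, n_s=256, scale=1.0):
    X = rng.standard_normal((n_trials, n_ch, n_s))
    X[:, 0, :] *= scale              # class-dependent variance on one channel
    return X

Xa, Xb = make_epochs(scale=2.0), make_epochs(scale=0.5)

def mean_cov(X):
    # average trace-normalised spatial covariance over trials
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

# CSP: whiten the composite covariance, then diagonalise class A's covariance.
Ca, Cb = mean_cov(Xa), mean_cov(Xb)
d, U = np.linalg.eigh(Ca + Cb)
P = U @ np.diag(d ** -0.5) @ U.T     # whitening transform
_, V = np.linalg.eigh(P @ Ca @ P)
W = (V.T @ P)[[0, 1, -2, -1]]        # filters from both ends of the spectrum

def features(X):
    # log-variance of CSP-filtered epochs
    return np.log(np.var(np.einsum('fc,tcs->tfs', W, X), axis=2))

Fa, Fb = features(Xa), features(Xb)

# Nearest-centroid rule in CSP feature space (stand-in for the SVM).
ca, cb = Fa.mean(axis=0), Fb.mean(axis=0)
def predict(F):
    return (np.linalg.norm(F - ca, axis=1) > np.linalg.norm(F - cb, axis=1)).astype(int)

y_true = np.r_[np.zeros(len(Fa)), np.ones(len(Fb))]
acc = np.mean(predict(np.vstack([Fa, Fb])) == y_true)
print(f"training accuracy: {acc:.2f}")
```

The GA channel selection in the paper would wrap this pipeline, scoring each candidate channel subset by cross-validated accuracy.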
Collapse
Affiliation(s)
- Yuan Liu
- Tianjin University, Tianjin, 300072, China
| | - Zhuang Wang
- Tianjin University, Tianjin, 300072, China
| | - Shuaifei Huang
- Tianjin University, Tianjin, 300072, China
| | - Wenjie Wang
- Tianjin University, Tianjin, 300072, China
| | - Dong Ming
- Tianjin University, Tianjin, 300072, China
| |
Collapse
|
34
|
Gurgone S, Borzelli D, De Pasquale P, Berger DJ, Lisini Baldi T, D'Aurizio N, Prattichizzo D, d'Avella A. Simultaneous control of natural and extra degrees of freedom by isometric force and electromyographic activity in the muscle-to-force null space. J Neural Eng 2022; 19. [PMID: 34983036 DOI: 10.1088/1741-2552/ac47db] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 01/04/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Muscle activation patterns in the muscle-to-force null space, i.e., patterns that do not generate task-relevant forces, may provide an opportunity for motor augmentation by allowing additional end-effectors to be controlled simultaneously with the natural limbs. Here we tested the feasibility of muscular null space control for augmentation by assessing simultaneous control of natural and extra degrees of freedom. APPROACH We instructed eight participants to control the translation and rotation of a virtual 3D end-effector by simultaneously generating isometric force at the hand and null space activity extracted in real time from electromyographic signals recorded from 15 shoulder and arm muscles. First, we identified the null space components that each participant could control most naturally by voluntary co-contraction. Then, participants performed several blocks of a reaching and holding task: they displaced an ellipsoidal cursor to reach one of nine targets by generating force, and simultaneously rotated the cursor to match the target orientation by activating null space components. We developed an information-theoretic metric, an index of difficulty defined as the sum of a spatial and a temporal term, to assess individual null space control ability for both reaching and holding. MAIN RESULTS On average, participants could reach the targets in most trials already in the first block (72%) and improved with practice (maximum 93%), but holding performance remained lower (maximum 43%). As inter-individual variability in performance was high, we ran a simulation with different spatial and temporal task conditions to estimate those under which each individual participant would have performed best. SIGNIFICANCE Muscular null space control is feasible and may be used to control additional virtual or robotic end-effectors. However, the decoding of motor commands must be optimized according to individual null space control ability.
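The core idea, extracting activity in the muscle-to-force null space, reduces to linear algebra once a linear EMG-to-force map is assumed. In the sketch below the matrix H is random, standing in for the calibrated map the study would estimate from the 15 recorded muscles; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear EMG-to-force map H (3 force DoFs from 15 muscles);
# in practice this would be calibrated from the recorded muscles.
n_muscles, n_forces = 15, 3
H = rng.standard_normal((n_forces, n_muscles))

# The null space is spanned by the right singular vectors whose singular
# values are zero -- everything beyond the first n_forces rows of Vt.
U, s, Vt = np.linalg.svd(H)
N = Vt[n_forces:]                    # (12, 15) orthonormal null-space basis

# Split an EMG vector into a force-generating part and a null-space part.
emg = rng.random(n_muscles)
null_part = N.T @ (N @ emg)          # projection onto the null space
force_part = emg - null_part

# The null component produces no task force; the rest reproduces H @ emg.
print("max |force| from null component:", np.abs(H @ null_part).max())
print("force preserved:", np.allclose(H @ force_part, H @ emg))
```

Driving an extra end-effector from `N @ emg` is then, by construction, invisible to the task forces the natural limb produces.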
Collapse
Affiliation(s)
- Sergio Gurgone
- University of Messina, Viale Ferdinando Stagno D'Alcontres 31, Messina, 98166, Italy
| | - Daniele Borzelli
- University of Messina, Via Consolare Valeria, Messina, 98122, Italy
| | - Paolo De Pasquale
- Scienze Biomediche, Odontoiatriche e delle Immagini Morfologiche e Funzionali, Università degli Studi di Messina, Via Consolare Valeria, 1, Messina, ME, 98124, Italy
| | - Denise J Berger
- Laboratorio di Fisiologia Neuromotoria, Fondazione Santa Lucia, Via Ardeatina 306, Roma, 00179, Italy
| | | | - Nicole D'Aurizio
- Università degli Studi di Siena, Via Roma 56, Siena, 53100, Italy
| | | | - Andrea d'Avella
- Scienze Biomediche, Odontoiatriche e delle Immagini Morfologiche e Funzionali, Università degli Studi di Messina, Via Consolare Valeria, 1, Messina, ME, 98124, Italy
| |
Collapse
|
35
|
Emerging trends in BCI-robotics for motor control and rehabilitation. Curr Opin Biomed Eng 2021. [DOI: 10.1016/j.cobme.2021.100354] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
36
|
Quantitative Investigation of Hand Grasp Functionality: Thumb Grasping Behavior Adapting to Different Object Shapes, Sizes, and Relative Positions. Appl Bionics Biomech 2021; 2021:2640422. [PMID: 34819994 PMCID: PMC8608516 DOI: 10.1155/2021/2640422] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 08/01/2021] [Accepted: 09/20/2021] [Indexed: 11/18/2022] Open
Abstract
This paper is the first in a two-part series quantitatively modelling human grasp functionality and understanding the way humans grasp objects. The aim is to investigate thumb movement behavior as influenced by object shape, size, and relative position. Ten subjects were asked to grasp six objects (3 shapes × 2 sizes) in 27 different relative positions (3 X deviations × 3 Y deviations × 3 Z deviations). Thumb postures were analysed at each specific joint. The relative position (X, Y, and Z deviation) significantly affects thumb opposition rotation (Rot) and flexion (interphalangeal (IP) and metacarpophalangeal (MCP) joints), while the object properties (shape and size) significantly affect thumb abduction/adduction (ABD). Based on the F values, the Y deviation has the primary effect on thumb motion: as the Y deviation changes from proximal to distal, thumb opposition rotation (Rot) angles increase while flexion (IP and MCP joint) angles decrease. In the principal component analysis (PCA), thumb grasp behavior could be accurately reconstructed by the first two principal components (PCs), whose explained-variance ratio reached 93.8%, and is described by the inverse, coordinated movement between thumb opposition and IP flexion. This paper provides a more comprehensive understanding of thumb grasp behavior. The postural synergies can reproduce anthropomorphic motion and reduce robot hardware and control dimensionality. All of this provides a more accurate and general basis for the design and control of bionic thumbs and novel wearable assistive robots, as well as for thumb function assessment and rehabilitation.
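The PCA reconstruction from a small number of synergies can be sketched as follows. The posture data here are synthetic stand-ins generated from two latent components plus noise, so the high explained variance is by construction; the joint names are taken from the abstract, the numbers are not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic thumb postures: rows are grasps, columns joint angles
# (Rot, ABD, MCP, IP); generated from two latent "synergies" plus noise,
# purely as an illustrative stand-in for the measured data.
n_grasps, n_joints = 270, 4
synergies = rng.standard_normal((2, n_joints))
X = (rng.standard_normal((n_grasps, 2)) @ synergies
     + 0.05 * rng.standard_normal((n_grasps, n_joints)))

# PCA via SVD of the mean-centred posture matrix
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)

# Reconstruct every posture from the first two principal components
k = 2
X_hat = (U[:, :k] * s[:k]) @ Vt[:k] + mu
print(f"first {k} PCs explain {var_ratio[:k].sum():.1%} of the variance")
```

A bionic-thumb controller can then command only the k synergy weights and map them back to joint angles through `Vt[:k]`, which is the dimensionality reduction the abstract motivates.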
Collapse
|
37
|
Dominijanni G, Shokur S, Salvietti G, Buehler S, Palmerini E, Rossi S, De Vignemont F, d’Avella A, Makin TR, Prattichizzo D, Micera S. The neural resource allocation problem when enhancing human bodies with extra robotic limbs. Nat Mach Intell 2021. [DOI: 10.1038/s42256-021-00398-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
38
|
Wittevrongel B, Holmes N, Boto E, Hill R, Rea M, Libert A, Khachatryan E, Van Hulle MM, Bowtell R, Brookes MJ. Practical real-time MEG-based neural interfacing with optically pumped magnetometers. BMC Biol 2021; 19:158. [PMID: 34376215 PMCID: PMC8356471 DOI: 10.1186/s12915-021-01073-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Accepted: 04/25/2021] [Indexed: 01/23/2023] Open
Abstract
BACKGROUND Brain-computer interfaces decode intentions directly from the human brain with the aim of restoring lost functionality, controlling external devices or augmenting daily experiences. To combine optimal performance with wide applicability, high-quality brain signals should be captured non-invasively. Magnetoencephalography (MEG) is a potent candidate but currently requires costly and confining recording hardware. The recently developed optically pumped magnetometers (OPMs) promise to overcome this limitation, but they are currently untested in the context of neural interfacing. RESULTS In this work, we show that OPM-MEG allows robust single-trial analysis, which we exploited in a real-time 'mind-spelling' application yielding an average accuracy of 97.7%. CONCLUSIONS This shows that OPM-MEG can be used to exploit neuro-magnetic brain responses in a practical and flexible manner, and it opens up new avenues for a wide range of new neural interface applications in the future.
Affiliation(s)
- Benjamin Wittevrongel
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Institute for Artificial Intelligence (Leuven.AI), Leuven, Belgium; Leuven Brain Institute (LBI), Leuven, Belgium
- Niall Holmes
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
- Elena Boto
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
- Ryan Hill
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
- Molly Rea
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
- Arno Libert
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute (LBI), Leuven, Belgium
- Elvira Khachatryan
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Brain Institute (LBI), Leuven, Belgium
- Marc M Van Hulle
- Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Leuven Institute for Artificial Intelligence (Leuven.AI), Leuven, Belgium; Leuven Brain Institute (LBI), Leuven, Belgium
- Richard Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
- Matthew J Brookes
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
39
Kieliba P, Clode D, Maimon-Mor RO, Makin TR. Robotic hand augmentation drives changes in neural body representation. Sci Robot 2021; 6:eabd7935. [PMID: 34043536 PMCID: PMC7612043 DOI: 10.1126/scirobotics.abd7935]
Abstract
Humans have long been fascinated by the opportunities afforded through augmentation. This vision not only depends on technological innovations but also critically relies on our brain's ability to learn, adapt, and interface with augmentation devices. Here, we investigated whether successful motor augmentation with an extra robotic thumb can be achieved and what its implications are on the neural representation and function of the biological hand. Able-bodied participants were trained to use an extra robotic thumb (called the Third Thumb) over 5 days, including both lab-based and unstructured daily use. We challenged participants to complete normally bimanual tasks using only the augmented hand and examined their ability to develop hand-robot interactions. Participants were tested on a variety of behavioral and brain imaging tests, designed to interrogate the augmented hand's representation before and after the training. Training improved Third Thumb motor control, dexterity, and hand-robot coordination, even when cognitive load was increased or when vision was occluded. It also resulted in increased sense of embodiment over the Third Thumb. Consequently, augmentation influenced key aspects of hand representation and motor control. Third Thumb usage weakened natural kinematic synergies of the biological hand. Furthermore, brain decoding revealed a mild collapse of the augmented hand's motor representation after training, even while the Third Thumb was not worn. Together, our findings demonstrate that motor augmentation can be readily achieved, with potential for flexible use, reduced cognitive reliance, and increased sense of embodiment. Yet, augmentation may incur changes to the biological hand representation. Such neurocognitive consequences are crucial for successful implementation of future augmentation technologies.
Affiliation(s)
- Paulina Kieliba
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AZ, UK
- Danielle Clode
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AZ, UK
- Dani Clode design, 40 Hillside Road, London SW2 3HW, UK
- Roni O Maimon-Mor
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AZ, UK
- WIN Centre, University of Oxford, Oxford OX3 9DU, UK
- Tamar R Makin
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AZ, UK
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3AR, UK
40
Bräcklein M, Ibáñez J, Barsakcioglu DY, Farina D. Towards human motor augmentation by voluntary decoupling beta activity in the neural drive to muscle and force production. J Neural Eng 2021; 18. [PMID: 33237879 DOI: 10.1088/1741-2552/abcdbf]
Abstract
Objective. Effective human motor augmentation should rely on biological signals that can be volitionally modulated without compromising natural motor control. Approach. We provided human subjects with real-time information on the power of two separate spectral bands of the spiking activity of motor neurons innervating the tibialis anterior muscle: the low-frequency band (<7 Hz), which is directly translated into natural force control, and the beta band (13-30 Hz), which is outside the dynamics of the neuromuscular system. Main Results. Subjects could gain control over the powers in these two bands to navigate a cursor towards specific targets in a 2D space (experiment 1) and to up- and down-modulate beta activity while keeping steady force contractions (experiment 2). Significance. Results indicate that beta projections to the spinal motor neuron pool can be voluntarily controlled partially decoupled from natural muscle contractions and, therefore, they could be valid control signals for implementing effective human motor augmentation platforms.
Affiliation(s)
- M Bräcklein
- Neuromechanics and Rehabilitation Technology Group, Department of Bioengineering, Faculty of Engineering, Imperial College London, London SW7 2AZ, United Kingdom
- J Ibáñez
- Neuromechanics and Rehabilitation Technology Group, Department of Bioengineering, Faculty of Engineering, Imperial College London, London SW7 2AZ, United Kingdom; Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- D Y Barsakcioglu
- Neuromechanics and Rehabilitation Technology Group, Department of Bioengineering, Faculty of Engineering, Imperial College London, London SW7 2AZ, United Kingdom
- D Farina
- Neuromechanics and Rehabilitation Technology Group, Department of Bioengineering, Faculty of Engineering, Imperial College London, London SW7 2AZ, United Kingdom
41
Luo J, Shi W, Lu N, Wang J, Chen H, Wang Y, Lu X, Wang X, Hei X. Improving the performance of multisubject motor imagery-based BCIs using twin cascaded softmax CNNs. J Neural Eng 2021; 18. [PMID: 33540387 DOI: 10.1088/1741-2552/abe357]
Abstract
OBJECTIVE Motor imagery (MI) EEG signals vary greatly among subjects, so scholarly research on motor imagery-based brain-computer interfaces (BCIs) has mainly focused on single-subject or subject-dependent systems. However, the single-subject model is applicable only to the target subject, and the small sample number greatly limits the performance of the model. This paper aims to study a convolutional neural network to achieve an adaptable MI-BCI that is applicable to multiple subjects. APPROACH In this paper, a twin cascaded softmax convolutional neural network (TCSCNN) is proposed for multisubject MI-BCIs. The proposed TCSCNN is independent and can be applied to any single-subject MI classification CNN model. First, to reduce the influence of individual differences, subject recognition and MI recognition are accomplished simultaneously. A cascaded softmax structure consisting of two softmax layers, related to subject recognition and MI recognition, is subsequently applied. Second, to improve the MI classification precision, a twin network structure is proposed on the basis of ensemble learning. TCSCNN is built by combining the cascaded softmax structure and the twin network structure. MAIN RESULTS Experiments were conducted on three popular CNN models (EEGNet, and the Shallow ConvNet and Deep ConvNet from EEGDecoding) and three public datasets (BCI Competition IV datasets 2a and 2b and the High-Gamma dataset) to verify the performance of the proposed TCSCNN. The results show that, compared with state-of-the-art CNN models, the proposed TCSCNN clearly improves the precision and convergence of multisubject MI recognition. SIGNIFICANCE This study provides a promising scheme for multisubject MI-BCI, reflecting the progress made in the development and application of MI-BCIs.
Affiliation(s)
- Jing Luo
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Weiwei Shi
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Na Lu
- Systems Engineering Institute, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi 710049, China
- Jie Wang
- State Key Laboratory for Manufacturing System Engineering, System Engineering Institute, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi 710049, China
- Hao Chen
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Yaojie Wang
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Xiaofeng Lu
- School of Computer Science, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Xiaofan Wang
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
- Xinhong Hei
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, No. 5, Jinhua South Road, Xi'an, Shaanxi 710048, China
42
Kim D, Kang BB, Kim KB, Choi H, Ha J, Cho KJ, Jo S. Eyes are faster than hands: A soft wearable robot learns user intention from the egocentric view. Sci Robot 2019; 4(26):eaav2949. [PMID: 33137763 DOI: 10.1126/scirobotics.aav2949]
Abstract
To perceive user intentions for wearable robots, we present a learning-based intention detection methodology using a first-person-view camera.
Affiliation(s)
- Daekyum Kim
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Neuro-Machine Augmented Intelligence Laboratory, School of Computing, KAIST, Daejeon 34141, Korea
- Brian Byunghyun Kang
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul 08826, Korea; Institute of Advanced Machines and Design, Seoul National University, Seoul 08826, Korea
- Kyu Bum Kim
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul 08826, Korea; Institute of Advanced Machines and Design, Seoul National University, Seoul 08826, Korea
- Hyungmin Choi
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul 08826, Korea; Institute of Advanced Machines and Design, Seoul National University, Seoul 08826, Korea
- Jeesoo Ha
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Neuro-Machine Augmented Intelligence Laboratory, School of Computing, KAIST, Daejeon 34141, Korea
- Kyu-Jin Cho
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Biorobotics Laboratory, Department of Mechanical Engineering, Seoul National University, Seoul 08826, Korea; Institute of Advanced Machines and Design, Seoul National University, Seoul 08826, Korea
- Sungho Jo
- Soft Robotics Research Center, Seoul National University, Seoul 08826, Korea; Neuro-Machine Augmented Intelligence Laboratory, School of Computing, KAIST, Daejeon 34141, Korea
43
Jeong JH, Cho JH, Shim KH, Kwon BH, Lee BH, Lee DY, Lee DH, Lee SW. Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. Gigascience 2020; 9:giaa098. [PMID: 33034634 PMCID: PMC7539536 DOI: 10.1093/gigascience/giaa098]
Abstract
BACKGROUND Non-invasive brain-computer interfaces (BCIs) have been developed for realizing natural bi-directional interaction between users and external robotic systems. However, the communication between users and BCI systems through artificial matching is a critical issue. Recently, BCIs have been developed to adopt intuitive decoding, which is the key to solving several problems such as a small number of classes and manually matching BCI commands with device control. Unfortunately, the advances in this area have been slow owing to the lack of large and uniform datasets. This study provides a large intuitive dataset for 11 different upper extremity movement tasks obtained during multiple recording sessions. The dataset includes 60-channel electroencephalography, 7-channel electromyography, and 4-channel electro-oculography of 25 healthy participants collected over 3-day sessions for a total of 82,500 trials across all the participants. FINDINGS We validated our dataset via neurophysiological analysis. We observed clear sensorimotor de-/activation and spatial distribution related to real-movement and motor imagery, respectively. Furthermore, we demonstrated the consistency of the dataset by evaluating the classification performance of each session using a baseline machine learning method. CONCLUSIONS The dataset includes the data of multiple recording sessions, various classes within the single upper extremity, and multimodal signals. This work can be used to (i) compare the brain activities associated with real movement and imagination, (ii) improve the decoding performance, and (iii) analyze the differences among recording sessions. Hence, this study, as a Data Note, has focused on collecting data required for further advances in the BCI technology.
Affiliation(s)
- Ji-Hoon Jeong
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Kyung-Hwan Shim
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byoung-Hee Kwon
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byeong-Hoo Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Do-Yeun Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Dae-Hyeok Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Department of Artificial Intelligence, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
44
Luo Y, Wang M, Wan C, Cai P, Loh XJ, Chen X. Devising Materials Manufacturing Toward Lab-to-Fab Translation of Flexible Electronics. Adv Mater 2020; 32:e2001903. [PMID: 32743815 DOI: 10.1002/adma.202001903]
Abstract
Flexible electronics have witnessed exciting progress in academia over the past decade, but most of the research outcomes have yet to be translated into products or gain much market share. For mass production and commercialization, industrial adoption of newly developed functional materials and fabrication techniques is a prerequisite. However, due to the disparate features of academic laboratories and industrial plants, translating materials and manufacturing technologies from labs to fabs is notoriously difficult. Therefore, herein, key challenges in the materials manufacturing of flexible electronics are identified and discussed for its lab-to-fab translation, along the four stages in product manufacturing: design, materials supply, processing, and integration. Perspectives on industry-oriented strategies to overcome some of these obstacles are also proposed. Priorities for action are outlined, including standardization, iteration between basic and applied research, and adoption of smart manufacturing. With concerted efforts from academia and industry, flexible electronics will bring a bigger impact to society as promised.
Affiliation(s)
- Yifei Luo
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Innovis, #08-03, Singapore, 138634, Singapore
- Ming Wang
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Changjin Wan
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Pingqiang Cai
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Xian Jun Loh
- Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Innovis, #08-03, Singapore, 138634, Singapore
- College of Chemical Engineering and Materials Science, Quanzhou Normal University, Quanzhou, Fujian, 362000, China
- Xiaodong Chen
- Innovative Center for Flexible Devices (iFLEX), Max Planck - NTU Joint Lab for Artificial Senses, School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
45
Chen C, Chen P, Belkacem AN, Lu L, Xu R, Tan W, Li P, Gao Q, Shin D, Wang C, Ming D. Neural activities classification of left and right finger gestures during motor execution and motor imagery. Brain-Computer Interfaces 2020. [DOI: 10.1080/2326263x.2020.1782124]
Affiliation(s)
- Chao Chen
- Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Peiji Chen
- Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
- Abdelkader Nasreddine Belkacem
- Department of Computer and Network Engineering, College of Information Technology, UAE University, Al Ain, United Arab Emirates
- Lin Lu
- Department of Computer Science and Technology, Zhonghuan Information College, Tianjin University of Technology, Tianjin, China
- Rui Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Wenjun Tan
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Penghai Li
- Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
- Qiang Gao
- Key Laboratory of Complex System Control Theory and Application, Tianjin University of Technology, Tianjin, China
- Duk Shin
- Department of Electronics and Mechatronics, Tokyo Polytechnic University, Japan
- Changming Wang
- Beijing Key Laboratory of Mental Disorders, Beijing Anding Hospital, Capital Medical University, Beijing, China
- North China University of Science and Technology, Tangshan, Hebei, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
46
Jeong JH, Shim KH, Kim DJ, Lee SW. Trajectory Decoding of Arm Reaching Movement Imageries for Brain-Controlled Robot Arm System. Annu Int Conf IEEE Eng Med Biol Soc 2019:5544-5547. [PMID: 31947110 DOI: 10.1109/embc.2019.8856312]
Abstract
Development of noninvasive brain-machine interface (BMI) systems based on electroencephalography (EEG), driven by spontaneous movement intentions, is a useful tool for controlling external devices or supporting neurorehabilitation. In this study, we present the possibility of a brain-controlled robot arm system using arm trajectory decoding. To do that, we first constructed an experimental system that can acquire EEG data for not only a movement execution (ME) task but also movement imagery (MI) tasks. Five subjects participated in our experiments and performed four directional reaching tasks (left, right, forward, and backward) in the 3D plane. For robust arm trajectory decoding, we propose a subject-dependent deep neural network (DNN) architecture. The decoding model applies the principle of a bi-directional long short-term memory (LSTM) network. As a result, we confirmed the decoding performance (r-value: >0.8) for all X-, Y-, and Z-axes across all subjects in the MI as well as ME tasks. These results show the feasibility of an EEG-based intuitive robot arm control system for high-level tasks (e.g., drinking water or moving objects). We also confirm that the proposed method shows little decoding-performance variation between ME and MI tasks in the offline analysis. Hence, the decoding model should be capable of robust trajectory decoding even in a real-time environment.
47
Maimon-Mor RO, Makin TR. Is an artificial limb embodied as a hand? Brain decoding in prosthetic limb users. PLoS Biol 2020; 18:e3000729. [PMID: 32511238 PMCID: PMC7302856 DOI: 10.1371/journal.pbio.3000729]
Abstract
The potential ability of the human brain to represent an artificial limb as a body part (embodiment) has been inspiring engineers, clinicians, and scientists as a means to optimise human-machine interfaces. Using functional MRI (fMRI), we studied whether neural embodiment actually occurs in prosthesis users' occipitotemporal cortex (OTC). Compared with controls, different prosthesis types were visually represented more similarly to each other, relative to hands and tools, indicating the emergence of a dissociated prosthesis categorisation. Greater daily life prosthesis usage correlated positively with greater prosthesis categorisation. Moreover, when comparing prosthesis users' representation of their own prosthesis to controls' representation of a similar looking prosthesis, prosthesis users represented their own prosthesis more dissimilarly to hands, challenging current views of visual prosthesis embodiment. Our results reveal a use-dependent neural correlate for wearable technology adoption, demonstrating adaptive use-related plasticity within the OTC. Because these neural correlates were independent of the prostheses' appearance and control, our findings offer new opportunities for prosthesis design by lifting restrictions imposed by the embodiment theory for artificial limbs.
Affiliation(s)
- Roni O. Maimon-Mor
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- WIN Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
- Tamar R. Makin
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- WIN Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
48
Jeong JH, Shim KH, Kim DJ, Lee SW. Brain-Controlled Robotic Arm System Based on Multi-Directional CNN-BiLSTM Network Using EEG Signals. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1226-1238. [DOI: 10.1109/tnsre.2020.2981659]
49
Zeng H, Shen Y, Hu X, Song A, Xu B, Li H, Wang Y, Wen P. Semi-Autonomous Robotic Arm Reaching With Hybrid Gaze-Brain Machine Interface. Front Neurorobot 2020; 13:111. [PMID: 32038219 PMCID: PMC6992643 DOI: 10.3389/fnbot.2019.00111]
Abstract
Recent developments in the non-muscular human-robot interface (HRI) and shared control strategies have shown potential for controlling an assistive robotic arm by people with no residual movement or muscular activity in the upper limbs. However, most non-muscular HRIs only produce discrete-valued commands, resulting in non-intuitive and less effective control of a dexterous assistive robotic arm. Furthermore, the user commands and the robot autonomy commands usually switch in the shared control strategies of such applications. This characteristic has been found to yield a reduced sense of agency as well as frustration for the user according to previous user studies. In this study, we first propose an intuitive and easy-to-learn-and-use hybrid HRI by combining a brain-machine interface (BMI) and a gaze-tracking interface. For the proposed hybrid gaze-BMI, the continuous modulation of the movement speed via the motor intention occurs seamlessly and simultaneously with the unconstrained movement direction control via the gaze signals. We then propose a shared control paradigm that always combines user input and the autonomy with dynamic combination regulation. The proposed hybrid gaze-BMI and shared control paradigm were validated for a robotic arm reaching task performed with healthy subjects. All the users were able to employ the hybrid gaze-BMI to move the end-effector sequentially to reach the target across the horizontal plane while also avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible, while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories for the end effector approaching the target. Compared to a system without assistance from robot autonomy, it significantly reduced the rate of failure as well as the time and effort spent by the user to complete the tasks.
Affiliation(s)
- Hong Zeng
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yitao Shen
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Xuhui Hu
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Aiguo Song
- State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Baoguo Xu
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Huijun Li
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yanxin Wang
- School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Pengcheng Wen
- AVIC Aeronautics Computing Technique Research Institute, Xi’an, China
50
Jeong JH, Kwak NS, Guan C, Lee SW. Decoding Movement-Related Cortical Potentials Based on Subject-Dependent and Section-Wise Spectral Filtering. IEEE Trans Neural Syst Rehabil Eng 2020; 28:687-698. [PMID: 31944982 DOI: 10.1109/tnsre.2020.2966826]
Abstract
An important challenge in developing a movement-related cortical potential (MRCP)-based brain-machine interface (BMI) is accurate decoding of the user intention in real-world environments. However, the performance remains insufficient for real-time decoding owing to the endogenous signal characteristics compared to other BMI paradigms. This study aims to enhance MRCP decoding performance from the perspective of preprocessing techniques (i.e., spectral filtering). To the best of our knowledge, existing MRCP studies have used spectral filters with a fixed frequency bandwidth for all subjects. Hence, we propose a subject-dependent and section-wise spectral filtering (SSSF) method that considers the subjects' individual MRCP characteristics for two different temporal sections. In this study, MRCP data were acquired in a powered exoskeleton environment in which the subjects conducted self-initiated walking. We evaluated our method using both our experimental data and a public dataset (BNCI Horizon 2020). The decoding performance using the SSSF was 0.86 (±0.09), and the performance on the public dataset was 0.73 (±0.06) across all subjects. The experimental results showed a statistically significant enhancement compared with the fixed frequency bands used in previous methods on both datasets. In addition, we presented successful decoding results from a pseudo-online analysis. Therefore, we demonstrated that the proposed SSSF method captures more meaningful MRCP information than conventional methods.