1. Chen Y, Wang F, Li T, Zhao L, Gong A, Nan W, Ding P, Fu Y. Considerations and discussions on the clear definition and definite scope of brain-computer interfaces. Front Neurosci 2024;18:1449208. PMID: 39161655; PMCID: PMC11330831; DOI: 10.3389/fnins.2024.1449208.
Abstract
The brain-computer interface (BCI) is revolutionizing human-computer interaction, with potential applications in both medical and non-medical fields, and has emerged as a cutting-edge research direction. Increasing numbers of groups are engaging in BCI research and development. In recent years, however, there has been some confusion regarding BCI, including misleading and hyped publicity, and even non-BCI technologies being labeled as BCI. Therefore, a clear definition and a definite scope for BCI are thoroughly considered and discussed in the paper, building on existing definitions of BCI and its six key or essential components. Unlike previous definitions, the definition provided in the review explicitly includes BCI paradigms and neural coding, and the BCI user (the brain) is clearly identified as a key component of the BCI system. Different people may hold different viewpoints on the definition and scope of BCI, as well as on some related issues, which are discussed in the article. The review argues that a clear definition and definite scope of BCI will benefit future research and commercial applications, and it is hoped that it will reduce some of the confusion surrounding BCI and promote sustainable development in this field.
Affiliation(s)
- Yanxiao Chen: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Fan Wang: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Tianwen Li: Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China; Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Lei Zhao: Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China; Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Anmin Gong: School of Information Engineering, Chinese People’s Armed Police Force Engineering University, Xi’an, China
- Wenya Nan: School of Psychology, Shanghai Normal University, Shanghai, China
- Peng Ding: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Yunfa Fu: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China

2. Kosmyna N, Hauptmann E, Hmaidan Y. A Brain-Controlled Quadruped Robot: A Proof-of-Concept Demonstration. Sensors (Basel) 2023;24:80. PMID: 38202942; PMCID: PMC10780665; DOI: 10.3390/s24010080.
Abstract
Coupling brain-computer interfaces (BCIs) with robotic systems could in the future enable seamless personal assistant systems in everyday life, with requests issued discreetly using one's brain activity alone. Such systems might be of particular interest to people with locked-in syndrome (LIS) or amyotrophic lateral sclerosis (ALS), who could benefit from communicating with robotic assistants through brain-sensing interfaces. In this proof-of-concept work, we explored how a wireless, wearable BCI device can control a quadruped robot, Boston Dynamics' Spot. The device measures the user's electroencephalography (EEG) and electrooculography (EOG) activity from electrodes embedded in the glasses' frame. The user responds to a series of YES/NO questions by performing a brain-teaser activity of mental calculus. Each question-answer pair has a pre-configured set of actions for Spot. For instance, when a sequence resolved to a YES response, Spot was prompted to walk across a room, pick up an object, and retrieve it for the user (i.e., bring a bottle of water). Our system achieved a success rate of 83.4%. To the best of our knowledge, this is the first integration of a wireless, non-visual-based BCI system with Spot in the context of personal assistant use cases. While this BCI quadruped robot system is an early prototype, future iterations may embody friendly and intuitive cues similar to regular service dogs. As such, this project aims to pave a path towards modern personal assistant robots powered by wireless and wearable BCI systems in everyday living conditions.
Affiliation(s)
- Nataliya Kosmyna: Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Yasmeen Hmaidan: Psychology Department, University of Toronto, Toronto, ON M5S 3E4, Canada

3. Barnova K, Mikolasova M, Kahankova RV, Jaros R, Kawala-Sterniuk A, Snasel V, Mirjalili S, Pelc M, Martinek R. Implementation of artificial intelligence and machine learning-based methods in brain-computer interaction. Comput Biol Med 2023;163:107135. PMID: 37329623; DOI: 10.1016/j.compbiomed.2023.107135.
Abstract
Brain-computer interfaces are used for direct two-way communication between the human brain and the computer. Brain signals contain valuable information about the mental state and brain activity of the examined subject. However, due to their non-stationarity and susceptibility to various types of interference, their processing, analysis and interpretation are challenging. For these reasons, the research in the field of brain-computer interfaces is focused on the implementation of artificial intelligence, especially in five main areas: calibration, noise suppression, communication, mental condition estimation, and motor imagery. The use of algorithms based on artificial intelligence and machine learning has proven to be very promising in these application domains, especially due to their ability to predict and learn from previous experience. Therefore, their implementation within medical technologies can contribute to more accurate information about the mental state of subjects, alleviate the consequences of serious diseases or improve the quality of life of disabled patients.
Affiliation(s)
- Katerina Barnova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Martina Mikolasova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Radana Vilimkova Kahankova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Rene Jaros: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Aleksandra Kawala-Sterniuk: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland
- Vaclav Snasel: Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Seyedali Mirjalili: Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Australia
- Mariusz Pelc: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland; School of Computing and Mathematical Sciences, University of Greenwich, London, UK
- Radek Martinek: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia; Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland

4. Zhou Y, Yu T, Gao W, Huang W, Lu Z, Huang Q, Li Y. Shared Three-Dimensional Robotic Arm Control Based on Asynchronous BCI and Computer Vision. IEEE Trans Neural Syst Rehabil Eng 2023;31:3163-3175. PMID: 37498753; DOI: 10.1109/tnsre.2023.3299350.
Abstract
OBJECTIVE A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. APPROACH In this study, a shared robotic arm control system based on hybrid asynchronous BCI and computer vision was presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and reach targets with a wide movement range, while computer vision can identify objects and assist a robotic arm in completing more precise tasks, such as grasping a target automatically. RESULTS Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and a high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer error commands and shorter completion time than with direct BCI control. SIGNIFICANCE Our results demonstrated the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging hybrid asynchronous BCI and computer vision-based recognition.

5. Wang J, Cheng S, Tian J, Gao Y. A 2D CNN-LSTM hybrid algorithm using time series segments of EEG data for motor imagery classification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104627.

6. Zhao SN, Cui Y, He Y, He Z, Diao Z, Peng F, Cheng C. Teleoperation control of a wheeled mobile robot based on Brain-machine Interface. Math Biosci Eng 2023;20:3638-3660. PMID: 36899597; DOI: 10.3934/mbe.2023170.
Abstract
This paper presents a novel teleoperation system that uses the electroencephalogram (EEG) to control the motion of a wheeled mobile robot (WMR). Unlike traditional motion control methods, the WMR is driven by the EEG classification results. The EEG is elicited through an online brain-machine interface (BMI) system using the non-intrusive steady-state visually evoked potential (SSVEP) paradigm. The user's motion intention is then recognized by a canonical correlation analysis (CCA) classifier and converted into motion commands for the WMR. The teleoperation technique is used to manage information about the movement scene and to adjust the control instructions based on real-time information. A Bezier curve is used to parameterize the robot's path planning, and the trajectory can be adjusted in real time according to the EEG recognition results. A motion controller based on an error model is proposed to track the planned trajectory using velocity feedback control, providing excellent trajectory-tracking performance. Finally, the feasibility and performance of the proposed teleoperated brain-controlled WMR system are verified in demonstration experiments.
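The CCA classifier mentioned above is the standard workhorse for SSVEP decoding. The following sketch shows the generic CCA-based frequency detection scheme, not the authors' implementation; the stimulus frequencies, channel count, and epoch length are illustrative assumptions.

```python
# Minimal sketch of CCA-based SSVEP frequency detection (standard approach,
# not the authors' exact implementation). EEG shape: (n_channels, n_samples).
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    """Sine/cosine reference set for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs)          # (2*n_harmonics, n_samples)

def classify_ssvep(eeg, stim_freqs, fs):
    """Return the stimulus frequency whose references correlate best with the EEG."""
    scores = []
    for f in stim_freqs:
        refs = reference_signals(f, eeg.shape[1], fs)
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg.T, refs.T)
        scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
    return stim_freqs[int(np.argmax(scores))], scores

# Illustrative usage with random data standing in for a 1 s, 8-channel epoch.
fs, stim_freqs = 250, [8.0, 10.0, 12.0, 15.0]
eeg_epoch = np.random.randn(8, fs)
best_freq, corrs = classify_ssvep(eeg_epoch, stim_freqs, fs)
print(best_freq, np.round(corrs, 3))
```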
Affiliation(s)
- Su-Na Zhao: College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Yingxue Cui: College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Yan He: College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Zhendong He: College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Zhihua Diao: College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Fang Peng: Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan 528402, China
- Chao Cheng: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China

7. Song M, Jeong H, Kim J, Jang SH, Kim J. An EEG-based asynchronous MI-BCI system to reduce false positives with a small number of channels for neurorehabilitation: A pilot study. Front Neurorobot 2022;16:971547. PMID: 36172602; PMCID: PMC9510756; DOI: 10.3389/fnbot.2022.971547.
Abstract
Many studies have used motor imagery-based brain–computer interface (MI-BCI) systems for stroke rehabilitation to induce brain plasticity. However, they have mainly focused on detecting motor imagery and have not considered the effect of false positive (FP) detections. FPs can be a threat to patients with stroke because they can induce wrongly directed brain plasticity with adverse effects. In this study, we proposed a rehabilitative MI-BCI system that focuses on rejecting FPs. To this end, we first identified the electroencephalogram (EEG) signals that cause FPs and, based on their characteristics, designed a novel two-phase classifier using a small number of EEG channels that include the sources of the FPs. In experiments with eight healthy participants and nine patients with stroke, the proposed MI-BCI system achieved 71.76% selectivity and a 13.70% FP rate using only four EEG channels in the stroke patient group. Moreover, the system can compensate for day-to-day variations over prolonged session intervals through recalibration. The results suggest that the proposed system, a practical approach for clinical settings, could improve the therapeutic effect of MI-BCI by reducing the adverse effects of FPs.
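The reported selectivity and FP rate can be reproduced from confusion-matrix counts once the detector's window-level decisions are available. A minimal sketch follows, assuming selectivity is defined as TP/(TP+FP); the paper may use a different definition, so treat this purely as an illustration of the bookkeeping.

```python
# Sketch of evaluation metrics for an asynchronous MI detector.
# Assumption: "selectivity" is taken as TP / (TP + FP); the paper may define it
# differently, so treat this only as an illustration of the bookkeeping.
import numpy as np

def detection_metrics(y_true, y_pred):
    """y_true / y_pred: 1 = motor-imagery (intent) window, 0 = rest window."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    selectivity = tp / (tp + fp) if (tp + fp) else 0.0
    fp_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn,
            "selectivity": selectivity, "FP_rate": fp_rate}

print(detection_metrics([1, 0, 0, 1, 0, 1], [1, 0, 1, 1, 0, 0]))
```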
Affiliation(s)
- Minsu Song: Department of Medical Device, Korea Institute of Machinery and Materials, Daegu, South Korea
- Hojun Jeong: School of Mechanical Engineering, Sungkyunkwan University, Gyeonggi-do, South Korea
- Jongbum Kim: Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea
- Sung-Ho Jang: Department of Physical Medicine and Rehabilitation, College of Medicine, Yeungnam University, Daegu, South Korea
- Jonghyun Kim (corresponding author): School of Mechanical Engineering, Sungkyunkwan University, Gyeonggi-do, South Korea

8. Jang SJ, Yang YJ, Ryun S, Kim JS, Chung CK, Jeong J. Decoding trajectories of imagined hand movement using electrocorticograms for brain-machine interface. J Neural Eng 2022;19. PMID: 35985293; DOI: 10.1088/1741-2552/ac8b37.
Abstract
OBJECTIVE Reaching hand movement is an important motor skill actively examined in brain-computer interface (BCI) research. Among the various components of movement analyzed is the hand's trajectory, which describes the hand's continuous positions in three-dimensional space. While a large body of studies has investigated the decoding of real movements and the reconstruction of real hand movement trajectories from neural signals, fewer studies have attempted to decode the trajectory of imagined hand movement. To develop BCI systems for patients with hand motor dysfunction, such systems must achieve movement-free control of external devices, which is only possible through successful decoding of purely imagined hand movement. APPROACH To achieve this goal, this study used a machine learning technique (the variational Bayesian least squares) to analyze the electrocorticograms (ECoG) of eighteen epilepsy patients recorded while they performed movement execution (ME) and kinesthetic movement imagination (KMI) of a reach-and-grasp hand action. MAIN RESULTS The variational Bayesian decoding model successfully predicted the imagined trajectories of hand movement significantly above chance level. The Pearson's correlation coefficient between imagined and predicted trajectories was 0.3393 for the KMI paradigm (KMI trials only) and 0.4936 for the MEKMI paradigm (alternating trials of ME and KMI). SIGNIFICANCE This study demonstrated high prediction accuracy for trajectories of imagined hand movement and, more importantly, higher decoding accuracy of imagined trajectories in the MEKMI paradigm than in the KMI paradigm alone.
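The trajectory-level evaluation quoted above boils down to Pearson correlations between decoded and reference coordinate traces. A minimal sketch with synthetic placeholder trajectories (not the authors' ECoG data) is shown below.

```python
# Sketch of the trajectory-level evaluation described above: Pearson correlation
# between a decoded and a reference 3D hand trajectory, averaged over axes.
# Data here are synthetic placeholders, not the authors' ECoG-derived trajectories.
import numpy as np
from scipy.stats import pearsonr

def trajectory_correlation(true_xyz, pred_xyz):
    """true_xyz, pred_xyz: arrays of shape (n_timepoints, 3)."""
    rs = []
    for d in range(3):
        r, _ = pearsonr(true_xyz[:, d], pred_xyz[:, d])
        rs.append(r)
    return float(np.mean(rs)), rs

rng = np.random.default_rng(0)
true_traj = np.cumsum(rng.standard_normal((200, 3)), axis=0)   # random-walk "trajectory"
pred_traj = true_traj + rng.standard_normal((200, 3)) * 2.0    # noisy prediction
mean_r, per_axis = trajectory_correlation(true_traj, pred_traj)
print(round(mean_r, 3), np.round(per_axis, 3))
```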
Affiliation(s)
- Sang Jin Jang: Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
- Yu Jin Yang: Seoul National University College of Natural Sciences, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- Seokyun Ryun: Seoul National University College of Natural Sciences, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- June Sic Kim: Seoul National University College of Natural Sciences, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- Chun Kee Chung: Seoul National University College of Natural Sciences, 103 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea
- Jaeseung Jeong: Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea

9. A Comprehensive Review of Endogenous EEG-Based BCIs for Dynamic Device Control. Sensors (Basel) 2022;22:5802. PMID: 35957360; PMCID: PMC9370865; DOI: 10.3390/s22155802.
Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) provide a novel approach for controlling external devices. BCI technologies can be important enabling technologies for people with severe mobility impairment. Endogenous paradigms, which depend on user-generated commands and do not need external stimuli, can provide intuitive control of external devices. This paper discusses BCIs to control various physical devices such as exoskeletons, wheelchairs, mobile robots, and robotic arms. These technologies must be able to navigate complex environments or execute fine motor movements. Brain control of these devices presents an intricate research problem that merges signal processing and classification techniques with control theory. In particular, obtaining strong classification performance for endogenous BCIs is challenging, and EEG decoder output signals can be unstable. These issues present myriad research questions that are discussed in this review paper. This review covers papers published until the end of 2021 that presented BCI-controlled dynamic devices. It discusses the devices controlled, EEG paradigms, shared control, stabilization of the EEG signal, traditional machine learning and deep learning techniques, and user experience. The paper concludes with a discussion of open questions and avenues for future work.

10. Band decomposition of asynchronous electroencephalogram signal for upper limb movement classification. Phys Eng Sci Med 2022;45:643-656. DOI: 10.1007/s13246-022-01132-4.

11. Lin TC, Krishnan AU, Li Z. Intuitive, Efficient and Ergonomic Tele-Nursing Robot Interfaces: Design Evaluation and Evolution. ACM Trans Hum Robot Interact 2022. DOI: 10.1145/3526108.
Abstract
Tele-nursing robots provide a safe approach to patient care in quarantine areas. For effective nurse-robot collaboration, ergonomic teleoperation and intuitive interfaces with low physical and cognitive workload must be developed. We propose a framework for evaluating control interfaces in order to iteratively develop an intuitive, efficient, and ergonomic teleoperation interface. The framework is a hierarchical procedure that moves from general to specific assessment and feeds into design evolution. We first present pre-defined objective and subjective metrics used to evaluate three representative contemporary teleoperation interfaces. The results indicate that teleoperation via human motion mapping outperforms the gamepad and stylus interfaces. The trade-off of using motion mapping as a teleoperation interface is non-trivial physical fatigue. To understand the impact of heavy physical demand during motion-mapping teleoperation, we propose an objective assessment of physical workload in teleoperation using electromyography (EMG). We find that physical fatigue occurs in actions that involve precise manipulation and steady posture maintenance. We further implemented teleoperation assistance in the form of shared autonomy to eliminate the fatigue-causing component of robot teleoperation via motion mapping. The experimental results show that the autonomous feature effectively reduces physical effort while improving the efficiency and accuracy of the teleoperation interface.
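A common way to obtain the kind of objective EMG workload measure described above is a moving RMS envelope of the band-pass-filtered signal. The sketch below illustrates that generic measure; the filter band and window length are assumptions, not the paper's settings.

```python
# Sketch of a standard EMG effort measure: band-pass filter, rectify, moving RMS.
# Filter band and window length are illustrative choices, not taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_rms_envelope(emg, fs, band=(20.0, 450.0), win_s=0.25):
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, emg)
    win = max(1, int(win_s * fs))
    squared = filtered ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

fs = 1000
t = np.arange(0, 5, 1 / fs)
emg = np.random.randn(t.size) * (1 + (t > 2.5))    # synthetic burst of higher effort
envelope = emg_rms_envelope(emg, fs)
print(envelope[:fs].mean(), envelope[-fs:].mean())  # effort before vs. after the burst
```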
Affiliation(s)
- Tsung-Chi Lin: Worcester Polytechnic Institute, Robotics Engineering
- Zhi Li: Worcester Polytechnic Institute, Robotics Engineering

12. A Human-Machine Interface Based on an EOG and a Gyroscope for Humanoid Robot Control and Its Application to Home Services. J Healthc Eng 2022;2022:1650387. PMID: 35345662; PMCID: PMC8957419; DOI: 10.1155/2022/1650387.
Abstract
The human-machine interface (HMI) has been studied for robot teleoperation with the aim of empowering people with motor disabilities to interact more with the physical environment. The challenge for an HMI in robot control is to produce control commands rapidly, accurately, and in sufficient number. In this paper, an asynchronous HMI based on an electrooculogram (EOG) and a gyroscope is proposed, using two self-paced and endogenous features: double blinks and head rotation. Through a multilevel graphical user interface (GUI), the user rotates the head to move the GUI cursor and performs a double blink to trigger a button in the interface. The proposed HMI can supply a sufficient number of commands while maintaining high accuracy (ACC) and low response time (RT). In a trigger task with sixteen healthy subjects, targets were selected from 20 options with an ACC of 99.2% and an RT of 2.34 s. Furthermore, a continuous control strategy that uses motion-start and motion-stop commands to produce a given robot motion is proposed for controlling a humanoid robot with the HMI. This avoids having to combine several commands to achieve one motion or to map each motion directly to a single command. In the home service experiment, all subjects operated a humanoid robot to change the state of a switch, grasp a key, and put it into a box. The time ratio between HMI control and manual control was 1.22, and the ratio of the number of commands was 1.18. The results demonstrate that the continuous strategy and the proposed HMI can improve performance in humanoid robot control.
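A double-blink trigger of the kind described can be sketched as peak detection on a vertical EOG channel with a timing constraint between successive peaks. The threshold and inter-blink window below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of a simple double-blink detector on a vertical EOG channel:
# find amplitude peaks and require two peaks within a short interval.
# The threshold and the 100-500 ms inter-blink window are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def detect_double_blinks(eog, fs, thresh, min_gap_s=0.1, max_gap_s=0.5):
    peaks, _ = find_peaks(eog, height=thresh, distance=int(min_gap_s * fs))
    doubles = []
    for p1, p2 in zip(peaks[:-1], peaks[1:]):
        gap = (p2 - p1) / fs
        if min_gap_s <= gap <= max_gap_s:
            doubles.append((p1 / fs, p2 / fs))
    return doubles

fs = 250
t = np.arange(0, 4, 1 / fs)
eog = np.random.randn(t.size) * 5
for blink_t in (1.0, 1.3, 2.8):                       # two close blinks + one isolated
    idx = int(blink_t * fs)
    eog[idx:idx + 25] += 120 * np.hanning(25)          # synthetic blink deflection
print(detect_double_blinks(eog, fs, thresh=60.0))
```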

13. A novel classification framework using multiple bandwidth method with optimized CNN for brain–computer interfaces with EEG-fNIRS signals. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06202-4.

14. Huang C, Xiao Y, Xu G. Predicting Human Intention-Behavior Through EEG Signal Analysis Using Multi-Scale CNN. IEEE/ACM Trans Comput Biol Bioinform 2021;18:1722-1729. PMID: 33226953; DOI: 10.1109/tcbb.2020.3039834.
Abstract
At present, the application of electroencephalogram (EEG) signal classification to human intention-behavior prediction has become a hot topic in brain-computer interface (BCI) research. In recent studies, the introduction of convolutional neural networks (CNN) has contributed to substantial improvements in EEG signal classification performance. However, a key challenge remains with existing CNN-based EEG classification methods: their accuracy is not yet satisfactory. This is because most existing methods only utilize the feature maps in the last layer of the CNN, which might miss local and detailed information needed for accurate classification. To address this challenge, this paper proposes an EEG signal classification method based on a multi-scale CNN model. In this method, the EEG signals are first preprocessed and converted to time-frequency images using the short-time Fourier transform (STFT). Then, a multi-scale CNN model that takes the converted time-frequency image as input is designed for EEG signal classification; in this model, both local and global information are taken into consideration. The performance of the proposed method is verified on benchmark data set 2b from BCI Competition IV. The experimental results show that the average accuracy of the proposed method is 73.9 percent, improving on traditional methods including an artificial neural network, a support vector machine, and a stacked auto-encoder by 10.4, 5.5, and 16.2 percent, respectively.
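The STFT preprocessing step described above can be sketched in a few lines; the window length here is an illustrative choice rather than the paper's setting.

```python
# Sketch of the preprocessing step described above: converting an EEG channel
# into a time-frequency image with the STFT. Window length is an illustrative choice.
import numpy as np
from scipy.signal import stft

def eeg_to_tf_image(eeg_1ch, fs, nperseg=64):
    """Return log-power time-frequency image of shape (n_freqs, n_frames)."""
    f, t, z = stft(eeg_1ch, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return f, t, np.log1p(np.abs(z) ** 2)

fs = 250
signal = np.random.randn(3 * fs)              # 3 s of synthetic single-channel EEG
freqs, frames, image = eeg_to_tf_image(signal, fs)
print(image.shape)                            # (n_freqs, n_frames) -> input "pixels" for a CNN
```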

15. Park J, Park J, Shin D, Choi Y. A BCI Based Alerting System for Attention Recovery of UAV Operators. Sensors (Basel) 2021;21:2447. PMID: 33918116; PMCID: PMC8037861; DOI: 10.3390/s21072447.
Abstract
As unmanned aerial vehicles have become popular, the number of accidents caused by operator inattention has increased. To prevent such accidents, the operator should maintain an attentive state. However, limited research has been conducted on brain-computer interface (BCI)-based systems with an alerting module for attention recovery of unmanned aerial vehicle operators. Therefore, we introduce a detection and alerting system that prevents an unmanned aerial vehicle operator from falling into an inattentive state by using the operator's electroencephalogram signal. The proposed system consists of three components: a signal processing module, which collects and preprocesses the operator's electroencephalogram signal; an inattention detection module, which determines whether an inattentive state has occurred based on the preprocessed signal; and an alerting module that presents a stimulus to the operator when inattention is detected. In an evaluation with a real-world dataset, the proposed system successfully contributed to the recovery of operator attention, although statistical significance could not be established due to the small number of subjects.
Affiliation(s)
- Jonghyuk Park: Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea; ai.m Inc., Gangnamdae-ro, Gangnam-gu, Seoul 06241, Korea
- Jonghun Park: Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
- Dongmin Shin: Department of Industrial and Management Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Korea
- Yerim Choi: ai.m Inc., Gangnamdae-ro, Gangnam-gu, Seoul 06241, Korea; Department of Data Science, Seoul Women’s University, Hwarang-ro, Nowon-gu, Seoul 01797, Korea

16.
Abstract
The prospect and potential of interfacing minds with machines has long captured human imagination. Recent advances in biomedical engineering, computer science, and neuroscience are making brain–computer interfaces a reality, paving the way to restoring and potentially augmenting human physical and mental capabilities. Brain–computer interfaces are being explored in applications as diverse as security, lie detection, alertness monitoring, gaming, education, art, and human cognition augmentation. The present tutorial aims to survey the principal features and challenges of brain–computer interfaces (such as reliable acquisition of brain signals, filtering and processing of the acquired brainwaves, ethical and legal issues related to BCI, data privacy, and performance assessment), with special emphasis on biomedical engineering and automation engineering applications. The content of this paper is aimed at students, researchers, and practitioners seeking a glimpse of the multifaceted world of brain–computer interfacing.

17. Zhang J, Wang M. A survey on robots controlled by motor imagery brain-computer interfaces. Cognitive Robotics 2021. DOI: 10.1016/j.cogr.2021.02.001.

18. Developing a Motor Imagery-Based Real-Time Asynchronous Hybrid BCI Controller for a Lower-Limb Exoskeleton. Sensors (Basel) 2020;20:7309. PMID: 33352714; PMCID: PMC7766128; DOI: 10.3390/s20247309.
Abstract
This study aimed to develop an intuitive, gait-related motor imagery (MI)-based hybrid brain-computer interface (BCI) controller for a lower-limb exoskeleton and to investigate the feasibility of the controller under a practical scenario including standing up, walking forward, and sitting down. A filter bank common spatial pattern (FBCSP) and mutual information-based best individual feature (MIBIF) selection were used to decode MI electroencephalogram (EEG) signals and extract a feature matrix as input to a support vector machine (SVM) classifier. A successive eye-blink switch was sequentially combined with the EEG decoder to operate the lower-limb exoskeleton. Ten subjects demonstrated more than 80% accuracy in both offline (training) and online sessions. All subjects successfully completed a gait task wearing the lower-limb exoskeleton through the developed real-time BCI controller. The BCI controller achieved a time ratio of 1.45 compared with a manual smartwatch controller. The developed system can potentially benefit people with neurological disorders who may have difficulty operating manual controls.
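The FBCSP-plus-SVM pipeline named above follows a well-known recipe: band-pass filter bank, CSP spatial filtering per band, log-variance features, and an SVM. The sketch below implements that generic recipe on synthetic data; the MIBIF selection step and the eye-blink switch are omitted, and the band edges are assumptions.

```python
# Compact sketch of the generic FBCSP + SVM recipe (not the authors' code; the
# MIBIF selection step and the eye-blink switch are omitted for brevity).
# Trials: array of shape (n_trials, n_channels, n_samples), labels in {0, 1}.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def bandpass(trials, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials, labels, n_pairs=2):
    covs = []
    for c in (0, 1):
        x = trials[labels == c]
        cov = np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
        covs.append(cov)
    # Generalized eigenvalue problem: spatial filters maximizing the class variance ratio.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T                              # (2*n_pairs, n_channels)

def logvar_features(trials, w):
    proj = np.einsum("fc,ncs->nfs", w, trials)
    var = proj.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic stand-in data: 40 trials, 8 channels, 2 s at 250 Hz.
rng = np.random.default_rng(1)
fs, trials = 250, rng.standard_normal((40, 8, 500))
labels = np.repeat([0, 1], 20)
bands = [(8, 12), (12, 16), (16, 20), (20, 24)]        # illustrative filter bank

features = []
for lo, hi in bands:
    filt = bandpass(trials, fs, lo, hi)
    w = csp_filters(filt, labels)
    features.append(logvar_features(filt, w))
X = np.hstack(features)

clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))      # sanity check only
```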

19. Improving performance in motor imagery BCI-based control applications via virtually embodied feedback. Comput Biol Med 2020;127:104079. PMID: 33126130; DOI: 10.1016/j.compbiomed.2020.104079.
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) based on motor imagery (MI) are commonly used for control applications. However, these applications require strong and discriminable neural patterns, for which extensive experience in MI may be necessary. Inspired by the field of rehabilitation, where embodiment is a key element for improving cortical activity, our study proposes a novel control scheme in which virtually embodiable feedback is provided during control to enhance performance. METHODS Subjects underwent two immersive virtual reality control scenarios in which they controlled the two-dimensional movement of a device using electroencephalography (EEG). The two scenarios differed only in whether embodiable feedback, which mirrors the movement of the classified intention, was provided. After each scenario, subjects also answered a questionnaire rating how immersive the scenario was and how embodiable the feedback felt. RESULTS Subjects exhibited higher control performance, greater discriminability in brain activity patterns, and enhanced cortical activation when using our control scheme compared to the standard control scheme without embodiable feedback. Moreover, the self-rated embodiment and presence scores showed significantly positive linear relationships with performance. SIGNIFICANCE The findings of our study provide evidence that providing embodiable feedback as guidance on how intention is classified may be effective for control applications, by inducing enhanced neural activity and patterns with greater discriminability. By applying embodiable feedback in immersive virtual reality, our study also serves as another instance in which virtual reality is shown to be a promising tool for improving MI.

20. Jeong YC, Lee HE, Shin A, Kim DG, Lee KJ, Kim D. Progress in Brain-Compatible Interfaces with Soft Nanomaterials. Adv Mater 2020;32:e1907522. PMID: 32297395; DOI: 10.1002/adma.201907522.
Abstract
Neural interfaces facilitating communication between the brain and machines must be compatible with the soft, curvilinear, and elastic tissues of the brain and yet yield enough power to read and write information across a wide range of brain areas through high-throughput recordings or optogenetics. Biocompatible-material engineering has facilitated the development of brain-compatible neural interfaces to support built-in modulation of neural circuits and neurological disorders. Recent developments in brain-compatible neural interfaces that use soft nanomaterials more suitable for complex neural circuit analysis and modulation are reviewed. Preclinical tests of the compatibility and specificity of these interfaces in animal models are also discussed.
Affiliation(s)
- Yong-Cheol Jeong: Department of Biological Science, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Han Eol Lee: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Anna Shin: Department of Biological Science, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Dae-Gun Kim: Department of Biological Science, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Keon Jae Lee: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Daesoo Kim: Department of Biological Science, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea

21. Brain-Computer Interface-Based Humanoid Control: A Review. Sensors (Basel) 2020;20:3620. PMID: 32605077; PMCID: PMC7374399; DOI: 10.3390/s20133620.
Abstract
A brain-computer interface (BCI) acts as a communication mechanism that uses brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, as in passive BCI. This is especially beneficial for those who have severe motor disabilities. Traditional BCI systems have relied only on brain signals recorded using electroencephalography (EEG) and have used rule-based translation algorithms to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as tele-presence, object grasping, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also reviews the methods and system designs used in the discussed applications.

22. Gu L, Yu Z, Ma T, Wang H, Li Z, Fan H. EEG-based Classification of Lower Limb Motor Imagery with Brain Network Analysis. Neuroscience 2020;436:93-109. PMID: 32283182; DOI: 10.1016/j.neuroscience.2020.04.006.
Abstract
This study aims to investigate the differences in cortical signal characteristics between left and right foot imaginary movements and to improve the classification accuracy of the experimental tasks. Raw signals were gathered from 64-channel scalp electroencephalograms of 11 healthy participants. First, the cortical source model was defined with 62 regions of interest over the sensorimotor cortex (nine Brodmann areas). Second, functional connectivity was calculated using the phase locking value for the α and β rhythm networks. Third, network-based statistics were applied to identify whether stable and significant subnetworks formed between the two types of motor imagery tasks. In addition, ten graph theory indices were investigated for each network by t-test to determine statistical significance between tasks. Finally, sparse multinomial logistic regression (SMLR)-support vector machine (SVM), as a feature selection and classification model, was used to analyze the graph theory features. The specific time-frequency (α event-related desynchronization and β event-related synchronization) difference network between the two tasks was concentrated at the midline and demonstrated significant connections in the premotor areas and primary somatosensory cortex. A few statistically significant differences in network properties were observed between tasks in the α and β rhythms. The SMLR-SVM classification model achieved fair discrimination accuracy between imaginary movements of the two feet (maximum 75% accuracy in single-trial analyses). This study reveals the network mechanism underlying the discrimination of left and right foot motor imagery, which can provide a novel avenue for BCI systems based on unilateral lower limb motor imagery.
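The phase locking value used above as the connectivity measure has a compact standard formulation: extract instantaneous phases with the Hilbert transform and take the magnitude of the mean phase-difference vector. A minimal sketch follows; signals are assumed to be already band-pass filtered to the rhythm of interest.

```python
# Sketch of the phase locking value (PLV) between two band-limited signals:
# instantaneous phases via the Hilbert transform, then the magnitude of the
# mean phase-difference vector (1 = perfect locking, 0 = no locking).
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value between two equally long 1-D signals (0..1)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

fs = 250
t = np.arange(0, 4, 1 / fs)
a = np.sin(2 * np.pi * 10 * t)                                   # 10 Hz source
b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(round(plv(a, b), 3))   # close to 1 for phase-locked signals, near 0 otherwise
```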
Affiliation(s)
- Lingyun Gu: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210096, Jiangsu, PR China
- Zhenhua Yu: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, Shanxi, PR China
- Tian Ma: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, Shanxi, PR China
- Haixian Wang: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing 210096, Jiangsu, PR China
- Zhanli Li: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, Shanxi, PR China
- Hui Fan: Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Shandong Technology and Business University, Yantai 264005, Shandong, PR China

23. Pan J, Xie Q, Qin P, Chen Y, He Y, Huang H, Wang F, Ni X, Cichocki A, Yu R, Li Y. Prognosis for patients with cognitive motor dissociation identified by brain-computer interface. Brain 2020;143:1177-1189. PMID: 32101603; PMCID: PMC7174053; DOI: 10.1093/brain/awaa026.
Abstract
Cognitive motor dissociation describes a subset of patients with disorders of consciousness who show neuroimaging evidence of consciousness but no detectable command-following behaviours. Although essential for family counselling, decision-making, and the design of rehabilitation programmes, the prognosis for patients with cognitive motor dissociation remains under-investigated. The current study included 78 patients with disorders of consciousness who showed no detectable command-following behaviours. These patients included 45 patients with unresponsive wakefulness syndrome and 33 patients in a minimally conscious state, as diagnosed using the Coma Recovery Scale-Revised. Each patient underwent an EEG-based brain-computer interface experiment, in which he or she was instructed to perform an item-selection task (i.e. select a photograph or a number from two candidates). Patients who achieved statistically significant brain-computer interface accuracies were identified as cognitive motor dissociation. Two evaluations using the Coma Recovery Scale-Revised, one before the experiment and the other 3 months later, were carried out to measure the patients' behavioural improvements. Among the 78 patients with disorders of consciousness, our results showed that within the unresponsive wakefulness syndrome patient group, 15 of 18 patients with cognitive motor dissociation (83.33%) regained consciousness, while only five of the other 27 unresponsive wakefulness syndrome patients without significant brain-computer interface accuracies (18.52%) regained consciousness. Furthermore, within the minimally conscious state patient group, 14 of 16 patients with cognitive motor dissociation (87.5%) showed improvements in their Coma Recovery Scale-Revised scores, whereas only four of the other 17 minimally conscious state patients without significant brain-computer interface accuracies (23.53%) had improved Coma Recovery Scale-Revised scores. Our results suggest that patients with cognitive motor dissociation have a better outcome than other patients. Our findings extend current knowledge of the prognosis for patients with cognitive motor dissociation and have important implications for brain-computer interface-based clinical diagnosis and prognosis for patients with disorders of consciousness.
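The criterion of a "statistically significant brain-computer interface accuracy" is commonly operationalized as a one-sided binomial test against chance level. The sketch below illustrates that generic criterion; the study's exact statistical procedure may differ.

```python
# Sketch of a common way to decide whether a BCI accuracy is significantly above
# chance: a one-sided binomial test against the chance level (0.5 for a two-choice
# task). This illustrates the criterion generically; the study's exact test may differ.
from scipy.stats import binomtest  # scipy >= 1.7

def significant_accuracy(n_correct, n_trials, chance=0.5, alpha=0.05):
    result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    return result.pvalue < alpha, result.pvalue

print(significant_accuracy(32, 50))   # e.g. 64% correct over 50 trials
print(significant_accuracy(27, 50))   # 54% over 50 trials: likely not significant
```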
Affiliation(s)
- Jiahui Pan: Center for Brain-Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; School of Software, South China Normal University, Guangzhou, China
- Qiuyou Xie: Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Centre for Hyperbaric Oxygen and Neurorehabilitation, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China
- Pengmin Qin: Centre for Studies of Psychological Applications, Guangdong Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China
- Yan Chen: Centre for Hyperbaric Oxygen and Neurorehabilitation, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China
- Yanbin He: Centre for Hyperbaric Oxygen and Neurorehabilitation, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China; Department of Traumatic Brain Injury Rehabilitation and Severe Rehabilitation, Guangdong Work Injury Rehabilitation Hospital, Guangzhou, China
- Haiyun Huang: Center for Brain-Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China
- Fei Wang: Center for Brain-Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China; School of Software, South China Normal University, Guangzhou, China
- Xiaoxiao Ni: Centre for Hyperbaric Oxygen and Neurorehabilitation, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China
- Andrzej Cichocki: Skolkovo Institute of Science and Technology (Skoltech), Moscow 143026, Russia; Nicolaus Copernicus University (UMK), Torun 87-100, Poland
- Ronghao Yu: Centre for Hyperbaric Oxygen and Neurorehabilitation, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, China
- Yuanqing Li: Center for Brain-Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou, China

24. Wairagkar M, Hayashi Y, Nasuto SJ. Modeling the Ongoing Dynamics of Short and Long-Range Temporal Correlations in Broadband EEG During Movement. Front Syst Neurosci 2019;13:66. PMID: 31787885; PMCID: PMC6856010; DOI: 10.3389/fnsys.2019.00066.
Abstract
Electroencephalogram (EEG) undergoes complex temporal and spectral changes during voluntary movement intention. Characterization of such changes has focused mostly on narrowband spectral processes such as Event-Related Desynchronization (ERD) in the sensorimotor rhythms because EEG is mostly considered as emerging from oscillations of the neuronal populations. However, the changes in the temporal dynamics, especially in the broadband arrhythmic EEG have not been investigated for movement intention detection. The Long-Range Temporal Correlations (LRTC) are ubiquitously present in several neuronal processes, typically requiring longer timescales to detect. In this paper, we study the ongoing changes in the dynamics of long- as well as short-range temporal dependencies in the single trial broadband EEG during movement intention. We obtained LRTC in 2 s windows of broadband EEG and modeled it using the Autoregressive Fractionally Integrated Moving Average (ARFIMA) model which allowed simultaneous modeling of short- and long-range temporal correlations. There were significant (p < 0.05) changes in both broadband long- and short-range temporal correlations during movement intention and execution. We discovered that the broadband LRTC and narrowband ERD are complementary processes providing distinct information about movement because eliminating LRTC from the signal did not affect the ERD and conversely, eliminating ERD from the signal did not affect LRTC. Exploring the possibility of applications in Brain Computer Interfaces (BCI), we used hybrid features with combinations of LRTC, ARFIMA, and ERD to detect movement intention. A significantly higher (p < 0.05) classification accuracy of 88.3 ± 4.2% was obtained using the combination of ARFIMA and ERD features together, which also predicted the earliest movement at 1 s before its onset. The ongoing changes in the long- and short-range temporal correlations in broadband EEG contribute to effectively capturing the motor command generation and can be used to detect movement successfully. These temporal dependencies provide different and additional information about the movement.
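Long-range temporal correlations of the kind analyzed above are usually quantified with detrended fluctuation analysis (DFA), whose scaling exponent exceeds 0.5 for persistent signals. The sketch below shows plain DFA on synthetic data; the ARFIMA modelling used in the paper is not reproduced.

```python
# Sketch of detrended fluctuation analysis (DFA), the usual estimator of long-range
# temporal correlations: exponent ~0.5 for white noise, >0.5 for persistent signals.
# The ARFIMA modelling used in the paper is not reproduced here.
import numpy as np

def dfa_exponent(x, scales=None):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(np.log10(10), np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # detrend each window linearly
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(2)
white = rng.standard_normal(4096)
persistent = np.cumsum(white)                      # strongly correlated random walk
print(round(dfa_exponent(white), 2), round(dfa_exponent(persistent), 2))  # ~0.5 vs ~1.5
```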
Affiliation(s)
- Maitreyee Wairagkar: Brain Embodiment Laboratory, Biomedical Engineering, School of Biological Sciences, University of Reading, Reading, United Kingdom

25. Prasanga DK, Tanida K, Ohnishi K, Murakami T. Simultaneous bipedal locomotion based on haptics for teleoperation. Adv Robot 2019. DOI: 10.1080/01691864.2019.1646162.
Affiliation(s)
- D. Kasun Prasanga: School of Integrated Design Engineering, Keio University, Yokohama, Japan
- Kazuki Tanida: System Development Department, Technology Research Center, Sumitomo Heavy Industries, Japan
- Kouhei Ohnishi: Haptics Research Center, Keio University, Yokohama, Japan
- Toshiyuki Murakami: Department of System Design Engineering, Keio University, Yokohama, Japan

26. Kim HH, Jeong J. Decoding electroencephalographic signals for direction in brain-computer interface using echo state network and Gaussian readouts. Comput Biol Med 2019;110:254-264. PMID: 31233971; DOI: 10.1016/j.compbiomed.2019.05.024.
Abstract
BACKGROUND Noninvasive brain-computer interfaces (BCI) for movement control via the electroencephalogram (EEG) have been extensively investigated. However, most previous studies decoded user intention for movement directions based on sensorimotor rhythms during motor imagery. BCI systems based on mapping imagined movement of body parts (e.g., left or right hand) to movement directions (left or right movement of a machine or cursor) are less intuitive and less convenient due to complex training procedures. Thus, direct decoding methods for detecting user intention about movement directions are urgently needed. METHODS Here, we describe a novel method for directly decoding user intention about movement directions using an echo state network and Gaussian readouts. Importantly, the network parameters were optimized using a genetic algorithm to achieve better decoding performance. We tested the decoding performance of this method with four healthy subjects and an inexpensive wireless EEG system containing 14 channels, and then compared the performance with that of a conventional machine learning method. RESULTS We showed that this decoding method successfully classified eight directions of intended movement (approximately 95% accuracy). CONCLUSIONS We suggest that the echo state network with Gaussian readouts can be a useful decoding method to directly read user intention of movement directions even with an inexpensive and portable EEG system.
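An echo state network keeps a fixed random recurrent reservoir and trains only the readout. The minimal sketch below uses a ridge-regression readout on a toy task; the Gaussian readouts and genetic-algorithm tuning described in the abstract are not reproduced, and all hyperparameters are illustrative.

```python
# Minimal echo state network sketch: a fixed random reservoir driven by the input,
# with a ridge-regression readout trained on the collected states. The Gaussian
# readouts and genetic-algorithm tuning described in the paper are not reproduced;
# all hyperparameters below are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def make_reservoir(n_in, n_res=200, spectral_radius=0.9, input_scale=0.5):
    w = rng.standard_normal((n_res, n_res))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))   # echo-state scaling
    w_in = rng.uniform(-input_scale, input_scale, size=(n_res, n_in))
    return w, w_in

def run_reservoir(u, w, w_in, leak=0.3):
    """u: (n_steps, n_in) -> states: (n_steps, n_res)."""
    states = np.zeros((u.shape[0], w.shape[0]))
    x = np.zeros(w.shape[0])
    for t_step in range(u.shape[0]):
        pre = w_in @ u[t_step] + w @ x
        x = (1 - leak) * x + leak * np.tanh(pre)
        states[t_step] = x
    return states

def train_readout(states, targets, ridge=1e-3):
    a = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(a, states.T @ targets)                 # (n_res, n_out)

# Toy task: reproduce a delayed copy of a noisy sine from the reservoir states.
steps = 1000
u = np.sin(np.linspace(0, 40 * np.pi, steps))[:, None] + 0.05 * rng.standard_normal((steps, 1))
target = np.roll(u, 5, axis=0)
w, w_in = make_reservoir(n_in=1)
states = run_reservoir(u, w, w_in)
w_out = train_readout(states[100:], target[100:])                 # drop warm-up states
pred = states @ w_out
print("readout MSE:", float(np.mean((pred[100:] - target[100:]) ** 2)))
```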
Affiliation(s)
- Hoon-Hee Kim: Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Jaeseung Jeong: Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea; Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea

27. Tinoco Varela D, Gudiño Peñaloza F, Villaseñor Rodelas CJ. Characterized Bioelectric Signals by Means of Neural Networks and Wavelets to Remotely Control a Human-Machine Interface. Sensors (Basel) 2019;19:1923. PMID: 31022847; PMCID: PMC6515184; DOI: 10.3390/s19081923.
Abstract
Every day, people interact with different types of human-machine interfaces, and their use is increasing; it is therefore necessary to design interfaces capable of responding in an intelligent, natural, inexpensive, and accessible way, regardless of the social, cultural, economic, or physical characteristics of the user. To this end, the development of small interfaces that avoid any kind of user annoyance has been pursued. In this paper, bioelectric signals have been analyzed and characterized in order to propose a more natural human-machine interaction system. The proposed scheme is controlled by the electromyographic signals that a person can create through arm movements. These arm signals were analyzed and characterized by a back-propagation neural network and by wavelet analysis, and control commands were obtained from the electromyographic signals in this way. The developed interface uses the Extensible Messaging and Presence Protocol (XMPP) to send control commands remotely. In the experiment, it manipulated a vehicle approximately 52 km away from the user, showing that a characterized electromyographic signal can be sufficient for controlling embedded devices such as a Raspberry Pi, and that the neural network and wavelet analysis can be used to generate control words usable within the Internet of Things as well. A Tiva-C board was used to acquire data instead of more popular development boards, with an adequate response. One of the most important aspects of the proposed interface is that it can be used by almost anyone, including people with different abilities and even illiterate people. Given the existence of individual efforts to characterize different types of bioelectric signals, we propose the creation of a free-access Bioelectric Control Dictionary to define and consult each characterized biosignal.
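The wavelet characterization mentioned above can be sketched with PyWavelets by decomposing an EMG window and using per-level energies as features for a classifier. The wavelet family and decomposition depth below are illustrative assumptions, not the paper's choices.

```python
# Sketch of wavelet characterization of an EMG window with PyWavelets: decompose the
# signal and use per-level energies as a compact feature vector for a classifier.
# Wavelet family ('db4') and decomposition depth are illustrative, not the paper's choices.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()          # relative energy per sub-band

fs = 1000
t = np.arange(0, 1, 1 / fs)
emg_window = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 2 * t) ** 2)
features = wavelet_energy_features(emg_window)
print(np.round(features, 3))                  # 5 values: approximation + 4 detail levels
```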
Collapse
Affiliation(s)
- David Tinoco Varela
- Department of Engineering, ITSE, FESC, UNAM, Cuautitlán Izcalli 54714, Edo. de Mex, Mexico.
| | | | | |
Collapse
|
28
|
Zhang W, Tan C, Sun F, Wu H, Zhang B. A Review of EEG-Based Brain-Computer Interface Systems Design. BRAIN SCIENCE ADVANCES 2019. [DOI: 10.26599/bsa.2018.9050010] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
A brain-computer interface (BCI) system recognizes patterns of mental activity with computer algorithms in order to control external devices. Electroencephalography (EEG) is one of the most commonly used approaches for BCI because of its convenience and non-invasive implementation. Accordingly, more and more BCIs are being designed for people disabled by stroke or spinal cord injury to support rehabilitation and daily living. We introduce the common BCI paradigms and the signal processing and feature extraction methods. We then survey the different combinations used in hybrid BCIs and review the design of synchronous and asynchronous BCIs. Finally, shared control methods are discussed.
Collapse
Affiliation(s)
- Wenchang Zhang
- Institute of Medical Support Technology, Academy of Military Sciences, Tianjin 300161, China
- State Key Lab. of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Chuanqi Tan
- State Key Lab. of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Fuchun Sun
- State Key Lab. of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| | - Hang Wu
- Institute of Medical Support Technology, Academy of Military Sciences, Tianjin 300161, China
| | - Bo Zhang
- State Key Lab. of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
| |
Collapse
|
29
|
Mao X, Li W, Lei C, Jin J, Duan F, Chen S. A Brain-Robot Interaction System by Fusing Human and Machine Intelligence. IEEE Trans Neural Syst Rehabil Eng 2019; 27:533-542. [PMID: 30716043 DOI: 10.1109/tnsre.2019.2897323] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This paper presents a new brain-robot interaction system that fuses human and machine intelligence to improve real-time control performance. The system consists of a hybrid P300 and steady-state visual evoked potential (SSVEP) mode conveying the human's intention, and machine intelligence that combines a fuzzy-logic-based image processing algorithm with multi-sensor fusion technology. A subject selects an object of interest via P300, and the classification algorithm transfers the corresponding parameters to an improved fuzzy color extractor for object extraction. A central vision tracking strategy automatically guides the NAO humanoid robot to the destination selected by the subject's intentions, as represented by brainwaves. During this process, the human supervises the system at a high level, while machine intelligence assists the robot in accomplishing tasks by analyzing images fed back from the camera, monitoring distance through out-of-gauge alarms from the sonars, and detecting collisions with the bumper sensors. In this scenario, the SSVEP takes over in situations in which the machine intelligence cannot make decisions. The experimental results show that subjects can drive the robot to a destination of interest with fewer commands than when using a brain-robot interface alone. Therefore, the fusion of human and machine intelligence greatly alleviates the brain load and enhances the execution efficiency of the robot in a brain-robot interaction system.
Collapse
|
30
|
Yu Y, Liu Y, Jiang J, Yin E, Zhou Z, Hu D. An Asynchronous Control Paradigm Based on Sequential Motor Imagery and Its Application in Wheelchair Navigation. IEEE Trans Neural Syst Rehabil Eng 2018; 26:2367-2375. [DOI: 10.1109/tnsre.2018.2881215] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
31
|
Ma Z, Qiu T. Quasi-periodic fluctuation in Donchin's speller signals and its potential use for asynchronous control. Biomed Tech (Berl) 2018; 63:105-112. [PMID: 27655447 DOI: 10.1515/bmt-2016-0050] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Accepted: 08/19/2016] [Indexed: 11/15/2022]
Abstract
When we examine the event-related potential (ERP) responses of Donchin's brain-computer interface (BCI) speller, a type of quasi-periodic fluctuation (FLUC) overlapping with the ERP components can be observed; this fluctuation is traditionally treated as interference. However, if the FLUC is detectable in a working BCI, it can be used for asynchronous control, i.e., to indicate whether the BCI is in the control state (CS) or in the non-control idle state (NC). Asynchronous control is an important issue to address for the practical use of BCIs. In this paper, we examine the characteristics of the FLUC and explore the possibility of using it for asynchronous control of the BCI. To detect the FLUC, we propose a method based on the power spectrum and evaluate the detection rates in a simulation. High true positive rates (TPRs) and low false positive rates (FPRs) are obtained. Our work reveals that the FLUC is of great value for implementing an asynchronous BCI.
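A hedged sketch of the kind of power-spectrum test the abstract describes: it estimates the relative power in an assumed fluctuation band with Welch's method and compares it against a threshold to label a segment as control state (CS) or non-control state (NC). The sampling rate, band limits, and threshold are placeholders, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 250                      # sampling rate (Hz), assumed
fluc_band = (3.5, 4.5)        # assumed frequency band of the quasi-periodic fluctuation

def fluc_power(eeg_1d, fs, band):
    """Relative power of the fluctuation band in the Welch power spectrum."""
    f, pxx = welch(eeg_1d, fs=fs, nperseg=fs * 2)
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / pxx.sum()

# Synthetic segments: a 4 Hz oscillation buried in noise (control) versus pure noise (idle).
t = np.arange(0, 10, 1 / fs)
control = 0.8 * np.sin(2 * np.pi * 4 * t) + rng.standard_normal(t.size)
idle = rng.standard_normal(t.size)

threshold = 0.05              # would be calibrated per subject in practice
for name, seg in [("control", control), ("idle", idle)]:
    p = fluc_power(seg, fs, fluc_band)
    print(name, "relative FLUC power = %.3f ->" % p, "CS" if p > threshold else "NC")
```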
Collapse
Affiliation(s)
- Zheng Ma
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
| | - Tianshuang Qiu
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
| |
Collapse
|
32
|
Georgiadis K, Laskaris N, Nikolopoulos S, Kompatsiaris I. Exploiting the heightened phase synchrony in patients with neuromuscular disease for the establishment of efficient motor imagery BCIs. J Neuroeng Rehabil 2018; 15:90. [PMID: 30373619 PMCID: PMC6206934 DOI: 10.1186/s12984-018-0431-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2018] [Accepted: 09/21/2018] [Indexed: 11/25/2022] Open
Abstract
Background Phase synchrony has been studied extensively for understanding neural coordination in health and disease. A few studies concern its implications in the context of BCIs, but its potential for establishing a communication channel in patients suffering from neuromuscular disorders remains totally unexplored. We investigate this possibility here by estimating the time-resolved phase connectivity patterns induced during a motor imagery (MI) task and adopting a supervised learning scheme to recover the subject's intention from the streaming data. Methods Electroencephalographic activity from six patients suffering from neuromuscular disease (NMD) and six healthy individuals was recorded during two randomly alternating, externally cued MI tasks (clenching either the left or the right fist) and a rest condition. The phase locking value (PLV) metric was used to describe the functional coupling between all recording sites. The functional connectivity patterns and the associated network organization were first compared between the two cohorts. Next, working at the level of individual patients, we trained support vector machines (SVMs) to discriminate between "left" and "right" based on different instantiations of connectivity patterns (depending on the brain rhythm and the temporal interval). Finally, we designed and realized a novel brain decoding scheme, based on an ensemble of SVMs, that interprets the intention from streaming connectivity patterns. Results The group-level analysis revealed increased phase synchrony and richer network organization in patients. This trend was also seen in the performance of the employed classifiers. Time-resolved connectivity led to superior performance, with distinct SVMs acting as local experts, specialized in the patterning that emerged within specific temporal windows (defined with respect to the external trigger). This empirical finding was further exploited in implementing a decoding scheme that can be activated without precise knowledge of a trigger's timing. Conclusion The increased phase synchrony in NMD patients can become a valuable tool for MI decoding. Considering the fast computation of PLV patterns in multichannel signals, we can envision the development of efficient personalized BCI systems to assist these patients.
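To make the PLV-plus-SVM pipeline concrete, the sketch below computes phase locking values between channel pairs with the Hilbert transform and feeds the vectorized connectivity pattern to a linear SVM. The trial dimensions, band selection (omitted here), and classifier settings are illustrative assumptions rather than the authors' exact configuration, which also used time-resolved windows and an ensemble of SVMs.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def plv_matrix(trial):
    """Phase locking value between every channel pair of a (channels, time) trial."""
    phase = np.angle(hilbert(trial, axis=1))
    n_ch = trial.shape[0]
    plv = np.ones((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
    return plv

# Hypothetical MI data: 80 trials, 16 channels, 2 s at 250 Hz; labels 0 = left fist, 1 = right fist.
trials = rng.standard_normal((80, 16, 500))
labels = rng.integers(0, 2, 80)

# Use the upper triangle of each PLV matrix as the feature vector.
iu = np.triu_indices(16, k=1)
X = np.array([plv_matrix(tr)[iu] for tr in trials])
clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```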
Collapse
Affiliation(s)
- Kostas Georgiadis
- AIIA lab, Informatics Department, AUTH, Thessaloniki, Greece; Information Technologies Institute (ITI), Centre for Research & Technology Hellas, Thessaloniki-Thermi, Greece.
| | - Nikos Laskaris
- AIIA lab, Informatics Department, AUTH, Thessaloniki, Greece; NeuroInformatics.GRoup, AUTH, Thessaloniki, Greece
| | - Spiros Nikolopoulos
- Information Technologies Institute (ITI), Centre for Research & Technology Hellas, Thessaloniki-Thermi, Greece
| | - Ioannis Kompatsiaris
- Information Technologies Institute (ITI), Centre for Research & Technology Hellas, Thessaloniki-Thermi, Greece
| |
Collapse
|
33
|
Kaya M, Binli MK, Ozbay E, Yanar H, Mishchenko Y. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Sci Data 2018; 5:180211. [PMID: 30325349 PMCID: PMC6190745 DOI: 10.1038/sdata.2018.211] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 08/09/2018] [Indexed: 12/03/2022] Open
Abstract
Recent advancements in brain-computer interfaces (BCIs) have demonstrated control of robotic systems by mental processes alone. Together with invasive BCIs, electroencephalographic (EEG) BCIs represent an important direction in the development of BCI systems. In the context of EEG BCIs, the processing of EEG data is the key challenge. Unfortunately, advances in that direction have been complicated by a lack of large and uniform datasets that could be used to design and evaluate different data processing approaches. In this work, we release a large set of EEG BCI data collected during the development of a slow cortical potentials-based EEG BCI. The dataset contains 60 h of EEG recordings, 13 participants, 75 recording sessions, 201 individual EEG BCI interaction session-segments, and over 60,000 examples of motor imageries in 4 interaction paradigms. It is one of the largest EEG BCI datasets publicly available to date.
Collapse
Affiliation(s)
- Murat Kaya
- Mersin University, Mersin, 33140, Turkey
| | | | | | | | | |
Collapse
|
34
|
Schaeffer MC, Aksenova T. Data-Driven Transducer Design and Identification for Internally-Paced Motor Brain Computer Interfaces: A Review. Front Neurosci 2018; 12:540. [PMID: 30158847 PMCID: PMC6104172 DOI: 10.3389/fnins.2018.00540] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 07/17/2018] [Indexed: 11/13/2022] Open
Abstract
Brain-Computer Interfaces (BCIs) are systems that establish a direct communication pathway between the users' brain activity and external effectors. They offer the potential to improve the quality of life of motor-impaired patients. Motor BCIs aim to permit severely motor-impaired users to regain limb mobility by controlling orthoses or prostheses. In particular, motor BCI systems benefit patients if the decoded actions reflect the users' intentions with an accuracy that enables them to interact efficiently with their environment. One of the main challenges of BCI systems is to adapt the BCI's signal translation blocks to the user so as to reach high decoding accuracy. This paper reviews the literature on data-driven and user-specific transducer design and identification approaches, focusing on internally paced motor BCIs. In particular, continuous kinematic biomimetic and mental-task decoders are reviewed. Furthermore, static and dynamic decoding approaches, linear and non-linear decoding, and offline and real-time identification algorithms are considered. The current progress and challenges related to the design of clinically compatible motor BCI transducers are also discussed.
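As a minimal example of the simplest family of decoders covered by such reviews, the sketch below fits a static linear (ridge-regression) mapping from neural features to continuous 2-D kinematics on synthetic data. It is only a stand-in for the biomimetic kinematic decoders discussed; dynamic, non-linear, and real-time identification approaches are not shown.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Hypothetical recording: 2000 time steps of 64 neural features and 2-D cursor kinematics.
features = rng.standard_normal((2000, 64))
true_weights = rng.standard_normal((64, 2))
kinematics = features @ true_weights + 0.1 * rng.standard_normal((2000, 2))

# Static linear decoder: kinematics ~ features @ W, fitted by ridge regression.
train, test = slice(0, 1500), slice(1500, 2000)
decoder = Ridge(alpha=1.0).fit(features[train], kinematics[train])
print("held-out R^2:", decoder.score(features[test], kinematics[test]))
```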
Collapse
Affiliation(s)
| | - Tetiana Aksenova
- CEA, LETI, CLINATEC, MINATEC Campus, Université Grenoble Alpes, Grenoble, France
| |
Collapse
|
35
|
Yu Y, Zhou Z, Liu Y, Jiang J, Yin E, Zhang N, Wang Z, Liu Y, Wu X, Hu D. Self-Paced Operation of a Wheelchair Based on a Hybrid Brain-Computer Interface Combining Motor Imagery and P300 Potential. IEEE Trans Neural Syst Rehabil Eng 2018; 25:2516-2526. [PMID: 29220327 DOI: 10.1109/tnsre.2017.2766365] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a hybrid brain-computer interface (BCI) that combines motor imagery (MI) and P300 potential for the asynchronous operation of a brain-controlled wheelchair whose design is based on a Mecanum wheel. This paradigm is completely user-centric. By sequentially performing MI tasks or paying attention to P300 flashing, the user can use eleven functions to control the wheelchair: move forward/backward, move left/right, move left45/right45, accelerate/decelerate, turn left/right, and stop. The practicality and effectiveness of the proposed approach were validated in eight subjects, all of whom achieved good performance. The preliminary results indicated that the proposed hybrid BCI system with different mental strategies operating sequentially is feasible and has potential applications for practical self-paced control.
Collapse
|
36
|
Tariq M, Trivailo PM, Simic M. EEG-Based BCI Control Schemes for Lower-Limb Assistive-Robots. Front Hum Neurosci 2018; 12:312. [PMID: 30127730 PMCID: PMC6088276 DOI: 10.3389/fnhum.2018.00312] [Citation(s) in RCA: 88] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Accepted: 07/16/2018] [Indexed: 12/14/2022] Open
Abstract
Over recent years, the brain-computer interface (BCI) has emerged as an alternative communication system between the human brain and an output device. Deciphered intents, after detection of electrical signals from the human scalp, are translated into control commands used to operate external devices, computer displays, and virtual objects in real time. BCI provides augmentative communication by creating a muscle-free channel between the brain and the output devices, primarily for subjects with neuromotor disorders or trauma to the nervous system, notably spinal cord injuries (SCI), and subjects with unaffected sensorimotor functions but disarticulated or amputated residual limbs. This review identifies the potential of electroencephalography (EEG) based BCI applications for locomotion and mobility rehabilitation. Patients could benefit from advancements such as wearable lower-limb (LL) exoskeletons, orthoses, prostheses, wheelchairs, and assistive-robot devices. The EEG communication signals employed by the aforementioned applications, which also offer feasibility for future development in the field, are sensorimotor rhythms (SMR), event-related potentials (ERP), and visual evoked potentials (VEP). The review is also an effort to advance the development of user mental tasks related to the LL, with a view to BCI reliability and confidence measures. As a novel contribution, the reviewed BCI control paradigms for wearable LL and assistive robots are presented within a general control framework organized in hierarchical layers. It reflects informatic interactions between the user, the BCI operator, the shared controller, the robotic device, and the environment. Each sublayer of the BCI operator is discussed in detail, highlighting the feature extraction, classification, and execution methods employed by the various systems. The key features of all applications and their interaction with the environment are reviewed with respect to EEG-based activity mode recognition and presented in the form of a table. It is suggested that EEG-BCI-controlled LL assistive devices be structured within the presented framework for a future generation of intent-based multifunctional controllers. Despite the development of controllers for BCI-based wearable or assistive devices that can seamlessly integrate user intent, practical challenges associated with such systems remain and have been identified; addressing them can be constructive for future developments in the field.
Collapse
Affiliation(s)
| | | | - Milan Simic
- School of Engineering, RMIT University Melbourne, Melbourne, VIC, Australia
| |
Collapse
|
37
|
Wireless Stimulus-on-Device Design for Novel P300 Hybrid Brain-Computer Interface Applications. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2018; 2018:2301804. [PMID: 30111993 PMCID: PMC6077535 DOI: 10.1155/2018/2301804] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Accepted: 06/28/2018] [Indexed: 12/02/2022]
Abstract
Improving the independent living ability of people who have suffered spinal cord injuries (SCIs) is essential for their quality of life. Brain-computer interfaces (BCIs) provide promising solutions for people with high-level SCIs. This paper proposes a novel and practical P300-based hybrid stimulus-on-device (SoD) BCI architecture for wireless networking applications. Instead of a stimulus-on-panel (SoP) architecture, the proposed SoD architecture provides an intuitive control scheme. However, because P300 recognition relies on the synchronization between stimuli and response potentials, the variation in latency between target stimuli and the elicited P300 is a concern when applying a P300-based BCI to wireless applications. In addition, subject-dependent variation of the elicited P300 affects the performance of the BCI. Thus, an adaptive model that determines an appropriate interval for P300 feature extraction is proposed in this paper. The artificial bee colony (ABC)-based interval type-2 fuzzy logic system (IT2FLS) is employed to deal with the variation in latency between target stimuli and the elicited P300 so that the proposed P300-based SoD approach is feasible. Furthermore, the target and nontarget stimuli are identified with a support vector machine (SVM) classifier. Experimental results from five subjects showed that classification performance and information transfer rate improved after calibration (86.00% and 24.2 bits/min before calibration; 90.25% and 27.9 bits/min after calibration).
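A simplified sketch of P300 target/nontarget classification with a latency window and an SVM, on synthetic epochs. The window start is the kind of parameter an adaptive model such as the paper's ABC-optimized IT2FLS would tune per subject; here it is fixed, and all dimensions and amplitudes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
fs = 250

def epoch_features(epochs, fs, win_start=0.25, win_len=0.30):
    """Downsampled amplitudes inside a latency window after the stimulus."""
    a, b = int(win_start * fs), int((win_start + win_len) * fs)
    return epochs[:, :, a:b:5].reshape(len(epochs), -1)

# Synthetic epochs: 400 stimuli, 8 channels, 0.8 s; targets carry a late positive deflection.
epochs = rng.standard_normal((400, 8, int(0.8 * fs)))
is_target = rng.integers(0, 2, 400).astype(bool)
t = np.arange(int(0.8 * fs)) / fs
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)        # idealized P300-like bump around 300 ms
epochs[is_target] += p300

X = epoch_features(epochs, fs)                  # win_start is what an adaptive model would adjust
clf = SVC(kernel="linear").fit(X, is_target)
print("training accuracy:", clf.score(X, is_target))
```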
Collapse
|
38
|
Wang F, Zhang X, Fu R, Sun G. Study of the Home-Auxiliary Robot Based on BCI. SENSORS (BASEL, SWITZERLAND) 2018; 18:E1779. [PMID: 29865175 PMCID: PMC6021918 DOI: 10.3390/s18061779] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2018] [Revised: 05/20/2018] [Accepted: 05/29/2018] [Indexed: 01/13/2023]
Abstract
A home-auxiliary robot platform is developed in this study to assist patients with physical disabilities and older persons with mobility impairments. The robot, mainly controlled by brain-computer interface (BCI) technology, can perform actions not only within but also outside the person's field of vision. Wavelet decomposition (WD) is used to extract the δ (0-4 Hz) and θ (4-8 Hz) sub-bands of the subjects' electroencephalogram (EEG) signals. The correlation between pairs of the 14 EEG channels is determined with synchronization likelihood (SL), and the brain network structure is generated. The motion characteristics are then analyzed using the brain network parameters clustering coefficient (C) and global efficiency (G). Meanwhile, eye movement characteristics are identified in the F3 and F4 channels. Finally, the motion characteristics identified from the brain networks, together with the eye movement characteristics, are used to control the home-auxiliary robot platform. The experimental results show that the accuracy of left/right motion recognition with this method exceeds 93%. Additionally, the similarity between the autonomous return path and the real path of the home-auxiliary robot reaches 0.89.
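The sketch below illustrates how the two brain-network parameters mentioned above, clustering coefficient (C) and global efficiency (G), can be computed from a channel connectivity matrix using networkx. Absolute correlation stands in for synchronization likelihood, and the 14-channel segment and threshold are synthetic placeholders rather than the authors' settings.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)

# Hypothetical delta/theta-band EEG segment: 14 channels, 4 s at 128 Hz.
eeg = rng.standard_normal((14, 512))

# Stand-in for synchronization likelihood: absolute channel-pair correlation.
conn = np.abs(np.corrcoef(eeg))
np.fill_diagonal(conn, 0)

# Keep the strongest 25% of connections to build a binary brain network.
iu = np.triu_indices(14, k=1)
thresh = np.quantile(conn[iu], 0.75)
adj = (conn >= thresh).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

C = nx.average_clustering(G)          # clustering coefficient
E = nx.global_efficiency(G)           # global efficiency
print("clustering coefficient C = %.3f, global efficiency G = %.3f" % (C, E))
```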
Collapse
Affiliation(s)
- Fuwang Wang
- School of Mechanic Engineering, Northeast Electric Power University, Jilin 132012, China.
| | - Xiaolei Zhang
- School of Mechanic Engineering, Northeast Electric Power University, Jilin 132012, China.
| | - Rongrong Fu
- College of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China.
| | - Guangbin Sun
- Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China.
| |
Collapse
|
39
|
Liu YH, Lin LF, Chou CW, Chang Y, Hsiao YT, Hsu WC. Analysis of Electroencephalography Event-Related Desynchronisation and Synchronisation Induced by Lower-Limb Stepping Motor Imagery. J Med Biol Eng 2018. [DOI: 10.1007/s40846-018-0379-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
40
|
Liu D, Chen W, Pei Z, Wang J. A brain-controlled lower-limb exoskeleton for human gait training. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2017; 88:104302. [PMID: 29092520 DOI: 10.1063/1.5006461] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Brain-computer interfaces are a novel approach to translating human intentions into movement commands for robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with the human in the loop. Instead of using a single conventional electroencephalography correlate, e.g., the evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We conducted experiments with a biologically inspired and customized lower-limb exoskeleton in which subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results showed the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.
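As a rough illustration of the SMR side of such a framework, the sketch below extracts mu/beta band power per channel with Welch's method and trains a linear discriminant on synthetic rest-versus-intention trials. The MRCP branch, the exoskeleton interface, and the authors' actual features are not reproduced; all dimensions and the simulated desynchronization are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
fs = 250

def band_power(trial, fs, band=(8, 30)):
    """Mean mu/beta power per channel, the classic SMR feature."""
    f, pxx = welch(trial, fs=fs, axis=1, nperseg=fs)
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[:, sel].mean(axis=1)

# Hypothetical trials: 120 trials, 10 channels, 3 s; labels 0 = rest, 1 = gait intention.
trials = rng.standard_normal((120, 10, 3 * fs))
labels = rng.integers(0, 2, 120)
trials[labels == 1, :, :] *= 0.8        # crude stand-in for event-related desynchronization

X = np.array([band_power(tr, fs) for tr in trials])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```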
Collapse
Affiliation(s)
- Dong Liu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
| | - Weihai Chen
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
| | - Zhongcai Pei
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
| | - Jianhua Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
| |
Collapse
|
41
|
Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2017; 2017:2727856. [PMID: 28874909 PMCID: PMC5569879 DOI: 10.1155/2017/2727856] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 05/14/2017] [Accepted: 07/02/2017] [Indexed: 11/17/2022]
Abstract
Motor imagery (MI) electroencephalogram (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the MI states that can be classified are limited, and classification accuracy is low because the signals are nonlinear and nonstationary. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. For preprocessing, band-pass filtering is first performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used to remove electrooculogram (EOG) artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction that incorporates the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imagined movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy is 92.5%, and the average classification accuracy is 87%. The proposed complex-algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.
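A compact sketch of plain common spatial patterns (CSP) with log-variance features and an SVM on synthetic two-class motor imagery data. This is only the unregularized core of the pipeline described above; the paper's R-CSP regularization, CCA/WTD artifact removal, and combined KNN-SVM classifier are not included, and all dimensions are placeholders.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

rng = np.random.default_rng(8)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns from two classes of (trials, channels, time) data."""
    def mean_cov(T):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in T], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    w, V = eigh(Ca, Ca + Cb)                   # generalized eigen-decomposition
    V = V[:, np.argsort(w)]
    return np.hstack([V[:, :n_pairs], V[:, -n_pairs:]]).T

def log_var_features(trials, W):
    """Log-normalized variance of each spatially filtered signal."""
    Z = np.einsum("fc,nct->nft", W, trials)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Hypothetical two-class MI data: 60 trials per class, 22 channels, 2 s at 250 Hz.
left = rng.standard_normal((60, 22, 500))
right = rng.standard_normal((60, 22, 500))

W = csp_filters(left, right)
X = log_var_features(np.concatenate([left, right]), W)
y = np.array([0] * 60 + [1] * 60)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```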
Collapse
|
42
|
Virtual and Actual Humanoid Robot Control with Four-Class Motor-Imagery-Based Optical Brain-Computer Interface. BIOMED RESEARCH INTERNATIONAL 2017; 2017:1463512. [PMID: 28804712 PMCID: PMC5539938 DOI: 10.1155/2017/1463512] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2017] [Accepted: 06/06/2017] [Indexed: 12/11/2022]
Abstract
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
Collapse
|
43
|
Zhao J, Li W, Mao X, Hu H, Niu L, Chen G. Behavior-Based SSVEP Hierarchical Architecture for Telepresence Control of Humanoid Robot to Achieve Full-Body Movement. IEEE Trans Cogn Dev Syst 2017. [DOI: 10.1109/tcds.2016.2541162] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
44
|
Lebedev MA, Nicolelis MAL. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol Rev 2017; 97:767-837. [PMID: 28275048 DOI: 10.1152/physrev.00027.2016] [Citation(s) in RCA: 235] [Impact Index Per Article: 33.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Brain-machine interfaces (BMIs) combine methods, approaches, and concepts derived from neurophysiology, computer science, and engineering in an effort to establish real-time bidirectional links between living brains and artificial actuators. Although theoretical propositions and some proof-of-concept experiments on directly linking brains with machines date back to the early 1960s, BMI research only took off in earnest at the end of the 1990s, when this approach became intimately linked to new neurophysiological methods for sampling large-scale brain activity. The classic goals of BMIs are 1) to unveil and utilize principles of operation and plastic properties of the distributed and dynamic circuits of the brain and 2) to create new therapies to restore mobility and sensations to severely disabled patients. Over the past decade, a wide range of BMI applications have emerged, which considerably expanded these original goals. BMI studies have shown neural control over the movements of robotic and virtual actuators that enact both upper and lower limb functions. Furthermore, BMIs have also incorporated ways to deliver sensory feedback, generated from external actuators, back to the brain. BMI research has been at the forefront of many neurophysiological discoveries, including the demonstration that, through continuous use, artificial tools can be assimilated by the primate brain's body schema. Work on BMIs has also led to the introduction of novel neurorehabilitation strategies. As a result of these efforts, long-term continuous BMI use has recently been implicated in the induction of partial neurological recovery in spinal cord injury patients.
Collapse
|
45
|
Comparison of Brain Activation during Motor Imagery and Motor Movement Using fNIRS. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2017; 2017:5491296. [PMID: 28546809 PMCID: PMC5435907 DOI: 10.1155/2017/5491296] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2016] [Revised: 02/18/2017] [Accepted: 04/06/2017] [Indexed: 11/26/2022]
Abstract
Motor-activity-related mental tasks are widely adopted for brain-computer interfaces (BCIs) as they are a natural extension of movement intention, requiring no training to evoke brain activity. The ideal BCI aims to eliminate neuromuscular movement, making motor imagery tasks, or imagined actions with no muscle movement, good candidates. This study explores cortical activation differences between motor imagery and motor execution for both upper and lower limbs using functional near-infrared spectroscopy (fNIRS). Four simple finger- or toe-tapping tasks (left hand, right hand, left foot, and right foot) were performed with both motor imagery and motor execution and compared to resting state. Significant activation was found during all four motor imagery tasks, indicating that they can be detected via fNIRS. Motor execution produced higher activation levels, a faster response, and a different spatial distribution compared to motor imagery, which should be taken into account when designing an imagery-based BCI. When comparing left versus right, upper limb tasks are the most clearly distinguishable, particularly during motor execution. Left and right lower limb activation patterns were found to be highly similar during both imagery and execution, indicating that higher resolution imaging, advanced signal processing, or improved subject training may be required to reliably distinguish them.
Collapse
|
46
|
Bhattacharyya S, Konar A, Tibarewala DN, Hayashibe M. A Generic Transferable EEG Decoder for Online Detection of Error Potential in Target Selection. Front Neurosci 2017; 11:226. [PMID: 28512396 PMCID: PMC5411431 DOI: 10.3389/fnins.2017.00226] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2016] [Accepted: 04/04/2017] [Indexed: 11/13/2022] Open
Abstract
Reliable detection of errors from electroencephalography (EEG) signals, used as feedback while performing a discrete target selection task across sessions and subjects, has huge scope in real-time rehabilitative applications of brain-computer interfacing (BCI). Error-related potentials (ErrP) are EEG signals that occur when the participant observes erroneous feedback from the system. ErrP is significant in such closed-loop systems because BCI is prone to error, and an effective method of systematic error detection is needed as feedback for correction. In this paper, we propose a novel scheme for online detection of error feedback directly from the EEG signal in a transferable environment (i.e., across sessions and across subjects). For this purpose, we used a P300-speller dataset available on a BCI competition website. The task requires the subject to select a letter of a word, which is followed by a feedback period. The feedback period displays the selected letter and, if the selection is wrong, the subject perceives it and an ErrP signal is generated. Our proposed system is designed to detect ErrP present in EEG from new independent datasets not involved in its training. Thus, the decoder is trained using EEG features of 16 subjects for single-trial classification and tested on 10 independent subjects. The decoder designed for this task is an ensemble of linear discriminant analysis, quadratic discriminant analysis, and logistic regression classifiers. The performance of the decoder is evaluated using accuracy, F1-score, and area under the curve, and the results obtained are 73.97%, 83.53%, and 73.18%, respectively.
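The ensemble described above maps naturally onto a soft-voting combination of LDA, QDA, and logistic regression. The sketch below demonstrates this with scikit-learn on synthetic feedback-epoch features and reports the same three metrics; the feature construction and data are placeholders, not the competition dataset or the authors' exact decoder.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

rng = np.random.default_rng(9)

# Hypothetical feedback epochs: 600 trials x 40 features (downsampled ErrP window amplitudes).
X = rng.standard_normal((600, 40))
y = rng.integers(0, 2, 600)               # 1 = erroneous feedback, 0 = correct
X[y == 1, :10] += 0.6                     # crude error-related deflection on a subset of features

ensemble = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("qda", QuadraticDiscriminantAnalysis()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]
pred = ensemble.predict(X_te)
print("accuracy %.3f  F1 %.3f  AUC %.3f" % (accuracy_score(y_te, pred),
                                            f1_score(y_te, pred),
                                            roc_auc_score(y_te, proba)))
```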
Collapse
Affiliation(s)
| | - Amit Konar
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India
| | - D N Tibarewala
- School of Bioscience and Engineering, Jadavpur University, Kolkata, India
| | | |
Collapse
|
47
|
Mao X, Li M, Li W, Niu L, Xian B, Zeng M, Chen G. Progress in EEG-Based Brain Robot Interaction Systems. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2017; 2017:1742862. [PMID: 28484488 PMCID: PMC5397651 DOI: 10.1155/2017/1742862] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2016] [Accepted: 03/21/2017] [Indexed: 11/17/2022]
Abstract
The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since controlling robot systems via brainwaves must take into account real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, and robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges for future BRI techniques.
Collapse
Affiliation(s)
- Xiaoqian Mao
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
| | - Mengfan Li
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
| | - Wei Li
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
- Department of Computer & Electrical Engineering and Computer Science, California State University, Bakersfield, CA 93311, USA
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, Liaoning 110016, China
| | - Linwei Niu
- Department of Math and Computer Science, West Virginia State University, Institute, WV 25112, USA
| | - Bin Xian
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
| | - Ming Zeng
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
| | - Genshe Chen
- Intelligent Fusion Technology, Inc., Germantown, MD 20876, USA
| |
Collapse
|
48
|
Kaongoen N, Jo S. A novel hybrid auditory BCI paradigm combining ASSR and P300. J Neurosci Methods 2017; 279:44-51. [DOI: 10.1016/j.jneumeth.2017.01.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2016] [Accepted: 01/14/2017] [Indexed: 10/20/2022]
|
49
|
Brain-Computer Interface for Control of Wheelchair Using Fuzzy Neural Networks. BIOMED RESEARCH INTERNATIONAL 2016; 2016:9359868. [PMID: 27777953 PMCID: PMC5061989 DOI: 10.1155/2016/9359868] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2016] [Revised: 07/30/2016] [Accepted: 08/21/2016] [Indexed: 11/17/2022]
Abstract
The design of a brain-computer interface for a wheelchair for physically disabled people is presented. The proposed system is based on receiving, processing, and classifying electroencephalographic (EEG) signals and then performing control of the wheelchair. A number of experimental measurements of brain activity were made while users issued control commands for the wheelchair. Based on the mental activity of the user and the control commands of the wheelchair, a classification system based on fuzzy neural networks (FNN) is designed. The FNN-based algorithm is used for brain-actuated control. Training data are used to design the system, and test data are then applied to measure the performance of the control system. Control of the wheelchair is performed under real conditions using direction and speed commands. The approach used in the paper reduces the probability of misclassification and improves the control accuracy of the wheelchair.
Collapse
|
50
|
Three-Class EEG-Based Motor Imagery Classification Using Phase-Space Reconstruction Technique. Brain Sci 2016; 6:brainsci6030036. [PMID: 27563927 PMCID: PMC5039465 DOI: 10.3390/brainsci6030036] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2016] [Revised: 08/11/2016] [Accepted: 08/16/2016] [Indexed: 11/17/2022] Open
Abstract
Over the last few decades, brain signals have been extensively exploited for brain-computer interface (BCI) applications. In this paper, we study the extraction of features using event-related desynchronization/synchronization techniques to improve the classification accuracy of a three-class motor imagery (MI) BCI. The classification approach is based on combining phase and amplitude features of the brain signals, using the fast Fourier transform (FFT) and autoregressive (AR) modeling of the reconstructed phase space, as well as modification of the BCI parameters (trial length, trial frequency band, and classification method). Utilizing sequential forward floating selection (SFFS) and multi-class linear discriminant analysis (LDA), we obtained classification accuracies of 86.06% and 93% on two BCI competition datasets, superior to results reported in previous studies.
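A minimal sketch of the phase-space idea referenced above: each trial is embedded by time-delay reconstruction, a few AR-style coefficients are fitted on the embedding by least squares, an FFT band-power term is appended, and a multi-class LDA is trained. The embedding dimension, delay, band, and data are assumptions for illustration, not the parameters reported in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(10)
fs = 250

def embed(sig, dim=4, tau=5):
    """Time-delay phase-space reconstruction of a 1-D signal."""
    n = sig.size - (dim - 1) * tau
    return np.column_stack([sig[i * tau: i * tau + n] for i in range(dim)])

def trial_features(sig):
    """AR-style coefficients fitted on the embedded signal plus mu/beta FFT band power."""
    E = embed(sig)
    # Least-squares fit: predict the last embedding coordinate from the earlier ones.
    coef, *_ = np.linalg.lstsq(E[:, :-1], E[:, -1], rcond=None)
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    mu_beta = spec[(freqs >= 8) & (freqs <= 30)].mean()
    return np.concatenate([coef, [np.log(mu_beta)]])

# Hypothetical three-class MI data: 150 single-channel trials of 3 s each.
trials = rng.standard_normal((150, 3 * fs))
labels = rng.integers(0, 3, 150)

X = np.array([trial_features(tr) for tr in trials])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```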
Collapse
|