1
Huang C, Shi N, Miao Y, Chen X, Wang Y, Gao X. Visual tracking brain-computer interface. iScience 2024; 27:109376. PMID: 38510138; PMCID: PMC10951983; DOI: 10.1016/j.isci.2024.109376.
Abstract
Brain-computer interfaces (BCIs) offer a way to interact with computers without relying on physical movements. Non-invasive electroencephalography-based visual BCIs, known for efficient speed and ease of calibration, face limitations in continuous tasks due to discrete stimulus design and decoding methods. To achieve continuous control, we implemented a novel spatial encoding stimulus paradigm and devised a corresponding projection method to enable continuous modulation of the decoded velocity. Subsequently, we conducted experiments involving 17 participants and achieved a Fitts' information transfer rate (ITR) of 0.55 bps for the fixed tracking task and 0.37 bps for the random tracking task. The proposed BCI with a high Fitts' ITR was then integrated into two applications, painting and gaming. In conclusion, this study proposed a visual BCI-based control method that goes beyond discrete commands, allowing natural continuous control based on neural activity.
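A Fitts' ITR like the one reported here is conventionally computed as Fitts' index of difficulty divided by movement time. A minimal sketch, assuming the common Shannon formulation of the index of difficulty (the paper's exact variant may differ):

```python
import math

def fitts_itr(distance, width, movement_time):
    """Fitts' throughput in bits/s: index of difficulty over movement time.

    Assumes the Shannon formulation ID = log2(D/W + 1); other variants
    (e.g. Fitts' original log2(2D/W)) would change the numbers slightly.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time

# e.g. a cursor reach of 15 target-widths completed in 7.3 s gives
# log2(16) / 7.3, which is about 0.55 bits/s, the order of the
# reported fixed-task ITR.
```

The illustrative distance/width/time values are assumptions, not figures from the study.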
Affiliation(s)
- Changxing Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Nanlin Shi
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yining Miao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xiaogang Chen
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China
- Yijun Wang
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
2
Zolfaghari S, Yousefi Rezaii T, Meshgini S. Applying Common Spatial Pattern and Convolutional Neural Network to Classify Movements via EEG Signals. Clin EEG Neurosci 2024:15500594241234836. PMID: 38523306; DOI: 10.1177/15500594241234836.
Abstract
Developing an electroencephalography (EEG)-based brain-computer interface (BCI) system is crucial to enhancing the control of external prostheses by accurately distinguishing various movements through brain signals, and can improve the quality of life of people with movement disabilities. This study combined two of the most promising methods used in BCI systems, one-versus-rest common spatial pattern (OVR-CSP) and convolutional neural network (CNN), to automatically extract features and classify eight different movements of the shoulder, wrist, and elbow via EEG signals. Ten subjects participated in the experiment, and their EEG signals were recorded while they performed movements at fast and slow speeds. We applied preprocessing techniques before transforming the EEG signals into another space by OVR-CSP, then fed the signals into a CNN architecture consisting of four convolutional layers. Moreover, we extracted feature vectors after applying OVR-CSP and used them as inputs to KNN, SVM, and MLP classifiers, whose performance was compared with the CNN method. The results demonstrated that classification of eight movements using the proposed CNN architecture obtained an average accuracy of 97.65% for slow movements and 96.25% for fast movements in the subject-independent model. This method outperformed the other classifiers by a substantial margin; it can therefore be useful in improving BCI systems for better control of prostheses.
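The CSP step named here is typically a generalized eigendecomposition of class covariance matrices followed by log-variance features; in the one-versus-rest variant this is repeated for each class against the pooled rest. A minimal illustration of that standard technique, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_class, cov_rest, n_pairs=2):
    # Solve cov_class @ w = lam * (cov_class + cov_rest) @ w; eigenvectors
    # at both spectral extremes give components whose variance best
    # discriminates the class from the rest.
    vals, vecs = eigh(cov_class, cov_class + cov_rest)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks]  # shape: (channels, 2 * n_pairs)

def log_var_features(trial, filters):
    # trial: (channels, samples) -> normalized log-variance per CSP
    # component, the usual feature fed to a downstream classifier
    # (here a CNN, KNN, SVM, or MLP).
    z = filters.T @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

In an OVR scheme one would build one filter bank per movement class (class covariance vs. covariance pooled over the other classes) and concatenate the resulting feature vectors.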
Affiliation(s)
- Sepideh Zolfaghari
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Tohid Yousefi Rezaii
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Saeed Meshgini
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
3
Ganjali M, Mehridehnavi A, Rakhshani S, Khorasani A. Unsupervised Neural Manifold Alignment for Stable Decoding of Movement from Cortical Signals. Int J Neural Syst 2024; 34:2450006. PMID: 38063378; DOI: 10.1142/s0129065724500060.
Abstract
The stable decoding of movement parameters using neural activity is crucial for the success of brain-machine interfaces (BMIs). However, neural activity can be unstable over time, leading to changes in the parameters used for decoding movement, which can hinder accurate movement decoding. To tackle this issue, one approach is to transfer neural activity to a stable, low-dimensional manifold using dimensionality reduction techniques and align manifolds across sessions by maximizing correlations between them. However, the practical use of manifold stabilization techniques requires knowledge of the true subject intentions, such as target direction or behavioral state. To overcome this limitation, an automatic unsupervised algorithm is proposed that determines movement target intention before manifold alignment in the presence of manifold rotation and scaling across sessions. This unsupervised algorithm is combined with a dimensionality reduction and alignment method to overcome decoder instabilities. The effectiveness of the BMI stabilizer method is demonstrated by decoding the two-dimensional (2D) hand velocity of two rhesus macaque monkeys during a center-out reaching movement task. The performance of the proposed method is evaluated using correlation coefficient and R-squared measures, demonstrating higher decoding performance compared to a state-of-the-art unsupervised BMI stabilizer. The results offer benefits for the automatic determination of movement intents in long-term BMI decoding. Overall, the proposed method offers a promising automatic solution for achieving stable and accurate movement decoding in BMI applications.
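The underlying stabilization idea, projecting each session's activity onto a low-dimensional manifold and rotating the new session's manifold onto a reference one, can be sketched with PCA plus an orthogonal Procrustes fit. This is illustrative only: the paper's actual contribution is the unsupervised intent matching that removes the need for paired, labeled conditions, which is not shown here.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def pca_latents(activity, dim):
    # activity: (samples, neurons) -> (samples, dim) manifold coordinates.
    centered = activity - activity.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T

def align_to_reference(latent_ref, latent_new):
    # Find the rotation that best maps the new session's latents onto the
    # reference session's, so the original decoder can be reused. Assumes
    # corresponding rows describe comparable behavioral conditions (the
    # pairing the paper's unsupervised step is designed to recover).
    rotation, _ = orthogonal_procrustes(latent_new, latent_ref)
    return latent_new @ rotation
```

With exact pairing and a pure rotation between sessions, the Procrustes fit recovers the rotation exactly; real cross-session drift also involves scaling and noise, which is why the full method is more involved.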
Affiliation(s)
- Mohammadali Ganjali
- Department of Biomedical Engineering, Isfahan University of Medical Sciences, Isfahan, Iran
- Alireza Mehridehnavi
- Department of Biomedical Engineering, Isfahan University of Medical Sciences, Isfahan, Iran
- Sajed Rakhshani
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Abed Khorasani
- Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
4
Lin S, Jiang J, Huang K, Li L, He X, Du P, Wu Y, Liu J, Li X, Huang Z, Zhou Z, Yu Y, Gao J, Lei M, Wu H. Advanced Electrode Technologies for Noninvasive Brain-Computer Interfaces. ACS Nano 2023; 17:24487-24513. PMID: 38064282; DOI: 10.1021/acsnano.3c06781.
Abstract
Brain-computer interfaces (BCIs) have garnered significant attention in recent years due to their potential applications in medical, assistive, and communication technologies. Noninvasive BCIs stand out because they provide a safe and user-friendly method for interacting with the human brain. In this work, we provide a comprehensive overview of the latest developments in the materials, design, and application of noninvasive BCI electrode technology. We also explore the challenges and limitations currently faced by noninvasive BCI electrode technology and sketch out the technological roadmap along three dimensions: materials and design; performance; and mode and function. We aim to unite research efforts within the field of noninvasive BCI electrode technology, focusing on the consolidation of shared goals and fostering integrated development strategies among a diverse array of multidisciplinary researchers.
Affiliation(s)
- Sen Lin
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Jingjing Jiang
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Kai Huang
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Lei Li
- National Engineering Research Center of Electric Vehicles, Beijing Institute of Technology, Beijing 100081, China
- Xian He
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Peng Du
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Yufeng Wu
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Junchen Liu
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xilin Li
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Advanced Institute for Brain and Intelligence, Guangxi University, Nanning 530004, China
- Zhibao Huang
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Zenan Zhou
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Yuanhang Yu
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Jiaxin Gao
- School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Ming Lei
- State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Hui Wu
- State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
5
Kim M, Choi MS, Jang GR, Bae JH, Park HS. EEG-controlled tele-grasping for undefined objects. Front Neurorobot 2023; 17:1293878. PMID: 38186671; PMCID: PMC10770246; DOI: 10.3389/fnbot.2023.1293878.
Abstract
This paper presents a teleoperation system for robot grasping of undefined objects based on real-time EEG (electroencephalography) measurement and shared autonomy. When grasping an undefined object in an unstructured environment, real-time human decisions are necessary since fully autonomous grasping may not handle uncertain situations. The proposed system allows involvement of a wide range of human decisions throughout the entire grasping procedure, including 3D movement of the gripper, selecting a proper grasping posture, and adjusting the amount of grip force. These multiple decision-making procedures have been implemented with six flickering blocks for steady-state visually evoked potentials (SSVEP) by dividing the grasping task into predefined substeps: approaching the object, selecting posture and grip force, grasping, transporting to the desired position, and releasing. The graphical user interface (GUI) displays the current substep and simple symbols beside each flickering block for quick understanding. Tele-grasping of various objects using real-time human decisions, selecting among four possible postures and three levels of grip force, has been demonstrated. This system can be adapted to other sequential EEG-controlled teleoperation tasks that require complex human decisions.
Affiliation(s)
- Minki Kim
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Myoung-Su Choi
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Ga-Ram Jang
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Ji-Hun Bae
- Applied Robot R&D Department, Korea Institute of Industrial Technology, Ansan, Republic of Korea
- Hyung-Soon Park
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
6
Zhou Y, Yu T, Gao W, Huang W, Lu Z, Huang Q, Li Y. Shared Three-Dimensional Robotic Arm Control Based on Asynchronous BCI and Computer Vision. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3163-3175. PMID: 37498753; DOI: 10.1109/tnsre.2023.3299350.
Abstract
OBJECTIVE: A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using a noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and to accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. APPROACH: In this study, a shared robotic arm control system based on hybrid asynchronous BCI and computer vision was presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and reach targets with a wide movement range, while computer vision can identify objects and assist the robotic arm in completing more precise tasks, such as grasping a target automatically. RESULTS: Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and a high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer error commands and shorter completion times than with direct BCI control. SIGNIFICANCE: Our results demonstrated the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging hybrid asynchronous BCI and computer vision-based recognition.
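The SSVEP component of systems like this one is commonly decoded with canonical correlation analysis (CCA) against sine/cosine templates at each candidate flicker frequency. A minimal sketch of that standard technique; the paper's own decoder and its EOG blink detection are not reproduced here:

```python
import numpy as np

def cca_corr(x, y):
    # Largest canonical correlation between x (samples, channels) and
    # y (samples, 2 * harmonics), via QR decompositions of the centered
    # matrices followed by an SVD of their cross-product.
    qx, _ = np.linalg.qr(x - x.mean(axis=0))
    qy, _ = np.linalg.qr(y - y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_classify(eeg, freqs, fs, harmonics=2):
    # eeg: (samples, channels). Pick the stimulus frequency whose
    # sine/cosine reference set correlates best with the recording.
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * f * (h + 1) * t)
                               for h in range(harmonics)
                               for fn in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]
```

In a fifteen-command system each command maps to one flicker frequency, and an asynchronous threshold on the winning correlation decides whether any command is emitted at all.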
7
Ai J, Meng J, Mai X, Zhu X. BCI Control of a Robotic Arm Based on SSVEP With Moving Stimuli for Reach and Grasp Tasks. IEEE J Biomed Health Inform 2023; 27:3818-3829. PMID: 37200132; DOI: 10.1109/jbhi.2023.3277612.
Abstract
Brain-computer interface (BCI) provides a novel technology for patients and healthy human subjects to control a robotic arm. Currently, BCI control of a robotic arm to complete reaching and grasping tasks in an unstructured environment is still challenging because current BCI technology does not meet the requirement of manipulating a multi-degree robotic arm accurately and robustly. BCI based on steady-state visual evoked potential (SSVEP) can output a high information transfer rate; however, the conventional SSVEP paradigm fails to control a robotic arm to move continuously and accurately because users have to switch their gaze between the flickering stimuli and the target frequently. This study proposed a novel SSVEP paradigm in which the flickering stimuli were attached to the robotic arm's gripper and moved with it. First, an offline experiment was designed to investigate the effects of moving flickering stimuli on the SSVEP responses and decoding accuracy. After that, contrast experiments were conducted: twelve subjects were recruited to participate in a robotic arm control experiment using both paradigm one (P1, with moving flickering stimuli) and paradigm two (P2, conventional fixed flickering stimuli), with a block randomization design to balance their sequences. Double blinks were used to trigger the grasping action asynchronously whenever the subjects were confident that the position of the robotic arm's gripper was accurate enough. Experimental results showed that paradigm P1 with moving flickering stimuli provided much better control performance than the conventional paradigm P2 in completing a reaching and grasping task in an unstructured environment. Subjects' subjective feedback, scored on a NASA-TLX mental workload scale, also corroborated the BCI control performance. The results of this study suggest that the proposed control interface based on SSVEP BCI provides a better solution for robotic arm control to complete accurate reaching and grasping tasks.
8
Wireless EEG: A survey of systems and studies. Neuroimage 2023; 269:119774. PMID: 36566924; DOI: 10.1016/j.neuroimage.2022.119774.
Abstract
The popular brain monitoring method of electroencephalography (EEG) has seen a surge in commercial attention in recent years, focusing mostly on hardware miniaturization. This has led to a varied landscape of portable EEG devices with wireless capability, allowing them to be used by relatively unconstrained users in real-life conditions outside of the laboratory. The wide availability and relative affordability of these devices provide a low entry threshold for newcomers to the field of EEG research. The large device variety and the at times opaque communication from their manufacturers, however, can make it difficult to obtain an overview of this hardware landscape. Similarly, given the breadth of existing (wireless) EEG knowledge and research, it can be challenging to get started with novel ideas. Therefore, this paper first provides a list of 48 wireless EEG devices along with a number of important, sometimes difficult-to-obtain, features and characteristics to enable their side-by-side comparison, along with a brief introduction to each of these aspects and how they may influence one's decision. Secondly, we surveyed previous literature and focused on 110 high-impact journal publications making use of wireless EEG, which we categorized by application and analyzed for device used, number of channels, sample size, and participant mobility. Together, these provide a basis for informed decision making with respect to hardware and experimental precedents when considering new, wireless EEG devices and research. At the same time, this paper provides background material and commentary about pitfalls and caveats regarding this increasingly accessible line of research.
9
Borkin D, Nemethova A, Nemeth M, Tanuska P. Control of a Production Manipulator with the Use of BCI in Conjunction with an Industrial PLC. Sensors (Basel) 2023; 23:3546. PMID: 37050605; PMCID: PMC10098813; DOI: 10.3390/s23073546.
Abstract
Research on gathering and analyzing biological signals is growing. Sensors for examining such signals are becoming more available and less invasive, largely because they can now be built into wearable and portable devices; in the past, acquiring such data was inconvenient. EEG (electroencephalogram) representation and analysis is nowadays commonly used in various application areas. Applying EEG signals to automation, however, is still a largely unexplored area and therefore offers opportunities for interesting research. In our research, we focused on processing automation, especially the use of EEG signals to bridge communication between a human and the control of individual processes. In this study, real-time communication between a PLC (programmable logic controller) and a BCI (brain-computer interface) was investigated and described. In the future, this approach could help people with physical disabilities control certain machines or devices. The main contribution of the article is that we have demonstrated the possibility of interaction between a person and a PLC-controlled manipulator with the help of a BCI. Potentially, with expanded functionality, such solutions will allow a person with physical disabilities to participate in the production process.
10
Guo R, Lin Y, Luo X, Gao X, Zhang S. A robotic arm control system with simultaneous and sequential modes combining eye-tracking with steady-state visual evoked potential in virtual reality environment. Front Neurorobot 2023; 17:1146415. PMID: 37051328; PMCID: PMC10083338; DOI: 10.3389/fnbot.2023.1146415.
Abstract
At present, single-modal brain-computer interfaces (BCIs) still have limitations in practical application, such as low flexibility, poor autonomy, and subject fatigue. This study developed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) and eye-tracking in a virtual reality (VR) environment, with simultaneous and sequential modes. For the simultaneous mode, target classification was realized by decision-level fusion of electroencephalography (EEG) and eye-gaze. The stimulus duration for each subject was non-fixed and was determined by an adjustable window method. Subjects could autonomously start and stop the system using triple blink and eye closure, respectively. For the sequential mode, no calibration was conducted before operation. First, the subjects' gaze area was obtained through eye-gaze, and then only a few stimulus blocks began to flicker. Next, target classification was determined using EEG. Additionally, subjects could reject falsely triggered commands using eye closure. In this study, the system's effectiveness was verified through an offline experiment and an online robotic-arm grasping experiment. Twenty subjects participated in the offline experiment. For the simultaneous mode, average ACC and ITR at a stimulus duration of 0.9 s were 90.50% and 60.02 bits/min, respectively. For the sequential mode, average ACC and ITR at a stimulus duration of 1.4 s were 90.47% and 45.38 bits/min, respectively. Fifteen subjects successfully completed the online ball-grabbing tasks in both modes, and most subjects preferred the sequential mode. The proposed hybrid brain-computer interface (h-BCI) system could increase autonomy, reduce visual fatigue, meet individual needs, and improve the efficiency of the system.
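Decision-level fusion of the two modalities can be as simple as normalizing each modality's per-target scores and taking a weighted sum. This is an illustrative sketch only; the weight and the min-max normalization are assumptions, not the authors' published fusion rule:

```python
import numpy as np

def _minmax(scores):
    # Rescale a score vector to [0, 1]; degenerate (constant) vectors
    # map to zeros so they cannot dominate the fusion.
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def fuse_decisions(eeg_scores, gaze_scores, w_eeg=0.6):
    # eeg_scores / gaze_scores: one score per stimulus target, e.g. SSVEP
    # correlations and inverse gaze-to-target distances. Returns the
    # index of the fused winning target.
    combined = w_eeg * _minmax(eeg_scores) + (1.0 - w_eeg) * _minmax(gaze_scores)
    return int(np.argmax(combined))
```

When the modalities disagree, the weight decides which one wins; tuning it per subject is one plausible way to "meet individual needs" as the abstract puts it.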
Affiliation(s)
- Rongxiao Guo
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Yanfei Lin
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Xi Luo
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Shangen Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
11
Li B, Zhang S, Hu Y, Lin Y, Gao X. Assembling global and local spatial-temporal filters to extract discriminant information of EEG in RSVP task. J Neural Eng 2023; 20. PMID: 36745927; DOI: 10.1088/1741-2552/acb96f.
Abstract
Objective. Brain-computer interface (BCI) systems have developed rapidly in the past decade, and rapid serial visual presentation (RSVP) is an important BCI paradigm for detecting targets in high-speed image streams. For decoding electroencephalography (EEG) in the RSVP task, ensemble-model methods perform better than single-model ones. Approach. This study proposed a method based on ensemble learning to extract discriminant information from EEG. An extreme gradient boosting framework was utilized to sequentially generate the sub-models, including one global spatial-temporal filter and a group of local ones. EEG was reshaped into a three-dimensional form by remapping the electrode dimension into a 2D array to learn spatial-temporal features from real local space. Main results. A benchmark RSVP EEG dataset was utilized to evaluate the performance of the proposed method, with EEG data from 63 subjects analyzed. Compared with several state-of-the-art methods, the spatial-temporal patterns of the proposed method were more consistent with P300, and the proposed method provided significantly better classification performance. Significance. The ensemble model in this study was end-to-end optimized, which avoids error accumulation. The sub-models optimized by gradient boosting theory can extract discriminant information complementarily and non-redundantly.
Affiliation(s)
- Bowen Li
- School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Shangen Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Yijun Hu
- School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Yanfei Lin
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Xiaorong Gao
- School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
12
Siribunyaphat N, Punsawad Y. Brain-Computer Interface Based on Steady-State Visual Evoked Potential Using Quick-Response Code Pattern for Wheelchair Control. Sensors (Basel) 2023; 23:2069. PMID: 36850667; PMCID: PMC9964090; DOI: 10.3390/s23042069.
Abstract
Brain-computer interfaces (BCIs) are widely utilized in control applications for people with severe physical disabilities, and several researchers have aimed to develop practical brain-controlled wheelchairs. An existing electroencephalogram (EEG)-based BCI using steady-state visually evoked potentials (SSVEP) had been developed for device control; this study utilized a quick-response (QR) code visual stimulus pattern to make that system more robust. Four commands were generated using the proposed visual stimulation pattern with four flickering frequencies. Moreover, we employed a relative power spectral density (PSD) method for SSVEP feature extraction and compared it with an absolute PSD method. We designed experiments to verify the efficiency of the proposed system. The results revealed that the proposed SSVEP method and algorithm yielded an average classification accuracy of approximately 92% in real-time processing. For the wheelchair simulated via independent-based control, the proposed BCI control required approximately five-fold more time than keyboard control. The proposed SSVEP method using a QR code pattern can be used for BCI-based wheelchair control; however, it suffers from visual fatigue owing to long-duration continuous control. We will verify and enhance the proposed system for wheelchair control by people with severe physical disabilities.
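A relative PSD feature of the kind named here can be read as the power at the flicker frequency divided by the total power in an analysis band; an absolute PSD method would use the raw target-bin power instead. A minimal sketch under that reading (band limits, Welch settings, and the exact normalization are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.signal import welch

def relative_psd_feature(eeg, fs, stim_freq, band=(4.0, 40.0)):
    # Welch PSD of one EEG channel; the feature is power at the flicker
    # frequency normalized by the total power in the analysis band.
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    target = psd[np.argmin(np.abs(freqs - stim_freq))]
    return target / psd[in_band].sum()

def classify_flicker(eeg, fs, candidate_freqs):
    # Four commands map to four flicker frequencies: choose the one with
    # the largest relative power.
    feats = [relative_psd_feature(eeg, fs, f) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(feats))]
```

Normalizing by band power makes the feature less sensitive to broadband amplitude changes (electrode impedance, muscle artifacts), which is one plausible reason to prefer it over an absolute PSD.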
Affiliation(s)
- Yunyong Punsawad
- School of Informatics, Walailak University, Nakhon Si Thammarat 80160, Thailand
- Informatics Innovative Center of Excellence, Walailak University, Nakhon Si Thammarat 80160, Thailand
13
Hardware Design of FPGA-Based Embedded Heuristic Optimization Technique for Solving a Robotic Problem: IC-PSO. Arabian Journal for Science and Engineering 2023. DOI: 10.1007/s13369-023-07655-6.
14
Lyu J, Maýe A, Görner M, Ruppel P, Engel AK, Zhang J. Coordinating human-robot collaboration by EEG-based human intention prediction and vigilance control. Front Neurorobot 2022; 16:1068274. [DOI: 10.3389/fnbot.2022.1068274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 11/08/2022] [Indexed: 12/04/2022] Open
Abstract
In human-robot collaboration scenarios with shared workspaces, a highly desired performance boost is offset by strict requirements for human safety, limiting the speed and torque of the robot drives to levels which cannot harm the human body. Especially for complex tasks with flexible human behavior, it becomes vital to maintain safe working distances and coordinate tasks efficiently. An established approach in this regard is reactive servoing in response to the current human pose. However, such an approach does not exploit expectations of the human's behavior and can therefore fail to react to fast human motions in time. To adapt the robot's behavior as soon as possible, predicting human intention early is vital but hard to achieve. Here, we employ a recently developed type of brain-computer interface (BCI) which can detect the focus of the human's overt attention as a predictor for impending action. In contrast to other types of BCI, direct projection of stimuli onto the workspace facilitates seamless integration into workflows. Moreover, we demonstrate how the signal-to-noise ratio of the brain response can be used to adjust the velocity of the robot movements to the vigilance or alertness level of the human. Analyzing this adaptive system with respect to performance and safety margins in a physical robot experiment, we found the proposed method could improve both collaboration efficiency and safety distance.
15. Dong F, Wu L, Feng Y, Liang D. Research on Movement Intentions of Human's Left and Right Legs Based on EEG Signals. J Med Device 2022. [DOI: 10.1115/1.4055435]
Abstract
Active rehabilitation training can help stroke patients recover better and faster. However, lower limb rehabilitation robots based on electroencephalography (EEG) currently suffer from low recognition accuracy. A classification method based on EEG signals of motor imagery is proposed to enable patients to accurately control their left and right legs. First, to address the unstable characteristics of EEG signals, an experimental protocol for motor imagery was constructed based on multi-joint motion coupling of the left and right legs. Time-frequency analysis and ERD/ERS analysis of the signals confirmed the reliability and validity of the collected EEG. Then, the EEG signals generated under the protocol were preprocessed, and Common Spatial Pattern (CSP) was used to extract features. Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) classifiers were trained and their classification accuracies compared. Finally, the better-performing classifier was used in the active control strategy of the lower limb rehabilitation robot, and experiments verified that the average accuracy of two volunteers in controlling the robot reached 95.1%. This research provides a good theoretical basis for the realization and application of brain-computer interfaces in rehabilitation training.
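The pipeline this abstract describes, CSP feature extraction followed by an LDA (or SVM) classifier, is a standard motor-imagery decoding recipe. A minimal, self-contained sketch on synthetic two-class trials (illustrative only; the function names and data are invented, not the authors' code):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X1, X2, n_pairs=2):
    """CSP spatial filters from two classes of EEG trials,
    each of shape (trials, channels, samples)."""
    C1 = np.mean([np.cov(t) for t in X1], axis=0)
    C2 = np.mean([np.cov(t) for t in X2], axis=0)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum discriminate best
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T

def csp_features(W, X):
    """Normalized log-variance of spatially filtered trials."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic data: class 2 has extra variance on one channel
rng = np.random.default_rng(0)
X1 = rng.standard_normal((40, 8, 256))   # class "left"
X2 = rng.standard_normal((40, 8, 256))   # class "right"
X2[:, 0, :] *= 3.0                       # class difference on channel 0

W = csp_filters(X1[:30], X2[:30])
clf = LinearDiscriminantAnalysis().fit(
    csp_features(W, np.r_[X1[:30], X2[:30]]),
    np.r_[np.zeros(30), np.ones(30)])
acc = clf.score(csp_features(W, np.r_[X1[30:], X2[30:]]),
                np.r_[np.zeros(10), np.ones(10)])
```

On such clearly separated synthetic classes the held-out accuracy is near perfect; real motor-imagery EEG is far noisier, which is why the paper's 95.1% figure is notable.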
Affiliation(s)
- Fangyan Dong: Ningbo University, No. 818, Fenghua Road, Jiangbei District, Ningbo 315211, Zhejiang Province, China
- Liangda Wu: Ningbo University, No. 818, Fenghua Road, Jiangbei District, Ningbo 315211, Zhejiang Province, China
- Yongfei Feng: Ningbo University, No. 818, Fenghua Road, Jiangbei District, Ningbo 315211, Zhejiang Province, China
- Dongtai Liang: Ningbo University, No. 818, Fenghua Road, Jiangbei District, Ningbo 315211, Zhejiang Province, China
16. Lu Z, Zhang X, Li H, Zhang T, Gu L, Tao Q. An asynchronous artifact-enhanced electroencephalogram based control paradigm assisted by slight facial expression. Front Neurosci 2022; 16:892794. [PMID: 36051646 PMCID: PMC9424911 DOI: 10.3389/fnins.2022.892794]
Abstract
In this study, an asynchronous artifact-enhanced electroencephalogram (EEG)-based control paradigm assisted by slight facial expressions (sFE-paradigm) was developed. Brain connectivity analysis was conducted to reveal the dynamic directional interactions among brain regions under the sFE-paradigm, and component analysis was applied to estimate the dominant components of the sFE-EEG and guide the signal processing. By exploiting the artifacts within the detected EEG, the sFE-paradigm addresses the main shortcomings of mainstream approaches: insufficient real-time capability, asynchronous logic, and robustness. The core algorithm contains four steps: "obvious non-sFE-EEG exclusion," "interface 'ON' detection," "sFE-EEG real-time decoding," and "validity judgment." It provides asynchronous operation, decodes eight instructions from the latest 100 ms of signal, and greatly reduces frequent misoperation. In the offline assessment, the sFE-paradigm achieved 96.46% ± 1.07% accuracy for interface "ON" detection and 92.68% ± 1.21% for sFE-EEG real-time decoding, with a theoretical output timespan of less than 200 ms. The sFE-paradigm was applied to two online manipulations to evaluate stability and agility. In "object-moving with a robotic arm," the average intersection-over-union was 60.03% ± 11.53%. In "water-pouring with a prosthetic hand," the average water volume was 202.5 ± 7.0 ml. Online, the sFE-paradigm showed no significant difference (P = 0.6521 and P = 0.7931) from commercial control methods (i.e., FlexPendant and joystick), indicating a similar level of controllability and agility. This study demonstrates the capability of the sFE-paradigm, enabling a novel solution for non-invasive EEG-based control in real-world challenges.
Affiliation(s)
- Zhufeng Lu: School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China; Shaanxi Key Laboratory of Intelligent Robot, Xi’an Jiaotong University, Xi’an, China
- Xiaodong Zhang (corresponding author): School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China; Shaanxi Key Laboratory of Intelligent Robot, Xi’an Jiaotong University, Xi’an, China
- Hanzhe Li: School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China; Shaanxi Key Laboratory of Intelligent Robot, Xi’an Jiaotong University, Xi’an, China
- Teng Zhang: School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China; Shaanxi Key Laboratory of Intelligent Robot, Xi’an Jiaotong University, Xi’an, China
- Linxia Gu: Department of Biomedical and Chemical Engineering and Sciences, College of Engineering and Science, Florida Institute of Technology, Melbourne, FL, United States
- Qing Tao: School of Mechanical Engineering, Xinjiang University, Wulumuqi, China
17. Liu K, Yu Y, Liu Y, Tang J, Liang X, Chu X, Zhou Z. A novel brain-controlled wheelchair combined with computer vision and augmented reality. Biomed Eng Online 2022; 21:50. [PMID: 35883092 PMCID: PMC9327337 DOI: 10.1186/s12938-022-01020-8]
Abstract
BACKGROUND Brain-controlled wheelchairs (BCWs) are important applications of brain-computer interfaces (BCIs). Currently, most BCWs are semiautomatic. When users want to reach a target of interest in their immediate environment, this semiautomatic interaction strategy is slow. METHODS To this end, we combined computer vision (CV) and augmented reality (AR) with a BCW and proposed the CVAR-BCW: a BCW with a novel automatic interaction strategy. The proposed CVAR-BCW uses a translucent head-mounted display (HMD) as the user interface, uses CV to automatically detect environments, and shows the detected targets through AR technology. Once a user has chosen a target, the CVAR-BCW can automatically navigate to it. For a few scenarios, the semiautomatic strategy might be useful. We integrated a semiautomatic interaction framework into the CVAR-BCW. The user can switch between the automatic and semiautomatic strategies. RESULTS We recruited 20 non-disabled subjects for this study and used the accuracy, information transfer rate (ITR), and average time required for the CVAR-BCW to reach each designated target as performance metrics. The experimental results showed that our CVAR-BCW performed well in indoor environments: the average accuracies across all subjects were 83.6% (automatic) and 84.1% (semiautomatic), the average ITRs were 8.2 bits/min (automatic) and 8.3 bits/min (semiautomatic), the average times required to reach a target were 42.4 s (automatic) and 93.4 s (semiautomatic), and the average workloads and degrees of fatigue for the two strategies were both approximately 20. CONCLUSIONS Our CVAR-BCW provides a user-centric interaction approach and a good framework for integrating more advanced artificial intelligence technologies, which may be useful in the field of disability assistance.
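The ITRs quoted in these entries follow the standard Wolpaw formula: bits per selection = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)), scaled by the selection rate. A small helper (hypothetical, not from the paper) makes the computation explicit:

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate in bits/min for an
    N-target BCI with accuracy P and one selection every
    `selection_time_s` seconds."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# A perfect 2-target selection once per minute carries exactly 1 bit/min
print(wolpaw_itr(2, 1.0, 60.0))  # 1.0
```

For example, a 4-target BCI at 85% accuracy with one selection every 10 s yields roughly 6.9 bits/min, which helps put the ~8 bits/min figures above in context.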
Affiliation(s)
- Kaixuan Liu, Yang Yu, Yadong Liu, Jingsheng Tang, Xinbin Liang, Xingxing Chu, Zongtan Zhou: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, Hunan, China
18. Zhang S, Gao X, Chen X. Humanoid Robot Walking in Maze Controlled by SSVEP-BCI Based on Augmented Reality Stimulus. Front Hum Neurosci 2022; 16:908050. [PMID: 35911600 PMCID: PMC9330178 DOI: 10.3389/fnhum.2022.908050]
Abstract
The application study of robot control based on brain-computer interfaces (BCIs) helps to promote both the practicality of BCIs and the advancement of robot technology, which is of great significance. Among the many obstacles, the poor portability of the stimulator brings much inconvenience to robot control tasks. In this study, augmented reality (AR) technology was employed as the visual stimulator of a steady-state visual evoked potential (SSVEP)-based BCI, and a robot maze-walking experiment was designed to test the applicability of the AR-BCI system. In the online experiment, the robot walking commands were sent by the BCI system, in which human intentions were decoded by the Filter Bank Canonical Correlation Analysis (FBCCA) algorithm. The results showed that all 12 subjects could complete the maze-walking task, which verified the feasibility of the AR-SSVEP-NAO system. This study provides an application demonstration of robot control based on a brain-computer interface and a new method for future portable BCI systems.
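FBCCA, the decoding algorithm named above, extends plain CCA frequency recognition with a filter bank of weighted sub-band correlations. A minimal NumPy sketch of the underlying CCA step on synthetic data (illustrative only; the filter-bank weighting is omitted, and this is not the study's implementation):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between data X (samples, channels)
    and reference Y (samples, components), via QR + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=3):
    """Sine/cosine reference signals at a stimulus frequency."""
    t = np.arange(n_samples) / fs
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * freq * t),
                  np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(comps)

def cca_detect(eeg, candidate_freqs, fs):
    """Pick the candidate frequency whose reference correlates best."""
    n = eeg.shape[0]
    corrs = [cca_max_corr(eeg, ssvep_reference(f, fs, n))
             for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(corrs))]

# Synthetic 10 Hz SSVEP on 4 channels plus noise
fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(1)
eeg = (np.outer(np.sin(2 * np.pi * 10 * t), np.ones(4))
       + 0.5 * rng.standard_normal((n, 4)))
detected = cca_detect(eeg, [8.0, 10.0, 12.0, 15.0], fs)
print(detected)  # the 10 Hz target should win
```

FBCCA would additionally band-pass the EEG into several sub-bands, compute this correlation per sub-band, and combine them with decaying weights before taking the argmax.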
Affiliation(s)
- Shangen Zhang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiaogang Chen (corresponding author): Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
19. Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors 2022; 22:s22135000. [PMID: 35808498 PMCID: PMC9269816 DOI: 10.3390/s22135000]
Abstract
Robotics has been successfully applied in the design of collaborative robots that assist people with motor disabilities. However, man-machine interaction is difficult for those who suffer severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the steady-state visually evoked potential (SSVEP) paradigm. A cross-platform application was implemented in C++ and used together with the open-source software OpenViBE to control a Stäubli TX60 robot arm. Communication between OpenViBE and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics and processed with Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier. Five healthy subjects tried the BCI. This work demonstrated the communication and integration of a well-known BCI development platform, OpenViBE, with the control software of a specific robot arm, the Stäubli TX60, using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI using a reduced number of dry electrodes, which facilitates the use of the system.
20. Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept. Signals 2022. [DOI: 10.3390/signals3020024]
Abstract
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces that rely on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate the potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between both days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed.
21. Riemannian geometry-based transfer learning for reducing training time in c-VEP BCIs. Sci Rep 2022; 12:9818. [PMID: 35701505 PMCID: PMC9197830 DOI: 10.1038/s41598-022-14026-y]
Abstract
One of the main problems that a brain-computer interface (BCI) faces is that a training stage is required to acquire calibration data for its classification model just before every use. Transfer learning is a promising method for addressing this problem. In this paper, we propose a Riemannian geometry-based transfer learning algorithm for code-modulated visual evoked potential (c-VEP)-based BCIs, which can effectively reduce calibration time without sacrificing classification accuracy. The algorithm comprises log-Euclidean data alignment (LEDA), super-trial construction, covariance matrix estimation, training accuracy-based subject selection (TSS), and minimum-distance-to-mean classification. Among these, LEDA reduces the difference in data distribution between subjects, whereas TSS promotes the similarity between a target subject and the source subjects; the resulting transfer learning performance is improved significantly. Sixteen subjects participated in a c-VEP BCI experiment, and the recorded data were used in offline analysis. Leave-one-subject-out (LOSO) cross-validation was used to evaluate the proposed algorithm on the dataset. The results showed that the algorithm achieved much higher classification accuracy than the subject-specific (baseline) algorithm with the same number of training trials. Equivalently, the algorithm reduces the training time of the BCI at the same performance level and thus facilitates its application in the real world.
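Two ingredients named in this abstract, log-Euclidean data alignment (LEDA) and minimum-distance-to-mean (MDM) classification, can be sketched with plain NumPy. This is an illustrative simplification that uses the log-Euclidean metric throughout; the paper's full pipeline (super-trial construction, subject selection) is omitted, and the data here are synthetic SPD matrices standing in for trial covariances:

```python
import numpy as np

def spd_logm(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def spd_expm(L):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(covs):
    """Log-Euclidean mean: expm of the average of matrix logs."""
    return spd_expm(np.mean([spd_logm(C) for C in covs], axis=0))

def leda_align(covs):
    """Recentre a subject's covariance set in log-space so its
    log-Euclidean mean becomes the identity (reduces the
    inter-subject distribution shift)."""
    mean_log = np.mean([spd_logm(C) for C in covs], axis=0)
    return [spd_expm(spd_logm(C) - mean_log) for C in covs]

def mdm_predict(C, class_means):
    """Minimum distance to mean under the log-Euclidean metric."""
    logC = spd_logm(C)
    d = [np.linalg.norm(logC - spd_logm(M)) for M in class_means]
    return int(np.argmin(d))

# Synthetic SPD matrices standing in for trial covariances
rng = np.random.default_rng(0)
def rand_cov(scale, ch=4):
    A = rng.standard_normal((ch, ch))
    return scale * (A @ A.T / ch + np.eye(ch))

covs = [rand_cov(1.0) for _ in range(10)]
aligned = leda_align(covs)
# After alignment, the subject's log-Euclidean mean is the identity
M = log_euclidean_mean(aligned)
```

Aligning each subject's covariances to a common reference (here, the identity) is what lets trials from different source subjects be pooled to train the target subject's MDM classifier with little or no calibration data.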
22. Gou H, Piao Y, Ren J, Zhao Q, Chen Y, Liu C, Hong W, Zhang X. A solution to supervised motor imagery task in the BCI Controlled Robot Contest in World Robot Contest. Brain Science Advances 2022. [DOI: 10.26599/bsa.2022.9050014]
Abstract
Background: The World Robot Contest is one of the most prestigious competitions in the world. This paper presents the winning solution to the supervised motor imagery (MI) task in the BCI Controlled Robot Contest at World Robot Contest 2021. Methods: Data augmentation, preprocessing, feature extraction, and model training are the main components of the solution. The model is based on EEGNet, a popular convolutional neural network for classifying electroencephalography data. Results: Despite the model's lack of stability, this solution was the most successful in the task. The channels closest to the vertex were the most helpful for feature extraction. Conclusion: This solution is suitable for supervised MI tasks, not only in this competition but also in future scenarios.
Affiliation(s)
- Huixing Gou (contributed equally): Key Laboratory of Brain Function and Disease, Chinese Academy of Sciences, School of Life Science, Division of Life Science and Medicine, University of Science & Technology of China, Hefei 230027, Anhui, China
- Yi Piao (contributed equally): Institute of Advanced Technology, University of Science and Technology of China, Hefei 30001, Anhui, China
- Jiecheng Ren, Qian Zhao, Yijun Chen, Chang Liu, Wei Hong: Key Laboratory of Brain Function and Disease, Chinese Academy of Sciences, School of Life Science, Division of Life Science and Medicine, University of Science & Technology of China, Hefei 230027, Anhui, China
- Xiaochu Zhang: Key Laboratory of Brain Function and Disease, Chinese Academy of Sciences, School of Life Science, Division of Life Science and Medicine, University of Science & Technology of China, Hefei 230027, Anhui, China; Institute of Advanced Technology, University of Science and Technology of China, Hefei 30001, Anhui, China
23. Liu B, Wang Y, Gao X, Chen X. eldBETA: A Large Eldercare-oriented Benchmark Database of SSVEP-BCI for the Aging Population. Sci Data 2022; 9:252. [PMID: 35641547 PMCID: PMC9156785 DOI: 10.1038/s41597-022-01372-9]
Abstract
Global population aging poses an unprecedented challenge and calls for rising effort in eldercare and healthcare. The steady-state visual evoked potential based brain-computer interface (SSVEP-BCI) boasts a high transfer rate and shows great promise in real-world applications to support aging. Public databases are critically important for designing SSVEP-BCI systems. However, SSVEP-BCI databases tailored to the elderly are scarce in existing studies. Therefore, in this study, we present a large eldercare-oriented BEnchmark database of SSVEP-BCI for The Aging population (eldBETA). The eldBETA database consists of 64-channel electroencephalogram (EEG) recordings from 100 elder participants, each of whom performed seven blocks of a 9-target SSVEP-BCI task. The quality and characteristics of the eldBETA database were validated by a series of analyses, followed by a classification analysis of thirteen frequency recognition methods. We expect that the eldBETA database will provide a substrate for the design and optimization of BCI systems intended for the elderly. The eldBETA database is open-access for research and can be downloaded via DOI 10.6084/m9.figshare.18032669.
Measurement(s): Steady-state visual evoked potential (SSVEP)
Technology Type(s): Electroencephalography (EEG)
Factor Type(s): Elder population
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: Electromagnetic shielding room
Affiliation(s)
- Bingchuan Liu: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xiaogang Chen: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China
24. Wang K, Zhai DH, Xiong Y, Hu L, Xia Y. An MVMD-CCA Recognition Algorithm in SSVEP-Based BCI and Its Application in Robot Control. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:2159-2167. [PMID: 34951857 DOI: 10.1109/tnnls.2021.3135696]
Abstract
This article proposes a novel recognition algorithm for the steady-state visual evoked potentials (SSVEP)-based brain-computer interface (BCI) system. By combining the advantages of multivariate variational mode decomposition (MVMD) and canonical correlation analysis (CCA), an MVMD-CCA algorithm is investigated to improve the detection ability of SSVEP electroencephalogram (EEG) signals. In comparison with the classical filter bank canonical correlation analysis (FBCCA), the nonlinear and non-stationary EEG signals are decomposed into a fixed number of sub-bands by MVMD, which can enhance the effect of SSVEP-related sub-bands. The experimental results show that MVMD-CCA can effectively reduce the influence of noise and EEG artifacts and improve the performance of SSVEP-based BCI. The offline experiments show that the average accuracies of MVMD-CCA in the training dataset and testing dataset are improved by 3.08% and 1.67%, respectively. In the SSVEP-based online robotic manipulator grasping experiment, the recognition accuracies of the four subjects are 92.5%, 93.33%, 90.83%, and 91.67%, respectively.
25. Pan K, Li L, Zhang L, Li S, Yang Z, Guo Y. A Noninvasive BCI System for 2D Cursor Control Using a Spectral-Temporal Long Short-Term Memory Network. Front Comput Neurosci 2022; 16:799019. [PMID: 35399917 PMCID: PMC8984968 DOI: 10.3389/fncom.2022.799019]
Abstract
Two-dimensional cursor control is an important and challenging problem in the field of electroencephalography (EEG)-based brain-computer interface (BCI) applications. However, most BCIs based on categorical outputs are incapable of generating accurate and smooth control trajectories. In this article, a novel EEG decoding framework based on a spectral-temporal long short-term memory (stLSTM) network is proposed to generate control signals in the horizontal and vertical directions for accurate cursor control. Precisely, the spectral information is used to decode the subject's motor imagery intention, and the error-related P300 information is used to detect a deviation in the movement trajectory. The concatenated spectral and temporal features are fed into the stLSTM network and mapped to the velocities in the vertical and horizontal directions of the 2D cursor under the velocity-constrained (VC) strategy, which enables the decoding network to fit the velocity in the imagined direction while simultaneously suppressing the velocity in the non-imagined direction. This proposed framework was validated on a public real BCI control dataset. Results show that compared with the state-of-the-art method, the RMSE of the proposed method in the non-imagined directions on the testing sets of 2D control tasks is reduced by an average of 63.45%. Besides, visualization of the actual distribution of cursor trajectories also demonstrates that decoupling the velocity yields accurate cursor control in complex path-tracking tasks and significantly improves control accuracy.
26. A Human-Machine Interface Based on an EOG and a Gyroscope for Humanoid Robot Control and Its Application to Home Services. Journal of Healthcare Engineering 2022; 2022:1650387. [PMID: 35345662 PMCID: PMC8957419 DOI: 10.1155/2022/1650387]
Abstract
Human-machine interfaces (HMIs) have been studied for robot teleoperation with the aim of empowering people with motor disabilities to increase their interaction with the physical environment. The challenge for an HMI for robot control is to produce control commands rapidly, accurately, and in sufficient number. In this paper, an asynchronous HMI based on an electrooculogram (EOG) and a gyroscope is proposed using two self-paced and endogenous features: double blinks and head rotation. Through a multilevel graphical user interface (GUI), the user rotates the head to move the cursor of the GUI and performs a double blink to trigger a button in the interface. The proposed HMI supplies sufficient commands with high accuracy (ACC) and low response time (RT). In a trigger task with sixteen healthy subjects, the target was selected from 20 options with an ACC of 99.2% and an RT of 2.34 s. Furthermore, a continuous strategy that uses motion-start and motion-stop commands to produce a given robot motion is proposed for controlling a humanoid robot with the HMI. This avoids having to combine several commands to achieve one motion or to map each motion directly to a single command. In a home-service experiment, all subjects operated a humanoid robot to change the state of a switch, grasp a key, and put it into a box. The time ratio between HMI control and manual control was 1.22, and the command-count ratio was 1.18. The results demonstrate that the continuous strategy and the proposed HMI can improve performance in humanoid robot control.
27. Peng F, Li M, Zhao SN, Xu Q, Xu J, Wu H. Control of a Robotic Arm With an Optimized Common Template-Based CCA Method for SSVEP-Based BCI. Front Neurorobot 2022; 16:855825. [PMID: 35370596 PMCID: PMC8965569 DOI: 10.3389/fnbot.2022.855825]
Abstract
Recently, robotic arm control systems based on brain-computer interfaces (BCIs) have been employed to help people with disabilities improve their interaction abilities without body movement. However, implementing a desired task with a robotic arm in three-dimensional (3D) space remains a major challenge because of the instability of electroencephalogram (EEG) signals and interference from spontaneous EEG activity. Moreover, free motion control of a manipulator in 3D space is a complicated operation that requires more output commands and higher accuracy in brain activity recognition. On this basis, a steady-state visual evoked potential (SSVEP)-based synchronous BCI system with six stimulus targets was designed to realize motion control of a seven degrees of freedom (7-DOF) robotic arm. Meanwhile, a novel template-based method, which builds optimized common templates (OCTs) from various subjects and learns spatial filters from the common templates and the multichannel EEG signal, was applied to enhance SSVEP recognition accuracy, called OCT-based canonical correlation analysis (OCT-CCA). Offline comparisons based on a public benchmark dataset indicated that the proposed OCT-CCA method achieved significantly higher detection accuracy than CCA and individual template-based CCA (IT-CCA), especially with short data lengths. Finally, online experiments with five healthy subjects were conducted to realize a real-time manipulator control system. The results showed that all five subjects could independently accomplish the task of controlling the manipulator to reach a designated position in 3D space.
Affiliation(s)
- Fang Peng: Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan, China
- Ming Li: School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Su-na Zhao (corresponding author): College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou, China
- Qinyi Xu: School of Automation, Guangdong University of Technology, Guangzhou, China
- Jiajun Xu: Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan, China
- Haozhen Wu: Zhongshan Institute, University of Electronic Science and Technology of China, Zhongshan, China
28. Continuous Hybrid BCI Control for Robotic Arm Using Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking. Mathematics 2022. [DOI: 10.3390/math10040618]
Abstract
Controlling robotic arms through a brain-computer interface (BCI) can revolutionize the quality of life and living conditions of individuals with physical disabilities. Invasive BCIs have been able to control multiple degrees of freedom (DOFs) of robotic arms in three dimensions. However, it is still hard for a noninvasive system to control a multi-DOF robotic arm to reach and grasp a desired target accurately in complex three-dimensional (3D) space, mainly due to the limited decoding performance of noninvasive electroencephalography (EEG). In this study, we propose a noninvasive EEG-based BCI robotic arm control system that enables users to complete multitarget reach-and-grasp tasks and avoid obstacles through hybrid control. The results obtained from seven subjects demonstrated that motor imagery (MI) training could modulate brain rhythms, and six of them completed the online tasks using the hybrid-control-based robotic arm system. The proposed system performs effectively thanks to the combination of MI-based EEG, computer vision, gaze detection, and partially autonomous guidance, which drastically improves the accuracy of online tasks and reduces the mental burden caused by long-term mental activity.
29
Cherloo MN, Amiri HK, Daliri MR. Spatio-Spectral CCA (SS-CCA): A Novel Approach for Frequency Recognition in SSVEP-Based BCI. J Neurosci Methods 2022; 371:109499. [DOI: 10.1016/j.jneumeth.2022.109499]
30
Customizing skills for assistive robotic manipulators, an inverse reinforcement learning approach with error-related potentials. Commun Biol 2021; 4:1406. [PMID: 34916587 PMCID: PMC8677775 DOI: 10.1038/s42003-021-02891-8]
Abstract
Robotic assistance via motorized robotic arm manipulators can be of great value to individuals with upper-limb motor disabilities. Brain-computer interfaces (BCIs) offer an intuitive means to control such assistive robotic manipulators. However, BCI performance may vary due to the non-stationary nature of electroencephalogram (EEG) signals; it therefore cannot be used safely for control tasks where errors may be detrimental to the user. Avoiding obstacles is one such task. Since many techniques exist in robotics to avoid obstacles, we propose to give the robot control over obstacle avoidance while leaving the choice of avoidance behavior to the user, as a matter of personal preference: some users may be more daring, while others are more careful. We enable users to train the robot controller to adapt its way of approaching obstacles, relying on a BCI that detects error-related potentials (ErrP), which indicate when the robot's current strategy fails to meet the user's expectations and preferences. Gaussian-process-based inverse reinforcement learning, in combination with the ErrP-BCI, infers the user's preference and updates the obstacle-avoidance controller so as to generate personalized robot trajectories. We validate the approach in experiments with thirteen able-bodied subjects using a robotic arm that picks up, places, and avoids real-life objects. Results show that the algorithm can learn the user's preference and adapt the robot's behavior rapidly using fewer than five demonstrations, which need not be optimal.

Teaching an assistive robotic manipulator to move objects on a cluttered table requires demonstrations from expert operators, but what if the experts are individuals with motor disabilities? Batzianoulis et al. propose a learning approach that combines robot autonomy with a brain-computer interface that decodes whether the generated trajectories meet the user's criteria, and show how their system enables the robot to learn an individual user's preferred behaviors using fewer than five demonstrations that are not necessarily optimal.
31
Easttom C, Bianchi L, Valeriani D, Nam CS, Hossaini A, Zapała D, Roman-Gonzalez A, Singh AK, Antonietti A, Sahonero-Alvarez G, Balachandran P. A functional BCI model by the P2731 working group: control interface. Brain-Computer Interfaces 2021. [DOI: 10.1080/2326263x.2021.2002004]
Affiliation(s)
- Chang S. Nam
- North Carolina State University, Raleigh, NC, USA
- Dariusz Zapała
- The John Paul II Catholic University of Lublin, Lublin, Poland
- Avinash K Singh
- Australian Artificial Intelligence Institute, University of Technology Sydney, Australia
32
Gutierrez-Martinez J, Mercado-Gutierrez JA, Carvajal-Gámez BE, Rosas-Trigueros JL, Contreras-Martinez AE. Artificial Intelligence Algorithms in Visual Evoked Potential-Based Brain-Computer Interfaces for Motor Rehabilitation Applications: Systematic Review and Future Directions. Front Hum Neurosci 2021; 15:772837. [PMID: 34899220 PMCID: PMC8656949 DOI: 10.3389/fnhum.2021.772837]
Abstract
Brain-Computer Interface (BCI) is a technology that uses electroencephalographic (EEG) signals to control external devices, such as Functional Electrical Stimulation (FES). Visual BCI paradigms based on P300 and Steady-State Visually Evoked Potentials (SSVEP) have shown high potential for clinical purposes. Numerous studies have been published on P300- and SSVEP-based non-invasive BCIs, but many of them present two shortcomings: (1) they are not aimed at motor rehabilitation applications, and (2) they do not report in detail the artificial intelligence (AI) methods used for classification, or their performance metrics. To address this gap, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology was applied in this paper to prepare a systematic literature review (SLR). Papers older than 10 years, duplicated, or not related to a motor rehabilitation application were excluded. Of all the studies, 51.02% referred to theoretical analysis of classification algorithms. Of the remainder, 28.48% were for spelling, 12.73% for diverse applications (control of wheelchairs or home appliances), and only 7.77% were focused on motor rehabilitation. After the inclusion and exclusion criteria were applied and quality screening was performed, 34 articles were selected. Of them, 26.47% used the P300 and 55.8% the SSVEP signal. Five application categories were established: Rehabilitation Systems (17.64%), Virtual Reality environments (23.52%), FES (17.64%), Orthosis (29.41%), and Prosthesis (11.76%). Of all the works, only four performed tests with patients. The most reported machine learning (ML) algorithms used for classification were linear discriminant analysis (LDA) (48.64%) and the support vector machine (16.21%), while only one study used a deep learning algorithm: a Convolutional Neural Network (CNN). The reported accuracy ranged from 38.02% to 100%, and the Information Transfer Rate from 1.55 to 49.25 bits per minute.
While LDA is still the most used AI algorithm, CNNs have shown promising results; however, their high technical implementation requirements lead many researchers to consider them not worthwhile to implement. To achieve fast and accurate online BCIs for motor rehabilitation applications, future work on SSVEP-, P300-based and hybrid BCIs should focus on optimizing the visual stimulation module and the training stage of ML and DL algorithms.
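Several entries in this list quote information transfer rates (ITR) in bits per minute. These figures are conventionally computed with the Wolpaw formula from the number of targets, the classification accuracy, and the selection time; the sketch below illustrates that formula (the function name and example values are illustrative, not taken from any of the reviewed papers):

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw information transfer rate in bits per minute.

    n_targets: number of possible selections N
    accuracy: probability P of a correct selection, in (0, 1]
    selection_time_s: average time per selection, in seconds
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if p < 1.0:
        # penalty for errors, assuming errors spread evenly over the N-1 wrong targets
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / selection_time_s)

# e.g. a hypothetical 12-target speller at 94.97% accuracy and 2 s per selection
print(itr_bits_per_min(12, 0.9497, 2.0))
```

A two-target system at perfect accuracy and one selection per minute yields exactly 1 bit/min, which is a handy sanity check.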
Affiliation(s)
- Josefina Gutierrez-Martinez
- División de Investigación en Ingeniería Médica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
- Jorge A. Mercado-Gutierrez
- División de Investigación en Ingeniería Médica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
33
A CNN-based multi-target fast classification method for AR-SSVEP. Comput Biol Med 2021; 141:105042. [PMID: 34802710 DOI: 10.1016/j.compbiomed.2021.105042]
Abstract
Because an augmented-reality-based brain-computer interface (AR-BCI) is easily disturbed by external factors, traditional electroencephalogram (EEG) classification algorithms fail to meet real-time processing requirements when there is a large number of stimulus targets or in a real environment. We propose a multi-target fast classification method for the augmented-reality-based steady-state visual evoked potential (AR-SSVEP), using a convolutional neural network (CNN). To explore the availability and accuracy of high-efficiency multi-target classification methods in AR-SSVEP with a short stimulation duration, a similar stimulus layout was used for a computer screen (PC) and an optical see-through head-mounted display (OST-HMD) device (HoloLens). The experiment included nine flicker stimuli of different frequencies, and a CNN-based multi-target fast classification method was constructed to complete the nine-class task; the average AR-BCI accuracy of our CNN model at 0.5 s and 1 s stimulus durations was 67.93% and 80.83%, respectively. These results verify the efficacy of the proposed model for multi-target classification in AR-BCI.
34
Chen L, Chen P, Zhao S, Luo Z, Chen W, Pei Y, Zhao H, Jiang J, Xu M, Yan Y, Yin E. Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain-computer interface. J Neural Eng 2021; 18. [PMID: 34654000 DOI: 10.1088/1741-2552/ac3044]
Abstract
Objective. Brain-controlled robotic arms have shown broad application prospects with the development of robotics science and information decoding. However, disadvantages such as poor flexibility restrict their wide application. Approach. To alleviate these drawbacks, this study proposed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) in an augmented reality (AR) environment. In the AR environment, participants could see the robotic arm and the visual stimulation interface concurrently through the AR device, so there was no need to switch attention frequently between the visual stimulation interface and the robotic arm. This study proposed a multi-template algorithm based on canonical correlation analysis and task-related component analysis to identify 12 targets. An optimization strategy based on a dynamic window was adopted to adjust the duration of visual stimulation adaptively. Main results. The experiments found that the high-frequency SSVEP-based brain-computer interface (BCI) realized the switching of the system state, which controlled the robotic arm asynchronously. The average accuracy of the offline experiment was 94.97%, and the average information transfer rate was 67.37 ± 14.27 bits·min-1. Online results from ten healthy subjects showed that the average selection time for a single online command was 2.04 s, which effectively reduced the subjects' visual fatigue. Each subject could quickly complete a puzzle task. Significance. The experimental results demonstrate the feasibility and potential of this human-computer interaction strategy and provide new ideas for BCI-controlled robots.
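The frequency-recognition core of SSVEP decoders like the one above is canonical correlation analysis (CCA) between the EEG and sine/cosine reference templates; the paper's multi-template and task-related component analysis refinements are not reproduced here. A minimal numpy sketch of the plain CCA baseline, with all names illustrative:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # singular values of Qx^T Qy are the canonical correlations
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference templates for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def cca_frequency_recognition(eeg, fs, candidate_freqs):
    """Pick the stimulus frequency whose references correlate best with the EEG.

    eeg: (n_samples, n_channels) array; fs: sampling rate in Hz.
    """
    n = eeg.shape[0]
    scores = [max_canonical_corr(eeg, ssvep_reference(f, fs, n))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

The multi-template method in the paper additionally correlates against subject-specific training templates; this baseline uses only artificial sine/cosine references.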
Affiliation(s)
- Lingling Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China
- Pengfei Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Shaokai Zhao
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Zhiguo Luo
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Wei Chen
- National Research Center for Rehabilitation Technical Aids, Beijing 100176, People's Republic of China
- Yu Pei
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Hongyu Zhao
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Jing Jiang
- National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, People's Republic of China
- Minpeng Xu
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; Tianjin University, Tianjin 300072, People's Republic of China
- Ye Yan
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Erwei Yin
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
35
Khan MA, Saibene M, Das R, Brunner IC, Puthusserypady S. Emergence of flexible technology in developing advanced systems for post-stroke rehabilitation: a comprehensive review. J Neural Eng 2021; 18. [PMID: 34736239 DOI: 10.1088/1741-2552/ac36aa]
Abstract
OBJECTIVE Stroke is one of the most common neural disorders and causes physical disabilities and motor impairments among its survivors. Several technologies have been developed to provide stroke rehabilitation and to assist survivors in performing their daily life activities. Currently, the use of flexible technology (FT) in stroke rehabilitation systems is on the rise, allowing the development of more compact and lightweight wearable systems that stroke survivors can easily use for long-term activities. APPROACH For stroke applications, FT mainly includes "flexible/stretchable electronics", "e-textile (electronic textile)" and "soft robotics". A thorough literature review was therefore performed to report the practical implementation of FT for post-stroke applications. MAIN RESULTS This review highlights the advancement of FT in stroke rehabilitation systems, which mainly involve the "biosignal acquisition unit", "rehabilitation devices" and "assistive systems". In terms of biosignal acquisition, electroencephalography (EEG) and electromyography (EMG) are comprehensively described. For rehabilitation/assistive systems, the application of functional electrical stimulation (FES) and robotic units (exoskeletons, orthoses, etc.) is explained. SIGNIFICANCE This is the first review article to compile the different studies on flexible-technology-based post-stroke systems. Furthermore, the technological advantages, limitations, and possible future implications are also discussed to help improve and advance flexible systems for the betterment of the stroke community.
Affiliation(s)
- Muhammad Ahmed Khan
- Technical University of Denmark, Ørsteds Plads, Building 345C, Room 215, Lyngby 2800, Denmark
- Matteo Saibene
- Technical University of Denmark, Ørsteds Plads, Building 345C, Lyngby 2800, Denmark
- Rig Das
- Technical University of Denmark, Ørsteds Plads, Building 345C, Room 214, Lyngby 2800, Denmark
36
Feng N, Hu F, Wang H, Zhou B. Motor Intention Decoding from the Upper Limb by Graph Convolutional Network Based on Functional Connectivity. Int J Neural Syst 2021; 31:2150047. [PMID: 34693880 DOI: 10.1142/s0129065721500477]
Abstract
Decoding brain intention from noninvasively measured neural signals has recently become a hot topic in brain-computer interface (BCI) research. Motor commands for the movements of fine body parts can increase the degrees of freedom under control and drive external equipment without a stimulus. In the decoding process, the classifier is one of the key factors, yet the graph information of the EEG has been ignored by most researchers. In this paper, a graph convolutional network (GCN) based on functional connectivity is proposed to decode the motor intention of four fine-part movements (shoulder, elbow, wrist, hand). First, event-related desynchronization was analyzed to reveal the differences between the four classes. Second, functional connectivity was constructed using synchronization likelihood (SL), phase-locking value (PLV), H index (H), mutual information (MI), and weighted phase-lag index (WPLI) to acquire the electrode pairs showing a difference. Subsequently, a GCN and a convolutional neural network (CNN) were applied to the functional topological structures and the time points, respectively. The results demonstrate that the proposed method achieves a decoding accuracy of up to 92.81% in the four-class task. Moreover, the combination of a GCN with functional connectivity can promote the development of BCI.
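Among the connectivity measures listed in this abstract, the phase-locking value (PLV) is the simplest to sketch: it is the magnitude of the average phase difference between two channels, with the instantaneous phase taken from the analytic signal. A minimal numpy-only version (function names are illustrative, not the authors' code):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0   # double positive frequencies, zero negative ones
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def phase_locking_value(x, y):
    """PLV between two equal-length signals: |mean(exp(i*(phi_x - phi_y)))|."""
    phi_x = np.angle(analytic_signal(x))
    phi_y = np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

def plv_adjacency(data):
    """Symmetric PLV matrix over channels; data is (n_channels, n_samples)."""
    n = data.shape[0]
    adj = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = phase_locking_value(data[i], data[j])
    return adj
```

Two channels locked at the same frequency give a PLV near 1 regardless of the constant phase offset, while unrelated frequencies average out toward 0; a matrix like the one `plv_adjacency` produces is one candidate graph to feed a GCN.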
Affiliation(s)
- Naishi Feng
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, P. R. China
- Fo Hu
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, P. R. China
- Hong Wang
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, P. R. China
- Bin Zhou
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, P. R. China
37
Karaduman M, Karci A. Determining the Demands of Disabled People by Artificial Intelligence Methods. Computer Science 2021. [DOI: 10.53070/bbd.990485]
38
Mridha MF, Das SC, Kabir MM, Lima AA, Islam MR, Watanobe Y. Brain-Computer Interface: Advancement and Challenges. Sensors 2021; 21:5746. [PMID: 34502636 PMCID: PMC8433803 DOI: 10.3390/s21175746]
Abstract
Brain-Computer Interface (BCI) is an advanced and multidisciplinary active research domain based on neuroscience, signal processing, biomedical sensors, hardware, etc. Over the last few decades, a great deal of groundbreaking research has been conducted in this domain, yet no review has covered the BCI domain comprehensively. Hence, a comprehensive overview of the BCI domain is presented in this study. The study covers several applications of BCI and upholds the significance of the domain. Each element of a BCI system, including techniques, datasets, feature extraction methods, evaluation metrics, existing BCI algorithms, and classifiers, is then explained concisely. In addition, a brief overview of the technologies and hardware, mostly sensors, used in BCI is appended. Finally, the paper investigates several unsolved challenges of BCI and discusses possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Sujoy Chandra Das
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Correspondence:
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
39
Cao L, Li G, Xu Y, Zhang H, Shu X, Zhang D. A brain-actuated robotic arm system using non-invasive hybrid brain-computer interface and shared control strategy. J Neural Eng 2021; 18. [PMID: 33862607 DOI: 10.1088/1741-2552/abf8cb]
Abstract
Objective. Electroencephalography (EEG)-based brain-computer interfaces (BCIs) have been used to control robotic arms. The performance of non-invasive BCIs may not be satisfactory due to the poor quality of EEG signals, so shared control strategies have been tried as an alternative solution. However, most existing shared control methods set the arbitration rules manually, which depends heavily on the specific task and the developer's experience. In this study, we propose a novel shared control model that automatically optimizes the control commands in a dynamic way, based on the context, during real-time control. In addition, we employ a hybrid BCI to better allocate commands with multiple functions. The system allows non-invasive BCI users to manipulate a robotic arm moving in three-dimensional (3D) space and to complete a pick-and-place task with multiple objects. Approach. Taking the scene information obtained by computer vision as a knowledge base, a machine agent was designed to infer the user's intention and generate automatic commands. Based on the inference confidence and the user's characteristics, the proposed shared control model dynamically fused machine autonomy and human intention to optimize robotic arm motion during online control. In addition, we introduced a hybrid BCI scheme that applied steady-state visual evoked potentials and motor imagery to separate primary and secondary BCI interfaces, to better allocate BCI resources (e.g., decoding computing power, screen occupation) and realize multi-dimensional control of the robotic arm. Main results. Eleven subjects participated in the online experiments of picking and placing five objects scattered at different positions in a 3D workspace.
The results showed that most subjects could control the robotic arm to complete an accurate and robust picking task with an average success rate of approximately 85% under the shared control strategy, while the average success rate of the placing task controlled by pure BCI was approximately 50%. Significance. In this paper, we proposed a novel shared controller for automatic motion optimization, together with a hybrid BCI control scheme that allocates paradigms according to the importance of commands, to realize multi-dimensional and effective control of a robotic arm. Our study indicates that a shared control strategy with a hybrid BCI can greatly improve the performance of a brain-actuated robotic arm system.
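The paper's arbitration between machine autonomy and human intention is dynamic and confidence-driven. As a purely illustrative sketch of the underlying idea only, here is the simplest possible arbitration, a confidence-weighted linear blend; this is not the paper's actual model, and all names are hypothetical:

```python
import numpy as np

def fuse_commands(v_human, v_auto, confidence):
    """Blend a decoded human velocity command with an autonomous one.

    confidence in [0, 1] is the machine agent's inference confidence;
    higher confidence shifts control authority toward the autonomous command.
    """
    c = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - c) * np.asarray(v_human, dtype=float) + c * np.asarray(v_auto, dtype=float)

# with zero confidence the user keeps full control; with full confidence
# the robot follows the machine-generated command
blended = fuse_commands([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0.5)
```

In the study itself the fusion weight is not a fixed scalar but is adjusted online from the inference confidence and user characteristics.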
Affiliation(s)
- Linfeng Cao
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Guangye Li
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Yang Xu
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Heng Zhang
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Xiaokang Shu
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Dingguo Zhang
- Department of Electronic and Electrical Engineering, University of Bath, Bath, United Kingdom
40
Dereli S. A new modified grey wolf optimization algorithm proposal for a fundamental engineering problem in robotics. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06050-2]
41
Maymandi H, Perez Benitez JL, Gallegos-Funes F, Perez Benitez JA. A novel monitor for practical brain-computer interface applications based on visual evoked potential. Brain-Computer Interfaces 2021. [DOI: 10.1080/2326263x.2021.1900032]
Affiliation(s)
- Hamidreza Maymandi
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- Jorge Luis Perez Benitez
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- F. Gallegos-Funes
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- J. A. Perez Benitez
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
42
Sosnik R, Li Z. Reconstruction of hand, elbow and shoulder actual and imagined trajectories in 3D space using EEG current source dipoles. J Neural Eng 2021; 18. [PMID: 33752186 DOI: 10.1088/1741-2552/abf0d7]
Abstract
OBJECTIVE Growing evidence suggests that the EEG electrode (sensor) potential time series (PTS) of slow cortical potentials (SCPs) hold motor neural correlates that can be used for motion trajectory prediction (MTP), commonly by multiple linear regression (mLR). It is not yet known whether arm-joint trajectories can be reliably decoded from current sources computed from sensor data, from which brain areas they can be decoded, and using which neural features. APPROACH In this study, the PTS of 44 sensors were fed into the sLORETA source localization software to compute current source activity in 30 regions of interest (ROIs) found in a recent meta-analysis to be engaged in action execution, motor imagery and motor preparation. The current source PTS and band-power time series (BTS) in several frequency bands and at several time lags were used to predict the actual and imagined trajectories in 3D space of the three velocity components of the hand, elbow and shoulder of nine subjects using an mLR model. MAIN RESULTS For all arm joints and movement types, the current source SCP PTS contributed most to trajectory reconstruction, with time lags of 150 ms, 116 ms and 84 ms providing the highest contribution, whereas the current source BTS was not informative in any of the tested frequency bands. Pearson's correlation coefficient (r), averaged across movement types, arm joints and velocity components, was slightly lower using source data than using sensor data (r = 0.25 and r = 0.28, respectively). For each ROI, the three current source dipoles contributed differently to the reconstruction of each of the three velocity components. SIGNIFICANCE Overall, our results demonstrate the feasibility of predicting actual and imagined 3D trajectories of all arm joints from current sources computed from scalp EEG. These findings may be used by developers of a future BCI as a validated set of contributing ROIs.
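The mLR decoding scheme described here regresses each velocity component on time-lagged neural features. A toy numpy sketch of that regression on synthetic data (illustrative names and shapes, not the authors' code):

```python
import numpy as np

def lagged_design_matrix(eeg, lags):
    """Stack time-lagged copies of each channel as regression features.

    eeg: (n_samples, n_channels); lags: iterable of non-negative sample lags.
    Returns (n_samples, n_channels * len(lags)); early rows are padded with
    the first sample instead of wrapping around.
    """
    cols = []
    for lag in lags:
        shifted = np.roll(eeg, lag, axis=0)
        shifted[:lag] = eeg[0]
        cols.append(shifted)
    return np.hstack(cols)

def fit_mlr_decoder(eeg, velocity, lags):
    """Least-squares weights mapping lagged EEG features (plus bias) to velocity."""
    X = lagged_design_matrix(eeg, lags)
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, velocity, rcond=None)
    return W

def predict_velocity(eeg, W, lags):
    """Apply fitted weights to new EEG data."""
    X = lagged_design_matrix(eeg, lags)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return X @ W
```

In the paper the features are sLORETA current-source SCP time series at a handful of lags (around 84-150 ms), and one such model is fitted per velocity component of each joint; this sketch only shows the shared regression machinery.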
Affiliation(s)
- Ronen Sosnik
- Electrical, Electronics and Communication Engineering, Holon Institute of Technology, 52 Golomb St., Holon 5810201, Israel
- Zheng Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Yingdong Building, Xinjiekouwai Street 19, Haidian, Beijing 100875, China
43
Asgher U, Khan MJ, Asif Nizami MH, Khalil K, Ahmad R, Ayaz Y, Naseer N. Motor Training Using Mental Workload (MWL) With an Assistive Soft Exoskeleton System: A Functional Near-Infrared Spectroscopy (fNIRS) Study for Brain-Machine Interface (BMI). Front Neurorobot 2021; 15:605751. [PMID: 33815084 PMCID: PMC8012849 DOI: 10.3389/fnbot.2021.605751]
Abstract
Mental workload is a neuroergonomic human factor widely used in planning a system's safety and in areas such as brain-machine interfaces (BMI), neurofeedback, and assistive technologies. Robotic prosthetic methodologies are employed to assist hemiplegic patients in performing routine activities. Assistive technologies need to interface easily with the brain using few protocols, in an attempt to optimize mobility and autonomy. The possible answer to these design questions may lie in neuroergonomics coupled with BMI systems. In this study, two human factors are addressed: designing a lightweight wearable robotic exoskeleton hand to assist potential stroke patients, with an integrated portable brain interface that uses mental workload (MWL) signals acquired with a portable functional near-infrared spectroscopy (fNIRS) system. The system can generate command signals for operating the wearable robotic exoskeleton hand from two-state MWL signals. The fNIRS system records optical signals in the form of changes in the concentration of oxygenated and deoxygenated hemoglobin (HbO and HbR) from the pre-frontal cortex (PFC) region of the brain. Fifteen participants took part in this study and were given hand-grasping tasks. The two-state MWL signals acquired from the PFC region of each participant's brain were classified using a machine learning classifier (support vector machine, SVM) to operate the robotic exoskeleton hand. The maximum classification accuracy was 91.31%, using a combination of mean-slope features, with an average information transfer rate (ITR) of 1.43. These results show the feasibility of a two-state MWL (fNIRS-based) robotic exoskeleton hand (BMI system) for assisting hemiplegic patients in physical grasping tasks.
Affiliation(s)
- Umer Asgher: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Muhammad Jawad Khan: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Muhammad Hamza Asif Nizami: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan; Florida State University College of Engineering, Florida A&M University, Tallahassee, FL, United States
- Khurram Khalil: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Riaz Ahmad: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan; Directorate of Quality Assurance and International Collaboration, National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Yasar Ayaz: School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan; National Center of Artificial Intelligence (NCAI), National University of Sciences and Technology, Islamabad, Pakistan
- Noman Naseer: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
44
Li B, Lin Y, Gao X, Liu Z. Enhancing the EEG classification in RSVP task by combining interval model of ERPs with spatial and temporal regions of interest. J Neural Eng 2021; 18. [DOI: 10.1088/1741-2552/abc8d5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/14/2020] [Accepted: 11/09/2020] [Indexed: 02/02/2023]
45
Arpaia P, Donnarumma F, Esposito A, Parvis M. Channel Selection for Optimal EEG Measurement in Motor Imagery-Based Brain-Computer Interfaces. Int J Neural Syst 2020; 31:2150003. [PMID: 33353529 DOI: 10.1142/s0129065721500039] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Indexed: 11/18/2022]
Abstract
A method for selecting electroencephalographic (EEG) channels in motor imagery-based brain-computer interfaces (MI-BCI) is proposed to enhance the online interoperability and portability of BCI systems, as well as user comfort. The method also aims to reduce the variability and noise of MI-BCI, which can be affected by a large number of EEG channels. The relation between the selected channels and MI-BCI performance is therefore analyzed. The proposed method is able to select acquisition channels common to all subjects while achieving a performance compatible with the use of all channels. Results are reported on a standard benchmark dataset, BCI competition IV dataset 2a. They show that performance compatible with the best state-of-the-art approaches can be achieved while adopting a significantly smaller number of channels, in both two-class and four-class classification. In particular, classification accuracy is about 77-83% in binary classification with as few as 6 EEG channels, and above 60% in the four-class case when 10 channels are employed. This contributes to optimizing EEG measurement in the development of non-invasive and wearable MI-based brain-computer interfaces.
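The idea of selecting channels common to all subjects can be sketched as intersecting each subject's top-ranked channels. The function name and the per-channel scoring scheme are hypothetical; the paper's actual selection criterion is not reproduced here:

```python
def common_top_channels(scores_per_subject, k):
    """Channels appearing in every subject's top-k ranking.

    scores_per_subject: dict mapping subject -> dict of channel -> score
    (the score could be, e.g., single-channel classification accuracy)
    """
    tops = []
    for scores in scores_per_subject.values():
        # Rank this subject's channels by score, best first
        ranked = sorted(scores, key=scores.get, reverse=True)
        tops.append(set(ranked[:k]))
    # Keep only channels that every subject ranks in their top k
    return sorted(set.intersection(*tops))
```

A fixed, subject-independent montage like this is what makes the resulting BCI more portable and comfortable, at the cost of a per-subject optimum.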
Affiliation(s)
- Pasquale Arpaia: Department of Electrical Engineering and Information Technology (DIETI), Università degli Studi di Napoli Federico II, Naples, Italy; Augmented Reality for Health Monitoring Laboratory (ARHeMLab), Italy
- Francesco Donnarumma: Institute of Cognitive Sciences and Technologies, National Research Council (ISTC-CNR), Rome, Italy; Augmented Reality for Health Monitoring Laboratory (ARHeMLab), Italy
- Antonio Esposito: Department of Electronics and Telecommunications (DET), Politecnico di Torino, Turin, Italy; Augmented Reality for Health Monitoring Laboratory (ARHeMLab), Italy
- Marco Parvis: Department of Electronics and Telecommunications (DET), Politecnico di Torino, Turin, Italy; Augmented Reality for Health Monitoring Laboratory (ARHeMLab), Italy
46
Wei Q, Zhu S, Wang Y, Gao X, Guo H, Wu X. A Training Data-Driven Canonical Correlation Analysis Algorithm for Designing Spatial Filters to Enhance Performance of SSVEP-Based BCIs. Int J Neural Syst 2020; 30:2050020. [PMID: 32380925 DOI: 10.1142/s0129065720500203] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Indexed: 11/18/2022]
Abstract
Canonical correlation analysis (CCA) is an effective spatial filtering algorithm widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). In existing CCA methods, training data are used to construct templates of stimulus targets, and the spatial filters are created between the template signals and a single-trial testing signal. The fact that the spatial filters rely on testing data, however, results in lower classification performance of CCA compared to other state-of-the-art algorithms such as task-related component analysis (TRCA). In this study, we proposed a novel CCA method in which spatial filters are estimated using training data only. This is achieved by using observed EEG training data and their SSVEP components as the two inputs of CCA, and the objective function is optimized by averaging over multiple training trials. In this case, we proved in theory that the two spatial filters estimated by the CCA are equivalent, and that the CCA and TRCA are also equivalent under certain hypotheses. A benchmark SSVEP dataset from 35 subjects was used to compare the performance of the two algorithms across different data lengths, numbers of channels, and numbers of training trials. In addition, the CCA was also compared with power spectral density analysis (PSDA). The experimental results suggest that the CCA is equivalent to TRCA if the signal-to-noise ratio of the training data is high enough; otherwise, the CCA outperforms TRCA in terms of classification accuracy. The CCA is also much faster than PSDA in target detection time. The robustness of the training data-driven CCA to noise gives it greater potential in practical applications.
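The sine-cosine SSVEP reference signals that form one side of a CCA can be sketched as follows. This is a minimal illustration; the function name and parameter choices are assumptions, and the CCA optimization itself is omitted:

```python
import math

def ssvep_references(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference signals for one stimulus frequency.

    Returns 2 * n_harmonics time series of length n_samples, which
    serve as one input of the canonical correlation analysis.
    """
    refs = []
    for h in range(1, n_harmonics + 1):
        w = 2 * math.pi * h * freq  # angular frequency of harmonic h
        refs.append([math.sin(w * t / fs) for t in range(n_samples)])
        refs.append([math.cos(w * t / fs) for t in range(n_samples)])
    return refs
```

In the training data-driven variant described above, the EEG side of the CCA is built from averaged training trials rather than from the single-trial test signal, which is what makes the spatial filters independent of the testing data.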
Affiliation(s)
- Qingguo Wei: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, P. R. China
- Shan Zhu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, P. R. China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, P. R. China
- Hai Guo: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, P. R. China
- Xuan Wu: Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, P. R. China
47
Zhu Y, Li Y, Lu J, Li P. A Hybrid BCI Based on SSVEP and EOG for Robotic Arm Control. Front Neurorobot 2020; 14:583641. [PMID: 33328950 PMCID: PMC7714925 DOI: 10.3389/fnbot.2020.583641] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Received: 07/15/2020] [Accepted: 10/26/2020] [Indexed: 11/21/2022] Open
Abstract
Brain-computer interfaces (BCIs) for robotic arm control have been studied to improve the quality of life of people with severe motor disabilities. Challenges remain in using a robotic arm to accomplish a complex task involving a series of actions; an efficient switch and a timely cancel command are helpful in robotic arm applications. Based on the above, we proposed an asynchronous hybrid BCI in this study. The basic control of a robotic arm with six degrees of freedom was a steady-state visual evoked potential (SSVEP)-based BCI with fifteen target classes. We designed an EOG-based switch that used a triple blink to either activate or deactivate the flash of the SSVEP-based BCI. Stopping the flash in the idle state can help reduce visual fatigue and the false activation rate (FAR). Additionally, users were allowed to cancel the current command simply by a wink in the feedback phase to avoid executing an incorrect command. Fifteen subjects participated in and completed the experiments. The cue-based experiment obtained an average accuracy of 92.09% and an information transfer rate (ITR) of 35.98 bits/min. The mean FAR of the switch was 0.01/min. Furthermore, all subjects succeeded in asynchronously operating the robotic arm to grasp, lift, and move a target object from the initial position to a specific location. The results indicated the feasibility of combining EOG and SSVEP signals and the flexibility of the EOG signal in a BCI to complete a complicated robotic arm control task.
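The triple-blink switch logic reduces to a simple timing check over detected blink events. The function name and the time threshold are hypothetical, and the upstream EOG blink detection pipeline is not shown:

```python
def triple_blink(blink_times, max_span=1.5):
    """Return True if any three consecutive blinks fall within
    max_span seconds, which would toggle the SSVEP stimulation.

    blink_times: sorted blink onset times in seconds
    max_span: assumed window, not the paper's calibrated value
    """
    for i in range(len(blink_times) - 2):
        if blink_times[i + 2] - blink_times[i] <= max_span:
            return True
    return False
```

Gating the flicker with such a switch is what lets the stimulation stay off in the idle state, reducing visual fatigue and false activations.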
Affiliation(s)
- Yuanlu Zhu: Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Ying Li: Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Jinling Lu: Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Pengcheng Li: Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China; Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Suzhou, China
48
Li Y, Xiang J, Kesavadas T. Convolutional Correlation Analysis for Enhancing the Performance of SSVEP-Based Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2681-2690. [PMID: 33201824 DOI: 10.1109/tnsre.2020.3038718] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Indexed: 11/05/2022]
Abstract
Currently, most of the high-performance models for frequency recognition of steady-state visual evoked potentials (SSVEPs) are linear. However, SSVEPs collected from different channels can have non-linear relationships with each other, so linearly combining electroencephalogram (EEG) signals from multiple channels is not the most accurate solution for SSVEP classification. To further improve the performance of SSVEP-based brain-computer interfaces (BCIs), we propose a convolutional neural network-based non-linear model, convolutional correlation analysis (Conv-CA). Unlike pure deep learning models, Conv-CA uses convolutional neural networks (CNNs) on top of a self-defined correlation layer. The CNNs learn how to transform multi-channel EEG into a single EEG signal, and the correlation layer calculates the correlation coefficients between the transformed signal and reference signals. The CNNs thus provide non-linear operations for combining EEG across channels and time, while the correlation layer constrains the fitting space of the deep learning model. A comparison study between the proposed Conv-CA method and task-related component analysis (TRCA)-based methods was conducted; both were validated on a 40-class SSVEP benchmark dataset recorded from 35 subjects. The study verifies that the Conv-CA method significantly outperforms the TRCA-based methods. Moreover, Conv-CA has good explainability, since the inputs to its correlation layer can be analyzed to visualize what the model learned from the data. Conv-CA is a non-linear extension of spatial filters; its CNN structure can be further explored and tuned to reach better performance. The structure of combining neural networks and unsupervised features has the potential to be applied to the classification of other signals.
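At inference time, the correlation layer on top of the CNN reduces to a Pearson correlation between the transformed single-channel signal and each frequency's reference, with classification picking the best-correlated reference. A minimal sketch, omitting the CNN itself (function names are assumptions):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def classify(signal, references):
    """Index of the reference most correlated with the signal."""
    corrs = [pearson(signal, r) for r in references]
    return max(range(len(corrs)), key=corrs.__getitem__)
```

In Conv-CA the CNN that produces `signal` is trained end-to-end, so the correlation layer acts as a fixed, unsupervised head that constrains what the network can learn.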
49
Chen X, Huang X, Wang Y, Gao X. Combination of Augmented Reality Based Brain-Computer Interface and Computer Vision for High-Level Control of a Robotic Arm. IEEE Trans Neural Syst Rehabil Eng 2020; 28:3140-3147. [PMID: 33196442 DOI: 10.1109/tnsre.2020.3038209] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Indexed: 11/10/2022]
Abstract
Recent advances in robotics, neuroscience, and signal processing make it possible to operate a robot through an electroencephalography (EEG)-based brain-computer interface (BCI). Although some successful attempts have been made in recent years, the practicality of the entire system still has much room for improvement. The present study designed and realized a robotic arm control system by combining augmented reality (AR), computer vision, and a steady-state visual evoked potential (SSVEP) BCI. The AR environment was implemented on a Microsoft HoloLens. Flickering stimuli for eliciting SSVEPs were presented on the HoloLens, which allowed users to see both the robotic arm and the user interface of the BCI; thus, users did not need to switch attention between the visual stimulator and the robotic arm. A four-command SSVEP-BCI was built for users to choose the specific object to be operated by the robotic arm. Once an object was selected, computer vision provided the location and color of the object in the workspace. Subsequently, the object was autonomously picked up and placed by the robotic arm. According to the online results obtained from twelve participants, the mean classification accuracy of the proposed system was 93.96 ± 5.05%. Moreover, all subjects could use the proposed system to successfully pick and place objects in a specific order. These results demonstrate the potential of combining AR-BCI and computer vision to control robotic arms, which is expected to further promote the practicality of BCI-controlled robots.
50
Comprehensive review on brain-controlled mobile robots and robotic arms based on electroencephalography signals. INTEL SERV ROBOT 2020. [DOI: 10.1007/s11370-020-00328-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Indexed: 11/25/2022]