26
Convolutional Neural Network with a Topographic Representation Module for EEG-Based Brain-Computer Interfaces. Brain Sci 2023; 13:268. [PMID: 36831811 PMCID: PMC9954538 DOI: 10.3390/brainsci13020268]
Abstract
Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) due to their ability to directly process raw electroencephalogram (EEG) signals without artificial feature extraction. Some CNNs have achieved better classification accuracy than traditional methods. Raw EEG signals are usually represented as a two-dimensional (2-D) matrix composed of channels and time points, which ignores the spatial topological information of the electrodes. Our goal is to give a CNN that takes raw EEG signals as inputs the ability to learn spatial topological features and to improve its classification performance while essentially maintaining its original structure. We propose an EEG topographic representation module (TRM). This module consists of (1) a mapping block from raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRM, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs are improved on both datasets after using the TRMs. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet and ShallowConvNet are improved by 6.54%, 1.72% and 2.07% on the EBDSDD and by 6.05%, 3.02% and 5.14% on the HGD, respectively; with TRM-(3,3), they are improved by 7.76%, 1.71% and 2.17% on the EBDSDD and by 7.61%, 5.06% and 6.28% on the HGD, respectively. The use of TRMs thus improves the classification performance of all three CNNs on both datasets, indicating that TRMs can mine spatial topological EEG information.
More importantly, since the output of a TRM has the same size as the input, CNNs with raw EEG signals as inputs can use this module without changing their original structures.
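The mapping block described above can be illustrated with a minimal sketch, assuming a hypothetical 4-channel montage and a 3 × 3 electrode grid (the actual montages, grid sizes and the subsequent convolution block of the paper differ):

```python
import numpy as np

# Hypothetical (row, col) grid positions for a 4-channel montage; a real
# montage would follow the standard 10-20 electrode layout.
GRID = {0: (0, 1), 1: (1, 0), 2: (1, 2), 3: (2, 1)}

def to_topographic_map(eeg, grid, shape=(3, 3)):
    """Map raw EEG (channels x time) onto a 3-D topographic
    representation (rows x cols x time); empty grid cells stay zero."""
    n_ch, n_t = eeg.shape
    topo = np.zeros((*shape, n_t))
    for ch in range(n_ch):
        r, c = grid[ch]
        topo[r, c, :] = eeg[ch]
    return topo

eeg = np.random.randn(4, 100)     # 4 channels, 100 time points
topo = to_topographic_map(eeg, GRID)
print(topo.shape)                 # (3, 3, 100)
```

A convolution block would then map this 3-D representation back to the original channels × time size so the host CNN's structure is untouched.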
27
Arı E, Taçgın E. Input Shape Effect on Classification Performance of Raw EEG Motor Imagery Signals with Convolutional Neural Networks for Use in Brain-Computer Interfaces. Brain Sci 2023; 13:240. [PMID: 36831784 PMCID: PMC9954790 DOI: 10.3390/brainsci13020240]
Abstract
EEG signals are interpreted, analyzed and classified by many researchers for use in brain-computer interfaces. Although there are many different EEG signal acquisition methods, one of the most interesting is motor imagery signals. Many different signal processing methods, machine learning and deep learning models have been developed for the classification of motor imagery signals. Among these, convolutional neural network (CNN) models generally achieve better results than other models. Because the size and shape of the data are important for training CNN models and discovering the right relationships, researchers have designed and experimented with many different input shape structures. However, no study has been found in the literature evaluating the effect of different input shapes on model performance and accuracy. In this study, these effects were investigated for the classification of EEG motor imagery signals. In addition, signal preprocessing methods, which take a long time before classification, were not used; rather, two CNN models were developed for training and classification using raw data. Two different datasets, BCI Competition IV 2A and 2B, were used in the classification processes. For different input shapes, 53.03-89.29% classification accuracy and 2-23 s epoch times were obtained for the 2A dataset, and 64.84-84.94% classification accuracy and 4-10 s epoch times were obtained for the 2B dataset. This study showed that the input shape has a significant effect on classification performance, and that when the correct input shape is selected and the correct CNN architecture is developed, feature extraction and classification can be done well by the CNN architecture without any signal preprocessing.
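As a rough illustration of what varying the input shape means in practice, the snippet below reshapes one hypothetical raw-EEG trial; the dimensions follow BCI Competition IV 2A (22 channels, 4 s at 250 Hz), but the candidate shapes are illustrative, not the ones evaluated in the paper:

```python
import numpy as np

# One motor-imagery trial: 22 channels x 1000 samples; random stand-in data.
trial = np.random.randn(22, 1000)

# Three candidate CNN input shapes built from the same raw trial:
as_image   = trial[np.newaxis, :, :]         # (1, 22, 1000): channels x time "image"
as_stacked = trial.reshape(1, 22 * 4, 250)   # (1, 88, 250): time axis split into 4 segments
as_flat    = trial.reshape(1, 1, 22 * 1000)  # (1, 1, 22000): fully flattened vector

print(as_image.shape, as_stacked.shape, as_flat.shape)
```

Each shape exposes the same samples to the network but changes which neighborhoods the convolutional kernels can see, which is why the shape choice affects accuracy.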
28
Ron-Angevin R, Fernández-Rodríguez Á, Dupont C, Maigrot J, Meunier J, Tavard H, Lespinet-Najib V, André JM. Comparison of Two Paradigms Based on Stimulation with Images in a Spelling Brain-Computer Interface. Sensors (Basel) 2023; 23:1304. [PMID: 36772343 PMCID: PMC9920351 DOI: 10.3390/s23031304]
Abstract
A P300-based speller can be used to control a home automation system via brain activity. Evaluation of the visual stimuli used in a P300-based speller is a common topic in the field of brain-computer interfaces (BCIs). The aim of the present work is to compare, using the usability approach, two types of stimuli that have provided high performance in previous studies. Twelve participants controlled a BCI under two conditions, which varied in terms of the type of stimulus employed: a red famous face surrounded by a white rectangle (RFW) and a range of neutral pictures (NPs). The usability approach included variables related to effectiveness (accuracy and information transfer rate), efficiency (stress and fatigue), and satisfaction (pleasantness and System Usability Scale and Affect Grid questionnaires). The results indicated that there were no significant differences in effectiveness, but the system that used NPs was reported as significantly more pleasant. Hence, since satisfaction variables should also be considered in systems that potential users are likely to employ regularly, the use of different NPs may be a more suitable option than the use of a single RFW for the development of a home automation system based on a visual P300-based speller.
29
Ma Z, Wang K, Xu M, Yi W, Xu F, Ming D. Transformed common spatial pattern for motor imagery-based brain-computer interfaces. Front Neurosci 2023; 17:1116721. [PMID: 36960172 PMCID: PMC10028145 DOI: 10.3389/fnins.2023.1116721]
Abstract
Objective The motor imagery (MI)-based brain-computer interface (BCI) is one of the most popular BCI paradigms. Common spatial pattern (CSP) is an effective algorithm for decoding MI-related electroencephalogram (EEG) patterns, but it depends heavily on the selection of EEG frequency bands. To address this problem, previous researchers often used a filter bank to decompose EEG signals into multiple frequency bands before applying the traditional CSP. Approach This study proposed a novel method, transformed common spatial pattern (tCSP), to extract discriminant EEG features from multiple frequency bands after, rather than before, CSP. To verify its effectiveness, we tested tCSP on a dataset collected by our team and on a public dataset from BCI competition III. We also performed an online evaluation of the proposed method. Main results For the dataset collected by our team, the classification accuracy of tCSP was significantly higher than that of CSP by about 8% and that of filter bank CSP (FBCSP) by about 4.5%. The combination of tCSP and CSP further improved the system performance, with an average accuracy of 84.77% and a peak accuracy of 100%. For dataset IVa of BCI competition III, the combination method achieved an average accuracy of 94.55%, the best among all the presented CSP-based methods. In the online evaluation, tCSP and the combination method achieved average accuracies of 80.00% and 84.00%, respectively. Significance The results demonstrate that, for MI-based BCIs, frequency band selection after CSP is better than before CSP. This study provides a promising approach for decoding MI EEG patterns, which is significant for the development of BCIs.
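For context, the classical CSP step that tCSP builds on can be sketched as follows (a textbook implementation via a generalized eigenproblem on class covariances, under simplifying assumptions, not the authors' tCSP code):

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=2):
    """Classic CSP: spatial filters maximizing variance for class 1
    while minimizing it for class 2. X1, X2: (trials, channels, time)."""
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w, solved here
    # as an ordinary eigenproblem of inv(C1 + C2) @ C1.
    evals, evecs = np.linalg.eig(np.linalg.inv(C1 + C2) @ C1)
    order = np.argsort(evals.real)
    # Keep filters from both ends of the eigenvalue spectrum.
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return evecs.real[:, idx].T    # (2*n_pairs, channels)

rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8, 200))
X2 = rng.standard_normal((20, 8, 200))
W = csp_filters(X1, X2)
print(W.shape)    # (4, 8)
```

tCSP, as described in the abstract, then applies frequency-band decomposition to the spatially filtered signals W @ trial rather than to the raw EEG.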
30
Syrov N, Yakovlev L, Miroshnikov A, Kaplan A. Beyond passive observation: feedback anticipation and observation activate the mirror system in virtual finger movement control via P300-BCI. Front Hum Neurosci 2023; 17:1180056. [PMID: 37213933 PMCID: PMC10192585 DOI: 10.3389/fnhum.2023.1180056]
Abstract
Action observation (AO) is widely used as a post-stroke therapy to activate sensorimotor circuits through the mirror neuron system. However, passive observation is often considered to be less effective and less interactive than goal-directed movement observation, leading to the suggestion that observation of goal-directed actions may have stronger therapeutic potential, as goal-directed AO has been shown to activate mechanisms for monitoring action errors. Some studies have also suggested the use of AO as a form of brain-computer interface (BCI) feedback. In this study, we investigated the potential for observation of virtual hand movements within a P300-based BCI as a feedback system to activate the mirror neuron system. We also explored the role of feedback anticipation and estimation mechanisms during movement observation. Twenty healthy subjects participated in the study. We analyzed event-related desynchronization and synchronization (ERD/S) of sensorimotor EEG rhythms and error-related potentials (ErrPs) during observation of virtual hand finger flexion presented as feedback in the P300-BCI loop and compared the dynamics of ERD/S and ErrPs during observation of correct feedback and errors. We also analyzed these EEG markers during passive AO under two conditions: when subjects anticipated the action demonstration and when the action was unexpected. A pre-action mu-ERD was found both before passive AO and during action anticipation within the BCI loop. Furthermore, a significant increase in beta-ERS was found during AO within incorrect BCI feedback trials. We suggest that the BCI feedback may exaggerate the passive-AO effect, as it engages feedback anticipation and estimation mechanisms as well as movement error monitoring simultaneously. The results of this study provide insights into the potential of P300-BCI with AO-feedback as a tool for neurorehabilitation.
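For reference, ERD/ERS as analyzed here is conventionally quantified as the percentage band-power change relative to a pre-stimulus baseline interval; a minimal sketch (the power values below are arbitrary illustrations):

```python
def erd_percent(power_task, power_baseline):
    """Classical ERD/ERS index: percentage band-power change relative
    to a baseline interval. Negative values indicate desynchronization
    (ERD), positive values synchronization (ERS)."""
    return (power_task - power_baseline) / power_baseline * 100.0

# Mu-band power dropping from 10 to 7 (arbitrary units) during movement
# observation corresponds to a 30% desynchronization:
print(erd_percent(7.0, 10.0))    # -30.0
```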
31
Ma Y, Gong A, Nan W, Ding P, Wang F, Fu Y. Personalized Brain-Computer Interface and Its Applications. J Pers Med 2022; 13:46. [PMID: 36675707 PMCID: PMC9861730 DOI: 10.3390/jpm13010046]
Abstract
Brain-computer interfaces (BCIs) are a new technology that subverts traditional human-computer interaction, where the control signal source comes directly from the user's brain. When a general BCI is used for practical applications, it is difficult for it to meet the needs of different individuals because of the differences among individual users in physiological and mental states, sensations, perceptions, imageries, cognitive thinking activities, and brain structures and functions. For this reason, it is necessary to customize personalized BCIs for specific users. So far, few studies have elaborated on the key scientific and technical issues involved in personalized BCIs. In this study, we focus on personalized BCIs, defining them and detailing their design, development, evaluation methods and applications. Finally, the challenges and future directions of personalized BCIs are discussed. It is expected that this study will provide useful ideas for innovative studies and practical applications of personalized BCIs.
32
Fernández-Rodríguez Á, Darves-Bornoz A, Velasco-Álvarez F, Ron-Angevin R. Effect of Stimulus Size in a Visual ERP-Based BCI under RSVP. Sensors (Basel) 2022; 22:9505. [PMID: 36502205 PMCID: PMC9741214 DOI: 10.3390/s22239505]
Abstract
Rapid serial visual presentation (RSVP) is currently one of the most suitable paradigms for use with a visual brain-computer interface based on event-related potentials (ERP-BCI) by patients with a lack of ocular motility. However, gaze-independent paradigms have not been studied as closely as gaze-dependent ones, and variables such as the sizes of the stimuli presented have not yet been explored under RSVP. Hence, the aim of the present work is to assess whether stimulus size has an impact on ERP-BCI performance under the RSVP paradigm. Twelve participants tested the ERP-BCI under RSVP using three different stimulus sizes: small (0.1 × 0.1 cm), medium (1.9 × 1.8 cm), and large (20.05 × 19.9 cm) at 60 cm. The results showed significant differences in accuracy between the conditions; the larger the stimulus, the better the accuracy obtained. It was also shown that these differences were not due to incorrect perception of the stimuli since there was no effect from the size in a perceptual discrimination task. The present work therefore shows that stimulus size has an impact on the performance of an ERP-BCI under RSVP. This finding should be considered by future ERP-BCI proposals aimed at users who need gaze-independent systems.
33
Colucci A, Vermehren M, Cavallo A, Angerhöfer C, Peekhaus N, Zollo L, Kim WS, Paik NJ, Soekadar SR. Brain-Computer Interface-Controlled Exoskeletons in Clinical Neurorehabilitation: Ready or Not? Neurorehabil Neural Repair 2022; 36:747-756. [PMID: 36426541 PMCID: PMC9720703 DOI: 10.1177/15459683221138751]
Abstract
The development of brain-computer interface-controlled exoskeletons promises new treatment strategies for neurorehabilitation after stroke or spinal cord injury. By converting brain/neural activity into control signals of wearable actuators, brain/neural exoskeletons (B/NEs) enable the execution of movements despite impaired motor function. Beyond the use as assistive devices, it was shown that, upon repeated use over several weeks, B/NEs can trigger motor recovery, even in chronic paralysis. Recent development of lightweight robotic actuators, comfortable and portable real-world brain recordings, as well as reliable brain/neural control strategies have paved the way for B/NEs to enter clinical care. Although B/NEs are now technically ready for broader clinical use, their promotion will critically depend on early adopters, for example, research-oriented physiotherapists or clinicians who are open for innovation. Data collected by early adopters will further elucidate the underlying mechanisms of B/NE-triggered motor recovery and play a key role in increasing efficacy of personalized treatment strategies. Moreover, early adopters will provide indispensable feedback to the manufacturers necessary to further improve robustness, applicability, and adoption of B/NEs into existing therapy plans.
34
Emsawas T, Morita T, Kimura T, Fukui KI, Numao M. Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification. Sensors (Basel) 2022; 22:8250. [PMID: 36365948 PMCID: PMC9654218 DOI: 10.3390/s22218250]
Abstract
Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain-computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging since it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn various time resolutions, and separable convolutions to find related spatial patterns. In addition, we enhanced both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conducted subject-dependent and subject-independent experiments on the EEG-based emotion datasets DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering enables extracting a wide range of EEG representations, covering short- to long-wavelength components. This module could be implemented in any EEG-based convolutional network, potentially improving the model's learning capacity.
35
Du Y, Liu J. IENet: a robust convolutional neural network for EEG based brain-computer interfaces. J Neural Eng 2022; 19. [PMID: 35605585 DOI: 10.1088/1741-2552/ac7257]
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) are expanding into novel application areas with more complex scenarios, which places higher demands on the robustness of EEG signal processing algorithms. Deep learning can automatically extract discriminative features and potential dependencies via deep structures, demonstrating strong analytical capabilities in numerous domains such as computer vision (CV) and natural language processing (NLP). The main work of this paper is to make full use of deep learning to design a robust algorithm capable of analyzing EEG across BCI paradigms. APPROACH Inspired by the InceptionV4 and InceptionTime architectures, we introduce a neural network ensemble named InceptionEEG-Net (IENet), in which multi-scale convolutional layers and convolutions of length 1 enable the model to extract rich, high-dimensional features with limited parameters. In addition, we propose the average receptive field gain for convolutional neural networks (CNNs), which optimizes IENet to detect long patterns at a smaller cost. We compare against the current state-of-the-art methods across five EEG-BCI paradigms: steady-state visual evoked potentials, epilepsy EEG, overt attention P300 visual-evoked potentials, covert attention P300 visual-evoked potentials and movement-related cortical potentials. MAIN RESULTS The classification results show that the generalizability of IENet is on par with the state-of-the-art paradigm-agnostic models on the test datasets. Furthermore, a feature explainability analysis of IENet illustrates its capability to extract neurophysiologically interpretable features for different BCI paradigms, ensuring the reliability of the algorithm. SIGNIFICANCE Our results show that IENet can generalize to different BCI paradigms and that increasing the receptive field size via the average receptive field gain is essential for deep CNNs.
36
Värbu K, Muhammad N, Muhammad Y. Past, Present, and Future of EEG-Based BCI Applications. Sensors (Basel) 2022; 22:3331. [PMID: 35591021 PMCID: PMC9101004 DOI: 10.3390/s22093331]
Abstract
An electroencephalography (EEG)-based brain-computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. Beyond this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance by making life more efficient and collaborative and by supporting self-development. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 until 2019. The systematic literature review was prepared based on three databases: PubMed, Web of Science and Scopus, and was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of the research between the medical and non-medical domains is analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal processing methods are also reviewed. Additionally, current challenges in the field and possibilities for the future are analyzed.
37
Automatic Muscle Artifacts Identification and Removal from Single-Channel EEG Using Wavelet Transform with Meta-Heuristically Optimized Non-Local Means Filter. Sensors (Basel) 2022; 22:2948. [PMID: 35458940 PMCID: PMC9030243 DOI: 10.3390/s22082948]
Abstract
Electroencephalogram (EEG) signals can easily be contaminated by muscle artifacts, which may lead to wrong interpretations in brain-computer interface (BCI) systems as well as in various medical diagnoses. The main objective of this paper is to remove muscle artifacts without distorting the information contained in the EEG. A novel multi-stage EEG denoising method is proposed, for the first time, in which wavelet packet decomposition (WPD) is combined with a modified non-local means (NLM) algorithm. First, the artifactual EEG signal is identified through a pre-trained classifier. Next, the identified EEG signal is decomposed into wavelet coefficients and corrected through a modified NLM filter. Finally, the artifact-free EEG is reconstructed from the corrected wavelet coefficients through inverse WPD. To optimize the filter parameters, two meta-heuristic algorithms are used in this paper for the first time. The proposed system is first validated on simulated EEG data and then tested on real EEG data, where it achieved an average mutual information (MI) of 2.9684 ± 0.7045. The results reveal that the proposed system outperforms recently developed denoising techniques with higher average MI, indicating that the proposed approach is better in terms of quality of reconstruction and is fully automatic.
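The core NLM idea that the paper modifies can be sketched for a single channel as follows (a simplified textbook 1-D NLM, not the authors' modified filter; the parameter values are arbitrary and are exactly the kind of quantities a meta-heuristic would tune):

```python
import numpy as np

def nlm_1d(x, patch=5, search=21, h=0.5):
    """Minimal 1-D non-local means: each sample is replaced by a
    weighted average of samples whose surrounding patches look similar,
    with weights decaying in the patch distance (bandwidth h)."""
    half_p, half_s = patch // 2, search // 2
    xp = np.pad(x, half_p, mode="reflect")
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        p_i = xp[i:i + patch]                 # patch centered at sample i
        lo, hi = max(0, i - half_s), min(len(x), i + half_s + 1)
        weights, values = [], []
        for j in range(lo, hi):
            d = np.sum((p_i - xp[j:j + patch]) ** 2) / patch
            weights.append(np.exp(-d / (h * h)))
            values.append(x[j])
        out[i] = np.dot(weights, values) / np.sum(weights)
    return out

noisy = np.sin(np.linspace(0, 6, 200)) + 0.3 * np.random.randn(200)
clean = nlm_1d(noisy)
print(clean.shape)    # (200,)
```

In the paper's pipeline this filtering is applied to wavelet packet coefficients rather than directly to the raw signal.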
38
Kim S, Shin DY, Kim T, Lee S, Hyun JK, Park SM. Enhanced Recognition of Amputated Wrist and Hand Movements by Deep Learning Method Using Multimodal Fusion of Electromyography and Electroencephalography. Sensors (Basel) 2022; 22:680. [PMID: 35062641 PMCID: PMC8778369 DOI: 10.3390/s22020680]
Abstract
Motion classification can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes for the control of prosthetic arms. However, current single-modal EEG- and EMG-based motion classification techniques are limited owing to the complexity and noise of EEG signals, the electrode placement bias, and the low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for the detection of hand movements in transforearm amputees. A feature extraction method in the frequency domain of the EEG and EMG signals was adopted to establish a 2D image. The input images were used to train a model based on the convolutional neural network algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with the conventional single-modal EEG-signal trained models, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pretrained model for EEG TL, the classification accuracy increased by 4.18-4.35% in the control group and by 2.51-3.00% in the patient group.
39
Vekety B, Logemann A, Takacs ZK. Mindfulness Practice with a Brain-Sensing Device Improved Cognitive Functioning of Elementary School Children: An Exploratory Pilot Study. Brain Sci 2022; 12:103. [PMID: 35053846 PMCID: PMC8774020 DOI: 10.3390/brainsci12010103]
Abstract
This is the first pilot study with children that has assessed the effects of a brain-computer interface-assisted mindfulness program on neural mechanisms and associated cognitive performance. The participants were 31 children aged 9-10 years who were randomly assigned to either an eight-session mindfulness training with EEG-feedback or a passive control group. Mindfulness-related brain activity was measured during the training, while cognitive tests and resting-state brain activity were measured pre- and post-test. The within-group measurement of calm/focused brain states and mind-wandering revealed a significant linear change. Significant positive changes were detected in children's inhibition, information processing, and resting-state brain activity (alpha, theta) compared to the control group. Elevated baseline alpha activity was associated with less reactivity in reaction time on a cognitive test. Our exploratory findings show some preliminary support for a potential executive function-enhancing effect of mindfulness supplemented with EEG-feedback, which may have some important implications for children's self-regulated learning and academic achievement.
40
Enhancing EEG-Based Mental Stress State Recognition Using an Improved Hybrid Feature Selection Algorithm. Sensors (Basel) 2021; 21:8370. [PMID: 34960469 PMCID: PMC8703860 DOI: 10.3390/s21248370]
Abstract
In real-life applications, electroencephalogram (EEG) signals for mental stress recognition require a conventional wearable device. This, in turn, requires an efficient number of EEG channels and an optimal feature set. This study aims to identify an optimal feature subset that can discriminate mental stress states while enhancing overall classification performance. We extracted multi-domain features within the time domain, frequency domain, time-frequency domain, and network connectivity features to form a prominent feature vector space for stress. We then proposed a hybrid feature selection (FS) method using minimum redundancy maximum relevance with particle swarm optimization and support vector machines (mRMR-PSO-SVM) to select the optimal feature subset. The performance of the proposed method is evaluated and verified using four datasets, namely EDMSS, DEAP, SEED, and EDPMSC. To further consolidate these findings, the effectiveness of the proposed method is compared with that of state-of-the-art metaheuristic methods. The proposed model significantly reduced the feature vector space by an average of 70% compared with the state-of-the-art methods while significantly increasing overall detection performance.
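The mRMR component of the proposed hybrid can be sketched as a greedy loop (a minimal sketch using absolute Pearson correlation as a stand-in for mutual information; the PSO search and SVM evaluation stages of the paper's method are omitted):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR sketch: at each step pick the feature whose relevance
    to the label minus mean redundancy with already-selected features is
    maximal. X: (samples, features); y: (samples,)."""
    def corr(a, b):
        return abs(np.corrcoef(a, b)[0, 1])
    n_feat = X.shape[1]
    relevance = np.array([corr(X[:, j], y) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]       # start with most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([corr(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.standard_normal(100)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),   # informative feature
                     rng.standard_normal(100),
                     rng.standard_normal(100)])
print(mrmr_select(X, y, 2))    # feature 0 is picked first
```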
41
Iliopoulos AC, Papasotiriou I. Functional Complex Networks Based on Operational Architectonics: Application on Electroencephalography-Brain-computer Interface for Imagined Speech. Neuroscience 2021; 484:98-118. [PMID: 34871742 DOI: 10.1016/j.neuroscience.2021.11.045]
Abstract
A new method for analyzing brain complex dynamics and states is presented. This method constructs functional brain graphs and rests on two pillars: (a) the operational architectonics (OA) concept of brain and mind functioning and (b) network neuroscience. In particular, the algorithm utilizes the OA framework for a non-parametric segmentation of EEGs, which leads to the identification of change points, namely abrupt jumps in EEG amplitude called rapid transition processes (RTPs). Subsequently, the time coordinates of the RTPs are used to generate undirected weighted complex networks fulfilling a scale-free topology criterion, from which various network metrics of brain connectivity are estimated. These metrics form feature vectors that can be used in machine learning algorithms for classification and/or prediction. The method is tested on classification problems with an EEG-based BCI data set acquired from individuals during imagined pronunciation tasks of various words/vowels. The classification results, based on a naïve Bayes classifier, show that the overall accuracies were above chance level in all tested cases. The method was also compared with other state-of-the-art computational approaches commonly used for functional network generation, exhibiting competitive performance. The method can be useful to neuroscientists wishing to enhance their repository of brain research algorithms.
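The RTP-detection step can be caricatured as change-point detection on the amplitude's first difference (a toy illustration under strong simplifying assumptions, not the OA framework's actual non-parametric segmentation):

```python
import numpy as np

def detect_rtps(eeg, z_thresh=3.0):
    """Toy RTP detector: flag time points where the first difference of
    the EEG amplitude exceeds z_thresh standard deviations, i.e. abrupt
    amplitude jumps."""
    d = np.abs(np.diff(eeg))
    z = (d - d.mean()) / d.std()
    return np.flatnonzero(z > z_thresh) + 1    # +1: index of the post-jump sample

sig = np.zeros(200)
sig[120:] = 5.0             # one abrupt amplitude jump at sample 120
print(detect_rtps(sig))     # [120]
```

In the described method, the temporal coincidence of such change points across channels would then supply the edge weights of the functional brain graph.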
|
42
|
Martínez-Cagigal V, Thielen J, Santamaría-Vázquez E, Pérez-Velasco S, Desain P, Hornero R. Brain-computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review. J Neural Eng 2021; 18. [PMID: 34763331 DOI: 10.1088/1741-2552/ac38cf] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Accepted: 11/11/2021] [Indexed: 11/11/2022]
Abstract
Objective. Code-modulated visual evoked potentials (c-VEP) have been consolidated in recent years as robust control signals capable of providing non-invasive brain-computer interfaces (BCIs) for reliable, high-speed communication. Their usefulness for communication and control purposes has been reflected in an exponential increase of related articles in the last decade. The aim of this review is to provide a comprehensive overview of the literature to gain understanding of the existing research on c-VEP-based BCIs, from its inception (1984) until today (2021), as well as to identify promising future research lines. Approach. The literature review was conducted according to the Preferred Reporting Items for Systematic reviews and Meta-Analysis guidelines. After assessing the eligibility of journal manuscripts, conferences, book chapters and non-indexed documents, a total of 70 studies were included. A comprehensive analysis of the main characteristics and design choices of c-VEP-based BCIs is discussed, including stimulation paradigms, signal processing, modeling responses, applications, etc. Main results. The literature review showed that state-of-the-art c-VEP-based BCIs are able to provide an accurate control of the system with a large number of commands, high selection speeds and even without calibration. In general, a lack of validation in real setups was observed, especially regarding the validation with disabled populations. Future work should be focused toward developing self-paced c-VEP-based portable BCIs applied in real-world environments that could exploit the unique benefits of c-VEP paradigms. Some aspects such as asynchrony, unsupervised training, or code optimization still require further research and development. Significance. Despite the growing popularity of c-VEP-based BCIs, to the best of our knowledge, this is the first literature review on the topic.
In addition to providing a joint discussion of the advances in the field, some future lines of research are suggested to contribute to the development of reliable plug-and-play c-VEP-based BCIs.
|
43
|
Si X, Li S, Xiang S, Yu J, Ming D. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex. J Neural Eng 2021; 18. [PMID: 34507311 DOI: 10.1088/1741-2552/ac25d9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 09/10/2021] [Indexed: 11/12/2022]
Abstract
Objective. Decoding imagined speech from brain signals could provide a more natural, user-friendly way for developing the next generation of the brain-computer interface (BCI). With the advantages of being non-invasive and portable, offering relatively high spatial resolution, and being insensitive to motion artifacts, functional near-infrared spectroscopy (fNIRS) shows great potential for developing the non-invasive speech BCI. However, there is a lack of fNIRS evidence in uncovering the neural mechanism of imagined speech. Our goal is to investigate the specific brain regions and the corresponding cortico-cortical functional connectivity features during imagined speech with fNIRS. Approach. fNIRS signals were recorded from 13 subjects' bilateral motor and prefrontal cortex during overtly and covertly repeating words. Cortical activation was determined through the mean oxygen-hemoglobin concentration changes, and functional connectivity was calculated by Pearson's correlation coefficient. Main results. (a) The bilateral dorsal motor cortex was significantly activated during covert speech, whereas the bilateral ventral motor cortex was significantly activated during overt speech. (b) As a subregion of the motor cortex, the sensorimotor cortex (SMC) showed a dominant dorsal response to the covert speech condition and a dominant ventral response to the overt speech condition. (c) Broca's area was deactivated during covert speech but activated during overt speech. (d) Compared to overt speech, dorsal SMC (dSMC)-related functional connections were enhanced during covert speech. Significance. We provide fNIRS evidence for the involvement of dSMC in speech imagery. dSMC is the speech imagery network's key hub and is probably involved in sensorimotor information processing during covert speech. This study could inspire the BCI community to focus on the potential contribution of dSMC during speech imagery.
|
44
|
BCI-Based Control for Ankle Exoskeleton T-FLEX: Comparison of Visual and Haptic Stimuli with Stroke Survivors. SENSORS 2021; 21:s21196431. [PMID: 34640750 PMCID: PMC8512904 DOI: 10.3390/s21196431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 08/31/2021] [Accepted: 09/15/2021] [Indexed: 11/16/2022]
Abstract
Brain–computer interface (BCI) remains an emerging tool that seeks to improve the patient's interaction with therapeutic mechanisms and to generate neuroplasticity progressively through neuromotor abilities. Motor imagery (MI) analysis is the most used paradigm based on the motor cortex's electrical activity to detect movement intention. It has been shown that motor imagery mental practice with movement-associated stimuli may offer an effective strategy to facilitate motor recovery in brain injury patients. In this sense, this study presents a BCI associated with visual and haptic stimuli to facilitate MI generation and to control the T-FLEX ankle exoskeleton. To achieve this, five post-stroke patients (55–63 years) were subjected to three different strategies using T-FLEX: stationary therapy (ST) without motor imagination, motor imagination with visual stimulation (MIV), and motor imagination with visual-haptic inducement (MIVH). The quantitative characterization of both BCI stimuli strategies was made through the motor imagery accuracy rate, electroencephalographic (EEG) analysis during the MI active periods, statistical analysis, and the patients' subjective perception. The preliminary results demonstrated the viability of the BCI-controlled ankle exoskeleton system with the beta rebound, in terms of the patients' performance during MI active periods and satisfaction outcomes. Accuracy differences were detected when employing the haptic stimulus, with an average of 68% compared with 50.7% for the visual stimulus alone. However, the power spectral density (PSD) did not present changes in prominent activation of the MI band but presented significant variations in terms of laterality. In this way, visual and haptic stimuli improved the subjects' MI accuracy but did not generate differential brain activity over the affected hemisphere.
Hence, long-term sessions with a more extensive sample and a more robust algorithm should be carried out to evaluate the impact of the proposed system on neuronal and motor evolution after stroke.
|
45
|
Haddix C, Al-Bakri AF, Sunderam S. Prediction of isometric handgrip force from graded event-related desynchronization of the sensorimotor rhythm. J Neural Eng 2021; 18. [PMID: 34479215 DOI: 10.1088/1741-2552/ac23c0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 09/03/2021] [Indexed: 11/12/2022]
Abstract
Objective. Brain-computer interfaces (BCIs) show promise as a direct line of communication between the brain and the outside world that could benefit those with impaired motor function. But the commands available for BCI operation are often limited by the ability of the decoder to differentiate between the many distinct motor or cognitive tasks that can be visualized or attempted. Simple binary command signals (e.g. right hand at rest versus movement) are therefore used due to their ability to produce large observable differences in neural recordings. At the same time, frequent command switching can impose greater demands on the subject's focus and takes time to learn. Here, we attempt to decode the degree of effort in a specific movement task to produce a graded and more flexible command signal. Approach. Fourteen healthy human subjects (nine male, five female) responded to visual cues by squeezing a hand dynamometer to different levels of predetermined force, guided by continuous visual feedback, while the electroencephalogram (EEG) and grip force were monitored. Movement-related EEG features were extracted and modeled to predict exerted force. Main results. We found that event-related desynchronization (ERD) of the 8-30 Hz mu-beta sensorimotor rhythm of the EEG is separable for different degrees of motor effort. Upon four-fold cross-validation, linear classifiers were found to predict grip force from an ERD vector with mean accuracies across subjects of 53% and 55% for the dominant and non-dominant hand, respectively. ERD amplitude increased with target force but appeared to pass through a trough that hinted at non-monotonic behavior. Significance. Our results suggest that modeling and interactive feedback based on the intended level of motor effort is feasible. The observed ERD trends suggest that different mechanisms may govern intermediate versus low and high degrees of motor effort. This may have utility in rehabilitative protocols for motor impairments.
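The ERD feature at the heart of this study has a standard definition: the percent drop in band power from a rest baseline to the task period. A minimal sketch, assuming Welch spectral estimation and a synthetic 10 Hz rhythm that attenuates during "movement"; the sampling rate, band edges, and signals are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Mean Welch PSD within a frequency band."""
    f, p = welch(x, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return p[mask].mean()

def erd_percent(ref, task, fs=250, band=(8, 30)):
    """ERD as percent power decrease from rest to task (positive = desync)."""
    p_ref = band_power(ref, fs, band)
    p_task = band_power(task, fs, band)
    return 100.0 * (p_ref - p_task) / p_ref

# Synthetic example: a mu-band rhythm that weakens during the task.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
move = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
erd = erd_percent(rest, move, fs)
print(round(erd, 1))
```

In the study itself, vectors of such ERD values (per channel and sub-band) are what the linear classifiers map to grip-force level.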
|
46
|
Kumar A, Pirogova E, Mahmoud SS, Fang Q. Classification of error-related potentials evoked during stroke rehabilitation training. J Neural Eng 2021; 18. [PMID: 34384052 DOI: 10.1088/1741-2552/ac1d32] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 08/12/2021] [Indexed: 01/22/2023]
Abstract
Objective. Error-related potentials (ErrPs) are elicited in the human brain following the perception of an error. Recently, ErrPs have been observed in a novel task situation, i.e. when stroke patients perform upper-limb rehabilitation exercises. These ErrPs can be used to develop assist-as-needed (AAN) robotic stroke rehabilitation systems. However, to date, there is no reported research on assessing the feasibility of using the ErrPs to implement the AAN approach. Hence, in this study, we evaluated and compared the single-trial classification of novel ErrPs using various classical machine learning and deep learning approaches. Approach. Electroencephalogram data of 13 stroke patients recorded while performing an upper-limb physical rehabilitation exercise were used. Two classification approaches, one combining the xDAWN spatial filtering and support vector machines, and the other using a convolutional neural network-based double transfer learning, were utilized. Main results. Results showed that the ErrPs could be detected with a mean area under the receiver operating characteristics curve of 0.838, and a mean accuracy of 0.842, 0.257 above the chance level (p < 0.05), for a within-subject classification. The results indicated the feasibility of using ErrP signals in real-time AAN robot therapy with evidence from the conducted latency analysis, cross-subject classification, and three-class asynchronous classification. Significance. The findings presented support our proposed approach of using ErrPs as a measure to trigger and/or modulate as required the robotic assistance in a real-time human-in-the-loop robotic stroke rehabilitation system.
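The first pipeline the abstract mentions (spatial filtering followed by an SVM) can be approximated in a few lines. This is a hedged sketch, not the authors' code: the xDAWN filtering step is omitted, and the data are synthetic features in which "error" trials simply carry a stronger early component.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic single-trial feature vectors (xDAWN filtering omitted).
rng = np.random.default_rng(42)
n_trials, n_feats = 200, 32
X = rng.standard_normal((n_trials, n_feats))
y = rng.integers(0, 2, n_trials)
X[y == 1, :8] += 1.0  # pretend ErrP trials have a stronger early deflection

# Standardize, then classify; cross-validate as a within-subject estimate.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(round(acc, 2))
```

Comparing this accuracy against the empirical chance level, as the paper does, is what licenses the claim that single-trial ErrP detection is feasible.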
|
47
|
P300 Brain-Computer Interface-Based Drone Control in Virtual and Augmented Reality. SENSORS 2021; 21:s21175765. [PMID: 34502655 PMCID: PMC8434009 DOI: 10.3390/s21175765] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 08/19/2021] [Accepted: 08/24/2021] [Indexed: 01/01/2023]
Abstract
Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) in brain–computer interface (BCI) studies. However, there is a lack of studies that incorporate both AR and VR to compare performance in the two environments. Therefore, it is necessary to develop a BCI application that can be used in both VR and AR to allow BCI performance to be compared in the two environments. In this study, we developed an open-source drone control application using a P300-based BCI, which can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in the two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
|
48
|
Li F, Chao W, Li Y, Fu B, Ji Y, Wu H, Shi G. Decoding imagined speech from EEG signals using hybrid-scale spatial-temporal dilated convolution network. J Neural Eng 2021; 18. [PMID: 34256357 DOI: 10.1088/1741-2552/ac13c0] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 07/13/2021] [Indexed: 11/12/2022]
Abstract
Objective. Directly decoding imagined speech from electroencephalogram (EEG) signals has attracted much interest in brain-computer interface applications, because it provides a natural and intuitive communication method for locked-in patients. Several methods have been applied to imagined speech decoding, but how to construct spatial-temporal dependencies and capture long-range contextual cues in EEG signals to better decode imagined speech should be considered. Approach. In this study, we propose a novel model called hybrid-scale spatial-temporal dilated convolution network (HS-STDCN) for EEG-based imagined speech recognition. HS-STDCN integrates feature learning from temporal and spatial information into a unified end-to-end model. To characterize the temporal dependencies of the EEG sequences, we adopted a hybrid-scale temporal convolution layer to capture temporal information at multiple levels. A depthwise spatial convolution layer was then designed to construct intrinsic spatial relationships of EEG electrodes, which can produce a spatial-temporal representation of the input EEG data. Based on the spatial-temporal representation, dilated convolution layers were further employed to learn long-range discriminative features for the final classification. Main results. To evaluate the proposed method, we compared the HS-STDCN with other existing methods on our collected dataset. The HS-STDCN achieved an average classification accuracy of 54.31% for decoding eight imagined words, which is significantly better than other methods at a significance level of 0.05. Significance. The proposed HS-STDCN model provided an effective approach to make use of both the temporal and spatial dependencies of the input EEG signals for imagined speech recognition.
We also visualized the word semantic differences to analyze the impact of word semantics on imagined speech recognition, investigated the important regions in the decoding process, and explored the use of fewer electrodes to achieve comparable performance.
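The dilated convolutions that give HS-STDCN its long-range reach have a simple mechanism: the kernel taps are spread apart by the dilation factor, so the receptive field grows without adding weights. A toy numpy illustration (not the paper's implementation, which uses trained multi-layer networks):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D cross-correlation with a dilated kernel.

    A kernel of length k spans (k - 1) * dilation + 1 input samples,
    which is how stacked dilated layers capture long-range context.
    """
    k = len(w)
    span = (k - 1) * dilation + 1
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)            # toy "EEG" sequence
res = dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
print(res)                               # each output sums x[i] and x[i+2]
```

With dilation 2, a 2-tap kernel pairs samples three positions apart; doubling the dilation at each layer is the usual way such networks cover long EEG windows cheaply.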
|
49
|
Zhou X, Xu M, Xiao X, Wang Y, Jung TP, Ming D. Detection of fixation points using a small visual landmark for brain-computer interfaces. J Neural Eng 2021; 18. [PMID: 34130268 DOI: 10.1088/1741-2552/ac0b51] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 06/15/2021] [Indexed: 11/12/2022]
Abstract
Objective. The speed of visual brain-computer interfaces (v-BCIs) has been greatly improved in recent years. However, the traditional v-BCI paradigms require users to directly gaze at the intensive flickering items, which would cause severe problems such as visual fatigue and excessive visual resource consumption in practical applications. Therefore, it is imperative to develop a user-friendly v-BCI. Approach. According to the retina-cortical relationship, this study developed a novel BCI paradigm to detect the fixation point of eyes using a small visual stimulus that subtended only 0.6° in visual angle and was out of the central visual field. Specifically, the visual stimulus was treated as a landmark to judge the eccentricity and polar angle of the fixation point. Sixteen different fixation points were selected around the visual landmark, i.e. different combinations of two eccentricities (2° and 4°) and eight polar angles (0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 7π/4). Twelve subjects participated in this study, and they were asked to gaze at one out of the 16 points for each trial. A multi-class discriminative canonical pattern matching (Multi-DCPM) algorithm was proposed to decode the user's fixation point. Main results. We found the visual stimulation landmark elicited different spatial event-related potential patterns for different fixation points. Multi-DCPM could achieve an average accuracy of 66.2% with a standard deviation of 15.8% for the classification of the sixteen fixation points, which was significantly higher than traditional algorithms (p ⩽ 0.001). Experimental results of this study demonstrate the feasibility of using a small visual stimulus as a landmark to track the relative position of the fixation point. Significance. The proposed new paradigm provides a potential approach to alleviate the problem of irritating stimuli in v-BCIs, which can broaden the applications of BCIs.
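The 16-point layout above is just polar coordinates around the landmark: two eccentricities crossed with eight polar angles. A small worked example (function name and degree units are illustrative, not from the paper):

```python
import math

def fixation_offset(ecc_deg, polar_rad):
    """Fixation point relative to the landmark, as (x, y) in degrees of
    visual angle, from eccentricity and polar angle."""
    return (ecc_deg * math.cos(polar_rad), ecc_deg * math.sin(polar_rad))

# Two eccentricities (2 deg, 4 deg) x eight polar angles (multiples of pi/4)
points = [fixation_offset(e, k * math.pi / 4) for e in (2, 4) for k in range(8)]
print(len(points))
```

Each of these 16 offsets places the landmark at a different retinal location, which is what produces the distinct spatial ERP patterns the Multi-DCPM decoder separates.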
|
50
|
Chen Y, Yang C, Ye X, Chen X, Wang Y, Gao X. Implementing a calibration-free SSVEP-based BCI system with 160 targets. J Neural Eng 2021; 18. [PMID: 34134091 DOI: 10.1088/1741-2552/ac0bfa] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 06/16/2021] [Indexed: 11/11/2022]
Abstract
Objective. Steady-state visual evoked potential (SSVEP) is an essential paradigm of electroencephalogram-based brain-computer interface (BCI). Previous studies in the BCI research field mostly focused on enhancing classification accuracy and reducing stimulus duration. This study, however, concentrated on increasing the number of available targets in BCI systems without calibration. Approach. Motivated by the idea of multiple frequency sequential coding, we developed a calibration-free SSVEP-BCI system implementing 160 targets by four continuous sinusoidal stimuli that lasted four seconds in total. Taking advantage of the benchmark dataset of SSVEP-BCI, this study optimized an arrangement of stimulus sequences, maximizing the response distance between different stimuli. We proposed an effective classification algorithm based on filter bank canonical correlation analysis. To evaluate the performance of this system, we conducted offline and online experiments using cue-guided selection tasks. Eight subjects participated in the offline experiments, and 12 subjects participated in the online experiments with real-time feedback. Main results. Offline experiments indicated the feasibility of the stimulation selection and detection algorithms. Furthermore, the online system achieved an average accuracy of 87.16 ± 11.46% and an information transfer rate of 78.84 ± 15.59 bits min⁻¹. Specifically, seven of 12 subjects accomplished online experiments with accuracy higher than 90%. This study proposed a complete solution for applying numerous targets to SSVEP-based BCIs. Results of the experiments confirmed the utility and efficiency of the system. Significance. This study is the first to provide a calibration-free SSVEP-BCI speller system that enables more than 100 commands. This system could significantly expand the application scenarios of SSVEP-based BCI. Meanwhile, the design criterion can hopefully enhance the overall performance of the BCI system.
The demo video can be found in the supplementary material available online at stacks.iop.org/JNE/18/046094/mmedia.
|