51. Tao X, Yi W, Wang K, He F, Qi H. Inter-stimulus phase coherence in steady-state somatosensory evoked potentials and its application in improving the performance of single-channel MI-BCI. J Neural Eng 2021; 18. PMID: 34077914. DOI: 10.1088/1741-2552/ac0767.
Abstract
Objective. With the development of clinical applications of motor imagery-based brain-computer interfaces (MI-BCIs), a single-channel MI-BCI system that can be easily assembled is an attractive goal. However, due to the low quality of the spectral power features in the traditional MI-BCI paradigm, the recognition performance of current single-channel systems is far lower than that of multi-channel systems, impeding their use in clinical applications. Approach. In this study, the subjects' right and left hands were stimulated simultaneously at different frequencies to induce steady-state somatosensory evoked potentials (SSSEP). Subjects then performed motor imagery (MI) tasks. A new electroencephalography (EEG) index, inter-stimulus phase coherence (ISPC), was built to measure the phase desynchronization of SSSEP caused by MI. ISPC was then introduced as a feature for left-hand versus right-hand MI recognition. Main results. ISPC analysis found that left-hand MI can cause a significant decrease in phase synchronization of the contralateral sensorimotor SSSEP, while right-hand MI has little effect on it, and vice versa. Combining ISPC features with traditional spectral power features, single-channel left-hand versus right-hand MI recognition accuracy reached 81.0%, much higher than that observed with traditional MI paradigms (about 60%). Significance. This work shows that the hybrid MI-SSSEP paradigm can provide more sensitive EEG features for decoding motor intentions, demonstrating its potential for clinical applications.
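The ISPC index above quantifies how consistently SSSEP phase aligns across stimulus repetitions. A standard way to compute such inter-trial phase coherence is the phase-locking value (the magnitude of the mean unit phasor over trials); the sketch below is illustrative, not the authors' exact ISPC formulation, and the data and thresholds are synthetic assumptions.

```python
import numpy as np

def phase_coherence(phases: np.ndarray) -> float:
    """Phase-locking value: magnitude of the mean unit phasor, in [0, 1].

    1.0 means all trials share the same phase; values near 0 mean the
    phases are desynchronized (as MI does to the contralateral SSSEP).
    """
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
locked = rng.normal(0.0, 0.1, 200)           # tightly clustered phases
scattered = rng.uniform(-np.pi, np.pi, 200)  # desynchronized phases
print(round(phase_coherence(locked), 3), round(phase_coherence(scattered), 3))
```

The phase-locked trials score near 1, the desynchronized ones near 0, which is the contrast the ISPC feature exploits for classification.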
52. De Venuto D, Mezzina G. A Single-Trial P300 Detector Based on Symbolized EEG and Autoencoded-(1D)CNN to Improve ITR Performance in BCIs. Sensors 2021; 21:3961. PMID: 34201381. PMCID: PMC8226883. DOI: 10.3390/s21123961.
Abstract
In this paper, we propose a single-trial P300 detector that maximizes the information transfer rate (ITR) of the brain-computer interface (BCI) while maintaining high recognition accuracy. The architecture, designed to improve the portability of the algorithm, was fully implemented on a dedicated embedded platform. The proposed P300 detector combines a novel pre-processing stage based on EEG signal symbolization with an autoencoded convolutional neural network (CNN). The system acquires data from only six EEG channels and treats them with a low-complexity preprocessing stage comprising baseline correction, winsorizing and symbolization. The symbolized EEG signals are then sent to an autoencoder model to emphasize the temporal features that are meaningful for the following CNN stage. The latter is a seven-layer CNN, including a 1D convolutional layer and three dense ones. Two datasets were analyzed to assess the algorithm's performance: one from a P300 speller application in the BCI competition III data and one self-collected during a fluid prototype car driving experiment. Experimental results on the P300 speller dataset showed that the proposed method achieves an average ITR (on two subjects) of 16.83 bits/min, outperforming the state of the art for this parameter by +5.75 bits/min. Jointly with the speed increase, the recognition performance, measured as the harmonic mean of precision and recall (F1-score), reached 51.78 ± 6.24%. The same method applied to the prototype car driving led to an ITR of ~33 bits/min with an F1-score of 70.00% in a single-trial P300 detection context, allowing fluid use of the BCI for driving purposes. The network was validated on an STM32L4 microcontroller target for complexity and implementation assessment. The implementation occupied 5.57% of the available ROM and ~3% of the available RAM, and required less than 3.5 ms to provide a classification outcome.
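ITR values like the 16.83 bits/min reported above are conventionally computed with the Wolpaw formula from the number of classes, the accuracy, and the selection rate. A minimal sketch; the 36-target/90%/4-selections-per-minute numbers below are illustrative, not taken from the paper:

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate (bits/min) via the Wolpaw formula."""
    p = accuracy
    if p <= 1.0 / n_classes:   # at or below chance, the formula yields <= 0 bits
        return 0.0
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:                # avoid log2(0) at perfect accuracy
        bits += (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits * selections_per_min

# e.g. a 36-target speller at 90% accuracy making 4 selections per minute
print(round(wolpaw_itr(36, 0.90, 4.0), 2))
```

Note how ITR rewards both accuracy and speed: a single-trial detector raises selections per minute, which is why modest accuracy can still yield a competitive ITR.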
53. Velasco-Álvarez F, Fernández-Rodríguez Á, Vizcaíno-Martín FJ, Díaz-Estrella A, Ron-Angevin R. Brain-Computer Interface (BCI) Control of a Virtual Assistant in a Smartphone to Manage Messaging Applications. Sensors 2021; 21:3716. PMID: 34073602. PMCID: PMC8199460. DOI: 10.3390/s21113716.
Abstract
Brain-computer interfaces (BCIs) are a type of assistive technology that uses the brain signals of users to establish a communication and control channel between them and an external device. BCI systems may be a suitable tool to restore communication in severely motor-disabled patients, as BCIs do not rely on muscular control. The loss of communication is one of the most negative consequences reported by such patients. This paper presents a BCI system focused on the control of four mainstream messaging applications running on a smartphone: WhatsApp, Telegram, e-mail and short message service (SMS). Control of the BCI is achieved through the well-known visual P300 row-column paradigm (RCP), allowing the user to select control commands as well as spell characters. To control the smartphone, the system sends synthesized voice commands that are interpreted by a virtual assistant running on the smartphone. Four tasks related to the four messaging services were tested with 15 healthy volunteers, most of whom were able to accomplish the tasks, which included sending free-text e-mails to an address proposed by the subjects themselves. The online performance results, as well as the results of subjective questionnaires, support the viability of the proposed system.
54. Detecting Attention Levels in ADHD Children with a Video Game and the Measurement of Brain Activity with a Single-Channel BCI Headset. Sensors 2021; 21:3221. PMID: 34066492. PMCID: PMC8124980. DOI: 10.3390/s21093221.
Abstract
Attentional biomarkers in attention deficit hyperactivity disorder (ADHD) are difficult to detect using behavioural testing alone. We explored whether attention measured by a low-cost EEG system might help detect a possible disorder at its earliest stages. The GokEvolution application was designed to train attention and to provide a measure for identifying attentional problems in children early on. Attention changes registered with the NeuroSky MindWave, in combination with the CARAS-R psychological test, were used to characterise the attentional profiles of 52 non-ADHD and 23 ADHD children aged 7 to 12 years old. The analyses revealed that GokEvolution was valuable in measuring attention through its use of EEG-BCI technology. The ADHD group showed lower levels of attention and more variability in brain attentional responses than the control group. The application was able to map the low attention profiles of the ADHD group relative to the control group and could distinguish between participants who completed the task and those who did not. Therefore, this system could potentially be used in clinical settings as a screening tool for the early detection of attentional problems before they develop further.
55. Rybář M, Poli R, Daly I. Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. J Neural Eng 2021; 18:046035. PMID: 33780916. DOI: 10.1088/1741-2552/abf2e5.
Abstract
Objective. Semantic decoding refers to the identification of semantic concepts from recordings of an individual's brain activity. It has been previously reported in functional magnetic resonance imaging and electroencephalography. We investigate whether semantic decoding is possible with functional near-infrared spectroscopy (fNIRS). Specifically, we attempt to differentiate between the semantic categories of animals and tools. We also identify suitable mental tasks for potential brain-computer interface (BCI) applications. Approach. We explore the feasibility of a silent naming task, for the first time in fNIRS, and propose three novel intuitive mental tasks based on imagining concepts using three sensory modalities: visual, auditory, and tactile. Participants are asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object. A general linear model is used to extract hemodynamic responses that are then classified via logistic regression in a univariate and multivariate manner. Main results. We successfully classify all tasks with mean accuracies of 76.2% for the silent naming task, 80.9% for the visual imagery task, 72.8% for the auditory imagery task, and 70.4% for the tactile imagery task. Furthermore, we show that consistent neural representations of semantic categories exist by applying classifiers across tasks. Significance. These findings show that semantic decoding is possible in fNIRS. The study is the first step toward the use of semantic decoding for intuitive BCI applications for communication.
56. Zhao X, Wang Z, Zhang M, Hu H. A comfortable steady state visual evoked potential stimulation paradigm using peripheral vision. J Neural Eng 2021; 18. PMID: 33784640. DOI: 10.1088/1741-2552/abf397.
Abstract
Objective. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) can cause considerable visual discomfort when used for a long time. As an alternative scheme to reduce users' visual fatigue, this study proposes a new stimulation paradigm (termed steady-state peripheral visual evoked potential, abbreviated as SSPVEP) that makes full use of peripheral vision. The resulting electroencephalography (EEG) signals are classifiable, which means the proposed stimulation paradigm can be used in a BCI system with the aid of a hybrid signal-processing approach. Approach. Under the SSPVEP stimulation paradigm, 20 targets are encoded at 20 frequencies, and further targets are set between pairs of targets using flicker-stimulus coding. To ensure the classification accuracy of SSPVEP signal detection under the proposed paradigm, two optimization schemes are proposed for the detection stage of the conventional ensemble task-related component analysis (ETRCA) algorithm. The first scheme uses a nonlinear correlation coefficient in the detection stage, for the first time, to improve the classification accuracy of the system. The second scheme uses γ correction to enhance the time-domain features of the SSPVEP signals, and uses the Manhattan distance for the final detection. Main results. According to the response waveforms of the EEG signals generated under the SSPVEP stimulation paradigm and the results of a questionnaire on users' comfort with the two stimulation paradigms (the SSPVEP paradigm and the conventional SSVEP paradigm), the proposed stimulation paradigm causes less visual fatigue. The comparison results indicate that the proposed detection methods (ETRCA + γ correction + Manhattan distance; ETRCA + Spearman correlation) greatly improve classification accuracy compared with the individual-template canonical correlation analysis method and the conventional ETRCA method based on Pearson correlation. Significance. The SSPVEP stimulation paradigm reduces users' visual fatigue by using peripheral vision, which provides a new design idea for SSVEP stimulation paradigms aimed at visual comfort.
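Template-matching detectors like those discussed above (individual-template CCA, ETRCA with various correlation measures) score each candidate frequency by correlating the multichannel EEG with sinusoidal reference signals and picking the best-scoring frequency. A minimal canonical-correlation sketch on synthetic data; the sampling rate, frequencies, and channel count are illustrative assumptions, not this paper's settings:

```python
import numpy as np

def max_canonical_corr(X: np.ndarray, Y: np.ndarray) -> float:
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def reference_signals(freq: float, fs: float, n_samples: int, n_harmonics: int = 2):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(1)
# four noisy channels, all driven by an 8 Hz evoked component
eeg = np.column_stack([np.sin(2 * np.pi * 8.0 * t) + 0.5 * rng.standard_normal(n)
                       for _ in range(4)])
scores = {f: max_canonical_corr(eeg, reference_signals(f, fs, n))
          for f in (7.0, 8.0, 9.0)}
print(max(scores, key=scores.get))  # the 8 Hz reference should score highest
```

The ETRCA and γ-correction refinements in the paper replace this correlation step with more discriminative statistics, but the decide-by-best-template structure is the same.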
57. Li M, He D, Li C, Qi S. Brain-Computer Interface Speller Based on Steady-State Visual Evoked Potential: A Review Focusing on the Stimulus Paradigm and Performance. Brain Sci 2021; 11:450. PMID: 33916189. PMCID: PMC8065759. DOI: 10.3390/brainsci11040450.
Abstract
The steady-state visual evoked potential (SSVEP), measured by electroencephalography (EEG), offers a high information transfer rate and signal-to-noise ratio, and has been used to construct brain-computer interface (BCI) spellers. In BCI spellers, the alphanumeric targets are assigned different visual stimuli, and fixation on each target generates a unique SSVEP. Matching the SSVEP to the stimulus allows users to select target letters and numbers. Many BCI spellers that harness the SSVEP have been proposed over the past two decades. Various visual stimulus paradigms, including the procedure of target selection, the layout of targets, the stimulus encoding, and the combination with other triggering methods, are considered to significantly influence BCI speller performance. This paper reviews these stimulus paradigms and analyzes the factors influencing their performance. The fundamentals of BCI spellers are first briefly described. SSVEP-based BCI spellers, where only the SSVEP is used, are classified by stimulus paradigm and described in chronological order. Hybrid spellers that involve the SSVEP are presented in parallel. Factors influencing the performance and visual fatigue of BCI spellers are discussed. Finally, prevailing challenges and prospective research directions are outlined to promote the development of BCI spellers.
58. Singh A, Hussain AA, Lal S, Guesgen HW. A Comprehensive Review on Critical Issues and Possible Solutions of Motor Imagery Based Electroencephalography Brain-Computer Interface. Sensors 2021; 21:2173. PMID: 33804611. PMCID: PMC8003721. DOI: 10.3390/s21062173.
Abstract
Motor imagery (MI) based brain-computer interfaces (BCIs) aim to provide a means of communication through the utilization of neural activity generated by the kinesthetic imagination of limb movements. Every year, a significant number of publications report new improvements, challenges, and breakthroughs in MI-BCI. This paper provides a comprehensive review of electroencephalogram (EEG) based MI-BCI systems. It describes the current state of the art at each stage of the MI-BCI pipeline (data acquisition, MI training, preprocessing, feature extraction, channel and feature selection, and classification). Although MI-BCI research has been ongoing for many years, this technology is mostly confined to controlled lab environments. We discuss recent developments and critical algorithmic issues in MI-based BCI for commercial deployment.
59. Li M, Li F, Pan J, Zhang D, Zhao S, Li J, Wang F. The MindGomoku: An Online P300 BCI Game Based on Bayesian Deep Learning. Sensors 2021; 21:1613. PMID: 33668950. PMCID: PMC7956207. DOI: 10.3390/s21051613.
Abstract
In addition to helping develop products that aid the disabled, brain-computer interface (BCI) technology can also become a modality of entertainment for all people. However, most BCI games cannot be widely promoted due to poor control performance or because they easily cause fatigue. In this paper, we propose a P300 brain-computer interface game (MindGomoku) to explore a feasible and natural way to play games using electroencephalogram (EEG) signals in a practical environment. The novelty of this research lies in integrating the characteristics of the game rules and the BCI system when designing BCI games and paradigms. Moreover, a simplified Bayesian convolutional neural network (SBCNN) algorithm is introduced to achieve high accuracy on limited training samples. To prove the reliability of the proposed algorithm and system control, 10 subjects participated in two online control experiments. The experimental results showed that all subjects successfully completed the game control, with an average accuracy of 90.7%, and played MindGomoku for an average of more than 11 min. These findings demonstrate the stability and effectiveness of the proposed system. This BCI system not only provides a form of entertainment for users, particularly the disabled, but also opens up more possibilities for games.
60. Xie J, Cao G, Xu G, Fang P, Cui G, Xiao Y, Li G, Li M, Xue T, Zhang Y, Han X. Auditory Noise Leads to Increased Visual Brain-Computer Interface Performance: A Cross-Modal Study. Front Neurosci 2021; 14:590963. PMID: 33414701. PMCID: PMC7783197. DOI: 10.3389/fnins.2020.590963.
Abstract
Noise has been proven to play a beneficial role in non-linear systems, including the human brain, based on stochastic resonance (SR) theory. Several studies have examined single-modal SR, and the cross-modal SR phenomenon has been confirmed in different human sensory systems. In our study, a cross-modal SR-enhanced brain-computer interface (BCI) is proposed by applying auditory noise to visual stimuli. Fast Fourier transform and canonical correlation analysis methods were used to evaluate the influence of noise; the results indicated that a moderate amount of auditory noise could enhance the periodic components of visual responses. The directed transfer function was applied to investigate functional connectivity patterns, and the flow gain value was used to measure the degree of activation of specific brain regions in the information transmission process. The flow gain maps showed that a moderate intensity of auditory noise activated the brain area to a greater extent. Further analysis with the weighted phase-lag index (wPLI) revealed that phase synchronization between the visual and auditory regions was significantly enhanced under auditory noise. Our study confirms the existence of cross-modal SR between the visual and auditory regions and achieves higher recognition accuracy with a shorter time window. Such findings can be used to improve the performance of visual BCIs to a certain extent.
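The wPLI measure used above discounts zero-lag (volume-conduction) coupling by weighting the imaginary part of the cross-spectrum: across trials, wPLI = |E[Im S_xy]| / E[|Im S_xy|] per frequency bin. A minimal sketch on synthetic phase-lagged trials; the frequency, lag, and noise level are illustrative assumptions, not values from the study:

```python
import numpy as np

def wpli(x_trials: np.ndarray, y_trials: np.ndarray) -> np.ndarray:
    """Weighted phase-lag index per frequency bin, estimated across trials."""
    Sx = np.fft.rfft(x_trials, axis=1)
    Sy = np.fft.rfft(y_trials, axis=1)
    im = np.imag(Sx * np.conj(Sy))      # imaginary part of the cross-spectrum
    return np.abs(im.mean(axis=0)) / (np.abs(im).mean(axis=0) + 1e-12)

fs, n, n_trials = 100, 200, 50
t = np.arange(n) / fs
rng = np.random.default_rng(2)
phases = rng.uniform(-np.pi, np.pi, n_trials)   # random phase per trial
x = np.sin(2 * np.pi * 10 * t + phases[:, None]) \
    + 0.1 * rng.standard_normal((n_trials, n))
y = np.sin(2 * np.pi * 10 * t + phases[:, None] + np.pi / 4) \
    + 0.1 * rng.standard_normal((n_trials, n))  # consistent 45-degree lag
w = wpli(x, y)
bin_10hz = int(10 * n / fs)             # rfft bin containing the 10 Hz component
print(round(float(w[bin_10hz]), 3))
```

Because the lag between the two signals is consistent across trials, the wPLI at the driven frequency approaches 1 even though each trial's absolute phase is random.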
61. Fernández-Rodríguez Á, Ron-Angevin R, Sanz-Arigita EJ, Parize A, Esquirol J, Perrier A, Laur S, André JM, Lespinet-Najib V, Garcia L. Effect of Distracting Background Speech in an Auditory Brain-Computer Interface. Brain Sci 2021; 11:39. PMID: 33401410. PMCID: PMC7823829. DOI: 10.3390/brainsci11010039.
Abstract
Studies so far have analyzed the effect of distractor stimuli in different types of brain-computer interfaces (BCIs). However, the effect of background speech has not been studied in an auditory event-related potential BCI (ERP-BCI), a convenient option when the visual pathway cannot be used. Thus, the aim of the present work is to examine the impact of background speech on selection performance and user workload in auditory BCI systems. Eleven participants tested three conditions: (i) an auditory BCI control condition, (ii) the auditory BCI with background speech to ignore (non-attentional condition), and (iii) the auditory BCI while the user pays attention to the background speech (attentional condition). The results demonstrated that, despite no significant differences in performance, shared attention to the auditory BCI and the background speech required a higher cognitive workload. In addition, the P300 amplitudes for target stimuli in the non-attentional condition were significantly higher than those in the attentional condition for several channels. The non-attentional condition was the only condition that showed significant differences in P300 amplitude between target and non-target stimuli. The present study indicates that background speech, especially when attended to, is an important source of interference that should be avoided while using an auditory BCI.
62. A BCI System Based on Motor Imagery for Assisting People with Motor Deficiencies in the Limbs. Brain Sci 2020; 10:864. PMID: 33212777. PMCID: PMC7697603. DOI: 10.3390/brainsci10110864.
Abstract
Motor deficiencies constitute a significant problem affecting millions of people worldwide. Such people suffer from impaired daily functioning, which may disrupt their daily routines and deteriorate their quality of life (QoL). Thus, there is an essential need for assistive systems to help those people perform their daily activities and enhance their overall QoL. This study proposes a novel brain-computer interface (BCI) system for assisting people with limb motor disabilities in performing their daily life activities by using their brain signals to control assistive devices. The extraction of useful features is vital for an efficient BCI system. Therefore, the proposed system consists of a hybrid feature set that feeds three machine-learning (ML) classifiers to classify motor imagery (MI) tasks. The resulting hybrid feature selection (FS) system yields a practical, real-time, and efficient BCI with low computational cost. We investigate different combinations of channels to select the combination that has the highest impact on performance. The results indicate that the highest accuracies achieved using a support vector machine (SVM) classifier are 93.46% and 86.0% for the BCI competition III-IVa dataset and the autocalibration and recurrent adaptation dataset, respectively; these datasets were used to test the performance of the proposed BCI. We also verify the effectiveness of the proposed BCI by comparing its performance with that of recent studies, showing that the proposed system is accurate and efficient. Future work can apply the proposed system to individuals with limb motor disabilities to assist them and test its capability to improve their QoL, and can examine the system's performance in controlling assistive devices such as wheelchairs or artificial limbs.
63. Al-Nafjan A, Alharthi K, Kurdi H. Lightweight Building of an Electroencephalogram-Based Emotion Detection System. Brain Sci 2020; 10:781. PMID: 33114646. PMCID: PMC7693518. DOI: 10.3390/brainsci10110781.
Abstract
Brain-computer interface (BCI) technology provides a direct interface between the brain and an external device. BCIs have facilitated the monitoring of conscious brain electrical activity via electroencephalogram (EEG) signals and the detection of human emotion. Recently, great progress has been made in the development of novel paradigms for EEG-based emotion detection. These studies have also attempted to apply BCI research findings in varied contexts. Interestingly, advances in BCI technologies have increased the interest of scientists because such technologies' practical applications in human-machine relationships seem promising. This emphasizes the need for a lightweight process for building an EEG-based emotion detection system, one that uses a smaller EEG dataset and involves no feature extraction methods. In this study, we investigated the feasibility of using a spiking neural network to build an emotion detection system from a smaller version of the DEAP dataset, with no feature extraction methods involved, while maintaining decent accuracy. The results showed that, using a NeuCube-based spiking neural network, we could detect the valence emotion level using only 60 EEG samples with 84.62% accuracy, which is comparable to that of previous studies.
64. He Z, Li Z, Yang F, Wang L, Li J, Zhou C, Pan J. Advances in Multimodal Emotion Recognition Based on Brain-Computer Interfaces. Brain Sci 2020; 10:687. PMID: 33003397. PMCID: PMC7600724. DOI: 10.3390/brainsci10100687.
Abstract
With the continuous development of portable noninvasive human sensor technologies such as brain-computer interfaces (BCI), multimodal emotion recognition has attracted increasing attention in the area of affective computing. This paper primarily discusses the progress of research into multimodal emotion recognition based on BCI and reviews three types of multimodal affective BCI (aBCI): aBCI based on a combination of behavior and brain signals, aBCI based on various hybrid neurophysiology modalities and aBCI based on heterogeneous sensory stimuli. For each type of aBCI, we further review several representative multimodal aBCI systems, including their design principles, paradigms, algorithms, experimental results and corresponding advantages. Finally, we identify several important issues and research directions for multimodal emotion recognition based on BCI.
65. Stawicki P, Volosyak I. Comparison of Modern Highly Interactive Flicker-Free Steady State Motion Visual Evoked Potentials for Practical Brain-Computer Interfaces. Brain Sci 2020; 10:686. PMID: 32998379. PMCID: PMC7601073. DOI: 10.3390/brainsci10100686.
Abstract
Motion-based visual evoked potentials (mVEP) are an emerging trend in the field of steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs). In this paper, we introduce different movement-based stimulus patterns (steady-state motion visual evoked potentials, SSMVEP) that do not employ the typical flickering. The tested movement patterns for the visual stimuli included a pendulum-like movement, a flipping illusion, a checkerboard pulsation, checkerboard inverse arc pulsations, and reverse arc rotations, all with a spelling task consisting of 18 trials. In an online experiment with nine participants, the movement-based BCI systems were evaluated with an online four-target BCI speller, in which each letter may be selected in three steps (three trials). For classification, the minimum energy combination and a filter bank approach were used. The utilized frequencies were 7.06 Hz, 7.50 Hz, 8.00 Hz, and 8.57 Hz, reaching an average accuracy between 97.22% and 100% and an average information transfer rate (ITR) between 15.42 bits/min and 33.92 bits/min. All participants successfully used the SSMVEP-based speller with all types of stimulation pattern. The most successful SSMVEP stimulus was SSMVEP1 (the pendulum-like movement), with average results reaching 100% accuracy and an ITR of 33.92 bits/min.
66. Cooney C, Korik A, Folli R, Coyle D. Evaluation of Hyperparameter Optimization in Machine and Deep Learning Methods for Decoding Imagined Speech EEG. Sensors 2020; 20:4629. PMID: 32824559. PMCID: PMC7472624. DOI: 10.3390/s20164629.
Abstract
Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain-computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across several domains. However, it remains an open question whether DL methods provide significant advances over traditional machine learning (ML) approaches for the classification of imagined speech. Furthermore, hyperparameter (HP) optimization has been neglected in DL-EEG studies, leaving the significance of its effects uncertain. In this study, we aim to improve the classification of imagined speech EEG by employing DL methods while also statistically evaluating the impact of HP optimization on classifier performance. We trained three distinct convolutional neural networks (CNNs) on imagined speech EEG using a nested cross-validation approach to HP optimization. Each of the CNNs evaluated was designed specifically for EEG decoding. An imagined speech EEG dataset consisting of both words and vowels facilitated training on both sets independently. CNN results were compared with three benchmark ML methods: support vector machine, random forest and regularized linear discriminant analysis. Intra- and inter-subject methods of HP optimization were tested, and the effects of HPs were statistically analyzed. Accuracies obtained by the CNNs were significantly greater than those of the benchmark methods when trained on both datasets (words: 24.97%, p < 1 × 10-7, chance: 16.67%; vowels: 30.00%, p < 1 × 10-7, chance: 20%). The effects of varying HP values, and the interactions between HPs and the CNNs, were both statistically significant. These results demonstrate how critical HP optimization is when training CNNs to decode imagined speech.
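The nested cross-validation used above keeps hyperparameter selection inside the training folds, so that the outer-fold accuracy remains an unbiased estimate. This sketch swaps the paper's CNNs for a toy k-NN classifier on synthetic data (the classifier, the hyperparameter grid, and the dataset are all illustrative assumptions), but the fold structure is the same idea:

```python
import numpy as np

def knn_accuracy(Xtr, ytr, Xte, yte, k):
    """Majority-vote k-NN accuracy for binary labels in {0, 1}."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    pred = (ytr[nn].mean(axis=1) > 0.5).astype(int)
    return float((pred == yte).mean())

def nested_cv(X, y, ks=(1, 3, 5), n_outer=5, n_inner=3, seed=0):
    """Outer folds estimate accuracy; inner folds choose k (no test leakage)."""
    idx = np.random.default_rng(seed).permutation(len(y))
    outer_folds = np.array_split(idx, n_outer)
    outer_scores = []
    for i, test_idx in enumerate(outer_folds):
        train = np.concatenate([f for j, f in enumerate(outer_folds) if j != i])
        inner_folds = np.array_split(train, n_inner)

        def inner_score(k):
            accs = []
            for v, val in enumerate(inner_folds):
                tr = np.concatenate([g for m, g in enumerate(inner_folds) if m != v])
                accs.append(knn_accuracy(X[tr], y[tr], X[val], y[val], k))
            return float(np.mean(accs))

        best_k = max(ks, key=inner_score)          # chosen without touching test_idx
        outer_scores.append(knn_accuracy(X[train], y[train],
                                         X[test_idx], y[test_idx], best_k))
    return float(np.mean(outer_scores))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 4)), rng.normal(1.5, 1.0, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
acc = nested_cv(X, y)
print(round(acc, 3))
```

Because the hyperparameter is re-selected per outer fold, the reported accuracy is never computed on data that influenced the choice of k, which is exactly the property the paper relies on when comparing optimized CNNs against the benchmarks.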
|
67
|
Functional Electrical Stimulation Controlled by Motor Imagery Brain-Computer Interface for Rehabilitation. Brain Sci 2020; 10:brainsci10080512. [PMID: 32748888 PMCID: PMC7465702 DOI: 10.3390/brainsci10080512] [Received: 06/23/2020] [Revised: 07/23/2020] [Accepted: 07/31/2020] [Indexed: 11/17/2022]
Abstract
Sensorimotor rhythm (SMR)-based brain–computer interface (BCI) controlled Functional Electrical Stimulation (FES) has gained importance in recent years for the rehabilitation of motor deficits. However, there still remain many research questions to be addressed, such as unstructured Motor Imagery (MI) training procedures; a lack of methods to classify different MI tasks in a single hand, such as grasping and opening; and difficulty in decoding voluntary MI-evoked SMRs compared to FES-driven passive-movement-evoked SMRs. To address these issues, a two-phase study was conducted to develop and validate an SMR-based BCI-FES system with 2-class MI tasks in a single hand (Phase 1), and to investigate the feasibility of the system with stroke and traumatic brain injury (TBI) patients (Phase 2). The results of Phase 1 showed that the accuracy of classifying the 2-class MIs (approximately 71.25%) was significantly higher than the true chance level, while that of distinguishing voluntary and passive SMRs was not. In Phase 2, where the patients performed goal-oriented tasks in a semi-asynchronous mode, the effects of the FES existence type and adaptive learning on task performance were evaluated. The results showed that adaptive learning significantly increased the accuracy, and the accuracy after applying adaptive learning under the No-FES condition (61.9%) was significantly higher than the true chance level. The outcomes of the present research provide insight into SMR-based BCI-controlled FES systems that could greatly improve the quality of life of people with motor disabilities (e.g., stroke and TBI patients). Recommendations for future work, including a larger sample size and kinesthetic MI, are also presented.
|
68
|
Tang J, Xu M, Han J, Liu M, Dai T, Chen S, Ming D. Optimizing SSVEP-Based BCI System towards Practical High-Speed Spelling. Sensors 2020; 20:s20154186. [PMID: 32731432 PMCID: PMC7435370 DOI: 10.3390/s20154186] [Received: 06/25/2020] [Revised: 07/23/2020] [Accepted: 07/25/2020] [Indexed: 02/03/2023]
Abstract
The brain–computer interface (BCI) spellers based on steady-state visual evoked potentials (SSVEPs) have recently been widely investigated for their high information transfer rates (ITRs). This paper aims to improve the practicability of SSVEP-BCIs for high-speed spelling. The system acquired the electroencephalogram (EEG) data from a self-developed dedicated EEG device, and the stimulation was arranged as a keyboard. The task-related component analysis (TRCA) spatial filter was modified (mTRCA) for target classification and showed significantly higher performance compared with the original TRCA in the offline analysis. In the online system, a dynamic stopping (DS) strategy based on Bayesian posterior probability was utilized to realize an alterable stimulation time. In addition, the temporal filtering process and the programs were optimized to facilitate the online DS operation. Notably, the online ITR reached 330.4 ± 45.4 bits/min on average, which is significantly higher than that of the fixed stopping (FS) strategy, and the peak value of 420.2 bits/min is the highest online spelling ITR reported with an SSVEP-BCI to date. The proposed system, with portable EEG acquisition, friendly interaction, and an alterable time of command output, provides more flexibility for SSVEP-based BCIs and is promising for practical high-speed spelling.
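The ITR figures quoted in this and several other abstracts follow the standard Wolpaw definition, which depends only on the number of targets, the selection accuracy, and the time per selection. As a reference point, a minimal implementation (the example parameters are illustrative, not the paper's):

```python
from math import log2

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Wolpaw ITR in bits/min: B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = log2(n)        # perfect accuracy: log2(N) bits per selection
    elif p <= 1.0 / n:
        bits = 0.0            # at or below chance: no information transferred
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s  # (bits/selection) * (selections/min)

# e.g. a hypothetical 40-target speller at 90% accuracy, 1.0 s per selection
rate = wolpaw_itr(40, 0.9, 1.0)
```

This makes clear why dynamic stopping raises ITR: shortening `selection_time_s` increases the rate directly, provided accuracy is maintained.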
|
69
|
Jiang J, Wang C, Wu J, Qin W, Xu M, Yin E. Temporal Combination Pattern Optimization Based on Feature Selection Method for Motor Imagery BCIs. Front Hum Neurosci 2020; 14:231. [PMID: 32714167 PMCID: PMC7344307 DOI: 10.3389/fnhum.2020.00231] [Received: 03/14/2020] [Accepted: 05/25/2020] [Indexed: 11/19/2022]
Abstract
The common spatial pattern (CSP) method is widely used for spatial filtering and brain pattern extraction from electroencephalogram (EEG) signals in motor imagery (MI)-based brain-computer interfaces (BCIs). The participant-specific time window relative to the visual cue has a significant impact on the effectiveness of the CSP. However, the time window is usually selected experientially or manually. To solve this problem, we propose a novel feature selection approach for MI-based BCIs. Specifically, multiple time segments were obtained by decomposing each EEG sample of the MI task. Furthermore, features were extracted by CSP from each time segment and were combined to form a new feature vector. Finally, the optimal temporal combination patterns for the new feature vector were selected based on four feature selection algorithms, i.e., mutual information, least absolute shrinkage and selection operator, principal component analysis, and stepwise linear discriminant analysis (denoted MUIN, LASSO, PCA, and SWLDA, respectively), and a classification algorithm was employed to evaluate the average classification accuracy. On three BCI competition datasets, the results of the four proposed algorithms were compared with the traditional CSP algorithm in terms of classification accuracy. Experimental results show that, compared with the traditional algorithm, the proposed methods significantly improve performance. Specifically, LASSO achieved the highest accuracy (88.58%) among the proposed methods. Importantly, the average classification accuracies using the proposed approaches significantly improved by 10.14% (MUIN), 11.40% (LASSO), 6.08% (PCA), and 10.25% (SWLDA) compared to CSP alone. These results indicate that the proposed approach is expected to be practical in MI-based BCIs.
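The CSP step this paper builds on reduces to a generalized eigenvalue problem on the two class covariance matrices, solvable by whitening the composite covariance. This is a minimal NumPy sketch on synthetic two-class trials (the data and the log-variance feature are illustrative, not the paper's pipeline):

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """CSP via whitening; each trials_* has shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # trace-normalized covariance
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    d, u = np.linalg.eigh(ca + cb)        # composite covariance eigendecomposition
    p = np.diag(d ** -0.5) @ u.T          # whitening transform: p @ (ca+cb) @ p.T = I
    lam, v = np.linalg.eigh(p @ ca @ p.T) # whitened class-A covariance, eigenvalues in [0, 1]
    order = np.argsort(lam)[::-1]         # filters sorted by class-A variance fraction
    return v[:, order].T @ p, lam[order]

rng = np.random.default_rng(0)
# class A: strong channel-0 activity; class B: strong channel-1 activity
a = rng.normal(0, 1, (20, 2, 100)); a[:, 0] *= 5
b = rng.normal(0, 1, (20, 2, 100)); b[:, 1] *= 5
w, lam = csp_filters(a, b)
# log-variance of the first CSP component separates the two classes
fa = [np.log(np.var(w[0] @ t)) for t in a]
fb = [np.log(np.var(w[0] @ t)) for t in b]
```

The first and last rows of `w` maximize variance for one class while minimizing it for the other, which is why their log-variances are the canonical CSP features.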
|
70
|
Ko LW, Chikara RK, Lee YC, Lin WC. Exploration of User's Mental State Changes during Performing Brain-Computer Interface. Sensors 2020; 20:s20113169. [PMID: 32503162 PMCID: PMC7308896 DOI: 10.3390/s20113169] [Received: 04/24/2020] [Revised: 05/24/2020] [Accepted: 05/28/2020] [Indexed: 01/27/2023]
Abstract
Substantial developments have been established in the past few years for enhancing the performance of brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs). Past SSVEP-BCI studies utilized different target frequencies with flashing stimuli in many different applications. However, it is not easy to recognize changes in a user's mental state while they perform an SSVEP-BCI task; what can be observed is only the increased EEG power at the target frequency over the user's visual area. Changes in a BCI user's cognitive state, especially between a mental-focus state and a lost-in-thought state, affect BCI performance during sustained SSVEP usage. Therefore, differentiating BCI users' physiological states by exploring changes in their neural activity while performing SSVEP is a key technology for enhancing BCI performance. In this study, we designed a new BCI experiment which combined a working memory task with the flashing targets of an SSVEP task using 12 Hz or 30 Hz frequencies. By exploring the EEG activity changes corresponding to working memory and SSVEP task performance, we can recognize whether the user's cognitive state is mental focus or lost-in-thought. Experimental results show that delta (1–4 Hz), theta (4–7 Hz), and beta (13–30 Hz) EEG activities increased more in the mental-focus state than in the lost-in-thought state at the frontal lobe. Similarly, the powers of the delta (1–4 Hz), alpha (8–12 Hz), and beta (13–30 Hz) bands increased more in mental focus than in the lost-in-thought state at the occipital lobe. The average classification performance across subjects for the kNN and Bayesian network classifiers was 77% to 80%. These results show how mental state changes affect the performance of BCI users. In this work, we developed a new scenario to recognize the user's cognitive state while performing BCI tasks. These findings can be used as novel neural markers in future BCI developments.
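The band definitions used above (delta 1–4 Hz, theta 4–7 Hz, alpha 8–12 Hz, beta 13–30 Hz) can be turned into band-power features with a plain FFT periodogram. A minimal single-channel sketch; the sampling rate and test signal are illustrative, not from the study:

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute power per EEG band from a single-channel periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

fs = 256
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz oscillation -> alpha band
powers = band_powers(eeg, fs)
```

In practice one would average such periodograms over epochs (Welch's method) before comparing mental-focus and lost-in-thought conditions, but the band-masking logic is the same.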
|
71
|
Browarczyk J, Kurowski A, Kostek B. Analyzing the Effectiveness of the Brain-Computer Interface for Task Discerning Based on Machine Learning. Sensors (Basel) 2020; 20:E2403. [PMID: 32340276 PMCID: PMC7219492 DOI: 10.3390/s20082403] [Received: 03/12/2020] [Revised: 04/15/2020] [Accepted: 04/21/2020] [Indexed: 11/16/2022]
Abstract
The aim of the study is to compare electroencephalographic (EEG) signal feature extraction methods in the context of the effectiveness of classifying brain activities. For classification, electroencephalographic signals were obtained using an EEG device from 17 subjects in three mental states (relaxation, excitation, and solving a logical task). Blind source separation employing independent component analysis (ICA) was performed on the obtained signals. Welch's method, autoregressive modeling, and the discrete wavelet transform were used for feature extraction. Principal component analysis (PCA) was performed in order to reduce the dimensionality of the feature vectors. k-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Neural Networks (NN) were employed for classification. Precision, recall, and F1 scores are reported, along with a discussion based on statistical analysis. The paper also contains the code utilized in preprocessing and in the main part of the experiments.
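The PCA dimensionality-reduction step described here can be written in a few lines via the SVD of the centered feature matrix. A minimal sketch on a synthetic feature matrix (the data, dimensions, and noise level are illustrative):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project rows of `features` onto the top principal components.

    Returns the reduced matrix and the fraction of total variance explained."""
    centered = features - features.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:n_components].T          # scores in component space
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    return reduced, explained

rng = np.random.default_rng(1)
# 100 feature vectors of dimension 16 whose variance lives mostly in 3 directions
latent = rng.normal(0, 1, (100, 3))
mixing = rng.normal(0, 1, (3, 16))
features = latent @ mixing + 0.01 * rng.normal(0, 1, (100, 16))
reduced, explained = pca_reduce(features, 3)
```

A classifier such as kNN or SVM would then be trained on `reduced` rather than on the raw feature vectors, which is the role PCA plays in the pipeline above.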
|
72
|
Benda M, Volosyak I. Comparison of Different Visual Feedback Methods for SSVEP-Based BCIs. Brain Sci 2020; 10:E240. [PMID: 32325633 PMCID: PMC7226383 DOI: 10.3390/brainsci10040240] [Received: 04/06/2020] [Revised: 04/13/2020] [Accepted: 04/16/2020] [Indexed: 11/29/2022]
Abstract
In this paper we compared different visual feedback methods that inform users about classification progress in a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) speller application. According to results from our previous studies, changes in stimulus size and contrast as online feedback of classification progress have a great impact on BCI performance in SSVEP-based spellers. In this experiment we further investigated these effects and tested a 4-target SSVEP speller interface with a much higher number of subjects. Five different scenarios were used with variations in stimulus size and contrast: "no feedback", "size increasing", "size decreasing", "contrast increasing", and "contrast decreasing". With each of the five scenarios, 24 participants had to spell six-letter words (at least 18 selections with this three-step speller). The fastest feedback modality differed between users; no visual feedback method was generally better than the others. With the interface used, six users achieved significantly better Information Transfer Rates (ITRs) compared to the "no feedback" condition. Their average improvement when using their individually fastest feedback method was 46.52%. This finding is very important for BCI experiments: by determining the optimal feedback for the user, the speed of the BCI can be improved without impairing accuracy.
|
73
|
Sun Y, Ayaz H, Akansu AN. Multimodal Affective State Assessment Using fNIRS + EEG and Spontaneous Facial Expression. Brain Sci 2020; 10:E85. [PMID: 32041316 PMCID: PMC7071625 DOI: 10.3390/brainsci10020085] [Received: 10/05/2019] [Revised: 01/31/2020] [Accepted: 02/01/2020] [Indexed: 01/04/2023]
Abstract
Human facial expressions are regarded as a vital indicator of one's emotion and intention, and even reveal the state of health and wellbeing. Emotional states have been associated with information processing within and between subcortical and cortical areas of the brain, including the amygdala and prefrontal cortex. In this study, we evaluated the relationship between spontaneous human facial affective expressions and multi-modal brain activity measured via non-invasive and wearable sensors: functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) signals. The affective states of twelve male participants detected via fNIRS, EEG, and spontaneous facial expressions were investigated in response to both image-content and video-content stimuli. We propose a method to jointly evaluate fNIRS and EEG signals for affective state detection (emotional valence as positive or negative). Experimental results reveal a strong correlation between spontaneous facial affective expressions and the perceived emotional valence. Moreover, the affective states were estimated from the fNIRS, EEG, and fNIRS + EEG brain activity measurements. We show that the proposed EEG + fNIRS hybrid method outperforms the fNIRS-only and EEG-only approaches. Our findings indicate that dynamic (video-content-based) stimuli trigger a larger affective response than static (image-content-based) stimuli. These findings also suggest the joint utilization of facial expressions and wearable neuroimaging (fNIRS and EEG) for improved emotional analysis and affective brain-computer interface applications.
|
74
|
Kumar A, Fang Q, Fu J, Pirogova E, Gu X. Error-Related Neural Responses Recorded by Electroencephalography During Post-stroke Rehabilitation Movements. Front Neurorobot 2019; 13:107. [PMID: 31920616 PMCID: PMC6934053 DOI: 10.3389/fnbot.2019.00107] [Received: 07/23/2019] [Accepted: 12/06/2019] [Indexed: 01/07/2023]
Abstract
Error-related potential (ErrP)-based assist-as-needed robot therapy can be an effective rehabilitation method. To date, several studies have shown the presence of ErrPs under various task situations. However, in the context of assist-as-needed methods, the existence of the ErrP is unexplored. Therefore, the principal objective of this study is to determine whether an ErrP can be evoked when a subject is unable to complete a physical exercise in a given time. Fifteen stroke patients participated in an experiment that involved performing a physical rehabilitation exercise. Results showed that the electroencephalographic (EEG) responses of the trials in which patients failed to complete the exercise differ significantly from those of the trials in which patients completed it successfully, and that the resulting difference of event-related potentials resembles previously reported ErrP signals while also having some unique features. Along with the highly statistically significant difference, the trials differ in time-frequency patterns and scalp distribution maps. In summary, the results of the study provide a novel basis for detecting failure versus success events during the execution of rehabilitation exercises, which can be used to improve state-of-the-art robot-assisted rehabilitation methods.
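The "difference of event-related potentials" that this abstract compares between failed and successful trials is, in its simplest form, a grand-average difference wave over epoched single-channel EEG. A minimal sketch on synthetic epochs, where the error trials carry an injected negative component (the latency, amplitude, and noise level are illustrative, not the study's values):

```python
import numpy as np

def difference_wave(error_epochs, correct_epochs):
    """Grand-average ERP difference (error minus correct), one channel.

    Each input has shape (n_trials, n_samples)."""
    return error_epochs.mean(axis=0) - correct_epochs.mean(axis=0)

fs = 250
t = np.arange(int(0.6 * fs)) / fs            # 600 ms epochs at 250 Hz
rng = np.random.default_rng(2)
noise = lambda n: rng.normal(0, 2.0, (n, len(t)))
# error trials carry a negative component peaking near 250 ms post-event
errp = -5.0 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
error_epochs = noise(40) + errp
correct_epochs = noise(40)
dw = difference_wave(error_epochs, correct_epochs)
```

Averaging over trials suppresses the zero-mean background EEG, so the injected component dominates the difference wave even though it is invisible in single trials.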
|
75
|
A Pilot Study on Falling-Risk Detection Method Based on Postural Perturbation Evoked Potential Features. Sensors 2019; 19:s19245554. [PMID: 31888176 PMCID: PMC6960671 DOI: 10.3390/s19245554] [Received: 11/25/2019] [Revised: 12/13/2019] [Accepted: 12/13/2019] [Indexed: 12/02/2022]
Abstract
In a human-robot hybrid system, recognition errors by the pattern recognition system may cause the robot to perform erroneous motor execution, which can lead to a risk of falling. The human, however, can clearly detect the existence of such errors, which is manifested in central nervous activity characteristics. To date, the majority of studies on falling-risk detection have focused primarily on computer vision and physical signals; there are no reports of falling-risk detection methods based on neural activity. In this study, we propose a novel method to monitor multiple erroneous motion events using electroencephalogram (EEG) features. Fifteen subjects participated in this study; they kept standing with an upper-limb-supported posture and received an unpredictable postural perturbation. EEG signal analysis revealed a high negative peak with a maximum averaged amplitude of −14.75 ± 5.99 μV, occurring 62 ms after the postural perturbation. The xDAWN algorithm was used to reduce the high dimensionality of the EEG signal features, and Bayesian linear discriminant analysis (BLDA) was used to train a classifier. The detection rate of falling-risk onset is 98.67%, and the detection latency is 334 ms when a detection rate above 90% is set as the criterion for dangerous-event onset. Further analysis showed that the falling-risk detection method based on postural-perturbation evoked potential features has good generalization ability: the model trained on typical event data achieved a 94.2% detection rate for unlearned atypical perturbation events. This study demonstrated the feasibility of using neural responses to detect dangerous fall events.
|