1
Van Den Kerchove A, Si-Mohammed H, Van Hulle MM, Cabestaing F. Correcting for ERP latency jitter improves gaze-independent BCI decoding. J Neural Eng 2024; 21:046013. [PMID: 38959876] [DOI: 10.1088/1741-2552/ad5ec0]
Abstract
Objective. Patients suffering from severe paralysis or locked-in syndrome can regain communication using a brain-computer interface (BCI). Visual event-related potential (ERP) based BCI paradigms exploit visuospatial attention (VSA) to targets laid out on a screen. However, performance drops if the user does not direct their eye gaze at the intended target, harming the utility of this class of BCIs for patients suffering from eye motor deficits. We aim to create an ERP decoder that is less dependent on eye gaze. Approach. ERP component latency jitter plays a role in covert VSA decoding. We introduce a novel decoder that compensates for these latency effects, termed Woody Classifier-based Latency Estimation (WCBLE). We carried out a BCI experiment recording ERP data under overt and covert VSA, and introduce a novel special case of covert VSA, termed split VSA, simulating the experience of patients with severely impaired eye motor control. We evaluate WCBLE on this dataset and on the BNCI2014-009 dataset, within and across VSA conditions, to study the dependency on eye gaze and its variation during the experiment. Main results. WCBLE outperforms state-of-the-art methods in the VSA conditions of interest for gaze-independent decoding, without reducing overt VSA performance. Across-condition evaluation shows that WCBLE is more robust to varying VSA conditions throughout a BCI operation session. Significance. Together, these results point towards a pathway to achieving gaze independence through suited ERP decoding. Our proposed gaze-independent solution enhances decoding performance in those cases where performing overt VSA is not possible.
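The Woody approach underlying WCBLE estimates a per-trial latency by iteratively cross-correlating each single trial with an evolving average template. As an illustration only (a sketch of classic Woody alignment, not of the authors' classifier-based WCBLE variant; the function name and parameters are ours):

```python
import numpy as np

def woody_align(trials, max_shift, n_iter=5):
    """Woody-style iterative latency estimation (illustrative sketch).

    trials: (n_trials, n_samples) array of single-trial ERPs.
    Returns per-trial integer shifts (in samples) that best align
    each trial to the evolving average template.
    """
    n_trials, _ = trials.shape
    shifts = np.zeros(n_trials, dtype=int)
    for _ in range(n_iter):
        # Template: average of trials after undoing the current shifts.
        aligned = np.stack([np.roll(t, -s) for t, s in zip(trials, shifts)])
        template = aligned.mean(axis=0)
        for i, trial in enumerate(trials):
            # Pick the shift maximizing correlation with the template.
            scores = [np.dot(np.roll(trial, -s), template)
                      for s in range(-max_shift, max_shift + 1)]
            shifts[i] = np.argmax(scores) - max_shift
    return shifts
```

On synthetic jittered ERPs this recovers the relative latencies up to a common offset, which is all the subsequent averaging or decoding step needs.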
Affiliation(s)
- A Van Den Kerchove
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
- KU Leuven, Department of Neurosciences, Laboratory for Neuro- & Psychophysiology, Campus Gasthuisberg O&N2, Herestraat 49 bus 1021, BE-3000 Leuven, Belgium
- H Si-Mohammed
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
- M M Van Hulle
- KU Leuven, Department of Neurosciences, Laboratory for Neuro- & Psychophysiology, Campus Gasthuisberg O&N2, Herestraat 49 bus 1021, BE-3000 Leuven, Belgium
- F Cabestaing
- Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
2
Pitt KM, Cole ZJ, Zosky J. Promoting Simple and Engaging Brain-Computer Interface Designs for Children by Evaluating Contrasting Motion Techniques. J Speech Lang Hear Res 2023; 66:3974-3987. [PMID: 37696046] [DOI: 10.1044/2023_jslhr-23-00292]
Abstract
PURPOSE There is an increasing focus on using motion in augmentative and alternative communication (AAC) systems. In considering brain-computer interface access to AAC (BCI-AAC), motion may provide a simpler or more intuitive avenue for BCI-AAC control. Different motion techniques may be utilized to support competency with AAC devices, including simple (e.g., zoom) and complex (behaviorally relevant animation) methods. However, how different pictorial-symbol animation techniques impact BCI-AAC is unclear. METHOD Sixteen healthy children completed two experimental conditions, in which pictorial symbols were highlighted via functional (complex) and zoom (simple) animation, to evaluate the effects of motion technique on P300-based BCI-AAC signals and offline (predicted) BCI-AAC performance. RESULTS Functional (complex) animation significantly increased attention-related P200/P300 event-related potential (ERP) amplitudes in the parieto-occipital area. Zoom (simple) animation significantly decreased N400 latency. N400 ERP amplitude was significantly greater, and occurred significantly earlier, on the right versus the left side for the functional animation condition within the parieto-occipital bin. N200 ERP latency was significantly reduced over the left hemisphere for the zoom condition in the central bin. As hypothesized, elicitation of all targeted ERP components supported offline (predicted) BCI-AAC performance being similar between conditions. CONCLUSION Study findings provide continued support for the use of animation in BCI-AAC systems for children and highlight differences in neural and attentional processing between complex and simple animation techniques. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24085623
Affiliation(s)
- Kevin M Pitt
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln
- Zachary J Cole
- Department of Psychology, University of Nebraska-Lincoln
- Joshua Zosky
- Department of Psychology, University of Nebraska-Lincoln
3
Chen G, Zhang X, Zhang J, Li F, Duan S. A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN. Front Neurorobot 2022; 16:995552. [PMID: 36247357] [PMCID: PMC9561921] [DOI: 10.3389/fnbot.2022.995552]
Abstract
Objective. A brain-computer interface (BCI) can translate intentions directly into instructions, greatly improving the interaction experience for disabled people and for certain interactive applications. To improve BCI efficiency, this study explores the feasibility of an audio-assisted visual BCI speller and a deep learning-based single-trial event-related potential (ERP) decoding strategy. Approach. A two-stage BCI speller combining the motion-onset visual evoked potential (mVEP) and semantically congruent audio-evoked ERPs was designed to output target characters. In the first stage, different groups of characters were presented simultaneously at different locations of the visual field, and the stimuli were coded to the mVEP based on a new space-division multiple-access scheme. The target character was then output based on the audio-assisted mVEP in the second stage. Meanwhile, a spatial-temporal attention-based convolutional neural network (STA-CNN) was proposed to recognize single-trial ERP components. The CNN learns two-dimensional features including the spatial information of different activated channels and the time dependence among ERP components. In addition, the STA mechanism enhances discriminative event-related features by adaptively learning probability weights. Main results. The performance of the proposed two-stage audio-assisted visual BCI paradigm and the STA-CNN model was evaluated using electroencephalogram (EEG) data recorded from 10 subjects. The average classification accuracy of the proposed STA-CNN reached 59.6% and 77.7% for the first and second stages, respectively, which was significantly higher than that of the comparison methods (p < 0.05). Significance. The proposed two-stage audio-assisted visual paradigm shows great potential for use in a BCI speller. Moreover, analysis of the attention weights over time sequences and spatial topographies showed that STA-CNN can effectively extract interpretable spatiotemporal EEG features.
4
Santamaría-Vázquez E, Martínez-Cagigal V, Pérez-Velasco S, Marcos-Martínez D, Hornero R. Robust asynchronous control of ERP-based brain-computer interfaces using deep learning. Comput Methods Programs Biomed 2022; 215:106623. [PMID: 35030477] [DOI: 10.1016/j.cmpb.2022.106623]
Abstract
BACKGROUND AND OBJECTIVE Brain-computer interfaces (BCIs) based on event-related potentials (ERPs) are a promising technology for alternative and augmentative communication in an assistive context. However, most approaches to date are synchronous, requiring the intervention of a supervisor when the user wishes to turn their attention away from the BCI system. To bring these BCIs into real-life applications, robust asynchronous control of the system is required, based on monitoring of user attention. Despite the great importance of this limitation, which prevents the deployment of these systems outside the laboratory, it is often overlooked in research articles. This study proposes a novel method to solve this problem, taking advantage of deep learning for the first time in this context to overcome the limitations of previous strategies based on hand-crafted features. METHODS The proposed method, based on EEG-Inception, a novel deep convolutional neural network, divides the problem into two stages to achieve asynchronous control: (i) the model detects the user's control state, and (ii) it decodes the command only if the user is attending to the stimuli. Additionally, we used transfer learning to reduce the calibration time, even exploring a calibration-less approach. RESULTS Our method was evaluated with 22 healthy subjects, analyzing the impact of calibration time and the number of stimulation sequences on the system's performance. For the control-state detection stage, we report average accuracies above 91% using only 1 sequence of stimulation and 30 calibration trials, reaching a maximum of 96.95% with 15 sequences. Moreover, our calibration-less approach also achieved suitable results, with a maximum accuracy of 89.36%, showing the benefits of transfer learning. As for the overall asynchronous system, which includes both stages, the maximum information transfer rate was 35.54 bpm, a suitable value for high-speed communication.
CONCLUSIONS The proposed strategy achieved higher performance with fewer calibration trials and stimulation sequences than former approaches, representing a promising step forward that paves the way for more practical applications of ERP-based spellers.
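The two-stage logic above (detect the control state, then decode the command only when the user is attending) can be sketched independently of the particular network. This is a hypothetical interface of our own: `control_model` and `command_model` stand in for trained classifiers such as the paper's EEG-Inception, and the names and threshold are assumptions.

```python
def asynchronous_decode(epoch, control_model, command_model, threshold=0.5):
    """Two-stage asynchronous ERP decoding (schematic sketch).

    Stage 1: estimate the probability that the user is attending to
    the stimuli (control state). Stage 2: decode the selected command
    only when that probability clears the threshold; otherwise emit
    no output at all.
    """
    p_control = control_model(epoch)   # P(user is in control state)
    if p_control < threshold:
        return None                    # non-control state: suppress output
    return command_model(epoch)        # decode the attended command
```

Any pair of callables returning a probability and a label fits this interface, which is what makes the gating reusable across decoders.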
Affiliation(s)
- Eduardo Santamaría-Vázquez
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Víctor Martínez-Cagigal
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Sergio Pérez-Velasco
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Diego Marcos-Martínez
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Roberto Hornero
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
5
Xiao X, Xu M, Han J, Yin E, Liu S, Zhang X, Jung TP, Ming D. Enhancement for P300-speller classification using multi-window discriminative canonical pattern matching. J Neural Eng 2021; 18. [PMID: 34096888] [DOI: 10.1088/1741-2552/ac028b]
Abstract
Objective. P300s are among the most studied event-related potentials (ERPs) and have been widely used for brain-computer interfaces (BCIs). Fast and accurate recognition of P300s is therefore an important issue in BCI research. Recently, many novel classification algorithms for the P300 speller have emerged. Among them, discriminative canonical pattern matching (DCPM) has proven effective, as its discriminative spatial pattern (DSP) filter can significantly enhance the spatial features of P300s. However, the spatial pattern of ERPs varies with time, which was not taken into consideration in the traditional DCPM algorithm. Approach. In this study, we developed an advanced version of DCPM, i.e. multi-window DCPM, which contains a series of time-dependent DSP filters to fine-tune the extraction of spatial ERP features. To verify its effectiveness, 25 subjects were recruited and asked to complete a typical P300-speller experiment. Main results. Multi-window DCPM achieved a character recognition accuracy of 91.84% with only five training characters, significantly better than the traditional DCPM algorithm. It was also compared with eight other popular methods, including SWLDA, SKLDA, STDA, BLDA, xDAWN, HDCA, sHDCA and EEGNet. The results showed that multi-window DCPM performed the best, especially with a small calibration dataset. The proposed algorithm was applied to the BCI Controlled Robot Contest (P300 paradigm) at the 2019 World Robot Conference and won first place. Significance. These results demonstrate that multi-window DCPM is a promising method for improving the performance and practicality of the P300 speller.
Affiliation(s)
- Xiaolin Xiao
- College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Minpeng Xu
- College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin, People's Republic of China
- Jin Han
- College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China
- Erwei Yin
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin, People's Republic of China
- Shuang Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Xin Zhang
- College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Tzyy-Ping Jung
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; The Swartz Center for Computational Neuroscience, University of California, San Diego, CA, United States of America
- Dong Ming
- College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
6
Liu D, Liu C, Chen J, Zhang D, Hong B. Doubling the Speed of N200 Speller via Dual-Directional Motion Encoding. IEEE Trans Biomed Eng 2020; 68:204-213. [PMID: 32746042] [DOI: 10.1109/tbme.2020.3005518]
Abstract
OBJECTIVE Motion-onset visual evoked potential (mVEP)-based spellers, also known as N200 spellers, have been successfully implemented, avoiding the flashing stimuli that are common in visual brain-computer interfaces (BCIs). However, their information transfer rates (ITRs), typically below 50 bits/min, are lower than those of other visual BCI spellers. In this study, we sought to improve the speed of the N200 speller to a level above that of the well-known P300 speller. APPROACH Based on our finding of a spatio-temporal asymmetry in the N200 response elicited by leftward and rightward visual motion, a novel dual-directional N200 speller was implemented. By presenting visual stimuli moving in two different directions simultaneously, the new paradigm halved the stimulus presentation time while ensuring separable N200 features between the two visual motion directions. Furthermore, a probability-based dynamic stopping algorithm was proposed to further shorten the decision time for each output. Both offline and online tests were conducted to evaluate performance in ten participants. MAIN RESULTS Offline results revealed contralaterally dominant temporal and spatial patterns in N200 responses when subjects attended to stimuli moving leftward or rightward. In online experiments, the dual-directional paradigm achieved an average ITR of 79.8 bits/min, with a highest ITR of 124.8 bits/min. Compared with the traditional uni-directional N200 speller, the median gain in ITR was 202%. SIGNIFICANCE The proposed dual-directional paradigm doubled the speed of the N200 speller. Together with its non-flashing characteristics, the dual-directional N200 speller is a promising candidate for fast and reliable BCI applications.
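The bits/min figures quoted above follow the standard Wolpaw definition of ITR, which combines the number of classes N, the accuracy P and the selection rate. A small sketch of that formula (our own helper, not code from the paper):

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR in bits/min.

    bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    clamped at zero for chance-level or worse accuracy.
    """
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy: full log2(N) bits
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return max(bits, 0.0) * selections_per_min
```

For example, a 2-class decision at perfect accuracy carries exactly 1 bit per selection, while 50% accuracy on 2 classes carries none.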
7
Xiao X, Xu M, Jin J, Wang Y, Jung TP, Ming D. Discriminative Canonical Pattern Matching for Single-Trial Classification of ERP Components. IEEE Trans Biomed Eng 2020; 67:2266-2275. [DOI: 10.1109/tbme.2019.2958641]
8
Bianchi L, Liti C, Piccialli V. A New Early Stopping Method for P300 Spellers. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1635-1643. [PMID: 31226078] [DOI: 10.1109/tnsre.2019.2924080]
Abstract
In event-related potential based brain-computer interfaces, the responses evoked by a well-defined stimulus sequence are usually averaged to overcome the limitations caused by the intrinsically poor EEG signal-to-noise ratio. This, however, increases the time necessary to detect the brain signals and can dramatically reduce the communication rate. A common approach is therefore first to estimate an optimal fixed number of responses to be averaged on a calibration dataset and then to use this number on the online/testing dataset. In contrast to this strategy, several early stopping methods have been successfully proposed, aiming to dynamically stop the stimulation sequence when a certain condition is met. We propose an efficient and easy-to-implement early stopping method that outperforms those proposed in the literature, showing its effectiveness on several publicly available datasets recorded from either healthy subjects or amyotrophic lateral sclerosis patients.
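A generic flavor of such early stopping accumulates the classifier evidence for each candidate symbol over stimulation repetitions and stops as soon as one candidate is sufficiently dominant. The sketch below uses a softmax threshold as an illustrative criterion (our choice, not the specific test proposed in this paper):

```python
import numpy as np

def early_stop_decision(score_stream, threshold=0.95, max_reps=10):
    """Stop stimulation once one candidate's softmax 'posterior' over
    the accumulated scores exceeds `threshold`, or max_reps is reached.

    score_stream: iterable yielding, per stimulation repetition, one
    classifier score per candidate symbol.
    Returns (index of chosen symbol, repetitions used).
    """
    total = None
    reps = 0
    for scores in score_stream:
        reps += 1
        scores = np.asarray(scores, dtype=float)
        total = scores if total is None else total + scores
        z = total - total.max()               # shift for numerical stability
        post = np.exp(z) / np.exp(z).sum()    # softmax over candidates
        if post.max() >= threshold or reps >= max_reps:
            break
    return int(post.argmax()), reps
```

With a consistently winning candidate the loop terminates after a handful of repetitions instead of always exhausting the full sequence, which is the communication-rate gain early stopping targets.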
9
Won DO, Hwang HJ, Kim DM, Müller KR, Lee SW. Motion-Based Rapid Serial Visual Presentation for Gaze-Independent Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2017; 26:334-343. [PMID: 28809703] [DOI: 10.1109/tnsre.2017.2736600]
Abstract
Most event-related potential (ERP)-based brain-computer interface (BCI) spellers primarily use matrix layouts and generally require moderate eye movement for successful operation. The fundamental objective of this paper is to enhance the perceptibility of target characters by introducing motion stimuli to classical rapid serial visual presentation (RSVP) spellers that do not require any eye movement, thereby making them applicable to paralyzed patients with oculomotor dysfunctions. To test the feasibility of the proposed motion-based RSVP paradigm, we implemented three RSVP spellers: 1) fixed-direction motion (FM-RSVP); 2) random-direction motion (RM-RSVP); and 3) (the conventional) non-motion stimulation (NM-RSVP), and evaluated the effect of the three stimulation methods on spelling performance. The two motion-based stimulation methods, FM- and RM-RSVP, showed shorter P300 latencies (360.4-379.6 ms vs. 480.4 ms) and higher P300 amplitudes than the NM-RSVP. This led to higher and more stable performance for the FM- and RM-RSVP spellers than for the NM-RSVP speller (79.06±6.45% for NM-RSVP, 90.60±2.98% for RM-RSVP, and 92.74±2.55% for FM-RSVP). In particular, the proposed motion-based RSVP paradigm was significantly beneficial for about half of the subjects, who might not accurately perceive rapidly presented static stimuli. These results indicate that the proposed motion-based RSVP paradigm is more beneficial for target recognition when developing BCI applications for severely paralyzed patients with complex ocular dysfunctions.
10
Maximizing Information Transfer in SSVEP-Based Brain–Computer Interfaces. IEEE Trans Biomed Eng 2017; 64:381-394. [DOI: 10.1109/tbme.2016.2559527]
11
Xu M, Wang Y, Nakanishi M, Wang YT, Qi H, Jung TP, Ming D. Fast detection of covert visuospatial attention using hybrid N2pc and SSVEP features. J Neural Eng 2016; 13:066003. [DOI: 10.1088/1741-2560/13/6/066003]
12
Chen L, Jin J, Daly I, Zhang Y, Wang X, Cichocki A. Exploring Combinations of Different Color and Facial Expression Stimuli for Gaze-Independent BCIs. Front Comput Neurosci 2016; 10:5. [PMID: 26858634] [PMCID: PMC4731496] [DOI: 10.3389/fncom.2016.00005]
Abstract
Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI based on covert attention and feature attention has been proposed, termed the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm, reducing BCI performance. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users locate the target and evoke larger event-related potentials (ERPs). To evaluate the performance of this new paradigm, two other paradigms were also presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question determining the value of the colored dummy face stimuli in BCI systems was whether they could achieve higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes than the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this yielded a significant advantage in evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates.
Affiliation(s)
- Long Chen
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Jing Jin
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Ian Daly
- Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- Yu Zhang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Xingyu Wang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Andrzej Cichocki
- Riken Brain Science Institute, Wako-shi, Japan; Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland; Skolkovo Institute of Science and Technology, Moscow, Russia
13
Höhne J, Bartz D, Hebart MN, Müller KR, Blankertz B. Analyzing neuroimaging data with subclasses: A shrinkage approach. Neuroimage 2016; 124:740-751. [DOI: 10.1016/j.neuroimage.2015.09.031]
14
Zhang R, Xu P, Chen R, Ma T, Lv X, Li F, Li P, Liu T, Yao D. An Adaptive Motion-Onset VEP-Based Brain-Computer Interface. IEEE Trans Auton Ment Dev 2015. [DOI: 10.1109/tamd.2015.2426176]
15
A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids. Sci Rep 2015; 5:15890. [PMID: 26510583] [PMCID: PMC4625131] [DOI: 10.1038/srep15890]
Abstract
A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that can potentially be used for such end-users, because the visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be clearly modulated by attention to a target stimulus in an eyes-closed and gaze-independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli and did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.
16
Development of a hybrid mental spelling system combining SSVEP-based brain–computer interface and webcam-based eye tracking. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2015.05.012]
17
Li W, Li M, Zhao J. Control of humanoid robot via motion-onset visual evoked potentials. Front Syst Neurosci 2015; 8:247. [PMID: 25620918] [PMCID: PMC4287730] [DOI: 10.3389/fnsys.2014.00247]
Abstract
This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar across robot images that intuitively represent the robot behaviors to be controlled. We present the individual impact of each subject on N200 potentials and discuss how to deal with individuality to obtain high accuracy. The study documents an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle, and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task.
Affiliation(s)
- Wei Li
- Department of Computer and Electrical Engineering and Computer Science, California State University, Bakersfield, CA, USA; School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
- Mengfan Li
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
- Jing Zhao
- School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
18
An X, Höhne J, Ming D, Blankertz B. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces. PLoS One 2014; 9:e111070. [PMID: 25350547] [PMCID: PMC4211702] [DOI: 10.1371/journal.pone.0111070]
Abstract
For Brain-Computer Interface (BCI) systems designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multisensory integration can be exploited in gaze-independent event-related potential (ERP) spellers to enhance BCI performance, we designed a visual-auditory speller that combines visual and auditory stimuli within a gaze-independent paradigm. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms without sacrificing spelling performance. Furthermore, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; these differences are important and should inspire future studies to investigate their causes. For the more innovative and demanding Parallel-Speller, where the auditory and visual streams are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%) at a competitive average speed of 1.65 symbols per minute. Because the Parallel-Speller requires only one selection period per symbol, it is a good candidate for a fast communication channel, and it brings new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
Affiliation(s)
- Xingwei An
- Department of Biomedical Engineering, Tianjin University, Tianjin, China
- Neurotechnology Group, Berlin Institute of Technology, Berlin, Germany
- Johannes Höhne
- Neurotechnology Group, Berlin Institute of Technology, Berlin, Germany
- Machine Learning Group, Berlin Institute of Technology, Berlin, Germany
- Dong Ming
- Department of Biomedical Engineering, Tianjin University, Tianjin, China

19
Kindermans PJ, Schreuder M, Schrauwen B, Müller KR, Tangermann M. True zero-training brain-computer interfacing - an online study. PLoS One 2014; 9:e102504. [PMID: 25068464 PMCID: PMC4113217 DOI: 10.1371/journal.pone.0102504] [Received: 01/21/2014] [Accepted: 06/19/2014]
Abstract
Despite several approaches to subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, labeled data are collected during a calibration recording in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time, and it is unproductive with respect to the final BCI application, e.g. text entry. The calibration period should therefore be reduced to a minimum, which is especially important for patients with a limited ability to concentrate. The main contribution of this manuscript is an online study of unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. By constantly re-analyzing the previously spelled symbols, these initially misspelled symbols can be rectified posthoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach, and of its posthoc variant, to the standard supervised calibration-based approach for n = 10 healthy users. To assess the learning behavior of our approach, it is trained from scratch, without supervision, three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30), the unsupervised approach performs comparably to a classic supervised model.
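The unsupervised scheme summarized above can be illustrated with a minimal sketch. This is our own illustration, not the authors' implementation: a linear decoder is initialized randomly, each trial's most probable target is taken as a pseudo-label, the decoder is refit from those pseudo-labels, and previously spelled symbols are re-decoded posthoc with the final weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_trial(w, trial_feats):
    """Pick the candidate symbol whose ERP features score highest under weights w.
    trial_feats: (n_candidates, n_features) array, one row per candidate symbol."""
    return int(np.argmax(trial_feats @ w))

def unsupervised_spell(trials, n_features, n_iters=3):
    """EM-style unsupervised decoding sketch.
    trials: list of (n_candidates, n_features) arrays, one per spelled symbol."""
    w = rng.standard_normal(n_features)              # random initialization
    for _ in range(n_iters):                         # refine using pseudo-labels
        labels = [decode_trial(w, t) for t in trials]
        targets = np.array([t[l] for t, l in zip(trials, labels)])
        nontargets = np.concatenate([np.delete(t, l, axis=0)
                                     for t, l in zip(trials, labels)])
        # crude LDA-like refit: difference of (pseudo-)class means
        w = targets.mean(axis=0) - nontargets.mean(axis=0)
    # posthoc pass: re-decode every earlier symbol with the final weights
    return [decode_trial(w, t) for t in trials]
```

In the actual study the classifier is updated continuously during online use; this sketch only conveys the pseudo-label/refit/posthoc-relabel loop.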
Affiliation(s)
- Pieter-Jan Kindermans
- Electronics and Information Systems (ELIS) Dept., Ghent University, Ghent, Belgium
- Martijn Schreuder
- Machine Learning Laboratory, Technical University of Berlin, Berlin, Germany
- Benjamin Schrauwen
- Electronics and Information Systems (ELIS) Dept., Ghent University, Ghent, Belgium
- Klaus-Robert Müller
- Machine Learning Laboratory, Technical University of Berlin, Berlin, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Michael Tangermann
- BrainLinks-BrainTools Excellence Cluster, Computer Science Dept., University of Freiburg, Freiburg, Germany

20
Mora-Cortes A, Manyakov NV, Chumerin N, Van Hulle MM. Language model applications to spelling with Brain-Computer Interfaces. Sensors (Basel) 2014; 14:5967-93. [PMID: 24675760 PMCID: PMC4029701 DOI: 10.3390/s140405967] [Received: 12/23/2013] [Revised: 02/17/2014] [Accepted: 02/24/2014]
Abstract
Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction and completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems, discussing their potential and limitations and discerning future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve classification performance by using language models. To conclude, the review offers an overview of the advantages of and challenges in implementing language models in BCI-based communication systems, also in conjunction with other AAL technologies.
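The word-prediction and completion functionality reviewed above can be conveyed with a toy sketch (the vocabulary and frequency counts here are invented for illustration): a unigram language model ranks completions of the prefix spelled so far, so a speller can offer whole-word selections instead of letter-by-letter entry.

```python
# Toy unigram language model for word completion in a BCI speller.
# Vocabulary and counts are invented for illustration only.
FREQ = {"hello": 120, "help": 300, "helmet": 40, "spell": 80, "speller": 25}

def complete(prefix, k=3):
    """Return the k most frequent vocabulary words starting with `prefix`."""
    matches = [w for w in FREQ if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -FREQ[w])[:k]

print(complete("hel"))  # → ['help', 'hello', 'helmet']
```

Real BCI spellers reviewed in the article use far richer models (n-grams, error correction), but the interface idea is the same: the model narrows the selection space after each decoded symbol.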
Affiliation(s)
- Anderson Mora-Cortes
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Campus Gasthuisberg, O&N2, Herestraat 49, Leuven B-3000, Belgium
- Nikolay V Manyakov
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Campus Gasthuisberg, O&N2, Herestraat 49, Leuven B-3000, Belgium
- Nikolay Chumerin
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Campus Gasthuisberg, O&N2, Herestraat 49, Leuven B-3000, Belgium
- Marc M Van Hulle
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Campus Gasthuisberg, O&N2, Herestraat 49, Leuven B-3000, Belgium

21
Hill K, Kovacs T, Shin S. Reliability of brain computer interface language sample transcription procedures. J Rehabil Res Dev 2014; 51:579-90. [DOI: 10.1682/jrrd.2013.05.0102]
22
Chennu S, Alsufyani A, Filetti M, Owen AM, Bowman H. The cost of space independence in P300-BCI spellers. J Neuroeng Rehabil 2013; 10:82. [PMID: 23895406 PMCID: PMC3733823 DOI: 10.1186/1743-0003-10-82] [Received: 08/30/2012] [Accepted: 06/14/2013]
Abstract
Background: Though non-invasive EEG-based Brain Computer Interfaces (BCI) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. Methods: In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP) to that of the well-known space-dependent Matrix P300 speller. Results: EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller's performance relied significantly on early, gaze-dependent Visual Evoked Potentials (VEPs), the RSVP speller depended only on the space-independent P3b. However, true spatial independence came at a cost: the RSVP speller was less efficient in terms of spelling speed. Conclusions: The advantage of space independence in the RSVP speller was accompanied by a marked reduction in spelling efficiency. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for applications with patients unable to direct overt visual gaze or covert attentional focus.
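Speller efficiency comparisons of this kind are commonly quantified with the Wolpaw information transfer rate (ITR). A small helper (our addition, using the standard Wolpaw formula; the example numbers are illustrative, not taken from the study) makes the speed/accuracy trade-off concrete:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR in bits/min:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled by selection rate."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. a 36-symbol speller at 90% accuracy and 2 selections per minute
print(round(wolpaw_itr(36, 0.90, 2.0), 2))  # → 8.38 bits/min
```

A slower but equally accurate speller scores proportionally lower, which is exactly the cost the RSVP design pays for spatial independence.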
Affiliation(s)
- Srivas Chennu
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK

23
Schreuder M, Höhne J, Blankertz B, Haufe S, Dickhaus T, Tangermann M. Optimizing event-related potential based brain-computer interfaces: a systematic evaluation of dynamic stopping methods. J Neural Eng 2013; 10:036025. [PMID: 23685458 DOI: 10.1088/1741-2560/10/3/036025]
Abstract
OBJECTIVE: In brain-computer interface (BCI) research, systems based on event-related potentials (ERP) are considered particularly successful and robust. This stems in part from the repeated stimulation, which counteracts the low signal-to-noise ratio of the electroencephalogram. Repeated stimulation leads to an optimization problem, as more repetitions also cost more time; the optimal number of repetitions thus represents a data-dependent trade-off between stimulation time and the obtained accuracy. Several methods for dealing with this have been proposed as 'early stopping', 'dynamic stopping' or 'adaptive stimulation'. Despite their high potential for BCI systems at the patient's bedside, these methods are typically ignored in current BCI literature. The goal of the current study is to assess their benefit. APPROACH: This study assesses for the first time the existing methods on a common benchmark of both artificially generated data and real BCI data from 83 BCI sessions, allowing a direct comparison between these methods in the context of text entry. MAIN RESULTS: The results clearly show a beneficial effect on the online performance of a BCI system if the trade-off between the number of stimulus repetitions and accuracy is optimized. All assessed methods work very well on data from good subjects and worse on data from low-performing subjects. Most methods, however, are robust in the sense that they do not reduce performance below the baseline of a simple no-stopping strategy. SIGNIFICANCE: Since all methods can be realized as a module between the BCI and an application, minimal changes are needed to include them in existing BCI software architectures. Furthermore, the hyperparameters of most methods depend to a large extent on only a single variable: the discriminability of the training data. For the convenience of BCI practitioners, the present study proposes linear regression coefficients for directly estimating the hyperparameters from this discriminability. The data used in this publication are made publicly available to benchmark future methods.
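The dynamic-stopping idea benchmarked above can be sketched with a minimal margin-based rule (the threshold and scoring scheme are our illustrative choices, not one of the evaluated methods): stimulus repetitions accumulate classifier scores per candidate symbol, and stimulation stops early once the leading symbol's cumulative lead over the runner-up exceeds a threshold, or a repetition budget is exhausted.

```python
import numpy as np

def dynamic_stop(score_stream, margin_threshold=2.0, max_reps=10):
    """score_stream: iterable of per-repetition score vectors
    (one classifier score per candidate symbol, at least two candidates).
    Returns (decoded_index, repetitions_used)."""
    cumulative = None
    for rep, scores in enumerate(score_stream, start=1):
        scores = np.asarray(scores, dtype=float)
        cumulative = scores if cumulative is None else cumulative + scores
        best, runner_up = np.sort(cumulative)[-1:-3:-1]  # top two, descending
        # stop early when confident, or when the repetition budget runs out
        if best - runner_up >= margin_threshold or rep >= max_reps:
            return int(np.argmax(cumulative)), rep
    return int(np.argmax(cumulative)), rep
```

Such a rule slots in exactly as the abstract describes: as a module between the classifier and the application, deciding after each repetition whether to present more stimuli.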
24
Acqualagna L, Blankertz B. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP). Clin Neurophysiol 2013; 124:901-8. [PMID: 23466266 DOI: 10.1016/j.clinph.2012.12.050] [Received: 01/05/2012] [Revised: 11/27/2012] [Accepted: 12/05/2012]
Affiliation(s)
- Laura Acqualagna
- Machine Learning Laboratory, Berlin Institute of Technology, Berlin, Germany

25
Zhang D, Song H, Xu R, Zhou W, Ling Z, Hong B. Toward a minimally invasive brain–computer interface using a single subdural channel: a visual speller study. Neuroimage 2013; 71:30-41. [PMID: 23313779 DOI: 10.1016/j.neuroimage.2012.12.069] [Received: 06/21/2012] [Revised: 11/29/2012] [Accepted: 12/29/2012]