1. Liu X, Hu B, Si Y, Wang Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med Biol Eng Comput 2024; 62:1981-1990. PMID: 38509350. DOI: 10.1007/s11517-024-03070-7.
Abstract
Brain-Computer Interfaces (BCIs) have shown great potential in providing communication and control for individuals with severe motor disabilities. However, traditional BCIs that rely on electroencephalography (EEG) signals suffer from low information transfer rates and high variability across users. Recently, eye movement signals have emerged as a promising alternative due to their high accuracy and robustness. Eye movement signals are the electrical or mechanical signals generated by the movements and behaviors of the eyes; they capture the diverse forms of eye movement, such as fixations and smooth pursuit, as well as other oculomotor activities like blinking. This article presents a review of recent studies on the development of BCI typing systems that incorporate eye movement signals. We first discuss the basic principles of BCI and the recent advancements in text entry. Then, we provide a comprehensive summary of the latest developments in BCI typing systems that leverage eye movement signals. This includes an in-depth analysis of hybrid BCIs built on the integration of electrooculography (EOG) and eye tracking technology, aiming to enhance the performance and functionality of the system. Moreover, we highlight the advantages and limitations of different approaches, as well as potential future directions. Overall, eye movement signals hold great potential for enhancing the usability and accessibility of BCI typing systems, and further research in this area could lead to more effective communication and control for individuals with motor disabilities.
Affiliation(s)
- Xi Liu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Bingliang Hu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Yang Si
- Department of Neurology, Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Chengdu, 611731, China
- University of Electronic Science and Technology of China, Chengdu, 611731, China
- Quan Wang
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China.
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China.
2. Ron-Angevin R, Fernández-Rodríguez Á, Velasco-Álvarez F, Lespinet-Najib V, André JM. Evaluation of Different Types of Stimuli in an Event-Related Potential-Based Brain-Computer Interface Speller under Rapid Serial Visual Presentation. Sensors (Basel) 2024; 24:3315. PMID: 38894107. PMCID: PMC11174573. DOI: 10.3390/s24113315.
Abstract
Rapid serial visual presentation (RSVP) is currently a suitable gaze-independent paradigm for controlling visual brain-computer interfaces (BCIs) based on event-related potentials (ERPs), especially for users with limited eye movement control. However, unlike gaze-dependent paradigms, gaze-independent ones have received less attention concerning the specific choice of visual stimuli used. In gaze-dependent BCIs, images of faces, particularly those tinted red, have been shown to be effective stimuli. This study aims to evaluate whether the colour of faces used as visual stimuli influences ERP-BCI performance under RSVP. Fifteen participants tested four conditions that varied only in the visual stimulus used: grey letters (GL), red famous faces with letters (RFF), green famous faces with letters (GFF), and blue famous faces with letters (BFF). The results indicated significant accuracy differences only between the GL and GFF conditions, unlike prior gaze-dependent studies. Additionally, GL achieved higher comfort ratings compared with the face-based conditions. This study highlights that the choice of stimulus type impacts both performance and user comfort, suggesting implications for future ERP-BCI designs for users requiring gaze-independent systems.
Affiliation(s)
- Ricardo Ron-Angevin
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain; (Á.F.-R.); (F.V.-Á.)
- Álvaro Fernández-Rodríguez
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain; (Á.F.-R.); (F.V.-Á.)
- Francisco Velasco-Álvarez
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain; (Á.F.-R.); (F.V.-Á.)
- Véronique Lespinet-Najib
- Laboratoire IMS, CNRS UMR 5218, Cognitive Team, Bordeaux INP-ENSC, 33400 Bordeaux, France; (V.L.-N.); (J.-M.A.)
- Jean-Marc André
- Laboratoire IMS, CNRS UMR 5218, Cognitive Team, Bordeaux INP-ENSC, 33400 Bordeaux, France; (V.L.-N.); (J.-M.A.)
3. Herbert C. Brain-computer interfaces and human factors: the role of language and cultural differences-Still a missing gap? Front Hum Neurosci 2024; 18:1305445. PMID: 38665897. PMCID: PMC11043545. DOI: 10.3389/fnhum.2024.1305445.
Abstract
Brain-computer interfaces (BCIs) aim at the non-invasive investigation of brain activity to support users' communication and interaction with their environment by means of brain-machine assisted technologies. Despite technological progress and promising research aimed at understanding the influence of human factors on BCI effectiveness, some topics still remain unexplored. The aim of this article is to discuss why future BCI research should consider the language of the user, its embodied grounding in perception, action and emotion, and its interaction with cultural differences in information processing. Based on evidence from recent studies, it is proposed that detection of language abilities and language training are two main topics of enquiry for future BCI studies, in order to extend communication among vulnerable and healthy BCI users from bench to bedside and real-world applications. In addition, cultural differences shape perception, action, cognition, language and emotion subjectively, behaviorally and neuronally. Therefore, BCI research should consider cultural differences in information processing in order to develop culture- and language-sensitive BCI applications for different user groups, and should investigate the linguistic and cultural contexts in which the BCI will be used.
Affiliation(s)
- Cornelia Herbert
- Applied Emotion and Motivation Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
4. Reichert C, Sweeney-Reed CM, Hinrichs H, Dürschmid S. A toolbox for decoding BCI commands based on event-related potentials. Front Hum Neurosci 2024; 18:1358809. PMID: 38505100. PMCID: PMC10949531. DOI: 10.3389/fnhum.2024.1358809.
Abstract
Commands in brain-computer interface (BCI) applications often rely on the decoding of event-related potentials (ERPs). For instance, the P300 potential is frequently used as a marker of attention to an oddball event. Error-related potentials and the N2pc signal are further examples of ERPs used for BCI control. One challenge in decoding brain activity from the electroencephalogram (EEG) is the selection of the most suitable channels and appropriate features for a particular classification approach. Here we introduce a toolbox that enables ERP-based decoding using the full set of channels, while automatically extracting informative components from relevant channels. The strength of our approach is that it handles sequences of stimuli that encode multiple items using binary classification, such as the target vs. nontarget events typically used in ERP-based spellers. We demonstrate example application scenarios and evaluate the toolbox's performance on four openly available datasets: a P300-based matrix speller, a P300-based rapid serial visual presentation (RSVP) speller, a binary BCI based on the N2pc, and a dataset capturing error potentials. We show that our approach achieves performance comparable to that reported in the original papers, with the advantage that only conventional preprocessing is required of the user, while channel weighting and decoding are performed internally. Thus, we provide a tool to reliably decode ERPs for BCI use with minimal programming requirements.
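The target-vs-nontarget decoding scheme the abstract describes can be sketched in a few lines. This is a minimal illustration only, not the toolbox's actual API: the feature model, the nearest-class-mean discriminant, and all names (`flash_epoch`, `p300`) are assumptions made for demonstration. Each speller item is flashed repeatedly; every flash epoch is scored by a binary classifier, and the per-item score sums decide which item was attended.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, n_reps, n_feat = 6, 10, 32   # 6 spellable items, 10 flash rounds
target = 3                            # item the (simulated) user attends

# Hypothetical feature model: target flashes carry a small P300-like
# deflection on top of noise, nontarget flashes are noise only.
p300 = np.zeros(n_feat)
p300[10:20] = 1.0

def flash_epoch(is_target):
    return 0.8 * is_target * p300 + rng.normal(0.0, 1.0, n_feat)

# Training data -> nearest-class-mean discriminant w = mu_target - mu_nontarget.
train_t = np.array([flash_epoch(True) for _ in range(100)])
train_nt = np.array([flash_epoch(False) for _ in range(100)])
w = train_t.mean(axis=0) - train_nt.mean(axis=0)

# Decoding: score every flash, accumulate per item, pick the maximum.
scores = np.zeros(n_items)
for _ in range(n_reps):
    for item in range(n_items):
        scores[item] += w @ flash_epoch(item == target)
decoded = int(np.argmax(scores))
print("decoded item:", decoded)
```

Real systems replace the nearest-class-mean rule with stronger classifiers (e.g. regularized LDA) and learn channel weights from data, but the accumulate-and-argmax structure over binary flash scores is the same.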
Affiliation(s)
- Christoph Reichert
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Catherine M. Sweeney-Reed
- Neurocybernetics and Rehabilitation, Department of Neurology, Otto von Guericke University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto von Guericke University, Magdeburg, Germany
- Hermann Hinrichs
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto von Guericke University, Magdeburg, Germany
- Department of Neurology, Otto von Guericke University, Magdeburg, Germany
- Stefan Dürschmid
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto von Guericke University, Magdeburg, Germany
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Department of Cellular Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany
5. Larsen OFP, Tresselt WG, Lorenz EA, Holt T, Sandstrak G, Hansen TI, Su X, Holt A. A method for synchronized use of EEG and eye tracking in fully immersive VR. Front Hum Neurosci 2024; 18:1347974. PMID: 38468815. PMCID: PMC10925625. DOI: 10.3389/fnhum.2024.1347974.
Abstract
This study explores the synchronization of multimodal physiological data streams, in particular, the integration of electroencephalography (EEG) with a virtual reality (VR) headset featuring eye-tracking capabilities. A potential use case for the synchronized data streams is demonstrated by implementing a hybrid steady-state visually evoked potential (SSVEP) based brain-computer interface (BCI) speller within a fully immersive VR environment. The hardware latency analysis reveals an average offset of 36 ms between EEG and eye-tracking data streams and a mean jitter of 5.76 ms. The study further presents a proof of concept brain-computer interface (BCI) speller in VR, showcasing its potential for real-world applications. The findings highlight the feasibility of combining commercial EEG and VR technologies for neuroscientific research and open new avenues for studying brain activity in ecologically valid VR environments. Future research could focus on refining the synchronization methods and exploring applications in various contexts, such as learning and social interactions.
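The offset and jitter figures reported above are the kind of statistics one obtains by comparing timestamps of the same marker events recorded in both streams. The sketch below is a generic illustration of that computation on synthetic data, not the authors' code; the function name and the simulated 36 ms lag are assumptions chosen to mirror the reported numbers.

```python
import numpy as np

def stream_alignment_stats(eeg_ts, eye_ts):
    """Mean offset and jitter (std of per-event offsets) between two
    marker streams carrying the same events; timestamps in seconds."""
    offsets = np.asarray(eye_ts, float) - np.asarray(eeg_ts, float)
    return offsets.mean(), offsets.std()

# Synthetic example: a constant 36 ms lag plus small random jitter.
rng = np.random.default_rng(0)
eeg = np.cumsum(rng.uniform(0.5, 1.5, size=200))        # event times (s)
eye = eeg + 0.036 + rng.normal(0.0, 0.006, size=200)    # lagged copies
mean_offset, jitter = stream_alignment_stats(eeg, eye)
print(f"mean offset {mean_offset * 1000:.1f} ms, jitter {jitter * 1000:.1f} ms")
```

In practice the shared events come from hardware triggers or software markers pushed to both recording pipelines (e.g. via Lab Streaming Layer), and a constant mean offset can simply be subtracted during alignment, while jitter bounds the residual timing uncertainty.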
Affiliation(s)
- Olav F. P. Larsen
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- William G. Tresselt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Emanuel A. Lorenz
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tomas Holt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Grethe Sandstrak
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tor I. Hansen
- Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Acquired Brain Injury, St. Olav's University Hospital, Trondheim, Norway
- Xiaomeng Su
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Alexander Holt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
6. Shi B, Yue Z, Yin S, Zhao J, Wang J. Multi-domain feature joint optimization based on multi-view learning for improving the EEG decoding. Front Hum Neurosci 2023; 17:1292428. PMID: 38130433. PMCID: PMC10733485. DOI: 10.3389/fnhum.2023.1292428.
Abstract
Background: Brain-computer interface (BCI) systems based on motor imagery (MI) have been widely used in neurorehabilitation. Feature extraction by the common spatial pattern (CSP) is very popular in MI classification, but the effectiveness of CSP is highly affected by the frequency band and time window of the electroencephalogram (EEG) segments and by the channels selected.
Objective: In this study, multi-domain feature joint optimization (MDFJO) based on multi-view learning is proposed, which aims to select discriminative features that enhance classification performance.
Method: The channel patterns are divided using the Fisher discriminant criterion (FDC), and the raw EEG is segmented into multiple sub-band and time-interval signals. High-dimensional features are constructed by applying CSP to each EEG segment. The multi-view learning method is then used to select the optimal features, and a feature sparsification strategy at the time level is proposed to refine them further.
Results: Two public EEG datasets are employed to validate the proposed MDFJO method. The average classification accuracy of the MDFJO on Data 1 and Data 2 is 88.29% and 87.21%, respectively. The classification result of MDFJO was significantly better than MSO (p < 0.05), FBCSP32 (p < 0.01), and other competing methods (p < 0.001).
Conclusion: Compared with the CSP, sparse filter band common spatial pattern (SFBCSP), and filter bank common spatial pattern (FBCSP) methods using 16, 32, and all channels, as well as MSO, the MDFJO significantly improves test accuracy. The proposed feature sparsification strategy effectively enhances classification accuracy, and the proposed method could improve the practicability and effectiveness of BCI systems.
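The CSP step that MDFJO builds its high-dimensional feature pool from can be sketched as follows. This is a textbook whitening-based CSP implementation with log-variance features, offered as an illustrative baseline under stated assumptions, not the authors' MDFJO code; all function names and the toy data are hypothetical.

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=1):
    """Common spatial pattern filters via whitening + eigendecomposition.
    X1, X2: class-wise trials of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(X):
        return np.mean([x @ x.T / x.shape[1] for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    d, U = np.linalg.eigh(C1 + C2)          # composite covariance
    P = (U / np.sqrt(d)).T                  # whitening: P @ (C1+C2) @ P.T = I
    lam, B = np.linalg.eigh(P @ C1 @ P.T)   # eigenvalues in ascending order
    W = B.T @ P                             # spatial filters as rows
    n = W.shape[0]
    return W[np.r_[0:n_pairs, n - n_pairs:n]]  # most discriminative pairs

def log_var_features(W, X):
    """Log of normalized variance of filtered trials, the usual CSP feature."""
    Z = np.einsum('fc,ncs->nfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Toy data: class 1 has high variance on channel 0, class 2 on channel 1.
rng = np.random.default_rng(7)
X1 = rng.normal(size=(40, 3, 200)); X1[:, 0, :] *= 3.0
X2 = rng.normal(size=(40, 3, 200)); X2[:, 1, :] *= 3.0
W = csp_filters(X1, X2, n_pairs=1)
f1, f2 = log_var_features(W, X1), log_var_features(W, X2)
print("class-1 feature gap:", (f1[:, 1] - f1[:, 0]).mean())
print("class-2 feature gap:", (f2[:, 1] - f2[:, 0]).mean())
```

MDFJO's contribution sits on top of this: computing such features for many sub-band/time-window/channel-subset views and then jointly selecting the discriminative ones, rather than relying on a single fixed band and window as plain CSP does.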
Affiliation(s)
- Bin Shi
- Xi’an Research Institute of High-Technology, Xi’an, Shaanxi, China
- Zan Yue
- Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
- Shuai Yin
- Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
- Junyang Zhao
- Xi’an Research Institute of High-Technology, Xi’an, Shaanxi, China
- Jing Wang
- Institute of Robotics and Intelligent System, School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China