1. Gwon D, Ahn M. Motor task-to-task transfer learning for motor imagery brain-computer interfaces. Neuroimage 2024; 302:120906. [PMID: 39490945] [DOI: 10.1016/j.neuroimage.2024.120906]
Abstract
Motor imagery (MI) is one of the popular control paradigms in the non-invasive brain-computer interface (BCI) field. MI-BCI generally requires users to imagine a movement (e.g., of the left or right hand) to collect training data for generating a classification model during the calibration phase. However, this calibration phase is generally time-consuming and tedious, as users repeat the imagery many times over an extended period without receiving feedback. This obstacle makes MI-BCI unfriendly to users and hinders its adoption. On the other hand, motor execution (ME) and motor observation (MO) are relatively easier tasks, induce less fatigue than MI, and share neural mechanisms similar to those of MI. However, few studies have integrated these three tasks into BCIs. In this study, we propose a new task-to-task transfer learning approach across the three motor tasks (ME, MO, and MI) for building a more user-friendly MI-BCI. Twenty-eight subjects participated in a three-motor-task experiment while electroencephalography (EEG) was acquired, and their opinions on the three tasks were collected through a questionnaire survey. All three tasks showed a power decrease in the alpha rhythm, known as event-related desynchronization, with slight differences in temporal patterns. In the classification analysis, the cross-validated (within-task) accuracy was 67.05 % for ME, 65.93 % for MI, and 73.16 % for MO on average. Consistent with these results, the subjects rated MI (3.16) as the most difficult task compared with MO (1.42) and ME (1.41), with p < 0.05. In the task-to-task transfer analysis, where training and testing are performed on different task datasets, the ME-trained model yielded an accuracy of 65.93 % on the MI test, statistically similar to the within-task accuracy (p > 0.05), and the MO-trained model achieved 60.82 %. Combining two datasets yielded interesting results: a model trained on ME plus 50 % of the MI data (50-shot) classified MI with 69.21 % accuracy, outperforming the within-task accuracy (p < 0.05), and a model trained on MO plus 50 % of the MI data reached 66.75 %. Among the low performers (within-task accuracy of 70 % or less), 90 % (n = 21) of the subjects improved on the MI test when trained with ME, and 76.2 % (n = 16) improved when trained with MO at 50-shot. These results demonstrate that task-to-task transfer learning is possible and could be a promising approach to building a user-friendly training protocol in MI-BCI.
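A minimal sketch of the 50-shot scheme described in this abstract: pool all ME trials with half of the MI trials, train a classifier, and test on the held-out MI trials. The band-power features and LDA classifier here stand in for the authors' actual pipeline, and all array names are assumptions.

```python
# Sketch of the 50-shot task-to-task transfer scheme (not the authors' exact pipeline).
# Assumed inputs: X_me, X_mi are (n_trials, n_channels, n_samples) EEG arrays,
# y_me, y_mi are left/right labels (0 or 1); fs is the sampling rate in Hz.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpower_features(X, fs, band=(8.0, 13.0)):
    """Log band power per channel in the alpha band (8-13 Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    Xf = filtfilt(b, a, X, axis=-1)
    return np.log(np.mean(Xf ** 2, axis=-1))  # (n_trials, n_channels)

def transfer_50shot(X_me, y_me, X_mi, y_mi, fs):
    half = len(X_mi) // 2  # 50-shot: half of the MI trials join the training set
    X_train = np.concatenate([X_me, X_mi[:half]])
    y_train = np.concatenate([y_me, y_mi[:half]])
    clf = LinearDiscriminantAnalysis()
    clf.fit(bandpower_features(X_train, fs), y_train)
    # Evaluate on the held-out MI trials only
    return clf.score(bandpower_features(X_mi[half:], fs), y_mi[half:])
```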
Affiliation(s)
- Daeun Gwon
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, 37554, South Korea
- Minkyu Ahn
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, 37554, South Korea; School of Computer Science and Electrical Engineering, Handong Global University, Pohang, 37554, South Korea
2. Lee S, Kim M, Ahn M. Evaluation of consumer-grade wireless EEG systems for brain-computer interface applications. Biomed Eng Lett 2024; 14:1433-1443. [PMID: 39465107] [PMCID: PMC11502727] [DOI: 10.1007/s13534-024-00416-w]
Abstract
With the growing popularity of consumer-grade electroencephalogram (EEG) devices for health, entertainment, and cognitive research, assessing their signal quality is essential. In this study, we evaluated four consumer-grade wireless, dry-electrode EEG systems widely used for brain-computer interface (BCI) research and applications, comparing them with a research-grade system. We designed an EEG phantom method that reproduces µV-level EEG signals and evaluated the five devices on their spectral responses, the temporal patterns of event-related potentials (ERPs), and the spectral patterns of resting-state EEG. We found that the consumer-grade devices had limited bandwidth compared with the research-grade device. A late component (e.g., P300) was detectable in the consumer-grade devices, but the overall ERP temporal pattern was distorted; only one device showed an ERP temporal pattern comparable to that of the research-grade device. On the other hand, activation of the alpha rhythm was observable in all devices. These results offer valuable guidance for researchers and developers in selecting suitable EEG devices for BCI research and applications.
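The phantom evaluation above amounts to injecting a known µV-level signal and comparing each device's recorded spectrum against the input. A sketch of that spectral-response estimate, using SciPy's Welch method; the variable names and segment length are assumptions, not the authors' protocol.

```python
# Sketch of a spectral-response estimate for a phantom-based device test
# (illustrative, not the authors' protocol). `injected` is the known phantom
# signal and `recorded` is what the EEG device captured, both 1-D arrays at fs Hz.
import numpy as np
from scipy.signal import welch

def spectral_response(injected, recorded, fs, nperseg=1024):
    f, p_in = welch(injected, fs=fs, nperseg=nperseg)
    _, p_out = welch(recorded, fs=fs, nperseg=nperseg)
    # Device gain at each frequency, in dB; a flat curve across the EEG band
    # (~1-40 Hz) indicates adequate bandwidth, a roll-off indicates limited bandwidth.
    return f, 10.0 * np.log10(p_out / p_in)
```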
Affiliation(s)
- Seungchan Lee
- Department of Medical Device, Korea Institute of Machinery & Materials, Daegu, 42994, Republic of Korea
- Misung Kim
- School of Computer Science and Electrical Engineering, Handong Global University, Pohang, 37554, Republic of Korea
- Minkyu Ahn
- School of Computer Science and Electrical Engineering, Handong Global University, Pohang, 37554, Republic of Korea
3. Song M, Gwon D, Jun SC, Ahn M. Signal alignment for cross-datasets in P300 brain-computer interfaces. J Neural Eng 2024; 21:036007. [PMID: 38657615] [DOI: 10.1088/1741-2552/ad430d]
Abstract
Objective. Transfer learning has become an important issue in the brain-computer interface (BCI) field, and studies on subject-to-subject transfer within the same dataset have been performed. However, few studies have addressed dataset-to-dataset transfer, including paradigm-to-paradigm transfer. In this study, we propose a signal alignment (SA) for P300 event-related potential (ERP) signals that is intuitive, simple, computationally inexpensive, and usable for cross-dataset transfer learning. Approach. We propose a linear SA that uses the P300's latency, amplitude scale, and a reverse (polarity) factor to transform signals. For evaluation, four datasets were introduced: two from conventional P300 Speller BCIs, one from a P300 Speller with face stimuli, and one from a standard auditory oddball paradigm. Results. While the standard approach without SA had an average precision (AP) score of 25.5%, the proposed approach achieved 35.8%, and the proportion of subjects showing improvement was 36.0% on average. In particular, we confirmed that the Speller dataset with face stimuli was more compatible with the other datasets. Significance. We propose a simple and intuitive way to align ERP signals that exploits the characteristics of ERP signals. The results demonstrate the feasibility of cross-dataset transfer learning, even between datasets with different paradigms.
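The linear SA can be pictured as three elementary transforms: a latency shift, an amplitude scale, and a polarity (reverse) factor. A sketch under that reading of the abstract; the function and parameter names are ours, not the paper's formulation.

```python
# Sketch of a linear signal alignment for ERP epochs (our reading of the idea,
# not the paper's exact formulation). `epoch` is a (n_channels, n_samples)
# array; latency_shift is in samples, scale and reverse are scalars.
import numpy as np

def align_erp(epoch, latency_shift, scale, reverse):
    # np.roll wraps samples around the epoch edge; zero-padding would be
    # cleaner in practice, but this keeps the sketch short.
    shifted = np.roll(epoch, -latency_shift, axis=-1)  # align P300 latency
    return reverse * scale * shifted  # match amplitude scale and polarity

# e.g. map a source dataset whose P300 peaks 40 samples later, at half the
# amplitude and with inverted polarity, onto the target dataset's signal space:
# aligned = align_erp(epoch, latency_shift=40, scale=2.0, reverse=-1.0)
```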
Affiliation(s)
- Minseok Song
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- Daeun Gwon
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- Sung Chan Jun
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- Minkyu Ahn
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- School of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
4. Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 international brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. [PMID: 35937679] [PMCID: PMC9354666] [DOI: 10.3389/fnhum.2022.898300]
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, openly accessible neuroscientific data that could be used to evaluate the current state of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ ear-EEG) detection in an ambulatory environment. Not only scientists from the BCI field but also scholars with a broad variety of backgrounds and nationalities participated in the competition to address these challenges. Each dataset was prepared and split into three parts, released to the competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating several trends of interest to BCI researchers.
Affiliation(s)
- Ji-Hoon Jeong
- School of Computer Science, Chungbuk National University, Cheongju, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Eun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seo-Hyun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Gi-Hwan Shin
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Seok Kweon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- José del R. Millán
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Klaus-Robert Müller
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany
- Max Planck Institute for Informatics, Saarbrücken, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
5. Rojas M, Ponce P, Molina A. Development of a sensing platform based on hands-free interfaces for controlling electronic devices. Front Hum Neurosci 2022; 16:867377. [PMID: 35754778] [PMCID: PMC9231433] [DOI: 10.3389/fnhum.2022.867377]
Abstract
Hands-free interfaces are essential for people with limited mobility to interact with biomedical or electronic devices. However, there are not enough sensing platforms that can be quickly tailored to users with disabilities. This article therefore proposes a sensing platform that patients with mobility impairments can use to manipulate electronic devices, thereby increasing their independence. A new sensing scheme is developed using three hands-free signals as inputs: voice commands, head movements, and eye gestures. These signals are obtained with non-invasive sensors: a microphone for speech commands, an accelerometer to detect inertial head movements, and infrared oculography to register eye gestures. The signals are processed and received as the user's commands by an output unit, which provides several communication ports for sending control signals to other devices. The interaction methods are intuitive and could extend the boundaries within which people with disabilities manipulate local or remote digital systems. As a case study, two volunteers with severe disabilities used the sensing platform to steer a power wheelchair. The participants performed 15 common skills for wheelchair users, and their capacities were evaluated according to a standard test. With head control, volunteers A and B scored 93.3% and 86.6%, respectively; with voice control, they scored 63.3% and 66.6%. These results show that the end-users achieved high performance, completing most of the skills with the head-movement interface, whereas they could not complete most of the skills with voice control. The results provide valuable information for tailoring the sensing platform to the end-user's needs.
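One way to picture the sensing scheme is as a loop that maps whichever hands-free channel fires (voice, head, or eye) onto a common set of wheelchair commands. A hypothetical sketch; the command vocabulary and priority order are assumptions, not the authors' design.

```python
# Hypothetical sketch of fusing three hands-free input channels into one
# wheelchair command per cycle (illustrative only; not the authors' implementation).
from typing import Optional

VOICE_MAP = {"go": "forward", "back": "backward", "stop": "stop"}
HEAD_MAP = {"tilt_forward": "forward", "tilt_left": "left", "tilt_right": "right"}
EYE_MAP = {"blink_double": "stop"}

def fuse_inputs(voice: Optional[str], head: Optional[str], eye: Optional[str]) -> Optional[str]:
    """Resolve one output command per cycle; a safety-critical eye 'stop' wins."""
    if eye in EYE_MAP:
        return EYE_MAP[eye]
    if voice in VOICE_MAP:
        return VOICE_MAP[voice]
    if head in HEAD_MAP:
        return HEAD_MAP[head]
    return None  # no recognized input this cycle

# e.g. fuse_inputs(voice=None, head="tilt_left", eye=None) -> "left"
```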
Affiliation(s)
- Mario Rojas
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Pedro Ponce
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Arturo Molina
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
6. P300 brain-computer interface-based drone control in virtual and augmented reality. Sensors 2021; 21:5765. [PMID: 34502655] [PMCID: PMC8434009] [DOI: 10.3390/s21175765]
Abstract
Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain-computer interface (BCI) studies. However, few studies incorporate both AR and VR to compare performance between the two environments. It is therefore necessary to develop a BCI application usable in both VR and AR so that BCI performance can be compared between them. In this study, we developed an open-source drone control application using a P300-based BCI that runs in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in the two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with the time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
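The VR-vs-AR comparison reported above (p > 0.05) amounts to a paired test across the twenty subjects. A minimal sketch with SciPy, assuming per-subject accuracy arrays; the data here are random placeholders, not the study's results.

```python
# Sketch of a within-subject VR-vs-AR comparison (assumed analysis; the paper
# does not publish this code). acc_vr and acc_ar hold one classification
# accuracy per subject, in the same subject order.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(0)
acc_vr = rng.uniform(0.7, 0.95, size=20)           # placeholder accuracies in VR
acc_ar = acc_vr + rng.normal(0.0, 0.03, size=20)   # placeholder accuracies in AR

t, p = ttest_rel(acc_vr, acc_ar)    # paired t-test across subjects
w, p_np = wilcoxon(acc_vr, acc_ar)  # non-parametric alternative
print(f"paired t-test p={p:.3f}; p > 0.05 means no significant VR/AR difference")
```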