1. Zhang X, Zhang T, Jiang Y, Zhang W, Lu Z, Wang Y, Tao Q. A novel brain-controlled prosthetic hand method integrating AR-SSVEP augmentation, asynchronous control, and machine vision assistance. Heliyon 2024; 10:e26521. PMID: 38463871; PMCID: PMC10920167; DOI: 10.1016/j.heliyon.2024.e26521.
Abstract
Background and objective: Brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEP) are expected to help disabled patients through alternative prosthetic hand assistance. However, existing studies still have shortcomings in interaction aspects such as the stimulus paradigm and control logic. The purpose of this study was to innovate the visual stimulus paradigm and the asynchronous decoding/control strategy by integrating augmented reality technology, and to propose an asynchronous pattern recognition algorithm, thereby improving the interaction logic and practical applicability of a prosthetic hand driven by the BCI system. Methods: An asynchronous visual stimulus paradigm based on an augmented reality (AR) interface was proposed, with eight control modes: Grasp, Put down, Pinch, Point, Fist, Palm push, Hold pen, and Initial. Exploiting the attentional-orienting characteristics of the paradigm, a novel asynchronous pattern recognition algorithm combining center extended canonical correlation analysis and a support vector machine (Center-ECCA-SVM) was proposed. This study then proposed an intelligent BCI system switch based on a deep learning object detection algorithm (YOLOv4) to improve the level of user interaction. Finally, two experiments were designed to test the performance of the brain-controlled prosthetic hand system and its practical performance in real scenarios. Results: Under the AR paradigm of this study, compared with the liquid crystal display (LCD) paradigm, the average SSVEP spectrum amplitude across subjects increased by 17.41% and the signal-to-noise ratio (SNR) increased by 3.52%. The average stimulus pattern recognition accuracy was 96.71 ± 3.91%, which was 2.62% higher than under the LCD paradigm. With a data analysis window of 2 s, the Center-ECCA-SVM classifier achieved 94.66 ± 3.87% and 97.40 ± 2.78% asynchronous pattern recognition accuracy under the Normal and Tolerant metrics, respectively. The YOLOv4-tiny model reached 25.29 fps and 96.4% detection confidence for the prosthetic hand in real-time detection. Finally, the brain-controlled prosthetic hand helped the subjects complete four kinds of daily-life tasks in a real scene, with completion times all within an acceptable range, verifying the effectiveness and practicability of the system. Conclusion: This research improves the user-interaction level of a prosthetic hand driven by a BCI system, with advances in the SSVEP paradigm, asynchronous pattern recognition, interaction, and control logic. It also provides support for BCI-based alternative prosthetic control and movement disorder rehabilitation programs.
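The amplitude and SNR comparisons above follow common SSVEP practice: amplitude is read from the FFT spectrum at the stimulation frequency, and SNR relates that bin to its neighbors. The sketch below is a minimal illustration of one such convention, not the authors' exact pipeline; the function name and the neighbor-bin SNR definition are assumptions.

```python
import numpy as np

def ssvep_amplitude_and_snr(eeg, fs, f_stim, n_neighbors=10):
    """Estimate SSVEP spectrum amplitude and narrow-band SNR at f_stim.

    eeg: 1-D signal from one occipital channel (e.g., Oz).
    SNR here is the amplitude at the stimulation frequency divided by the
    mean amplitude of n_neighbors adjacent bins -- a common convention,
    not necessarily the paper's exact definition.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) * 2 / n   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))                 # bin closest to f_stim
    lo, hi = max(k - n_neighbors // 2, 1), k + n_neighbors // 2 + 1
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]      # exclude the target bin
    return spectrum[k], spectrum[k] / neighbors.mean()

# Example: a 10 Hz SSVEP-like sine in noise, 2 s at 250 Hz
fs, t = 250, np.arange(0, 2, 1 / 250)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
amp, snr = ssvep_amplitude_and_snr(sig, fs, 10.0)
print(f"amplitude={amp:.3f}, SNR={snr:.2f}")
```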
Affiliation(s)
- Xiaodong Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Teng Zhang
- Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Yongyu Jiang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Weiming Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Zhufeng Lu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Yu Wang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Wulumuqi, Xinjiang, 830000, China
2. Choi YJ, Kwon OS, Kim SP. Design of auditory P300-based brain-computer interfaces with a single auditory channel and no visual support. Cogn Neurodyn 2023; 17:1401-1416. PMID: 37974580; PMCID: PMC10640544; DOI: 10.1007/s11571-022-09901-3.
Abstract
Non-invasive brain-computer interfaces (BCIs) based on an event-related potential (ERP) component, the P300, elicited via the oddball paradigm, have been extensively developed to enable device control and communication. While most P300-based BCIs employ visual stimuli in the oddball paradigm, auditory P300-based BCIs also need to be developed for users with unreliable gaze control or limited visual processing. Specifically, auditory BCIs without additional visual support or multi-channel sound sources can broaden the application areas of BCIs. This study aimed to design optimal stimuli for auditory BCIs in such circumstances, comparing artificial (e.g., beep) and natural (e.g., human voice and animal) sounds. It also aimed to investigate differences between auditory and visual stimulation for online P300-based BCIs. Natural sounds led to both higher online BCI performance and larger differences in ERP amplitudes between target and non-target stimuli than artificial sounds. However, no single type of sound offered the best performance for all subjects; rather, subjects differed in their preference between the human voice and animal sounds. In line with previous reports, visual stimuli yielded higher BCI performance (average 77.56%) than their auditory counterparts (average 54.67%). In addition, spatiotemporal patterns of the differences in ERP amplitudes between target and non-target were more dynamic with visual stimuli than with auditory stimuli. The results suggest that selecting a natural auditory stimulus optimal for the individual user, as well as making the target/non-target differences in ERP amplitudes more dynamic, may further improve auditory P300-based BCIs.
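Quantifying the target versus non-target ERP difference that drives a P300 BCI starts from stimulus-locked epoch averaging. As a rough, generic illustration (not this study's analysis code; the variable names eeg, target_onsets, and nontarget_onsets are hypothetical):

```python
import numpy as np

def erp_average(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked epochs (a basic ERP estimate).

    eeg: 1-D single-channel recording; onsets: stimulus onset times in seconds.
    Returns (times, mean_epoch) with simple pre-stimulus baseline correction.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t0 in onsets:
        i = int(round(t0 * fs))
        if i - pre < 0 or i + post > len(eeg):
            continue                       # skip epochs that run off the record
        ep = eeg[i - pre:i + post].astype(float)
        ep -= ep[:pre].mean()              # baseline-correct on the pre-stimulus interval
        epochs.append(ep)
    times = np.arange(-pre, post) / fs
    return times, np.mean(epochs, axis=0)

# The target-minus-nontarget difference wave then exposes the P300:
# _, tgt = erp_average(eeg, target_onsets, fs)
# _, non = erp_average(eeg, nontarget_onsets, fs)
# p300_effect = tgt - non   # typically peaks ~300-600 ms post-stimulus
```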
Affiliation(s)
- Yun-Joo Choi
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Oh-Sang Kwon
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
3. Li Q, Zhang T, Song Y, Liu Y, Sun M. [A design and evaluation of a wearable P300 brain-computer interface system based on HoloLens2]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2023; 40:709-717. PMID: 37666761; PMCID: PMC10477399; DOI: 10.7507/1001-5515.202207055.
Abstract
Patients with amyotrophic lateral sclerosis (ALS) often have difficulty expressing their intentions through language and behavior, which prevents them from communicating properly with the outside world and seriously affects their quality of life. The brain-computer interface (BCI) has received much attention as an aid for ALS patients to communicate with the outside world, but heavy devices cause inconvenience to patients in application. To improve the portability of the BCI system, this paper proposed a wearable P300-speller brain-computer interface system based on augmented reality (MR-BCI). The system used a HoloLens2 augmented reality device to present the paradigm, an OpenBCI device to capture EEG signals, and a Jetson Nano embedded computer to process the data. Meanwhile, to optimize the system's character-recognition performance, this paper proposed a convolutional neural network classification method with low computational complexity, applied to the embedded system for real-time classification. The results showed that, compared with a P300-speller brain-computer interface system based on a computer screen (CS-BCI), MR-BCI induced an increase in the amplitude of the P300 component, increases in accuracy of 1.7% and 1.4% in offline and online experiments, respectively, and an increase in the information transfer rate of 0.7 bit/min. The MR-BCI proposed in this paper realizes a wearable BCI system while maintaining system performance, and has a positive effect on the clinical application of BCI.
Affiliation(s)
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, Guangdong 528437, P. R. China
- Tingjia Zhang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China
- Yu Song
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China
- Yulong Liu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China
- Meiqi Sun
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, P. R. China
4. He C, Du Y, Zhao X. A separable convolutional neural network-based fast recognition method for AR-P300. Front Hum Neurosci 2022; 16:986928. PMID: 36337859; PMCID: PMC9626510; DOI: 10.3389/fnhum.2022.986928.
Abstract
Augmented reality-based brain-computer interfaces (AR-BCI) have a low signal-to-noise ratio (SNR) and high real-time requirements. Classical machine learning algorithms that improve recognition accuracy through multiple averaging significantly reduce the information transfer rate (ITR) of such systems. In this study, a fast recognition method based on a separable convolutional neural network (SepCNN) was developed for the AR-based P300 component (AR-P300). SepCNN achieved single-trial extraction of AR-P300 features and improved recognition speed. A nine-target AR-P300 single-stimulus paradigm, administered with AR holographic glasses, was designed to verify the effectiveness of SepCNN. Compared with four classical algorithms, SepCNN significantly improved the average target recognition accuracy (81.1%) and information transfer rate (57.90 bits/min) of single-trial AR-P300 extraction. SepCNN with single-trial extraction also attained better results than the classical algorithms with multiple averaging.
5. Zhang S, Gao X, Chen X. Humanoid Robot Walking in Maze Controlled by SSVEP-BCI Based on Augmented Reality Stimulus. Front Hum Neurosci 2022; 16:908050. PMID: 35911600; PMCID: PMC9330178; DOI: 10.3389/fnhum.2022.908050.
Abstract
Application studies of robot control based on brain-computer interfaces (BCI) not only help promote the practicality of BCI but also help advance robot technology, which is of great significance. Among the many obstacles, the poor portability of the stimulator brings much inconvenience to robot control tasks. In this study, augmented reality (AR) technology was employed as the visual stimulator of a steady-state visual evoked potential (SSVEP) BCI, and a robot maze-walking experiment was designed to test the applicability of the AR-BCI system. The online experiment was designed to complete the robot maze-walking task, with walking commands sent out by the BCI system, in which human intentions were decoded by the Filter Bank Canonical Correlation Analysis (FBCCA) algorithm. The results showed that all 12 subjects could complete the robot maze-walking task, which verified the feasibility of the AR-SSVEP-NAO system. This study provides an application demonstration for robot control based on brain-computer interfaces and a new method for future portable BCI systems.
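FBCCA builds on standard canonical correlation analysis (CCA): the EEG window is correlated with sine/cosine references at each candidate frequency, and the best-correlated frequency wins; the filter-bank variant additionally band-pass filters the EEG into subbands and combines the squared subband correlations with decaying weights (commonly w(n) = n^(-1.25) + 0.25). A minimal sketch of the CCA core, as a generic illustration rather than the authors' implementation:

```python
import numpy as np

def canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def sine_refs(f, fs, n_samples, n_harm=3):
    """Sine/cosine reference set at f and its harmonics (n_samples x 2*n_harm)."""
    t = np.arange(n_samples) / fs
    return np.column_stack([fn(2 * np.pi * (h + 1) * f * t)
                            for h in range(n_harm) for fn in (np.sin, np.cos)])

def cca_classify(eeg, fs, freqs, n_harm=3):
    """eeg: (n_samples, n_channels); returns the index of the recognized frequency."""
    rhos = [canon_corr(eeg, sine_refs(f, fs, eeg.shape[0], n_harm)) for f in freqs]
    return int(np.argmax(rhos))
```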
Affiliation(s)
- Shangen Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiaogang Chen
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Correspondence: Xiaogang Chen
6. Zhang R, Xu Z, Zhang L, Cao L, Hu Y, Lu B, Shi L, Yao D, Zhao X. The effect of stimulus number on the recognition accuracy and information transfer rate of SSVEP-BCI in augmented reality. J Neural Eng 2022; 19. PMID: 35477130; DOI: 10.1088/1741-2552/ac6ae5.
Abstract
OBJECTIVE The biggest advantage of the steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) lies in its large command set and high information transfer rate (ITR). Almost all current SSVEP-BCIs use a computer screen (CS) to present flickering visual stimuli, which limits their flexible use in real-world scenes. Augmented reality (AR) technology provides the ability to superimpose visual stimuli on the real world, and it considerably expands the application scenarios of SSVEP-BCI. However, whether the advantages of SSVEP-BCI can be maintained when moving the visual stimuli to AR glasses is not known. This study investigated the effect of stimulus number on SSVEP-BCI in an AR context. APPROACH We designed SSVEP flickering stimulation interfaces with four different numbers of stimulus targets and displayed them in AR glasses and on a CS. Three common recognition algorithms were used to analyze the influence of stimulus number and stimulation time on the recognition accuracy and ITR of AR-SSVEP and CS-SSVEP. MAIN RESULTS The amplitude spectrum and signal-to-noise ratio of AR-SSVEP were not significantly different from CS-SSVEP at the fundamental frequency but were significantly lower than CS-SSVEP at the second harmonic. SSVEP recognition accuracy decreased as the stimulus number increased in AR-SSVEP but not in CS-SSVEP. When the stimulus number increased, the maximum ITR of CS-SSVEP also increased, but not for AR-SSVEP. When the stimulus number was 25, the maximum ITR (142.05 bits/min) was reached at 400 ms stimulation time. The importance of stimulation time in SSVEP was confirmed: as the stimulation time lengthened, the recognition accuracy of both AR-SSVEP and CS-SSVEP increased, peaking at 3 s. The ITR increased first and then slowly decreased after reaching its peak. SIGNIFICANCE Our study indicates that conclusions based on CS-SSVEP cannot simply be transferred to AR-SSVEP, and it is not advisable to set too many stimulus targets in an AR display device.
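ITR figures such as 142.05 bits/min are conventionally computed with the Wolpaw formula from the number of targets N, the accuracy P, and the time per selection. A worked sketch follows; the example numbers are illustrative, not taken from the paper, and the time base per selection (stimulation plus gaze-shift time) is an assumption.

```python
from math import log2

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate for an N-target BCI.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by selections per minute. Assumes 0 < P <= 1 and uniformly
    distributed errors across the N-1 wrong targets.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = log2(n)
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# 25 targets, 66% accuracy, 0.4 s stimulation + 0.5 s assumed gaze shift:
print(itr_bits_per_min(25, 0.66, 0.9))  # ~144 bits/min (illustrative values)
```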
Affiliation(s)
- Rui Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China
- Zongxin Xu
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China
- Lipeng Zhang
- Zhengzhou University, Zhengzhou 450001, China
- Lijun Cao
- Zhengzhou University, Zhengzhou 450000, China
- Yuxia Hu
- Zhengzhou University, Zhengzhou 450001, China
- Beihan Lu
- Zhengzhou University, Zhengzhou 450001, China
- Li Shi
- Department of Automation, Tsinghua University, Beijing 100084, China
- Dezhong Yao
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, Sichuan, China
- Xincan Zhao
- Zhengzhou University, Zhengzhou 450001, China
7. Andrews A. Integration of Augmented Reality and Brain-Computer Interface Technologies for Health Care Applications: Exploratory and Prototyping Study. JMIR Form Res 2022; 6:e18222. PMID: 35451963; PMCID: PMC9073621; DOI: 10.2196/18222.
Abstract
Background Augmented reality (AR) and brain-computer interface (BCI) are promising technologies with tremendous potential to revolutionize health care. While there has been growing interest in these technologies for medical applications in recent years, the combined use of AR and BCI remains a fairly unexplored area that offers significant opportunities for improving health care professional education and clinical practice. This paper describes a recent study exploring the integration of AR and BCI technologies for health care applications. Objective The described effort aims to advance an understanding of how AR and BCI technologies can effectively work together to transform modern health care practice by providing new mechanisms to improve patient and provider learning, communication, and shared decision-making. Methods The study methods included an environmental scan of AR and BCI technologies currently used in health care, a use case analysis for a combined AR-BCI capability, and development of an integrated AR-BCI prototype solution for health care applications. Results The study resulted in a novel interface technology solution that enables interoperability between consumer-grade wearable AR and BCI devices and provides users with the ability to control digital objects in augmented reality using neural commands. The article discusses this solution in the context of practical digital health use cases developed during the study in which the combined AR and BCI technologies are anticipated to produce the most impact. Conclusions As one of the pioneering efforts in the area of AR and BCI integration, the study presents a practical implementation pathway for AR-BCI integration and provides directions for future research and innovation in this area.
Affiliation(s)
- Anya Andrews
- Department of Internal Medicine, College of Medicine, University of Central Florida, Orlando, FL, United States
8. Ravi A, Lu J, Pearce S, Jiang N. Enhanced System Robustness of Asynchronous BCI in Augmented Reality using Steady-state Motion Visual Evoked Potential. IEEE Trans Neural Syst Rehabil Eng 2022; 30:85-95. PMID: 34990366; DOI: 10.1109/tnsre.2022.3140772.
Abstract
This study evaluated the effect of a change in background on steady-state visual evoked potential (SSVEP) and steady-state motion visual evoked potential (SSMVEP) based brain-computer interfaces (BCI) in a small-profile augmented reality (AR) headset. A four-target SSVEP and SSMVEP BCI was implemented using the Cognixion AR headset prototype. An active background (AB) and a non-active background (NB) were evaluated. The signal characteristics and classification performance of the two BCI paradigms were studied. Offline analysis was performed using canonical correlation analysis (CCA) and a complex-spectrum-based convolutional neural network (C-CNN). Finally, the asynchronous pseudo-online performance of the SSMVEP BCI was evaluated. Signal analysis revealed that the SSMVEP stimulus was more robust to background changes than the SSVEP stimulus in AR. The decoding performance revealed that the C-CNN method outperformed CCA for both stimulus types and the NB background, in agreement with results in the literature. The average offline accuracies for a 1-s window (W = 1 s) with C-CNN were (NB vs. AB): SSVEP, 82% ± 15% vs. 60% ± 21%; SSMVEP, 71.4% ± 22% vs. 63.5% ± 18%. Additionally, for W = 2 s, the AR-SSMVEP BCI with the C-CNN method reached 83.3% ± 27% (NB) and 74.1% ± 22% (AB). The results suggest that with the C-CNN method, the AR-SSMVEP BCI is both robust to changes in background conditions and provides higher decoding accuracy than the AR-SSVEP BCI. This study presents novel results that highlight the robustness and practical application of SSMVEP BCIs developed with a low-cost AR headset.
9. A CNN-based multi-target fast classification method for AR-SSVEP. Comput Biol Med 2021; 141:105042. PMID: 34802710; DOI: 10.1016/j.compbiomed.2021.105042.
Abstract
Because an augmented-reality-based brain-computer interface (AR-BCI) is easily disturbed by external factors, traditional electroencephalogram (EEG) classification algorithms fail to meet real-time processing requirements with a large number of stimulus targets or in a real environment. We propose a multi-target fast classification method for augmented-reality-based steady-state visual evoked potentials (AR-SSVEP) using a convolutional neural network (CNN). To explore the availability and accuracy of high-efficiency multi-target classification methods in AR-SSVEP with a short stimulation duration, a similar stimulus layout was used for a computer screen (PC) and an optical see-through head-mounted display (OST-HMD) device (HoloLens). The experiment included nine flicker stimuli of different frequencies, and a CNN-based multi-target fast classification method was constructed for the nine-class task; the average AR-BCI accuracy of our CNN model at 0.5-s and 1-s stimulus durations was 67.93% and 80.83%, respectively. These results verified the efficacy of the proposed model for multi-target classification in AR-BCI.
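The abstract does not spell out the network architecture, so the following is only a generic sketch of a compact CNN mapping multi-channel EEG windows to nine classes; all layer sizes, the 8-channel montage, and the 250-sample (1 s at 250 Hz) window are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SsvepCNN(nn.Module):
    """Compact CNN for multi-target SSVEP classification.

    Input: (batch, 1, n_channels, n_samples). Layer sizes are illustrative.
    """
    def __init__(self, n_channels=8, n_samples=250, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),  # spatial filter across channels
            nn.BatchNorm2d(16), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(1, 25), padding=(0, 12)),  # temporal filter
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AvgPool2d((1, 5)), nn.Dropout(0.5),
        )
        self.classify = nn.Linear(32 * (n_samples // 5), n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classify(z.flatten(1))

# Shape check with random data standing in for EEG windows:
model = SsvepCNN()
logits = model(torch.randn(4, 1, 8, 250))
print(logits.shape)  # torch.Size([4, 9])
```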
10. P300 Brain-Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors 2021; 21:5765. PMID: 34502655; PMCID: PMC8434009; DOI: 10.3390/s21175765.
Abstract
Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain-computer interface (BCI) studies. However, there is a lack of studies that incorporate both AR and VR to compare performance in the two environments. Therefore, it is necessary to develop a BCI application usable in both VR and AR so that BCI performance can be compared across the two environments. In this study, we developed an open-source drone control application using a P300-based BCI that can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in the two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
11. Using Brain Activity Patterns to Differentiate Real and Virtual Attended Targets during Augmented Reality Scenarios. Information 2021; 12:226. DOI: 10.3390/info12060226.
Abstract
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 of the 20 participants. Thus, the reliability of such a brain-computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
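Late fusion here means combining the outputs of independently trained EEG and eye-tracking classifiers rather than their raw features. A minimal sketch of probability-level fusion follows; the equal weighting and the two-class example are assumptions, not the study's tuned setup.

```python
import numpy as np

def late_fusion(p_eeg, p_eye, w_eeg=0.5):
    """Combine per-class probabilities from two classifiers by weighted average.

    p_eeg, p_eye: arrays of shape (n_classes,) from independently trained
    models (e.g., a shallow CNN on EEG and a classifier on eye tracking).
    The 50/50 weighting is an assumption; weights are usually tuned per user.
    """
    fused = w_eeg * np.asarray(p_eeg) + (1 - w_eeg) * np.asarray(p_eye)
    return int(np.argmax(fused))

print(late_fusion([0.45, 0.55], [0.7, 0.3]))  # -> 0 (eye tracking tips the decision)
```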
12. Chen X, Huang X, Wang Y, Gao X. Combination of Augmented Reality Based Brain-Computer Interface and Computer Vision for High-Level Control of a Robotic Arm. IEEE Trans Neural Syst Rehabil Eng 2020; 28:3140-3147. PMID: 33196442; DOI: 10.1109/tnsre.2020.3038209.
Abstract
Recent advances in robotics, neuroscience, and signal processing make it possible to operate a robot through an electroencephalography (EEG)-based brain-computer interface (BCI). Although some successful attempts have been made in recent years, the practicality of the entire system still has much room for improvement. The present study designed and realized a robotic arm control system by combining augmented reality (AR), computer vision, and a steady-state visual evoked potential (SSVEP) BCI. The AR environment was implemented with a Microsoft HoloLens. Flickering stimuli for eliciting SSVEPs were presented on the HoloLens, which allowed users to see both the robotic arm and the user interface of the BCI, so users did not need to switch attention between the visual stimulator and the robotic arm. A four-command SSVEP-BCI was built for users to choose the specific object to be operated by the robotic arm. Once an object was selected, computer vision provided the location and color of the object in the workspace. Subsequently, the object was autonomously picked up and placed by the robotic arm. According to online results from twelve participants, the mean classification accuracy of the proposed system was 93.96 ± 5.05%. Moreover, all subjects could use the proposed system to successfully pick and place objects in a specific order. These results demonstrate the potential of combining AR-BCI and computer vision to control robotic arms, which is expected to further promote the practicality of BCI-controlled robots.
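The division of labor described above (the BCI selects what to manipulate, while vision and motion planning handle how) can be summarized as a simple control loop. Everything below is hypothetical scaffolding illustrating that structure, not the authors' code: get_eeg_window, decode_ssvep, locate_objects, and the arm interface are placeholder names.

```python
def pick_and_place_loop(get_eeg_window, decode_ssvep, locate_objects, arm):
    """High-level control loop: the BCI chooses WHAT, vision/robot solve HOW.

    decode_ssvep returns one of four commands (the selected object class)
    or None when no confident selection is available; locate_objects returns
    {class: position} from the workspace camera.
    """
    while True:
        command = decode_ssvep(get_eeg_window())   # user's four-class selection
        if command is None:
            continue                               # no confident selection yet
        targets = locate_objects()
        if command in targets:
            arm.pick(targets[command])             # autonomous low-level execution
            arm.place_at_goal()
```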
13. Brain-Computer Interface-Based Humanoid Control: A Review. Sensors 2020; 20:3620. PMID: 32605077; PMCID: PMC7374399; DOI: 10.3390/s20133620.
Abstract
A brain-computer interface (BCI) acts as a communication mechanism using brain signals to control external devices. The generation of such signals is sometimes independent of voluntary control, as in passive BCI. This is especially beneficial for people with severe motor disabilities. Traditional BCI systems depended only on brain signals recorded using electroencephalography (EEG) and used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as telepresence, grasping of objects, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also reviews the methods and system designs used in the discussed applications.
14. Benitez-Andonegui A, Burden R, Benning R, Möckel R, Lührs M, Sorger B. An Augmented-Reality fNIRS-Based Brain-Computer Interface: A Proof-of-Concept Study. Front Neurosci 2020; 14:346. PMID: 32410938; PMCID: PMC7199634; DOI: 10.3389/fnins.2020.00346.
Abstract
Augmented reality (AR) enhances the user's environment by projecting virtual objects into the real world in real time. Brain-computer interfaces (BCIs) are systems that enable users to control external devices with their brain signals. BCIs can exploit AR technology to interact with the physical and virtual world and to explore new ways of displaying feedback. This is important for users to perceive and regulate their brain activity or shape their communication intentions while operating in the physical world. In this study, twelve healthy participants were introduced to and asked to choose between two motor-imagery tasks: mental drawing and interacting with a virtual cube. Participants first performed a functional localizer run, which was used to select a single fNIRS channel for decoding their intentions in eight subsequent choice-encoding runs. In each run, participants were asked to select one choice from a six-item list. A rotating AR cube was displayed on a computer screen as the main stimulus, with each face of the cube presented for 6 s and representing one choice of the six-item list. For five consecutive trials, participants were instructed to perform the motor-imagery task when the face of the cube that represented their choice was facing them (thereby temporally encoding the selected choice). At the end of each run, participants were provided with the decoded choice based on a joint analysis of all five trials. If the decoded choice was incorrect, an active error-correction procedure was applied by the participant. The choice list provided in each run was based on the decoded choice of the previous run. The experimental design allowed participants to navigate twice through a virtual menu that consisted of four levels if all choices were correctly decoded. Here we demonstrate for the first time that by using AR feedback and flexible choice encoding in the form of search trees, we can increase the degrees of freedom of a BCI system. We also show that participants can successfully navigate through a nested menu and achieve a mean accuracy of 74% using a single motor-imagery task and a single fNIRS channel.
Affiliation(s)
- Amaia Benitez-Andonegui
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
- Laboratory for Cognitive Robotics and Complex Self-Organizing Systems, Department of Data Science and Knowledge Engineering, Faculty of Science and Engineering, Maastricht University, Maastricht, Netherlands
- Rodion Burden
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
- Richard Benning
- Instrumentation Engineering, Dean and Directors Office, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Rico Möckel
- Laboratory for Cognitive Robotics and Complex Self-Organizing Systems, Department of Data Science and Knowledge Engineering, Faculty of Science and Engineering, Maastricht University, Maastricht, Netherlands
- Michael Lührs
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
- Research Department, Brain Innovation B.V., Maastricht, Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
15. Si-Mohammed H, Petit J, Jeunet C, Argelaguet F, Spindler F, Evain A, Roussel N, Casiez G, Lecuyer A. Towards BCI-Based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Trans Vis Comput Graph 2020; 26:1608-1621. PMID: 30295623; DOI: 10.1109/tvcg.2018.2873737.
Abstract
Brain-Computer Interfaces (BCIs) enable users to interact with computers without any dedicated movement, bringing new hands-free interaction paradigms. In this paper we study the combination of BCI and Augmented Reality (AR). We first tested the feasibility of using BCI in AR settings based on Optical See-Through Head-Mounted Displays (OST-HMDs). Experimental results showed that BCI equipment and an OST-HMD (an EEG headset and a HoloLens in our case) are well compatible and that small head movements can be tolerated when using the BCI. Second, we introduced a design space for command display strategies based on BCI in AR, exploiting a well-known brain pattern called the Steady-State Visually Evoked Potential (SSVEP). Our design space relies on five dimensions concerning the visual layout of the BCI menu, namely orientation, frame of reference, anchorage, size, and explicitness. We implemented various BCI-based display strategies and tested them within the context of mobile robot control in AR. Our findings were finally integrated into an operational prototype based on a real mobile robot that is controlled in AR using a BCI and a HoloLens headset. Taken together, our results (four user studies) and our methodology could pave the way for future interaction schemes in Augmented Reality exploiting 3D user interfaces based on brain activity and BCIs.
16. Liu Y, Liu Y, Tang J, Yin E, Hu D, Zhou Z. A self-paced BCI prototype system based on the incorporation of an intelligent environment-understanding approach for rehabilitation hospital environmental control. Comput Biol Med 2020; 118:103618. DOI: 10.1016/j.compbiomed.2020.103618.
17. Ke Y, Liu P, An X, Song X, Ming D. An online SSVEP-BCI system in an optical see-through augmented reality environment. J Neural Eng 2020; 17:016066. PMID: 31614342; DOI: 10.1088/1741-2552/ab4dc6.
Abstract
OBJECTIVE This study aimed to design and evaluate a high-speed online steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) in an optical see-through (OST) augmented reality (AR) environment. APPROACH An eight-class BCI was designed in a wearable OST-AR headset that allows users to see the user interface of the BCI and the device to be controlled in the same field of view via the OST head-mounted display. The accuracies, information transfer rates (ITRs), and SSVEP signal characteristics of the AR-BCI were evaluated and compared with a computer screen-based BCI implemented on a laptop in offline and online cue-guided tasks. Then, the performance of the AR-BCI was evaluated in an online robotic arm control task. MAIN RESULTS The offline cue-guided task with the AR-BCI showed maximum averaged ITRs of 65.50 ± 9.86 bits/min using the extended canonical correlation analysis-based target identification method. The online cue-guided task achieved averaged ITRs of 65.03 ± 11.40 bits/min. The online robotic arm control task achieved averaged ITRs of 45.57 ± 7.40 bits/min. Compared with the screen-based BCI, some limitations of the AR environment impaired BCI performance and the quality of SSVEP signals. SIGNIFICANCE The results show the potential of providing a high-performance brain-controlled interaction method by combining AR and BCI. This study could provide methodological guidelines for developing more wearable BCIs in OST-AR environments and encourage more interesting applications involving BCIs and AR techniques.
Affiliation(s)
- Yufeng Ke
- Academy of Medical Engineering and Translational Medicine, Tianjin International Joint Research Centre for Neural Engineering, and Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
18. Liu P, Ke Y, Du J, Liu W, Kong L, Wang N, An X, Ming D. An SSVEP-BCI in Augmented Reality. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:5548-5551. PMID: 31947111; DOI: 10.1109/embc.2019.8857859.
Abstract
Steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCI) have achieved very high information transfer rates (ITR), but their portability and fundamental interaction with the surrounding environment have been limited. The combination of augmented reality (AR) and BCI is expected to solve these problems. In this paper, we combined AR with an SSVEP-BCI to build a more portable and natural BCI system on the Microsoft HoloLens. We designed the AR-BCI system and studied the influence of different algorithms on system performance. Analysis of SSVEP signals collected in the AR environment showed that extended filter bank canonical correlation analysis performed better than task-related component analysis. The average recognition accuracies and ITRs obtained using electroencephalography (EEG) data of 1-s, 1.5-s, and 2-s length were 87.7%, 95.4%, and 97.6%, and 64.6 bit/min, 62.9 bit/min, and 55.6 bit/min, respectively. Compared with existing AR-BCI studies, the ITR was greatly improved in this study.
19. Dey A, Billinghurst M, Lindeman RW, Swan JE. A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front Robot AI 2018; 5:37. PMID: 33500923; PMCID: PMC7805955; DOI: 10.3389/frobt.2018.00037.
Abstract
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
Affiliation(s)
- Arindam Dey
- Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
- Mark Billinghurst
- Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
- Robert W Lindeman
- Human Interface Technology Lab New Zealand (HIT Lab NZ), University of Canterbury, Christchurch, New Zealand
- J Edward Swan
- Mississippi State University, Starkville, MS, United States
20. A Prototype SSVEP Based Real Time BCI Gaming System. Comput Intell Neurosci 2016; 2016:3861425. PMID: 27051414; PMCID: PMC4804071; DOI: 10.1155/2016/3861425.
Abstract
Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.
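The classifier comparison above (LDA vs. linear-kernel vs. RBF-kernel SVM) is straightforward to reproduce on any feature matrix. A minimal scikit-learn sketch follows, with stand-in random features in place of the wave atom transform outputs the system actually used; the data shapes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 120 trials x 64 features (the real system would use
# wave-atom-transform features extracted from EEG windows).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64))
y = rng.integers(0, 3, size=120)     # three SSVEP classes, as in the paper

# RBF-kernel SVM, the best-performing classifier reported above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```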
21. Käthner I, Kübler A, Halder S. Rapid P300 brain-computer interface communication with a head-mounted display. Front Neurosci 2015; 9:207. PMID: 26097447; PMCID: PMC4456572; DOI: 10.3389/fnins.2015.00207.
Abstract
Visual ERP (P300)-based brain-computer interfaces (BCIs) allow for fast and reliable spelling and are intended as a muscle-independent communication channel for people with severe paralysis. However, they require the presentation of visual stimuli in the field of view of the user. A head-mounted display could allow convenient presentation of visual stimuli in situations where mounting a conventional monitor might be difficult or not feasible (e.g., at a patient's bedside). To explore whether similar accuracies can be achieved with a virtual reality (VR) headset compared to a conventional flat-screen monitor, we conducted an experiment with 18 healthy participants. We also evaluated the system with a person in the locked-in state (LIS) to verify that usage of the headset is possible for a severely paralyzed person. Healthy participants performed online spelling with three different display methods. In one condition, a 5 × 5 letter matrix was presented on a conventional 22-inch TFT monitor. Two configurations of the VR headset were tested. In the first (glasses A), the same 5 × 5 matrix filled the field of view of the user. In the second (glasses B), single letters of the matrix filled the field of view of the user. The participant in the LIS tested the VR headset on three occasions (glasses A condition only). For healthy participants, average online spelling accuracies were 94% (15.5 bits/min) using three flash sequences for spelling with the monitor and glasses A, and 96% (16.2 bits/min) with glasses B. In one session, the participant in the LIS reached an online spelling accuracy of 100% (10 bits/min) using the glasses A condition. We also demonstrated that spelling with one flash sequence is possible with the VR headset for healthy users (mean: 32.1 bits/min; maximum reached by one user: 71.89 bits/min at 100% accuracy). We conclude that the VR headset allows for rapid P300 BCI communication in healthy users and may be a suitable display option for severely paralyzed persons.
Affiliation(s)
- Ivo Käthner
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Andrea Kübler
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Sebastian Halder
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
22. Takano K, Ora H, Sekihara K, Iwaki S, Kansaku K. Coherent Activity in Bilateral Parieto-Occipital Cortices during P300-BCI Operation. Front Neurol 2014; 5:74. PMID: 24860546; PMCID: PMC4030183; DOI: 10.3389/fneur.2014.00074.
Abstract
The visual P300 brain–computer interface (BCI), a popular system for electroencephalography (EEG)-based BCI, uses the P300 event-related potential to select an icon arranged in a flicker matrix. In earlier studies, we used green/blue (GB) luminance and chromatic changes in the P300-BCI system and reported that this luminance and chromatic flicker matrix was associated with better performance and greater subject comfort compared with the conventional white/gray (WG) luminance flicker matrix. To highlight areas involved in improved P300-BCI performance, we used simultaneous EEG–fMRI recordings and showed enhanced activities in bilateral and right lateralized parieto-occipital areas. Here, to capture coherent activities of the areas during P300-BCI, we collected whole-head 306-channel magnetoencephalography data. When comparing functional connectivity between the right and left parieto-occipital channels, significantly greater functional connectivity in the alpha band was observed under the GB flicker matrix condition than under the WG flicker matrix condition. Current sources were estimated with a narrow-band adaptive spatial filter, and mean imaginary coherence was computed in the alpha band. Significantly greater coherence was observed in the right posterior parietal cortex under the GB than under the WG condition. Re-analysis of previous EEG-based P300-BCI data showed significant correlations between the power of the coherence of the bilateral parieto-occipital cortices and their performance accuracy. These results suggest that coherent activity in the bilateral parieto-occipital cortices plays a significant role in effectively driving the P300-BCI.
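Imaginary coherence, the connectivity measure used above, keeps only the imaginary part of the normalized cross-spectrum, which suppresses spurious zero-lag coupling from volume conduction. A minimal sketch for two sensor signals follows; the Welch parameters and the channel names in the usage comment are illustrative assumptions.

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, band=(8, 13)):
    """Mean absolute imaginary part of coherency in a frequency band.

    Insensitive to zero-lag (volume-conduction) coupling, which is why it
    is used for EEG/MEG connectivity; band defaults to alpha (8-13 Hz).
    """
    f, sxy = csd(x, y, fs=fs, nperseg=fs * 2)    # complex cross-spectrum
    _, sxx = welch(x, fs=fs, nperseg=fs * 2)     # auto-spectra
    _, syy = welch(y, fs=fs, nperseg=fs * 2)
    coherency = sxy / np.sqrt(sxx * syy)
    sel = (f >= band[0]) & (f <= band[1])
    return np.abs(coherency[sel].imag).mean()

# Hypothetical usage with left/right parieto-occipital channels:
# alpha_icoh = imaginary_coherence(left_po_signal, right_po_signal, fs=250)
```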
Affiliation(s)
- Kouji Takano
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
- Hiroki Ora
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
- Kensuke Sekihara
- Department of Systems Design and Engineering, Tokyo Metropolitan University, Tokyo, Japan
- Sunao Iwaki
- Cognition and Action Research Group, Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
- Kenji Kansaku
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan; Brain Science Inspired Life Support Research Center, The University of Electro-Communications, Tokyo, Japan
23. Sakurada T, Kawase T, Takano K, Komatsu T, Kansaku K. A BMI-based occupational therapy assist suit: asynchronous control by SSVEP. Front Neurosci 2013; 7:172. PMID: 24068982; PMCID: PMC3779864; DOI: 10.3389/fnins.2013.00172.
Abstract
A brain-machine interface (BMI) is an interface technology that uses neurophysiological signals from the brain to control external machines. Recent invasive BMI technologies have succeeded in the asynchronous control of robot arms for a useful series of actions, such as reaching and grasping. In this study, we developed non-invasive BMI technologies aimed at making such useful movements with the subject's own hands by preparing a BMI-based occupational therapy assist suit (BOTAS). We prepared a pre-recorded series of useful actions (a grasping-a-ball movement and a carrying-the-ball movement) and added asynchronous control using steady-state visual evoked potential (SSVEP) signals: one SSVEP signal was used to trigger the grasping-a-ball movement and another to trigger the carrying-the-ball movement. A support vector machine was used to classify EEG signals recorded from the visual cortex (Oz) in real time. Untrained, able-bodied participants (n = 12) operated the system successfully. Classification accuracy and the time required for SSVEP detection were ~88% and 3 s, respectively. We further recruited three patients with upper cervical spinal cord injuries (SCIs); they also succeeded in operating the system without training. These data suggest that our BOTAS system is potentially useful for rehabilitation of patients with upper limb disabilities.
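Asynchronous (self-paced) operation means the system must decide when the user intends a command, not only which command. A common pattern is to fire only when a confident classification persists across consecutive windows; the sketch below illustrates that idea with a hypothetical classify_window callable (e.g., an SVM decision on SSVEP features, as in the study). The thresholds are illustrative; the paper reports roughly 3 s to a confident detection.

```python
from collections import deque

def asynchronous_trigger(window_stream, classify_window,
                         min_score=0.6, n_agree=3):
    """Yield a command only when n_agree consecutive windows agree.

    window_stream: iterable of EEG windows (e.g., 1-s sliding segments);
    classify_window: hypothetical callable returning (label, score).
    """
    recent = deque(maxlen=n_agree)
    for window in window_stream:
        label, score = classify_window(window)
        recent.append(label if score >= min_score else None)
        if len(recent) == n_agree and None not in recent and len(set(recent)) == 1:
            yield recent[0]          # confident, sustained detection -> fire command
            recent.clear()           # re-arm: avoid repeated triggers
```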
Affiliation(s)
- Kenji Kansaku
- Systems Neuroscience Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan
24. Toyama S, Takano K, Kansaku K. A non-adhesive solid-gel electrode for a non-invasive brain-machine interface. Front Neurol 2012; 3:114. PMID: 22826701; PMCID: PMC3399135; DOI: 10.3389/fneur.2012.00114.
Abstract
A non-invasive brain-machine interface (BMI), or brain-computer interface, is a technology for helping individuals with disabilities that utilizes neurophysiological signals from the brain to control external machines or computers without requiring surgery. However, when applying electroencephalography (EEG) methodology, users must place EEG electrodes on the scalp each time, and easy-to-use electrodes for clinical use are required. In this study, we developed a conductive, non-adhesive solid-gel electrode for practical non-invasive BMIs. We performed basic material testing, including examining the volume resistivity, viscoelasticity, and moisture-retention properties of the solid gel. Then, we compared the performance of the solid gel, a conventional paste, and an in-house metal-pin-based electrode using impedance measurements and P300-BMI testing. The solid gel was conductive (volume resistivity 13.2 Ωcm) and soft (complex modulus 105.4 kPa), and it remained wet for a prolonged period (>10 h) in a dry environment. Impedance measurements revealed that the impedance of the solid-gel-based and conventional paste-based electrodes was superior to that of the pin-based electrode. The EEG measurements suggested that the signals obtained with the solid-gel electrode were comparable to those obtained with the conventional paste-based electrode. Moreover, the P300-BMI study suggested that systems using the solid-gel or pin-based electrodes were effective. One advantage of the solid gel is that it does not require cleaning after use, whereas the conventional paste adheres to the hair, which requires washing. Furthermore, unlike the metal-pin electrode, the solid-gel electrode was not painful. Taken together, the results suggest that the solid-gel electrode worked well for practical BMIs and could be useful for bedridden patients, such as those with amyotrophic lateral sclerosis.
Affiliation(s)
- Shigeru Toyama
- Biotechnological Rehabilitation Section, Department of Rehabilitation Engineering, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Japan