1. Vukelić M, Bui M, Vorreuther A, Lingelbach K. Combining brain-computer interfaces with deep reinforcement learning for robot training: a feasibility study in a simulation environment. Front Neuroergonomics 2023; 4:1274730. [PMID: 38234482; PMCID: PMC10790930; DOI: 10.3389/fnrgo.2023.1274730]
Abstract
Deep reinforcement learning (RL) is used as a strategy to teach robot agents to learn complex tasks autonomously. While sparse rewards are a natural way to define a reward in realistic robot scenarios, they provide poor learning signals for the agent, making the design of good reward functions challenging. To overcome this challenge, learning from human feedback through an implicit brain-computer interface (BCI) is used. We combined a BCI with deep RL for robot training in a physically realistic 3-D simulation environment. In a first study, we compared the feasibility of different electroencephalography (EEG) systems (wet vs. dry electrodes) and their suitability for the automatic classification of perceived errors during a robot task with different machine learning models. In a second study, we compared the performance of BCI-based deep RL training to feedback explicitly given by participants. The findings of the first study indicate that a high-quality dry-electrode EEG system can provide a robust and fast method for automatically assessing robot behavior using a convolutional neural network model. The results of the second study show that the implicit BCI-based deep RL variant, combined with the dry EEG system, can significantly accelerate the learning process in a realistic 3-D robot simulation environment. The performance of the BCI-trained deep RL model was even comparable to that achieved with explicit human feedback. These findings support BCI-based deep RL methods as a valid alternative in human-robot applications where cognitively demanding explicit human feedback is not available.
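As a rough illustration of the kind of implicit-feedback training the abstract describes, the sketch below feeds a simulated ErrP-classifier output into a tabular Q-learning loop as a shaping reward. The function names, the toy 1-D reaching task, and the tabular agent are assumptions made for brevity; they are not the authors' deep RL setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def bci_error_probability(action_was_wrong):
    """Stand-in for an EEG classifier estimating the probability that the user
    perceived the last robot action as an error (ErrP decoding); simulated here
    as a noisy observation of the true outcome."""
    noise = rng.normal(0.0, 0.15)
    return float(np.clip((0.8 if action_was_wrong else 0.2) + noise, 0.0, 1.0))

def implicit_reward(p_error, sparse_reward, penalty=-1.0):
    """Blend the sparse task reward with a shaping term derived from the decoded
    error probability: actions likely perceived as errors are penalized."""
    return sparse_reward + penalty * (p_error > 0.5)

# Toy Q-learning on a 1-D reaching task: states 0..4, goal at state 4.
n_states, n_actions, goal = 5, 2, 4          # actions: 0 = move left, 1 = move right
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(200):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        sparse = 1.0 if s_next == goal else 0.0                  # sparse task reward
        p_err = bci_error_probability(action_was_wrong=(a == 0))  # moving away from goal
        r = implicit_reward(p_err, sparse)
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next
        if s == goal:
            break

print(q.round(2))  # right-moving actions should dominate after training
```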
Affiliation(s)
- Mathias Vukelić: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Michael Bui: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Anna Vorreuther: Applied Neurocognitive Systems, Institute of Human Factors and Technology Management (IAT), University of Stuttgart, Stuttgart, Germany
- Katharina Lingelbach: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
2. Meng J, Zhao Y, Wang K, Sun J, Yi W, Xu F, Xu M, Ming D. Rhythmic temporal prediction enhances neural representations of movement intention for brain-computer interface. J Neural Eng 2023; 20:066004. [PMID: 37875107; DOI: 10.1088/1741-2552/ad0650]
Abstract
Objective. Detecting movement intention is a typical use of brain-computer interfaces (BCI). However, as an endogenous electroencephalography (EEG) feature, the neural representation of movement is insufficient for improving motor-based BCI. This study aimed to develop a new movement-augmentation BCI encoding paradigm by incorporating the cognitive function of rhythmic temporal prediction, and to test the feasibility of this new paradigm in optimizing the detection of movement intention. Methods. A visual-motion synchronization task was designed with two movement intentions (left vs. right) and three rhythmic temporal prediction conditions (1000 ms vs. 1500 ms vs. no temporal prediction). Behavioural and EEG data of 24 healthy participants were recorded. Event-related potentials (ERPs), event-related spectral perturbations induced by left- and right-finger movements, the common spatial pattern (CSP) with a support vector machine, and a Riemannian tangent space algorithm with logistic regression were used and compared across the three temporal prediction conditions to test the impact of temporal prediction on movement detection. Results. Behavioural results showed significantly smaller deviation times for the 1000 ms and 1500 ms conditions. ERP analyses revealed that the 1000 ms and 1500 ms conditions led to rhythmic oscillations with a time lag in areas contralateral and ipsilateral to the movement. Compared with no temporal prediction, the 1000 ms condition exhibited greater beta event-related desynchronization (ERD) lateralization in the motor area (P < 0.001) and larger beta ERD in the frontal area (P < 0.001). The 1000 ms condition achieved an average left-right decoding accuracy of 89.71% using CSP and 97.30% using the Riemannian tangent space, both significantly higher than no temporal prediction. Moreover, movement and temporal information could be decoded simultaneously, achieving 88.51% four-class accuracy. Significance. The results not only confirm the effectiveness of rhythmic temporal prediction in enhancing the detection ability of motor-based BCI, but also highlight the dual encoding of movement and temporal information within a single BCI paradigm, which is promising for expanding the range of intentions that can be decoded by a BCI.
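For readers unfamiliar with the two decoding approaches mentioned (CSP with a support vector machine, and Riemannian tangent-space features with logistic regression), the following is a minimal sketch using MNE, pyRiemann, and scikit-learn on synthetic data. The epoch dimensions, filter counts, and classifier settings are assumptions, not the parameters used in the study.

```python
import numpy as np
from mne.decoding import CSP
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for band-pass filtered MI epochs: (trials, channels, samples)
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 32, 500))
y = rng.integers(0, 2, size=120)  # 0 = left finger, 1 = right finger

# Pipeline 1: CSP spatial filtering + linear SVM on log-variance features
csp_svm = make_pipeline(CSP(n_components=6, log=True), SVC(kernel="linear"))

# Pipeline 2: trial covariances -> Riemannian tangent space -> logistic regression
riemann_lr = make_pipeline(Covariances(estimator="oas"),
                           TangentSpace(),
                           LogisticRegression(max_iter=1000))

for name, clf in [("CSP + SVM", csp_svm), ("Riemann + LR", riemann_lr)]:
    scores = cross_val_score(clf, X, y, cv=5)  # chance-level here, since data are random
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```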
Affiliation(s)
- Jiayuan Meng: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Yingru Zhao: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Kun Wang: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Jinsong Sun: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Weibo Yi: Beijing Machine and Equipment Institute, Beijing, People's Republic of China
- Fangzhou Xu: International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Minpeng Xu: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China; International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Dong Ming: The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
3. Liu K, Yu Y, Liu Y, Tang J, Liang X, Chu X, Zhou Z. A novel brain-controlled wheelchair combined with computer vision and augmented reality. Biomed Eng Online 2022; 21:50. [PMID: 35883092; PMCID: PMC9327337; DOI: 10.1186/s12938-022-01020-8]
Abstract
BACKGROUND Brain-controlled wheelchairs (BCWs) are important applications of brain-computer interfaces (BCIs). Currently, most BCWs are semiautomatic, and when users want to reach a target of interest in their immediate environment, this semiautomatic interaction strategy is slow. METHODS To this end, we combined computer vision (CV) and augmented reality (AR) with a BCW and propose the CVAR-BCW, a BCW with a novel automatic interaction strategy. The proposed CVAR-BCW uses a translucent head-mounted display (HMD) as the user interface, uses CV to automatically detect the environment, and presents the detected targets through AR. Once a user has chosen a target, the CVAR-BCW can automatically navigate to it. Because the semiautomatic strategy may still be useful in some scenarios, we also integrated a semiautomatic interaction framework into the CVAR-BCW, allowing the user to switch between the automatic and semiautomatic strategies. RESULTS We recruited 20 non-disabled subjects for this study and used the accuracy, information transfer rate (ITR), and average time required for the CVAR-BCW to reach each designated target as performance metrics. The experimental results showed that our CVAR-BCW performed well in indoor environments: the average accuracies across all subjects were 83.6% (automatic) and 84.1% (semiautomatic), the average ITRs were 8.2 bits/min (automatic) and 8.3 bits/min (semiautomatic), the average times required to reach a target were 42.4 s (automatic) and 93.4 s (semiautomatic), and the average workload and fatigue scores for the two strategies were both approximately 20. CONCLUSIONS Our CVAR-BCW provides a user-centric interaction approach and a good framework for integrating more advanced artificial intelligence technologies, which may be useful in the field of disability assistance.
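ITR values like those reported above are typically computed with the standard Wolpaw formula. The sketch below shows such a calculation; the target count and selection time are chosen purely for illustration, since the abstract does not state the values the authors used.

```python
import math

def wolpaw_itr(accuracy, n_targets, seconds_per_selection):
    """Information transfer rate in bits/min using the standard Wolpaw formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), ITR = B * 60 / T."""
    p, n = accuracy, n_targets
    if n < 2 or not (0.0 < p <= 1.0):
        raise ValueError("need n_targets >= 2 and 0 < accuracy <= 1")
    bits = math.log2(n)
    if p < 1.0:  # the error terms vanish at perfect accuracy
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# Example with assumed parameters (6 selectable targets, 10 s per selection,
# and the automatic-mode accuracy reported in the abstract).
print(f"{wolpaw_itr(accuracy=0.836, n_targets=6, seconds_per_selection=10.0):.1f} bits/min")
```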
Affiliation(s)
- Kaixuan Liu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Yang Yu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Yadong Liu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Jingsheng Tang: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Xinbin Liang: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Xingxing Chu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
- Zongtan Zhou: College of Intelligence Science and Technology, National University of Defense Technology, Changsha, 410073, Hunan, China
4. Riccio A, Schettini F, Galiotta V, Giraldi E, Grasso MG, Cincotti F, Mattia D. Usability of a Hybrid System Combining P300-Based Brain-Computer Interface and Commercial Assistive Technologies to Enhance Communication in People With Multiple Sclerosis. Front Hum Neurosci 2022; 16:868419. [PMID: 35721361; PMCID: PMC9204311; DOI: 10.3389/fnhum.2022.868419]
Abstract
Brain-computer interfaces (BCIs) can provide people with motor disabilities with an alternative channel to access assistive technology (AT) software for communication and environmental interaction. Multiple sclerosis (MS) is a chronic disease of the central nervous system that mostly starts in young adulthood and often leads to long-term disability, possibly exacerbated by the presence of fatigue. Patients with MS have rarely been considered as potential BCI end-users. In this pilot study, we evaluated the usability of a hybrid BCI (h-BCI) system that enables both a P300-based BCI and conventional (i.e., muscle-dependent) input devices to access mainstream applications through the widely used AT communication software "Grid 3." The evaluation was performed according to the principles of user-centered design (UCD), with the aim of providing patients with MS with an alternative control channel (i.e., BCI) that is potentially less sensitive to fatigue. A total of 13 patients with MS were enrolled. In session I, participants were presented with a widely validated P300-based BCI (P3-speller); in session II, they had to operate Grid 3 to access three mainstream applications with (1) a conventional AT input device and (2) the h-BCI. Eight patients completed the protocol. Five of the eight patients with MS were able to access Grid 3 via the BCI, with a mean online accuracy of 83.3% (±14.6). Effectiveness (online accuracy), satisfaction, and workload were comparable between the conventional AT inputs and the BCI channel in controlling Grid 3. As expected, efficiency (time for correct selection) was significantly lower for the BCI than for the conventional AT channels (Z = 0.2, p < 0.05). Although cautious due to the limited sample size, these preliminary findings indicate that the BCI control channel did not have a detrimental effect, relative to conventional AT channels, on the ability to operate the AT software (Grid 3). We therefore infer that the usability of the two access modalities was comparable. The integration of BCIs with commercial AT input devices to access a widely used AT software package represents an important step toward introducing BCIs into the daily practice of AT centers.
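The efficiency comparison reported above is a paired, non-parametric contrast between the two access channels. A minimal sketch of such a test with SciPy is shown below; the per-participant timings are illustrative values, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical seconds-per-correct-selection values for the same participants
# under the two access channels (illustrative numbers only).
time_at  = np.array([4.1, 3.8, 4.5, 4.0, 3.9])    # conventional AT input
time_bci = np.array([9.7, 8.9, 11.2, 10.4, 9.1])  # P300-based BCI channel

# Wilcoxon signed-rank test: paired, non-parametric comparison of the two channels
stat, p_value = wilcoxon(time_at, time_bci)
print(f"W = {stat:.1f}, p = {p_value:.3f}")
```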
Affiliation(s)
- Angela Riccio (corresponding author): Neuroelectric Imaging and BCI Lab, Fondazione Santa Lucia (IRCCS), Rome, Italy; Servizio Ausilioteca per la Riabilitazione Assistita con Tecnologia, Fondazione Santa Lucia (IRCCS), Rome, Italy
- Francesca Schettini: Neuroelectric Imaging and BCI Lab, Fondazione Santa Lucia (IRCCS), Rome, Italy; Servizio Ausilioteca per la Riabilitazione Assistita con Tecnologia, Fondazione Santa Lucia (IRCCS), Rome, Italy
- Valentina Galiotta: Neuroelectric Imaging and BCI Lab, Fondazione Santa Lucia (IRCCS), Rome, Italy
- Enrico Giraldi: Neuroelectric Imaging and BCI Lab, Fondazione Santa Lucia (IRCCS), Rome, Italy
- Maria Grazia Grasso
- Febo Cincotti: Department of Computer, Control and Management Engineering Antonio Ruberti, Sapienza University of Rome, Rome, Italy
- Donatella Mattia: Neuroelectric Imaging and BCI Lab, Fondazione Santa Lucia (IRCCS), Rome, Italy; Servizio Ausilioteca per la Riabilitazione Assistita con Tecnologia, Fondazione Santa Lucia (IRCCS), Rome, Italy
5. Guan S, Li J, Wang F, Yuan Z, Kang X, Lu B. Discriminating three motor imagery states of the same joint for brain-computer interface. PeerJ 2021; 9:e12027. [PMID: 34513337; PMCID: PMC8395581; DOI: 10.7717/peerj.12027]
Abstract
The classification of electroencephalography (EEG) signals induced by motor imagery of the same joint is one of the major challenges for brain-computer interface (BCI) systems. In this paper, we propose a new framework comprising two parts: feature extraction and classification. Based on local mean decomposition (LMD), the cloud model, and the common spatial pattern (CSP), a feature extraction method called LMD-CSP is proposed to extract distinguishable features. To improve the classification results, a multi-objective grey wolf optimization twin support vector machine (MOGWO-TWSVM) is applied to discriminate the extracted features. We evaluated the performance of the proposed framework on our laboratory data sets with three motor imagery (MI) tasks of the same joint (shoulder abduction, extension, and flexion), achieving an average classification accuracy of 91.27%. Further comparison with several widely used methods showed that the proposed method performed better in feature extraction and pattern classification. Overall, this study can be used for developing high-performance BCI systems, enabling individuals to control external devices intuitively and naturally.
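As a simplified reference for the CSP component of LMD-CSP, the sketch below computes plain binary CSP log-variance features with NumPy/SciPy. It omits the LMD and cloud-model stages and the MOGWO-TWSVM classifier, and the random data and filter counts are assumptions; the study's three-class setting would typically use a one-vs-rest extension.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_pairs=3):
    """Binary CSP spatial filters; epochs_*: (n_epochs, n_channels, n_samples)."""
    def mean_cov(epochs):
        covs = []
        for x in epochs:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalize each trial covariance
        return np.mean(covs, axis=0)

    c_a, c_b = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigendecomposition jointly diagonalizes the two class covariances
    eigvals, eigvecs = eigh(c_a, c_a + c_b)
    order = np.argsort(eigvals)
    # Keep the filters with the most extreme eigenvalues (most discriminative variance)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T             # shape (2 * n_pairs, n_channels)

def log_variance_features(epochs, filters):
    projected = np.einsum("fc,ecs->efs", filters, epochs)   # spatially filtered epochs
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))     # normalized log-variance

# Toy usage on random data standing in for two MI conditions
rng = np.random.default_rng(0)
class_a = rng.standard_normal((40, 32, 500))
class_b = rng.standard_normal((40, 32, 500))
w = csp_filters(class_a, class_b)
features = log_variance_features(np.concatenate([class_a, class_b]), w)
print(features.shape)  # (80, 6): one feature per CSP filter and trial
```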
Affiliation(s)
- Shan Guan: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Jixian Li: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Fuwang Wang: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Zhen Yuan: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Xiaogang Kang: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Bin Lu: School of Mechanical Engineering, Northeast Electric Power University, Jilin, China