1
Wirth C, Toth J, Arvaneh M. Bayesian learning from multi-way EEG feedback for robot navigation and target identification. Sci Rep 2023;13:16925. [PMID: 37805540] [PMCID: PMC10560278] [DOI: 10.1038/s41598-023-44077-8]
Abstract
Many brain-computer interfaces require a high mental workload. Recent research has shown that this could be greatly alleviated through machine learning, inferring user intentions via reactive brain responses. These signals are generated spontaneously while users merely observe assistive robots performing tasks. Using reactive brain signals, existing studies have addressed robot navigation tasks with a very limited number of potential target locations. Moreover, they use only binary, error-vs-correct classification of robot actions, leaving more detailed information unutilised. In this study, a virtual robot had to navigate towards, and identify, target locations in both small and large grids, wherein any location could be the target. For the first time, we apply a system utilising detailed EEG information: 4-way classification of movements is performed, including specific information regarding when the target is reached. Additionally, we classify whether targets are correctly identified. Our proposed Bayesian strategy infers the most likely target location from the brain's responses. The experimental results show that our novel use of detailed information facilitates a more efficient and robust system than the state of the art. Furthermore, unlike state-of-the-art approaches, we show the scalability of our proposed approach: by tuning parameters appropriately, our strategy correctly identifies 98% of targets, even in large search spaces.
Affiliation(s)
- Christopher Wirth: Automatic Control and Systems Engineering Department, University of Sheffield, Sheffield, S1 4DT, UK; School of Medical Sciences, University of Manchester, Manchester, M13 9NT, UK
- Jake Toth: Automatic Control and Systems Engineering Department, University of Sheffield, Sheffield, S1 4DT, UK
- Mahnaz Arvaneh: Automatic Control and Systems Engineering Department, University of Sheffield, Sheffield, S1 4DT, UK
2
Leoni J, Strada SC, Tanelli M, Brusa A, Proverbio AM. Single-trial stimuli classification from detected P300 for augmented Brain–Computer Interface: A deep learning approach. Machine Learning with Applications 2022. [DOI: 10.1016/j.mlwa.2022.100393]
3
Yavandhasani M, Ghaderi F. Visual Object Recognition from Single-Trial EEG Signals using Machine Learning Wrapper Techniques. IEEE Trans Biomed Eng 2021;69:2176-2183. [PMID: 34951838] [DOI: 10.1109/TBME.2021.3138157]
Abstract
Responses of the human brain to different visual stimuli elicit specific patterns in electroencephalography (EEG) signals. It is confirmed that by analyzing these patterns, we can recognize the category of the viewed objects. However, high levels of noise and artifacts in EEG signals, and the discrepancies between the data recorded from different subjects in the visual object recognition task, make classification of subjects' cognitive states a serious challenge. In this research, we present a framework for evaluating machine learning and wrapper channel selection algorithms used for classifying single-trial EEG signals recorded in response to photographic stimuli. It is shown that by correctly mapping the entire EEG data space to informative feature spaces (IFS), the performance of the classification methods can improve significantly. The results outperform the state of the art and confirm the efficiency of the proposed feature selection methods in capturing the most informative EEG channels. This can help to achieve high separability of object categories in the single-trial visual object recognition task.
4
Habelt B, Wirth C, Afanasenkau D, Mihaylova L, Winter C, Arvaneh M, Minev IR, Bernhardt N. A Multimodal Neuroprosthetic Interface to Record, Modulate and Classify Electrophysiological Biomarkers Relevant to Neuropsychiatric Disorders. Front Bioeng Biotechnol 2021;9:770274. [PMID: 34805123] [PMCID: PMC8595111] [DOI: 10.3389/fbioe.2021.770274]
Abstract
Most mental disorders, such as addictive diseases or schizophrenia, are characterized by impaired cognitive function and behavior control originating from disturbances within prefrontal neural networks. Their often chronic reoccurring nature and the lack of efficient therapies necessitate the development of new treatment strategies. Brain-computer interfaces, equipped with multiple sensing and stimulation abilities, offer a new toolbox whose suitability for diagnosis and therapy of mental disorders has not yet been explored. This study, therefore, aimed to develop a biocompatible and multimodal neuroprosthesis to measure and modulate prefrontal neurophysiological features of neuropsychiatric symptoms. We used a 3D-printing technology to rapidly prototype customized bioelectronic implants through robot-controlled deposition of soft silicones and a conductive platinum ink. We implanted the device epidurally above the medial prefrontal cortex of rats and obtained auditory event-related brain potentials in treatment-naïve animals, after alcohol administration and following neuromodulation through implant-driven electrical brain stimulation and cortical delivery of the anti-relapse medication naltrexone. Towards smart neuroprosthetic interfaces, we furthermore developed machine learning algorithms to autonomously classify treatment effects within the neural recordings. The neuroprosthesis successfully captured neural activity patterns reflecting intact stimulus processing and alcohol-induced neural depression. Moreover, implant-driven electrical and pharmacological stimulation enabled successful enhancement of neural activity. A machine learning approach based on stepwise linear discriminant analysis was able to deal with sparsity in the data and distinguished treatments with high accuracy. 
Our work demonstrates the feasibility of multimodal bioelectronic systems to monitor, modulate and identify healthy and affected brain states with potential use in a personalized and optimized therapy of neuropsychiatric disorders.
Affiliation(s)
- Bettina Habelt: Department of Psychiatry and Psychotherapy, Medical Faculty Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Leibniz Institute of Polymer Research Dresden, Dresden, Germany
- Christopher Wirth: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Dzmitry Afanasenkau: Biotechnology Center (BIOTEC), Center for Molecular and Cellular Bioengineering (CMCB), Technische Universität Dresden, Dresden, Germany
- Lyudmila Mihaylova: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Christine Winter: Department of Psychiatry and Psychotherapy, Charité University Medicine Berlin, Campus Mitte, Berlin, Germany
- Mahnaz Arvaneh: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Ivan R. Minev: Leibniz Institute of Polymer Research Dresden, Dresden, Germany; Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Nadine Bernhardt: Department of Psychiatry and Psychotherapy, Medical Faculty Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
5
Orlandi S, House SC, Karlsson P, Saab R, Chau T. Brain-Computer Interfaces for Children With Complex Communication Needs and Limited Mobility: A Systematic Review. Front Hum Neurosci 2021;15:643294. [PMID: 34335203] [PMCID: PMC8319030] [DOI: 10.3389/fnhum.2021.643294]
Abstract
Brain-computer interfaces (BCIs) represent a new frontier in the effort to maximize the ability of individuals with profound motor impairments to interact and communicate. While much literature points to BCIs' promise as an alternative access pathway, there have historically been few applications involving children and young adults with severe physical disabilities. As research is emerging in this sphere, this article aims to evaluate the current state of translating BCIs to the pediatric population. A systematic review was conducted using the Scopus, PubMed, and Ovid Medline databases. Studies of children and adolescents that reported BCI performance published in English in peer-reviewed journals between 2008 and May 2020 were included. Twelve publications were identified, providing strong evidence for continued research in pediatric BCIs. Research evidence was generally at multiple case study or exploratory study level, with modest sample sizes. Seven studies focused on BCIs for communication and five on mobility. Articles were categorized and grouped based on type of measurement (i.e., non-invasive and invasive), and the type of brain signal (i.e., sensory evoked potentials or movement-related potentials). Strengths and limitations of studies were identified and used to provide requirements for clinical translation of pediatric BCIs. This systematic review presents the state-of-the-art of pediatric BCIs focused on developing advanced technology to support children and youth with communication disabilities or limited manual ability. Despite a few research studies addressing the application of BCIs for communication and mobility in children, results are encouraging and future works should focus on customizable pediatric access technologies based on brain activity.
Affiliation(s)
- Silvia Orlandi: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Sarah C. House: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Petra Karlsson: Cerebral Palsy Alliance, The University of Sydney, Sydney, NSW, Australia
- Rami Saab: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Tom Chau: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada; Institute of Biomedical Engineering (BME), University of Toronto, Toronto, ON, Canada
6
Mick S, Segas E, Dure L, Halgand C, Benois-Pineau J, Loeb GE, Cattaert D, de Rugy A. Shoulder kinematics plus contextual target information enable control of multiple distal joints of a simulated prosthetic arm and hand. J Neuroeng Rehabil 2021;18:3. [PMID: 33407618] [PMCID: PMC7789560] [DOI: 10.1186/s12984-020-00793-0]
Abstract
Background: Prosthetic restoration of reach and grasp function after a trans-humeral amputation requires control of multiple distal degrees of freedom in elbow, wrist and fingers. However, such a high level of amputation reduces the amount of available myoelectric and kinematic information from the residual limb. Methods: To overcome these limits, we added contextual information about the target's location and orientation, such as can now be extracted from gaze tracking by computer vision tools. For the task of picking and placing a bottle in various positions and orientations in a 3D virtual scene, we trained artificial neural networks to predict postures of an intact subject's elbow, forearm and wrist (4 degrees of freedom) either solely from shoulder kinematics or with additional knowledge of the movement goal. Subjects then performed the same tasks in the virtual scene with distal joints predicted from the context-aware network. Results: Average movement times of 1.22 s were only slightly longer than those of the naturally controlled movements (0.82 s). When using the kinematics-only network, movement times were much longer (2.31 s) and compensatory movements from trunk and shoulder were much larger. Integrating contextual information also gave rise to motor synergies closer to natural joint coordination. Conclusions: Although notable challenges remain before applying the proposed control scheme to a real-world prosthesis, our study shows that adding contextual information to command signals greatly improves prediction of distal joint angles for prosthetic control.
Affiliation(s)
- Sébastien Mick: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France
- Effie Segas: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France
- Lucas Dure: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France
- Christophe Halgand: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France
- Jenny Benois-Pineau: Laboratoire Bordelais de Recherche en Informatique, UMR 5800, CNRS, Univ. Bordeaux and Bordeaux INP, 351 cours de la Libération, 33405, Talence, France
- Gerald E Loeb: Department of Biomedical Engineering, Univ. Southern California, 1042 Downey Way, Los Angeles, CA, 90089, USA
- Daniel Cattaert: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France
- Aymar de Rugy: Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS and Univ. Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux, France; Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, Univ. Queensland, Blair Drive, Brisbane, QLD, 4059, Australia
7
Chailloux Peguero JD, Mendoza-Montoya O, Antelis JM. Single-Option P300-BCI Performance Is Affected by Visual Stimulation Conditions. Sensors (Basel) 2020;20:E7198. [PMID: 33339105] [PMCID: PMC7765532] [DOI: 10.3390/s20247198]
Abstract
The P300 paradigm is one of the most promising techniques for Brain-Computer Interface (BCI) applications because of its robustness and reliability, but it is not exempt from shortcomings. The present work studied single-trial classification effectiveness in distinguishing between target and non-target responses, considering two visual stimulation conditions and a varying number of symbols presented to the user in a single-option visual frame. In addition, we investigated the relationship between the classification results for target and non-target events when training and testing the machine-learning model with datasets containing different stimulation conditions and different numbers of symbols. To this end, we designed a P300 experimental protocol in which the stimulation condition was either color highlighting or the superimposition of a cartoon face, and the number of options ranged from four to nine. These experiments were carried out with 19 healthy subjects in 3 sessions. The results showed that the Event-Related Potential (ERP) responses and the classification accuracy are stronger with cartoon faces as the stimulus type, and similar irrespective of the number of options. In addition, classification performance is reduced when using datasets with different stimulus types, but it is similar when using datasets with different numbers of symbols. These results are particularly relevant for the design of systems intended to elicit higher levels of evoked potentials while, at the same time, optimizing training time.
8
Wirth C, Toth J, Arvaneh M. Four-Way Classification of EEG Responses to Virtual Robot Navigation. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:3050-3053. [PMID: 33018648] [DOI: 10.1109/EMBC44109.2020.9176230]
Abstract
Studies have shown the possibility of using brain signals that are automatically generated while observing a navigation task as feedback for semi-autonomous control of a robot. This allows the robot to learn quasi-optimal routes to intended targets. We have combined the subclassification of two different types of navigational errors with the subclassification of two different types of correct navigational actions, to create a 4-way classification strategy providing detailed information about the type of action the robot performed. We used a 2-stage stepwise linear discriminant analysis approach, and tested this using brain signals from 8 and 14 participants observing two robot navigation tasks. Classification results were significantly above the chance level, with mean overall accuracies of 44.3% and 36.0% for the two datasets. As a proof of concept, we have shown that it is possible to perform fine-grained, 4-way classification of robot navigational actions, based on the electroencephalogram responses of participants who only had to observe the task. This study provides the next step towards comprehensive implicit brain-machine communication, and towards an efficient semi-autonomous brain-computer interface.
9
Brain-Computer Interface-Based Humanoid Control: A Review. Sensors (Basel) 2020;20:3620. [PMID: 32605077] [PMCID: PMC7374399] [DOI: 10.3390/s20133620]
Abstract
A Brain-Computer Interface (BCI) acts as a communication mechanism using brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, as in a passive BCI. This is especially beneficial for those who have severe motor disabilities. Traditional BCI systems have depended only on brain signals recorded using electroencephalography (EEG) and have used a rule-based translation algorithm to generate control commands. However, the recent use of multi-sensor data fusion and machine learning-based translation algorithms has improved the accuracy of such systems. This paper discusses various BCI applications, such as tele-presence, grasping of objects, and navigation, that use multi-sensor fusion and machine learning to control a humanoid robot to perform a desired task. The paper also includes a review of the methods and system design used in the discussed applications.