1
Wilson H, Golbabaee M, Proulx MJ, Charles S, O'Neill E. EEG-based BCI Dataset of Semantic Concepts for Imagination and Perception Tasks. Sci Data 2023; 10:386. PMID: 37322034; PMCID: PMC10272218; DOI: 10.1038/s41597-023-02287-9.
Abstract
Electroencephalography (EEG) is a widely used neuroimaging technique in brain-computer interfaces (BCIs) due to its non-invasive nature, accessibility and high temporal resolution. A range of input representations has been explored for BCIs. The same semantic meaning can be conveyed in different representations, such as visual (orthographic and pictorial) and auditory (spoken words), and these stimulus representations can be either imagined or perceived by the BCI user. Open source EEG datasets for imagined visual content are scarce, and to our knowledge there are no open source EEG datasets covering semantics captured through multiple sensory modalities for both perceived and imagined content. Here we present an open source multisensory imagination and perception dataset from twelve participants, acquired with a 124-channel EEG system. The aim is for the dataset to support purposes such as BCI-related decoding and a better understanding of the neural mechanisms underlying perception and imagination across sensory modalities when the semantic category is held constant.
Affiliation(s)
- Holly Wilson
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
- Mohammad Golbabaee
- Department of Engineering Mathematics, University of Bristol, Bristol, BS8 1TW, UK
- Stephen Charles
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
2
Grani F, Soto-Sánchez C, Fimia A, Fernández E. Toward a personalized closed-loop stimulation of the visual cortex: Advances and challenges. Front Cell Neurosci 2022; 16:1034270. PMID: 36582211; PMCID: PMC9792612; DOI: 10.3389/fncel.2022.1034270.
Abstract
Current cortical visual prosthesis approaches are primarily unidirectional and do not consider the feedback circuits that exist in nearly every part of the nervous system. Herein, we provide a brief overview of recent developments for better controlling brain stimulation and present preliminary human data indicating that closed-loop strategies could considerably enhance the effectiveness, safety, and long-term stability of visual cortex stimulation. We propose that the development of improved closed-loop strategies may help to enhance our capacity to communicate with the brain.
Affiliation(s)
- Fabrizio Grani
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain
- Cristina Soto-Sánchez
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
- Antonio Fimia
- Departamento de Ciencia de Materiales, Óptica y Tecnología Electrónica, Universidad Miguel Hernández de Elche, Elche, Spain
- Eduardo Fernández (corresponding author)
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
3
Meng L, Ge K. Decoding Visual fMRI Stimuli from Human Brain Based on Graph Convolutional Neural Network. Brain Sci 2022; 12:1394. PMID: 36291327; PMCID: PMC9599823; DOI: 10.3390/brainsci12101394.
Abstract
Brain decoding predicts external stimulus information from recorded brain activity, and visual information is one of the most important sources of such stimulus information. Decoding functional magnetic resonance imaging (fMRI) responses to visual stimulation helps in understanding the working mechanisms of the brain's visual regions. Traditional brain decoding algorithms cannot accurately extract stimulus features from fMRI. To address these shortcomings, this paper proposes a brain decoding algorithm based on a graph convolutional network (GCN). First, 11 regions of interest (ROIs) were selected according to the visual function regions of the human brain, which avoids noise interference from non-visual regions; then, a deep three-dimensional convolutional neural network was specially designed to extract features from these 11 regions; next, the GCN was used to extract functional correlation features between the different visual regions. Furthermore, to avoid vanishing gradients when the GCN has many layers, residual connections were adopted, which help to integrate features from different levels and improve the accuracy of the proposed GCN. The proposed algorithm was tested on a public dataset, and the recognition accuracy reached 98.67%, the best among the compared state-of-the-art algorithms.
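The residual graph-convolution idea in this abstract can be sketched compactly. The following numpy toy is illustrative only, not the authors' implementation: the 11-node graph stands in for the 11 visual ROIs, and all feature sizes and the random adjacency are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    """Symmetrically normalize the adjacency: A_hat = D^-1/2 (A + I) D^-1/2."""
    A_tilde = A + np.eye(A.shape[0])           # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer_residual(H, A_hat, W):
    """One graph-convolution layer, ReLU(A_hat @ H @ W), plus a residual
    (skip) connection that helps avoid vanishing gradients in deep stacks."""
    return np.maximum(A_hat @ H @ W, 0.0) + H  # residual requires square W

# Toy graph: 11 nodes standing in for the 11 visual ROIs, 16 features each.
A = (rng.random((11, 11)) > 0.7).astype(float)
A = np.maximum(A, A.T)                         # make the graph undirected
A_hat = normalized_adjacency(A)
H = rng.standard_normal((11, 16))
W = 0.1 * rng.standard_normal((16, 16))
H_out = gcn_layer_residual(H, A_hat, W)
print(H_out.shape)  # (11, 16)
```

Because the residual path carries the input through unchanged, gradients can flow past each layer, which is the property the paper exploits to stack more GCN layers without degradation.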
4
Loriette C, Amengual JL, Ben Hamed S. Beyond the brain-computer interface: Decoding brain activity as a tool to understand neuronal mechanisms subtending cognition and behavior. Front Neurosci 2022; 16:811736. PMID: 36161174; PMCID: PMC9492914; DOI: 10.3389/fnins.2022.811736.
Abstract
One of the major challenges in systems neuroscience is developing techniques for estimating the cognitive information content of brain activity. This has enormous potential in domains spanning clinical applications and cognitive enhancement to a better understanding of the neural bases of cognition. In this context, using machine learning techniques to decode different aspects of human cognition and behavior, and applying them to brain-computer interfaces for neuroprosthetics, has driven a genuine revolution in the field. However, while these approaches have proven quite successful for studying motor and sensory functions, success is still far from being reached for covert cognitive functions such as attention, motivation and decision making. While improvement in this field of BCIs is growing fast, a new research focus has emerged from the development of strategies for decoding neural activity. In this review, we explore how advances in decoding brain activity are becoming a major neuroscience tool, moving forward our understanding of brain functions and providing a robust theoretical framework for testing predictions about the relationship between brain activity, cognition and behavior.
Affiliation(s)
- Célia Loriette
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon 1, Bron, France
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon 1, Bron, France
5
High-Level Visual Encoding Model Framework with Hierarchical Ventral Stream-Optimized Neural Networks. Brain Sci 2022; 12:1101. PMID: 36009164; PMCID: PMC9406060; DOI: 10.3390/brainsci12081101.
Abstract
Visual encoding models based on deep neural networks (DNNs) show good performance in predicting brain activity in low-level visual areas. However, because of the limited amount of neural data, DNN-based visual encoding models are difficult to fit for high-level visual areas, resulting in insufficient encoding performance. The ventral stream suggests that higher visual areas receive information from lower visual areas, which is not fully reflected in current encoding models. In the present study, we propose a novel visual encoding model framework which uses the hierarchy of representations in the ventral stream to improve performance in high-level visual areas. Under this framework, we propose two categories of hierarchical encoding models, from the voxel and the feature perspectives, to realize the hierarchical representations. From the voxel perspective, we first constructed an encoding model for a low-level visual area (V1 or V2) and extracted the voxel space predicted by the model; we then used this predicted low-level voxel space to predict the voxel space of a high-level visual area (V4 or LO) via a voxel-to-voxel model. From the feature perspective, the feature space of the first model is extracted to predict the voxel space of the high-level visual area. The experimental results show that both categories of hierarchical encoding models effectively improve encoding performance in V4 and LO, and the proportion of best-encoded voxels for the different models in V4 and LO shows that the proposed models have clear advantages in prediction accuracy. We find that the hierarchy of representations in the ventral stream has a positive effect on improving the performance of existing models in high-level visual areas.
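The voxel-to-voxel step described above amounts to a regularized linear map from a low-level voxel space to a high-level one. A minimal sketch with closed-form ridge regression on synthetic data follows; the voxel counts, stimulus count, train/test split, and regularization strength are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(X, Y, alpha):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Hypothetical sizes: 200 stimuli, 50 low-level (V1) voxels, 30 high-level (V4) voxels.
n_stim, n_v1, n_v4 = 200, 50, 30
V1 = rng.standard_normal((n_stim, n_v1))       # stand-in for the predicted V1 voxel space
V4 = V1 @ (0.2 * rng.standard_normal((n_v1, n_v4))) \
     + 0.1 * rng.standard_normal((n_stim, n_v4))

W = ridge_fit(V1[:150], V4[:150], alpha=10.0)  # fit the voxel-to-voxel map
pred = V1[150:] @ W                            # predict held-out V4 responses
r_mean = np.mean([np.corrcoef(pred[:, j], V4[150:, j])[0, 1] for j in range(n_v4)])
print(round(r_mean, 2))
```

Per-voxel prediction correlation on held-out stimuli is the usual encoding-performance metric; the paper's feature-perspective variant would simply swap the low-level voxel space for a model feature space as the regressor.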
6
Du B, Cheng X, Duan Y, Ning H. fMRI Brain Decoding and Its Applications in Brain-Computer Interface: A Survey. Brain Sci 2022; 12:228. PMID: 35203991; PMCID: PMC8869956; DOI: 10.3390/brainsci12020228.
Abstract
Decoding brain neural activity is an important branch of neuroscience research and a key technology for the brain-computer interface (BCI). Researchers initially developed simple linear models and machine learning algorithms to classify and recognize brain activities. With the great success of deep learning in image recognition and generation, deep neural networks (DNNs) have been employed to reconstruct visual stimuli from human brain activity measured via functional magnetic resonance imaging (fMRI). In this paper, we review brain activity decoding models based on machine learning and deep learning algorithms, focusing on models currently receiving considerable attention: the variational auto-encoder (VAE), the generative adversarial network (GAN), and the graph convolutional network (GCN). Furthermore, fMRI-based BCI applications enabled by brain activity decoding for treating mental and psychological disorders are presented to illustrate the close relationship between brain decoding and BCIs. Finally, existing challenges and future research directions are addressed.
Affiliation(s)
- Bing Du
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xiaomu Cheng
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yiping Duan
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Huansheng Ning
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
7
Deep learning helps EEG signals predict different stages of visual processing in the human brain. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102996.
8
Svanera M, Morgan AT, Petro LS, Muckli L. A self-supervised deep neural network for image completion resembles early visual cortex fMRI activity patterns for occluded scenes. J Vis 2021; 21:5. PMID: 34259828; PMCID: PMC8288063; DOI: 10.1167/jov.21.7.5.
Abstract
The promise of artificial intelligence in understanding biological vision relies on the comparison of computational models with brain data, with the goal of capturing functional principles of visual information processing. Convolutional neural networks (CNNs) have successfully matched the transformations in hierarchical processing occurring along the brain's feedforward visual pathway, extending into ventral temporal cortex. However, it remains to be seen whether CNNs can successfully describe feedback processes in early visual cortex. Here, we investigated similarities between human early visual cortex and a CNN with an encoder/decoder architecture, trained with self-supervised learning to fill occlusions and reconstruct an unseen image. Using representational similarity analysis (RSA), we compared 3T functional magnetic resonance imaging (fMRI) data from a nonstimulated patch of early visual cortex in human participants viewing partially occluded images with the activations of different CNN layers for the same images. Results show that our self-supervised image-completion network outperforms a classical supervised object-recognition network (VGG16) in terms of similarity to fMRI data. This work provides additional evidence that optimal models of the visual system might come from less feedforward architectures trained with less supervision. We also find that CNN decoder pathway activations are more similar to brain processing than encoder activations, suggesting an integration of mid- and low/middle-level features in early visual cortex. Challenging an artificial intelligence model to learn natural image representations via self-supervised learning and comparing them with brain data can help us to constrain our understanding of information processing, such as neuronal predictive coding.
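RSA as used above reduces to two steps: build a representational dissimilarity matrix (RDM) for each system, then rank-correlate the RDMs' upper triangles. The following self-contained numpy sketch runs on synthetic clustered "conditions"; the condition, voxel, and unit counts are illustrative stand-ins, not the study's fMRI patches or CNN layers.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1]

rng = np.random.default_rng(0)
# 12 conditions in 3 clusters of 4 (shared content), 300 "voxels".
protos = rng.standard_normal((3, 300))
brain = np.repeat(protos, 4, axis=0) + 0.3 * rng.standard_normal((12, 300))
# Stand-in for a CNN layer: a random linear read-out of the same content.
layer = brain @ rng.standard_normal((300, 64)) / np.sqrt(300)

sim = rsa_similarity(rdm(brain), rdm(layer))
print(round(sim, 2))
```

Because both RDMs are abstracted away from their native feature spaces, the comparison works even though the brain data and the model layer have different dimensionalities, which is precisely why RSA suits brain-model comparisons.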
Affiliation(s)
- Michele Svanera
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Andrew T Morgan
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
9
Grzywacz NM. Stochasticity, Nonlinear Value Functions, and Update Rules in Learning Aesthetic Biases. Front Hum Neurosci 2021; 15:639081. PMID: 34040509; PMCID: PMC8141583; DOI: 10.3389/fnhum.2021.639081.
Abstract
A theoretical framework for the reinforcement learning of aesthetic biases was recently proposed based on brain circuitries revealed by neuroimaging. A model grounded on that framework accounted for interesting features of human aesthetic biases, including individuality, cultural predispositions, stochastic dynamics of learning and aesthetic biases, and the peak-shift effect. However, despite this success, a potential weakness was the linearity of the value function used to predict reward: the learning process assumed a linear relationship between reward and sensory stimuli. Linearity is common in reinforcement learning in neuroscience, but it can be problematic because neural mechanisms and the dependence of reward on sensory stimuli are typically nonlinear. Here, we analyze learning performance with models that include optimal nonlinear value functions. We also compare updating the free parameters of the value functions with the delta rule, which neuroscience models use frequently, vs. updating with a new Phi rule that takes the structure of the nonlinearities into account. Our computer simulations showed that optimal nonlinear value functions reduced learning errors when the reward models were nonlinear, and the new Phi rule led to similar improvements. These improvements were accompanied by a straightening of the trajectories of the vector of free parameters in its phase space, meaning that the process became more efficient at learning to predict reward. Surprisingly, however, this improved efficiency had a complex relationship with the rate of learning. Finally, the stochasticity arising from the probabilistic sampling of sensory stimuli, rewards, and motivations helped the learning process narrow the range of free parameters to nearly optimal outcomes. We therefore suggest that value functions and update rules optimized for social and ecological constraints are ideal for learning aesthetic biases.
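The delta rule that the abstract contrasts with the Phi rule adjusts the value function's weights in proportion to the reward-prediction error. A minimal sketch for the linear case on synthetic stimuli follows; the feature count, learning rate, and noise-free reward are illustrative assumptions, and the Phi rule and nonlinear value functions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reward is assumed linear in 4 stimulus features (the simplest case the
# paper critiques); the value function V(s) = w @ s is learned online with
# the delta rule:  w <- w + eta * (r - V(s)) * s
true_w = np.array([0.5, -1.0, 2.0, 0.1])
w = np.zeros(4)
eta = 0.05

for _ in range(5000):
    s = rng.standard_normal(4)       # stochastically sampled stimulus
    r = true_w @ s                   # observed reward (noise-free here)
    w += eta * (r - w @ s) * s       # delta-rule update on the prediction error

err = float(np.max(np.abs(w - true_w)))
print(err)
```

When the true reward function is nonlinear in the stimulus, this linear learner can no longer drive the error to zero, which is the mismatch that motivates the paper's nonlinear value functions and the Phi rule.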
Affiliation(s)
- Norberto M Grzywacz
- Department of Psychology, Loyola University Chicago, Chicago, IL, United States; Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago, Chicago, IL, United States
10
Yoo SH, Santosa H, Kim CS, Hong KS. Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study. Front Hum Neurosci 2021; 15:636191. PMID: 33994978; PMCID: PMC8113416; DOI: 10.3389/fnhum.2021.636191.
Abstract
This study aims to decode the hemodynamic responses (HRs) evoked by multiple sound categories using functional near-infrared spectroscopy (fNIRS). Six different sound categories were given as stimuli (English, non-English, annoying, nature, music, and gunshot). Oxy-hemoglobin (HbO) concentration changes were measured over both hemispheres of the auditory cortex while 18 healthy subjects listened to 10-s blocks of the six sound categories. Long short-term memory (LSTM) networks were used as the classifier, yielding a six-class classification accuracy of 20.38 ± 4.63%. Though this is only a little above chance level, it is noteworthy that the data could be classified subject-wise without feature selection.
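To put "20.38% with six classes" in context, chance is 1/6 ≈ 16.7%, and whether an observed accuracy beats chance can be checked with a one-sided binomial test. The sketch below uses a hypothetical per-subject trial count (the abstract does not report one); 20.4% of 120 trials corresponds to roughly 24 correct.

```python
from math import comb

def p_above_chance(n_trials, n_correct, p_chance):
    """One-sided binomial p-value: probability of >= n_correct successes
    in n_trials when each trial succeeds with probability p_chance."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

chance = 1.0 / 6.0              # six-class chance level, ~16.7%
p = p_above_chance(120, 24, chance)  # hypothetical: 120 trials, 24 correct
print(round(p, 3))
```

With these made-up trial numbers the p-value stays well above conventional significance thresholds, which is consistent with the authors' cautious framing of the accuracy as only "a little higher than chance".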
Affiliation(s)
- So-Hyeon Yoo
- School of Mechanical Engineering, Pusan National University, Busan, South Korea
- Hendrik Santosa
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, United States
- Chang-Seok Kim
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea
- Keum-Shik Hong
- School of Mechanical Engineering, Pusan National University, Busan, South Korea
11
Livezey JA, Glaser JI. Deep learning approaches for neural decoding across architectures and recording modalities. Brief Bioinform 2020; 22:1577-1591. PMID: 33372958; DOI: 10.1093/bib/bbaa355.
Abstract
Decoding behavior, perception or cognitive state directly from neural signals is critical for brain-computer interface research and an important tool for systems neuroscience. In the last decade, deep learning has become the state-of-the-art method in many machine learning tasks ranging from speech recognition to image segmentation. The success of deep networks in other domains has led to a new wave of applications in neuroscience. In this article, we review deep learning approaches to neural decoding. We describe the architectures used for extracting useful features from neural recording modalities ranging from spikes to functional magnetic resonance imaging. Furthermore, we explore how deep learning has been leveraged to predict common outputs including movement, speech and vision, with a focus on how pretrained deep networks can be incorporated as priors for complex decoding targets like acoustic speech or images. Deep learning has been shown to be a useful tool for improving the accuracy and flexibility of neural decoding across a wide range of tasks, and we point out areas for future scientific development.
Affiliation(s)
- Jesse A Livezey
- Neural Systems and Data Science Laboratory at the Lawrence Berkeley National Laboratory. He obtained his PhD in Physics from the University of California, Berkeley
- Joshua I Glaser
- Center for Theoretical Neuroscience and Department of Statistics at Columbia University. He obtained his PhD in Neuroscience from Northwestern University
12
BigGAN-based Bayesian Reconstruction of Natural Images from Human Brain Activity. Neuroscience 2020; 444:92-105. DOI: 10.1016/j.neuroscience.2020.07.040.