1. Quattrone D, Santambrogio F, Scarpellini A, Sgherzi F, Poles I, Clementi L, Santambrogio MD. Analysis and Classification of Event-Related Potentials During Image Observation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083339. DOI: 10.1109/embc40787.2023.10340052.
Abstract
In the field of cognitive neuroscience, researchers have conducted extensive studies on object categorization using Event-Related Potential (ERP) analysis, specifically by analyzing electroencephalographic (EEG) responses triggered by visual stimuli. The most common approach to visual ERP analysis uses a low image presentation rate and an active task in which participants discriminate between target and non-target images. However, researchers are also interested in understanding how the human brain processes visual information in real-world scenarios. To simulate real-life object recognition, this study proposes an analysis pipeline for visual ERPs evoked by images presented in a Rapid Serial Visual Presentation (RSVP) paradigm. Such an approach allows for the investigation of recurrent patterns of visual ERP signals across specific categories and subjects. The pipeline includes segmentation of the EEG into epochs and the use of the resulting features as inputs for Support Vector Machine (SVM) classification. Results demonstrate common ERP patterns across the selected categories and the ability to obtain discriminative information from single visual stimuli presented in the RSVP paradigm.
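To make the described pipeline concrete, the following minimal sketch segments continuous EEG into stimulus-locked epochs and feeds the flattened epochs to an SVM. It assumes MNE-Python and scikit-learn; the epoch window, baseline, and kernel are illustrative assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only: the paper's exact preprocessing, epoch windows, and
# features are not given in the abstract, so every parameter below is an assumption.
import numpy as np
import mne
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def classify_rsvp_epochs(raw: mne.io.BaseRaw, events: np.ndarray, event_id: dict) -> float:
    """Segment continuous EEG into stimulus-locked epochs and classify them with an SVM."""
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.1, tmax=0.6,           # assumed ERP window around each RSVP image
                        baseline=(None, 0), preload=True)
    X = epochs.get_data().reshape(len(epochs), -1)     # flatten channels x time into one feature vector
    y = epochs.events[:, -1]                           # image-category labels
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, y, cv=5).mean()     # cross-validated single-trial accuracy
```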
2. Amin HU, Ullah R, Reza MF, Malik AS. Single-trial extraction of event-related potentials (ERPs) and classification of visual stimuli by ensemble use of discrete wavelet transform with Huffman coding and machine learning techniques. J Neuroeng Rehabil 2023; 20:70. PMID: 37269019. DOI: 10.1186/s12984-023-01179-8.
Abstract
BACKGROUND Presentation of visual stimuli can induce changes in EEG signals that are typically detectable by averaging data from multiple trials, both for individual-participant analysis and for analyses across groups or conditions. This study proposes a new method based on the discrete wavelet transform with Huffman coding and machine learning for single-trial analysis of event-related potentials (ERPs) and classification of different visual events in a visual object detection task. METHODS EEG single trials are decomposed with the discrete wavelet transform (DWT) up to the [Formula: see text] level of decomposition using a biorthogonal B-spline wavelet. The DWT coefficients of each trial are thresholded to discard sparse wavelet coefficients while maintaining signal quality. The remaining optimal coefficients in each trial are encoded into bitstreams using Huffman coding, and the codewords are used as features of the ERP signal. The performance of this method is tested with real visual ERPs from sixty-eight subjects. RESULTS The proposed method effectively discards spontaneous EEG activity, extracts single-trial visual ERPs, represents the ERP waveform as a compact bitstream feature, and achieves promising results in classifying visual objects, with accuracies of 93.60[Formula: see text], sensitivities of 93.55[Formula: see text], specificities of 94.85[Formula: see text], precisions of 92.50[Formula: see text], and an area under the curve (AUC) of 0.93[Formula: see text] using SVM and k-NN classifiers. CONCLUSION The proposed method suggests that the joint use of the discrete wavelet transform (DWT) with Huffman coding can efficiently extract ERPs from background EEG for studying evoked responses in single trials and classifying visual stimuli. The approach has O(N) time complexity and could be implemented in real-time systems such as brain-computer interfaces (BCIs), where fast detection of mental events is required to operate a machine smoothly with the mind.
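A hedged sketch of the DWT-plus-Huffman feature described above follows. The wavelet family (bior3.3), decomposition level, thresholding rule, and quantization step are assumptions standing in for the authors' unstated settings; PyWavelets supplies the DWT, and the Huffman codebook is built by hand.

```python
# Hedged sketch of the DWT + Huffman idea from the abstract; the wavelet family,
# decomposition level, and threshold rule here are assumptions, not the authors' exact settings.
import heapq
from collections import Counter
import numpy as np
import pywt

def dwt_huffman_feature(trial: np.ndarray, wavelet: str = "bior3.3", level: int = 5) -> str:
    """Encode one single-trial EEG channel as a Huffman bitstream of thresholded DWT coefficients."""
    coeffs = pywt.wavedec(trial, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thr = np.std(flat)                        # assumed threshold; the paper's rule is not stated
    kept = flat[np.abs(flat) >= thr]          # discard sparse (small) coefficients
    if kept.size == 0:
        return ""
    symbols = np.round(kept, 1).tolist()      # quantize so coefficients form a discrete alphabet
    codebook = _huffman_codebook(Counter(symbols))
    return "".join(codebook[s] for s in symbols)

def _huffman_codebook(freqs: Counter) -> dict:
    """Classic heap-based Huffman tree; returns symbol -> bitstring."""
    heap = [[weight, idx, [sym, ""]] for idx, (sym, weight) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate single-symbol alphabet
        return {heap[0][2][0]: "0"}
    idx = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], idx] + lo[2:] + hi[2:])
        idx += 1
    return {sym: code for sym, code in heap[0][2:]}
```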
Affiliation(s)
- Hafeez Ullah Amin
- School of Computer Science, Faculty of Science and Engineering, University of Nottingham, Jalan Broga, 43500, Semenyih, Malaysia
- Rafi Ullah
- Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, 32610, Seri Iskandar, Malaysia
- Mohammed Faruque Reza
- Department of Neurosciences, School of Medical Sciences, Hospital Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Malaysia
- Aamir Saeed Malik
- Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic.
3. Deng Y, Ding S, Li W, Lai Q, Cao L. EEG-based visual stimuli classification via reusable LSTM. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104588.
4. Li R, Hu H, Zhao X, Wang Z, Xu G. A static paradigm based on illusion-induced VEP for brain-computer interfaces. J Neural Eng 2023; 20:026006. PMID: 36808912. DOI: 10.1088/1741-2552/acbdc0.
Abstract
OBJECTIVE Visual evoked potentials (VEPs) have recently been widely applied in brain-computer interfaces (BCIs) owing to their satisfactory classification performance. However, most existing paradigms rely on flickering or oscillating stimuli that induce visual fatigue during long-term use, restricting the practical implementation of VEP-based BCIs. To address this issue, a novel paradigm adopting static motion illusions, based on the illusion-induced visual evoked potential (IVEP), is proposed to enhance visual experience and practicality. APPROACH This study explored the responses to baseline and illusion tasks, including the Rotating-Tilted-Lines (RTL) illusion and the Rotating-Snakes (RS) illusion. Distinguishable features between the illusions were examined by analyzing the event-related potentials (ERPs) and the amplitude modulation of evoked oscillatory responses. MAIN RESULTS The illusion stimuli elicited VEPs in an early time window, encompassing a negative component (N1) from 110 to 200 ms and a positive component (P2) between 210 and 300 ms. Based on the feature analysis, a filter bank was designed to extract discriminative signals. Task-related component analysis (TRCA) was used to evaluate the binary classification performance of the proposed method, and the highest accuracy of 86.67% was achieved with a data length of 0.6 s. SIGNIFICANCE The results demonstrate that the static motion illusion paradigm is feasible to implement and promising for VEP-based BCI applications.
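The sketch below illustrates the core of task-related component analysis (TRCA) as it is commonly used in VEP-based BCIs: a spatial filter obtained from a generalized eigenproblem that maximizes inter-trial reproducibility, plus a correlation score against a class template. The authors' filter-bank design and exact classification rule are not given in the abstract, so this is only a generic TRCA outline.

```python
# Minimal TRCA sketch; the paper's filter-bank settings and classification details are assumptions.
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials: np.ndarray) -> np.ndarray:
    """trials: (n_trials, n_channels, n_samples). Returns one spatial filter (n_channels,)."""
    n_trials, n_channels, n_samples = trials.shape
    centered = trials - trials.mean(axis=2, keepdims=True)
    # S: sum of covariances between every pair of distinct trials (inter-trial reproducibility)
    S = np.zeros((n_channels, n_channels))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += centered[i] @ centered[j].T
    # Q: covariance of the concatenated data
    concat = centered.transpose(1, 0, 2).reshape(n_channels, -1)
    Q = concat @ concat.T
    # Generalized eigenproblem S w = lambda Q w; the leading eigenvector maximizes reproducibility
    eigvals, eigvecs = eigh(S, Q)
    return eigvecs[:, -1]

def trca_score(test_trial: np.ndarray, template: np.ndarray, w: np.ndarray) -> float:
    """Correlation between a spatially filtered test trial and a filtered class template."""
    a, b = w @ test_trial, w @ template
    return float(np.corrcoef(a, b)[0, 1])
```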
Affiliation(s)
- Ruxue Li
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Honglin Hu
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Xi Zhao
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Zhenyu Wang
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, Chinese Academy of Sciences, 99 Haike Road, Pudong New Area, Shanghai 201210, China
- Guiying Xu
- Intelligent Information and Communication Technology Research and Development Center, Shanghai Advanced Research Institute, 99 Haike Road, Pudong New Area, Shanghai 201210, China
5. Deng X, Wang Z, Liu K, Xiang X. A GAN model encoded by CapsEEGNet for visual EEG encoding and image reproduction. J Neurosci Methods 2023; 384:109747. PMID: 36427669. DOI: 10.1016/j.jneumeth.2022.109747.
Abstract
Over the last few decades, reading the human mind has become an active topic of scientific research. Recent neuroscience studies indicate that it is possible to decode human brain signals from neuroimaging data. This work explores the possibility of building an end-to-end BCI system that learns and visualizes the thoughts evoked by stimulus images. To this end, an experiment was designed to collect EEG signals evoked by randomly presented images. Based on these data, the classification abilities of several improved methods, including the Transformer, CapsNet, and ensemble strategies, are analyzed and compared. After selecting the best-performing method as the encoder, a distribution-to-distribution mapping network is proposed to transform an encoded latent feature vector into a prior image feature vector. To visualize the decoded content, a pretrained IC-GAN model receives these image feature vectors and generates images. Extensive experiments show that the proposed method can effectively handle small-sample data recorded from a limited number of electrode channels. Examination of the images generated from the EEG signals verifies that the proposed model can, to some extent, reproduce the images seen by the participants.
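As a loose illustration of the mapping step described above, the snippet below uses a small MLP to map an EEG-encoder latent vector to an image-feature vector for a pretrained generator, trained here with a plain MSE alignment loss. The dimensions, architecture, and objective are assumptions; the paper's distribution-to-distribution network is likely more elaborate.

```python
# Simplified stand-in for the paper's distribution-to-distribution mapping network:
# a small MLP that maps an EEG-encoder latent vector to an image-feature vector for a
# pretrained generator. Dimensions, architecture, and loss are assumptions.
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    def __init__(self, eeg_dim: int = 128, img_feat_dim: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(eeg_dim, 512),
            nn.ReLU(),
            nn.Linear(512, img_feat_dim),
        )

    def forward(self, z_eeg: torch.Tensor) -> torch.Tensor:
        return self.net(z_eeg)

# Training-step sketch: pull mapped EEG latents toward the image features of the stimuli
# that evoked them (plain MSE alignment; the paper may use a richer objective).
mapper = LatentMapper()
optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-4)
z_eeg = torch.randn(32, 128)      # batch of EEG-encoder outputs (placeholder data)
f_img = torch.randn(32, 2048)     # matching image features (placeholder data)
loss = nn.functional.mse_loss(mapper(z_eeg), f_img)
loss.backward()
optimizer.step()
```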
Affiliation(s)
- Xin Deng
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 40065, China.
- Zhongyin Wang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 40065, China.
- Ke Liu
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 40065, China.
- Xiaohong Xiang
- Department of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, 40065, China.
6. Zeltser G, Sukhanov IM, Nevorotin AJ. MMM - The molecular model of memory. J Theor Biol 2022; 549:111219. PMID: 35810778. DOI: 10.1016/j.jtbi.2022.111219.
Abstract
Identifying the mechanisms underlying neurons' ability to process information, including acquisition, storage, and retrieval, plays an important role in understanding the different types of memory, the pathogenesis of many neurological diseases affecting memory, and the discovery of therapeutic targets. However, the traditional view of memory mechanisms, based on electrical signals with a unique combination of frequency and amplitude, does not answer the question of how memories can survive for a lifetime while exposed to synaptic noise. Recent evidence suggests that, apart from neuronal circuits, a diversity of molecular memory (MM) carriers is essential for memory performance. The molecular model of memory (MMM) is proposed, according to which each item of incoming information (the elementary memory item, eMI) is encoded both by circuitry, with electrical parameters unique to a given MI, and by MM carriers, unique in their molecular composition. While operating as carriers of incoming information, the MMs function within the neuronal plasma membrane. Initially inactive (latent), each eMI is activated during acquisition to become a virtual copy of some past fact or event. This activation is accompanied by considerable remodeling of the MM molecule associated with a resonance effect.
Affiliation(s)
- Ilya M Sukhanov
- Lab. Behavioral Pharmacology, Dept. Psychopharmacology, Valdman Institute of Pharmacology, I.P. Pavlov Medical University, Leo Tolstoi Street 6/8, St. Petersburg 197022, The Russian Federation
- Alexey J Nevorotin
- Laboratory of Electron Microscopy, I.P. Pavlov Medical University, Leo Tolstoi Street 6/8, St. Petersburg 197022, The Russian Federation
7. The Neural Responses of Visual Complexity in the Oddball Paradigm: An ERP Study. Brain Sci 2022; 12(4):447. PMID: 35447979. PMCID: PMC9032384. DOI: 10.3390/brainsci12040447.
Abstract
This research measured human neural responses to images of different visual complexity levels using the oddball paradigm, to explore the neurocognitive correlates of complexity perception in visual processing. In the task, 24 participants (12 females) were required to respond to high-complexity images among all stimuli. We hypothesized that high-complexity stimuli would induce early visual and attentional processing effects and might elicit visual mismatch negativity responses and error-related negativity. Our results showed that the amplitudes of P1 and N1 were unaffected by complexity during early visual processing. For target stimuli, both N2 and P3b components were observed, suggesting that the N2 component was sensitive to complexity deviation and that complexity-related attentional processing may originate from the occipital region, as indicated by the characteristics of the P3b component. In addition, compared with low-complexity stimuli, high-complexity stimuli elicited a larger visual mismatch negativity amplitude. The detected error negativity (Ne) component reflected participants' detection of the mismatch between visual complexity and their expectations.
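For readers unfamiliar with the component measures reported here, the snippet below shows the standard way such effects are quantified: a deviant-minus-standard difference wave and its mean amplitude within a fixed latency window. The sampling rate, window bounds, and placeholder waveforms are illustrative assumptions, not the study's parameters.

```python
# Sketch of the component measurement behind results like these: a difference wave and its
# mean amplitude in a fixed window. All numbers below are assumptions, not the study's values.
import numpy as np

def mean_amplitude(erp: np.ndarray, times: np.ndarray, t_start: float, t_end: float) -> float:
    """Mean amplitude of a 1-D ERP waveform inside [t_start, t_end] seconds."""
    mask = (times >= t_start) & (times <= t_end)
    return float(erp[mask].mean())

sfreq = 500.0                                  # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.6, 1.0 / sfreq)      # epoch time axis in seconds
standard_erp = np.zeros_like(times)            # placeholder averaged ERP at one occipital electrode
deviant_erp = np.zeros_like(times)             # placeholder averaged ERP for deviant stimuli
vmmn_wave = deviant_erp - standard_erp         # visual mismatch negativity difference wave
vmmn_amp = mean_amplitude(vmmn_wave, times, 0.15, 0.30)   # assumed vMMN window
```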
8. Karimi-Rouzbahani H, Woolgar A. When the Whole Is Less Than the Sum of Its Parts: Maximum Object Category Information and Behavioral Prediction in Multiscale Activation Patterns. Front Neurosci 2022; 16:825746. PMID: 35310090. PMCID: PMC8924472. DOI: 10.3389/fnins.2022.825746.
Abstract
Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging/down-sampling signals within the analysis window, which diminishes informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols not reflected in any one mathematical feature alone. To test this, we combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17). Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features. Object category could be robustly decoded using the combined features from all 17 algorithms. However, the combined features, which were equalized in dimensionality to the individual features, were outperformed at most time points by the multiscale Wavelet-coefficient feature. Moreover, the Wavelet coefficients also explained behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture the EEG neural codes better than any combination of protocols. Our findings put new constraints on models of neural information encoding in EEG.
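A hedged sketch of the comparison described above follows: single-trial decoding from either a multiscale wavelet-coefficient feature or a larger feature matrix reduced to the same dimensionality. The wavelet, classifier, and feature-selection step are assumptions rather than the authors' exact choices.

```python
# Hedged sketch: decode object category from single trials using wavelet coefficients,
# then compare against a combined feature matrix reduced to the same number of dimensions.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def wavelet_features(trials: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """trials: (n_trials, n_channels, n_samples) -> per-trial concatenated DWT coefficients."""
    feats = []
    for trial in trials:
        coeffs = [np.concatenate(pywt.wavedec(ch, wavelet, level=level)) for ch in trial]
        feats.append(np.concatenate(coeffs))
    return np.asarray(feats)

def decode(X: np.ndarray, y: np.ndarray, k: int) -> float:
    """Cross-validated decoding accuracy after reducing the feature set to k dimensions."""
    clf = make_pipeline(SelectKBest(f_classif, k=k), LinearDiscriminantAnalysis())
    return cross_val_score(clf, X, y, cv=5).mean()

# Usage idea: compare decode(wavelet_features(trials), labels, k) against the same call on a
# matrix holding many other statistical features, with k equalized between the two.
```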
Affiliation(s)
- Hamid Karimi-Rouzbahani
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
- Department of Computing, Macquarie University, Sydney, NSW, Australia
- Alexandra Woolgar
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
9. Shi R, Zhao Y, Cao Z, Liu C, Kang Y, Zhang J. Categorizing objects from MEG signals using EEGNet. Cogn Neurodyn 2021; 16:365-377. PMID: 35401863. PMCID: PMC8934895. DOI: 10.1007/s11571-021-09717-7.
Abstract
Magnetoencephalography (MEG) signals have demonstrated their practical application to reading human minds. Current neural decoding studies have made great progress in building subject-wise decoding models to extract and discriminate temporal/spatial features in neural signals. In this paper, we used a compact convolutional neural network, EEGNet, to build a common decoder across subjects that deciphered the categories of objects (faces, tools, animals, and scenes) from MEG data. This study investigated the influence of the spatiotemporal structure of MEG on EEGNet's classification performance. Furthermore, EEGNet's convolution layers were replaced with two sets of parallel convolution structures to extract spatial and temporal features simultaneously. Our results showed that the organization of the MEG data fed into EEGNet affects classification accuracy, and that the parallel convolution structures are beneficial for extracting and fusing spatial and temporal MEG features. The classification accuracy demonstrated that EEGNet succeeds in building a common decoder model across subjects and outperforms several state-of-the-art feature-fusion methods.
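The snippet below sketches the parallel spatial/temporal convolution idea in PyTorch on an EEGNet-style (1 x channels x samples) input. Kernel sizes, filter counts, pooling, and the classifier head are assumptions, not the authors' architecture.

```python
# Rough sketch of parallel spatial/temporal convolution branches over MEG epochs;
# all layer hyperparameters below are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class ParallelConvDecoder(nn.Module):
    def __init__(self, n_channels: int = 272, n_samples: int = 300, n_classes: int = 4):
        super().__init__()
        self.temporal = nn.Sequential(                  # temporal branch: filters along the time axis
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),
            nn.BatchNorm2d(8), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
        )
        self.spatial = nn.Sequential(                   # spatial branch: filters across sensors
            nn.Conv2d(1, 8, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(8), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
        )
        self.classifier = nn.Linear(2 * 8 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_samples); the two branches are fused by concatenation
        feats = torch.cat([self.temporal(x), self.spatial(x)], dim=1)
        return self.classifier(feats.flatten(start_dim=1))

logits = ParallelConvDecoder()(torch.randn(2, 1, 272, 300))   # e.g. 272 MEG sensors, 300 time points
```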
Affiliation(s)
- Ran Shi
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yanyu Zhao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Zhiyuan Cao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Chunyu Liu
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yi Kang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Jiacai Zhang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing, 100875, China
10. Krigolson OE, Hammerstrom MR, Abimbola W, Trska R, Wright BW, Hecker KG, Binsted G. Using Muse: Rapid Mobile Assessment of Brain Performance. Front Neurosci 2021; 15:634147. PMID: 33584194. PMCID: PMC7876403. DOI: 10.3389/fnins.2021.634147.
Abstract
The advent of mobile electroencephalography (mEEG) has created a means for large-scale collection of neural data, thus affording deeper insight into cognitive phenomena such as cognitive fatigue. Cognitive fatigue, a neural state associated with an increased incidence of errorful performance, is responsible for accidents on a daily basis that at times cost human lives. To gain better insight into the neural signature of cognitive fatigue, in the present study we used mEEG to examine the relationship between perceived cognitive fatigue and human event-related brain potentials (ERPs) and electroencephalographic (EEG) oscillations in a sample of 1,000 people. As a secondary goal, we wanted to further demonstrate the capability of mEEG to accurately measure ERP and EEG data. To accomplish these goals, participants performed a standard visual oddball task on an Apple iPad while EEG data were recorded from a Muse EEG headband. In contrast to traditional EEG studies, experimental setup and data collection were completed in less than seven minutes on average. An analysis of our EEG data revealed robust N200 and P300 ERP components and neural oscillations in the delta, theta, alpha, and beta bands. In line with previous findings, we observed correlations between ERP components, EEG power, and perceived cognitive fatigue. Further, we demonstrate that a linear combination of ERP and EEG features is a significantly better predictor of perceived cognitive fatigue than any ERP or EEG feature on its own. In sum, our results validate mEEG as a viable tool for research and provide further insight into the impact of cognitive fatigue on the human brain.
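To illustrate the final point, the sketch below builds a per-participant feature vector from a P300-window amplitude and alpha/theta band power, ready for a cross-validated linear model that combines them. The feature set, band edges, window, and regression model are assumptions; the abstract does not specify the study's exact pipeline.

```python
# Hedged sketch of predicting perceived fatigue from a linear combination of ERP and
# spectral features. All windows, bands, and the model choice are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def band_power(epoch: np.ndarray, sfreq: float, fmin: float, fmax: float) -> float:
    """Mean Welch power of a single-channel epoch within [fmin, fmax] Hz."""
    freqs, psd = welch(epoch, fs=sfreq, nperseg=min(256, epoch.size))
    return float(psd[(freqs >= fmin) & (freqs <= fmax)].mean())

def fatigue_features(epoch: np.ndarray, times: np.ndarray, sfreq: float) -> np.ndarray:
    """Per-participant feature vector: P300-window mean amplitude plus alpha and theta power."""
    p300 = epoch[(times >= 0.30) & (times <= 0.45)].mean()    # assumed P300 window
    return np.array([p300,
                     band_power(epoch, sfreq, 8.0, 12.0),     # alpha power
                     band_power(epoch, sfreq, 4.0, 7.0)])     # theta power

# With X holding one such feature row per participant and y their self-reported fatigue scores,
# cross_val_score(LinearRegression(), X, y, cv=5) estimates how well the combined features
# predict perceived fatigue, mirroring the comparison reported above.
```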
Affiliation(s)
- Olave E Krigolson
- Centre for Biomedical Research, University of Victoria, Victoria, BC, Canada
- Wande Abimbola
- Centre for Biomedical Research, University of Victoria, Victoria, BC, Canada
- Robert Trska
- Centre for Biomedical Research, University of Victoria, Victoria, BC, Canada
- Bruce W Wright
- Division of Medical Sciences, University of Victoria, Victoria, BC, Canada
- Kent G Hecker
- Faculty of Veterinary Medicine, University of Calgary, Calgary, AB, Canada
- Gordon Binsted
- Faculty of Health and Social Development, University of British Columbia Okanagan, Kelowna, BC, Canada