1
Feng X, Cong P, Dong L, Xin Y, Miao F, Xin R. Channel attention convolutional aggregation network based on video-level features for EEG emotion recognition. Cogn Neurodyn 2024;18:1689-1707. PMID: 39104696; PMCID: PMC11297860. DOI: 10.1007/s11571-023-10034-4.
Abstract
Electroencephalogram (EEG) emotion recognition plays a vital role in affective computing. A limitation of the EEG emotion recognition task is that features from multiple domains are rarely included in the analysis simultaneously, owing to the lack of an effective feature organization form. This paper proposes a video-level feature organization method to effectively organize temporal, frequency, and spatial domain features. In addition, a deep neural network, the Channel Attention Convolutional Aggregation Network, is designed to explore deeper emotional information in video-level features. The network uses a channel attention mechanism to adaptively capture critical EEG frequency bands. The frame-level representation of each time point is then obtained by multi-layer convolution. Finally, the frame-level features are aggregated through NeXtVLAD to learn time-sequence-related features. The proposed method achieves the best classification performance on the SEED and DEAP datasets. On SEED, the mean accuracy and standard deviation are 95.80% and 2.04%. On DEAP, the average accuracies with standard deviations for arousal and valence are 98.97% ± 1.13% and 98.98% ± 0.98%, respectively. The experimental results show that our approach based on video-level features is effective for EEG emotion recognition tasks.
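The abstract does not specify the attention design in detail; a minimal, squeeze-and-excitation-style sketch of gating frequency-band channels, with illustrative bottleneck weights `w_down`/`w_up` standing in for learned parameters, might look like this:

```python
import math

def channel_attention(band_features, w_down, w_up):
    """Sketch of channel attention over EEG frequency bands (squeeze-and-
    excitation style): each band is rescaled by a data-dependent gate."""
    # Squeeze: summarize each band by the mean of its features
    squeezed = [sum(f) / len(f) for f in band_features]
    # Excitation: small bottleneck MLP, ReLU then sigmoid gates in (0, 1)
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w_down]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w_up]
    # Rescale: weight every feature of a band by that band's gate
    return [[g * x for x in band] for g, band in zip(gates, band_features)]
```

With identity bottleneck weights, a band whose features have a larger mean receives a larger gate, which is the "adaptively capture critical bands" behavior the abstract describes.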
Affiliation(s)
- Xin Feng
- School of Science, Jilin Institute of Chemical Technology, Jilin, 130000 People’s Republic of China
- Ping Cong
- College of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 130000 People’s Republic of China
- Lin Dong
- Department of Epidemiology and Biostatistics, School of Public Health, Jilin University, Changchun, 130012 People’s Republic of China
- Yongxian Xin
- College of Business and Economics, Australian National University, Canberra, ACT 2601, Australia
- Fengbo Miao
- College of Electronics and Information Engineering, Tiangong University, Tianjin, 300387 People’s Republic of China
- Ruihao Xin
- College of Information and Control Engineering, Jilin Institute of Chemical Technology, Jilin, 130000 People’s Republic of China
- College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012 People’s Republic of China
2
Gao D, Wang K, Wang M, Zhou J, Zhang Y. SFT-Net: A Network for Detecting Fatigue From EEG Signals by Combining 4D Feature Flow and Attention Mechanism. IEEE J Biomed Health Inform 2024;28:4444-4455. PMID: 37310832. DOI: 10.1109/jbhi.2023.3285268.
Abstract
Fatigued driving is a leading cause of traffic accidents, and accurately predicting driver fatigue can significantly reduce their occurrence. However, modern fatigue detection models based on neural networks often face challenges such as poor interpretability and insufficient input feature dimensions. This article proposes a novel Spatial-Frequency-Temporal Network (SFT-Net) method for detecting driver fatigue using electroencephalogram (EEG) data. Our approach integrates EEG signals' spatial, frequency, and temporal information to improve recognition performance. We transform the differential entropy of five frequency bands of EEG signals into a 4D feature tensor to preserve these three types of information. An attention module is then used to recalibrate the spatial and frequency information of each input 4D feature tensor time slice. The output of this module is fed into a depthwise separable convolution (DSC) module, which extracts spatial and frequency features after attention fusion. Finally, long short-term memory (LSTM) is used to extract the temporal dependence of the sequence, and the final features are output through a linear layer. We validate the effectiveness of our model on the SEED-VIG dataset, and experimental results demonstrate that SFT-Net outperforms other popular models for EEG fatigue detection. Interpretability analysis supports the claim that our model has a certain level of interpretability. Our work addresses the challenge of detecting driver fatigue from EEG data and highlights the importance of integrating spatial, frequency, and temporal information.
3
Chen W, Liao Y, Dai R, Dong Y, Huang L. EEG-based emotion recognition using graph convolutional neural network with dual attention mechanism. Front Comput Neurosci 2024;18:1416494. PMID: 39099770; PMCID: PMC11294218. DOI: 10.3389/fncom.2024.1416494.
Abstract
EEG-based emotion recognition is becoming crucial in brain-computer interfaces (BCI). Most current research focuses on improving accuracy while neglecting the interpretability of models; we are committed to analyzing the impact of different brain regions and signal frequency bands on emotion generation based on graph structure. Therefore, this paper proposes a method named Dual Attention Mechanism Graph Convolutional Neural Network (DAMGCN). Specifically, we utilize graph convolutional neural networks to model the brain network as a graph and extract representative spatial features. Furthermore, we employ the self-attention mechanism of the Transformer model, which allocates greater electrode-channel and frequency-band weights to important brain regions and frequency bands. Visualization of the attention mechanism clearly demonstrates the weight allocation learned by DAMGCN. Evaluating our model on the DEAP, SEED, and SEED-IV datasets, we achieved the best results on SEED, with an accuracy of 99.42% in subject-dependent experiments and 73.21% in subject-independent experiments. These results are superior to the accuracies of most existing models for EEG-based emotion recognition.
Affiliation(s)
- Liya Huang
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing, China
4
Zhou H, Zhang J, Gao J, Zeng X, Min X, Zhan H, Zheng H, Hu H, Yang Y, Wei S. Identification of Methamphetamine Abusers Can Be Supported by EEG-Based Wavelet Transform and BiLSTM Networks. Brain Topogr 2024. PMID: 38955901. DOI: 10.1007/s10548-024-01062-2.
Abstract
Methamphetamine (MA) is a neuroactive drug that harms overall brain cognitive function when abused. On this basis, people can be divided into MA abusers and healthy individuals. However, few studies to date have investigated automatic detection of MA abusers based on neural activity. The purpose of this research was therefore to investigate the difference in neural activity between MA abusers and healthy persons and, accordingly, to discriminate MA abusers. First, we performed event-related potential (ERP) analysis to determine the time range of the P300. Then, the wavelet coefficients of the P300 component were extracted as the main features, along with time- and frequency-domain features within the selected P300 range, for classification. To optimize the feature set, F_score was used to remove features scoring below the average. Finally, a Bidirectional Long Short-Term Memory (BiLSTM) network was used for classification. The experimental results showed that the detection accuracy of the BiLSTM could reach 83.85%. In conclusion, the P300 component of the EEG signals of MA abusers differs from that of healthy persons. Based on this difference, this study proposes a novel approach to the prevention and diagnosis of MA abuse.
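The F_score criterion mentioned above (the two-class Fisher score) and the "remove features below the average score" rule can be sketched as follows; this is an illustrative implementation, not the authors' code:

```python
def f_score(pos, neg):
    """Fisher score of one feature for two classes (lists of values)."""
    mp = sum(pos) / len(pos)          # positive-class mean
    mn = sum(neg) / len(neg)          # negative-class mean
    m = (sum(pos) + sum(neg)) / (len(pos) + len(neg))  # overall mean
    num = (mp - m) ** 2 + (mn - m) ** 2
    den = (sum((x - mp) ** 2 for x in pos) / (len(pos) - 1)
           + sum((x - mn) ** 2 for x in neg) / (len(neg) - 1))
    return num / den

def select_features(pos_samples, neg_samples):
    """Keep feature indices whose F-score is at or above the mean score,
    mirroring the paper's below-average removal rule."""
    n_feat = len(pos_samples[0])
    scores = [f_score([s[j] for s in pos_samples], [s[j] for s in neg_samples])
              for j in range(n_feat)]
    avg = sum(scores) / len(scores)
    return [j for j, s in enumerate(scores) if s >= avg]
```

A discriminative feature (class means far apart relative to within-class variance) scores high; pure noise scores near zero and is dropped.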
Affiliation(s)
- Hui Zhou
- Key Laboratory of Cognitive Science of State Ethnic Affairs Commission, College of Biomedical Engineering, South-Central Minzu University, Minzu Road, Wuhan, 430070, China
- Hubei Key Laboratory of Medical Information Analysis & Tumor Diagnosis and Treatment, Minzu Road, Wuhan, 430070, China
- Jiaqi Zhang
- Hubei Key Laboratory of Medical Information Analysis & Tumor Diagnosis and Treatment, Minzu Road, Wuhan, 430070, China
- Junfeng Gao
- Key Laboratory of Cognitive Science of State Ethnic Affairs Commission, College of Biomedical Engineering, South-Central Minzu University, Minzu Road, Wuhan, 430070, China
- Hubei Key Laboratory of Medical Information Analysis & Tumor Diagnosis and Treatment, Minzu Road, Wuhan, 430070, China
- Xuanwei Zeng
- Key Laboratory of Cognitive Science of State Ethnic Affairs Commission, College of Biomedical Engineering, South-Central Minzu University, Minzu Road, Wuhan, 430070, China
- Xiangde Min
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Huimiao Zhan
- Key Laboratory of Cognitive Science of State Ethnic Affairs Commission, College of Biomedical Engineering, South-Central Minzu University, Minzu Road, Wuhan, 430070, China
- Hua Zheng
- Department of Anesthesiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Huaifei Hu
- Key Laboratory of Cognitive Science of State Ethnic Affairs Commission, College of Biomedical Engineering, South-Central Minzu University, Minzu Road, Wuhan, 430070, China
- Hubei Key Laboratory of Medical Information Analysis & Tumor Diagnosis and Treatment, Minzu Road, Wuhan, 430070, China
- Yong Yang
- School of Computer Science and Technology, Tiangong University, Tianjin, 300387, China
- Shuguang Wei
- Department of Psychology, College of Education, Hebei Normal University, Shijiazhuang, 050054, China
5
Zhang X, Xu K, Zhang L, Zhao R, Wei W, She Y. Optimal channel dynamic selection for Constructing lightweight Data EEG-based emotion recognition. Heliyon 2024;10:e30174. PMID: 38694096; PMCID: PMC11061731. DOI: 10.1016/j.heliyon.2024.e30174.
Abstract
At present, most methods for improving the accuracy of electroencephalogram (EEG)-based emotion recognition do so by increasing the number of channels and feature types. This uses big data to train the classification model, but it also increases code complexity and consumes a large amount of computing time. We propose Ant Colony Optimization with Convolutional Neural Networks and Long Short-Term Memory (ACO-CNN-LSTM), which can attain dynamically optimal channels for lightweight data. First, the time-domain EEG signal is transformed to the frequency domain by the Fast Fourier Transform (FFT), and the Differential Entropy (DE) of three frequency bands (α, β, and γ) is extracted as the feature data. Then, based on the DE feature dataset, ACO is employed to plan a path over the electrode locations on the brain map; the classification accuracy of the CNN-LSTM is used as the objective function for path determination, and the electrodes on the optimal path are used as the optimal channels. Next, the initial learning rate and batch size are matched to the data characteristics to obtain their best values. Finally, the SJTU Emotion EEG Dataset (SEED) is used for emotion recognition based on ACO-CNN-LSTM. The experimental results show that the average three-class accuracy (positive, neutral, negative) reaches 96.59% on the lightweight data produced by the proposed ACO-CNN-LSTM, while the computing time consumed is reduced: computational efficiency increases by 15.85% compared with the traditional CNN-LSTM method, and accuracy remains above 90% even when the data volume is reduced to 50%. In summary, the proposed ACO-CNN-LSTM achieves higher efficiency and accuracy.
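The differential entropy feature used here has a closed form under the usual Gaussian assumption on a band-filtered EEG segment, DE = ½·ln(2πeσ²); a minimal sketch of computing it for one segment:

```python
import math

def differential_entropy(band_signal):
    """DE of a band-pass-filtered EEG segment under the standard Gaussian
    assumption: DE = 0.5 * ln(2 * pi * e * variance)."""
    n = len(band_signal)
    mean = sum(band_signal) / n
    var = sum((x - mean) ** 2 for x in band_signal) / n  # biased variance
    return 0.5 * math.log(2 * math.pi * math.e * var)
```

Doubling the signal amplitude quadruples the variance and raises the DE by exactly ln 2, which is why DE behaves like a log-power feature per band.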
Affiliation(s)
- Xiaodan Zhang
- School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Kemeng Xu
- School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Lu Zhang
- School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Rui Zhao
- School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Wei Wei
- School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi, 710600, China
- Yichong She
- School of Life Sciences, Xidian University, Xi'an, Shaanxi, 710126, China
6
Qiao Y, Mu J, Xie J, Hu B, Liu G. Music emotion recognition based on temporal convolutional attention network using EEG. Front Hum Neurosci 2024;18:1324897. PMID: 38617132; PMCID: PMC11010638. DOI: 10.3389/fnhum.2024.1324897.
Abstract
Music is one of the primary ways to evoke human emotions. However, the feeling of music is subjective, making it difficult to determine which emotions a piece of music triggers in a given individual. In order to correctly identify the emotions elicited by different types of music, we first created an electroencephalogram (EEG) dataset evoked by four types of music (fear, happiness, calm, and sadness). Second, differential entropy features of the EEG were extracted, and the emotion recognition model CNN-SA-BiLSTM was established to extract temporal features of the EEG, with the global perception ability of the self-attention mechanism used to improve recognition performance. The effectiveness of the model was further verified by an ablation experiment. The classification accuracy of this method in the valence and arousal dimensions is 93.45% and 96.36%, respectively. By applying our method to the publicly available DEAP EEG dataset, we evaluated its generalization and reliability. In addition, we further investigated the effects of different EEG bands and multi-band combinations on music emotion recognition; the results confirm relevant neuroscience studies. Compared with other representative music emotion recognition work, this method has better classification performance and provides a promising framework for future research on brain-computer-interface-based emotion recognition systems.
Affiliation(s)
- Yinghao Qiao
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Jiajia Mu
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Jialan Xie
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Binghui Hu
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Guangyuan Liu
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
7
An Y, Hu S, Liu S, Li B. BiTCAN: A emotion recognition network based on saliency in brain cognition. Math Biosci Eng 2023;20:21537-21562. PMID: 38124609. DOI: 10.3934/mbe.2023953.
Abstract
In recent years, with the continuous development of artificial intelligence and brain-computer interfaces, emotion recognition based on electroencephalogram (EEG) signals has become a prosperous research direction. Motivated by saliency in brain cognition, we construct a new spatio-temporal convolutional attention network for emotion recognition named BiTCAN. First, the original EEG signals are de-baselined, and a two-dimensional mapping matrix sequence of the EEG signals is constructed by combining the electrode positions. Second, on the basis of this sequence, saliency features of brain cognition are extracted using a bi-hemisphere discrepancy module, and spatio-temporal features of the EEG signals are captured using a 3-D convolution module. Finally, the saliency and spatio-temporal features are fused in an attention module to further obtain the internal spatial relationships between brain regions, and the result is input into a classifier for emotion recognition. Extensive experiments on two public datasets, DEAP and SEED, show that the accuracy of the proposed algorithm exceeds 97% on both, which is superior to most existing emotion recognition algorithms.
Affiliation(s)
- Yanling An
- Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
- Shaohai Hu
- Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China
- Shuaiqi Liu
- College of Electronic and Information Engineering, Hebei University, Baoding 071000, China
- Machine Vision Technology Innovation Center of Hebei Province, Baoding 071000, China
- The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Bing Li
- The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
8
Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: A review. Comput Biol Med 2023;165:107450. PMID: 37708717. DOI: 10.1016/j.compbiomed.2023.107450.
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/position), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are directly generated by the central nervous system and are closely related to human emotions. EEG signals have high temporal resolution, which facilitates the evaluation of brain function, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of relevant articles. The paper explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research in emotion recognition using DL techniques. The paper concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García
- Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz
- Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
9
Gao C, Uchitomi H, Miyake Y. Cross-Sensory EEG Emotion Recognition with Filter Bank Riemannian Feature and Adversarial Domain Adaptation. Brain Sci 2023;13:1326. PMID: 37759927; PMCID: PMC10526196. DOI: 10.3390/brainsci13091326.
Abstract
Emotion recognition is crucial for understanding human affective states and has various applications. Electroencephalography (EEG), a non-invasive neuroimaging technique that captures brain activity, has gained attention in emotion recognition. However, existing EEG-based emotion recognition systems are limited to specific sensory modalities, hindering their applicability. Our study offers a comprehensive framework for EEG emotion recognition that overcomes sensory-specific limitations and cross-sensory challenges. We collected cross-sensory emotion EEG data using multimodal emotion simulations (three sensory modalities: audio, visual, and audio-visual, with two emotion states: pleasure or unpleasure). The proposed framework, the filter bank adversarial domain adaptation Riemann method (FBADR), leverages filter bank techniques and Riemannian tangent space methods for feature extraction from cross-sensory EEG data. Compared with plain Riemannian methods, the filter bank and adversarial domain adaptation improved average accuracy by 13.68% and 8.36%, respectively. Comparative analysis of classification results showed that the proposed FBADR framework achieved state-of-the-art cross-sensory emotion recognition performance, with an average accuracy of 89.01% ± 5.06%. Moreover, the robustness of the proposed methods ensures high cross-sensory recognition performance at a signal-to-noise ratio (SNR) ≥ 1 dB. Overall, our study contributes to EEG-based emotion recognition by providing a comprehensive framework that overcomes the limitations of sensory-oriented approaches and successfully tackles cross-sensory situations.
Affiliation(s)
- Chenguang Gao
- Department of Computer Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan
10
Qiu X, Wang S, Wang R, Zhang Y, Huang L. A multi-head residual connection GCN for EEG emotion recognition. Comput Biol Med 2023;163:107126. PMID: 37327757. DOI: 10.1016/j.compbiomed.2023.107126.
Abstract
Electroencephalography (EEG) emotion recognition is a crucial aspect of human-computer interaction. However, conventional neural networks have limitations in extracting profound EEG emotional features. This paper introduces a novel multi-head residual graph convolutional neural network (MRGCN) model that incorporates complex brain networks and graph convolution networks. The decomposition of multi-band differential entropy (DE) features exposes the temporal intricacy of emotion-linked brain activity, and the combination of short and long-distance brain networks can explore complex topological characteristics. Moreover, the residual-based architecture not only enhances performance but also augments classification stability across subjects. The visualization of brain network connectivity offers a practical technique for investigating emotional regulation mechanisms. The MRGCN model exhibits average classification accuracies of 95.8% and 98.9% for the DEAP and SEED datasets, respectively, highlighting its excellent performance and robustness.
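The graph-convolution step underlying models like MRGCN can be sketched as symmetric-normalized neighbor aggregation (Kipf-Welling style); here electrode channels are nodes, and `adj` is an assumed brain-network adjacency matrix rather than the paper's learned short/long-distance networks:

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step over EEG electrode nodes:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = len(adj)
    # Add self-loops so each node keeps its own features
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric degree normalization
    a = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbor features: A_hat @ H
    agg = [[sum(a[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear transform + ReLU: ReLU(agg @ W)
    out_dim = len(weight[0])
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(out_dim)] for i in range(n)]
```

With no edges each node's output depends only on its own features; a fully connected pair averages its features, which is the smoothing over connected brain regions that GCN-based emotion models exploit.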
Affiliation(s)
- Xiangkai Qiu
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shenglin Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Ruqing Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Yiling Zhang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Liya Huang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China; National and Local Joint Engineering Laboratory of RF Integration and Micro-Assembly Technology, Nanjing, China
11
Tripathi B, Sharma RK. EEG-Based Emotion Classification in Financial Trading Using Deep Learning: Effects of Risk Control Measures. Sensors (Basel) 2023;23:3474. PMID: 37050533; PMCID: PMC10098917. DOI: 10.3390/s23073474.
Abstract
Day traders in the financial markets are under constant pressure to make rapid decisions and limit capital losses in response to fluctuating market prices. As such, their emotional state can greatly influence their decision-making, leading to suboptimal outcomes in volatile market conditions. Despite the use of risk control measures such as stop loss and limit orders, it is unclear whether these strategies have a substantial impact on the emotional state of traders. In this paper, we aim to determine whether the use of limit orders and stop loss has a significant impact on the emotional state of traders compared to when these risk control measures are not applied. The paper provides a technical framework for valence-arousal classification in financial trading using EEG data and deep learning algorithms. We conducted two experiments: the first employed predetermined stop loss and limit orders to lock in profit and risk objectives, while the second employed neither. We also propose a novel hybrid neural architecture that integrates a Conditional Random Field with a CNN-BiLSTM model and employs Bayesian Optimization to systematically determine the optimal hyperparameters. The best model in the framework obtained classification accuracies of 85.65% and 85.05% in the two experiments, outperforming previous studies. Results indicate that emotions associated with Low Valence and High Arousal, such as fear and worry, were more prevalent in the second experiment, whereas emotions associated with High Valence and High Arousal, such as hope, were more prevalent in the first experiment employing limit orders and stop loss. In contrast, High Valence and Low Arousal (calmness) emotions were most prominent in the control group, which did not engage in trading activities. Our results demonstrate the efficacy of the proposed framework for emotion classification in financial trading and can aid the risk-related decision-making of day traders. Finally, we present the limitations of the current work and directions for future research.
12
Peng G, Zhao K, Zhang H, Xu D, Kong X. Temporal relative transformer encoding cooperating with channel attention for EEG emotion analysis. Comput Biol Med 2023;154:106537. PMID: 36682180. DOI: 10.1016/j.compbiomed.2023.106537.
Abstract
Electroencephalogram (EEG)-based emotion computing has become a hot topic of brain-computer fusion. EEG signals have inherent temporal and spatial characteristics, yet existing studies have not fully considered these two properties. In addition, the position encoding mechanism in the vanilla Transformer cannot effectively encode the continuous temporal character of emotion. A temporal relative (TR) encoding mechanism is proposed to encode temporal EEG signals for constructing temporality self-attention in the Transformer. To explore the contribution to emotion analysis of each EEG channel, corresponding to an electrode on the cerebral cortex, a channel-attention (CA) mechanism is presented. The temporality self-attention mechanism cooperates with the channel-attention mechanism to simultaneously utilize the temporal and spatial information of the preprocessed EEG signals. Exhaustive experiments are conducted on the DEAP dataset, including binary classification on valence, arousal, dominance, and liking. Furthermore, a discrete emotion category classification task is also conducted by mapping the dimensional annotations of DEAP into discrete emotion categories (5-class). Experimental results demonstrate that our model outperforms advanced methods on all classification tasks.
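The abstract does not give the TR encoding formula; one common way to make self-attention depend on temporal distance, sketched here as an assumption rather than the authors' exact mechanism, is to add a relative-offset bias b[i−j] to the attention logits before the softmax:

```python
import math

def attention_with_relative_bias(scores, rel_bias):
    """Add a bias b[i-j] to each attention logit so attention depends on the
    temporal distance between EEG time steps, then softmax each row.
    rel_bias has length 2n-1, indexed by offset i-j shifted by n-1."""
    n = len(scores)
    out = []
    for i in range(n):
        row = [scores[i][j] + rel_bias[i - j + n - 1] for j in range(n)]
        mx = max(row)                      # subtract max for numerical stability
        exps = [math.exp(v - mx) for v in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

Because the bias is a function of the offset i−j only, two pairs of time steps at the same temporal distance share the same bias, which is the "relative" property a vanilla absolute position encoding lacks.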
Affiliation(s)
- Guoqin Peng
- Yunnan University, Kunming, 650500, China; Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310013, China; Department of Psychiatry of Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China.
- Hao Zhang
- Yunnan University, Kunming, 650500, China
- Dan Xu
- Yunnan University, Kunming, 650500, China.
- Xiangzhen Kong
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, 310013, China; Department of Psychiatry of Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310013, China.
13
Akter S, Prodhan RA, Pias TS, Eisenberg D, Fresneda Fernandez J. M1M2: Deep-Learning-Based Real-Time Emotion Recognition from Neural Activity. Sensors (Basel) 2022; 22:8467. [PMID: 36366164 PMCID: PMC9654596 DOI: 10.3390/s22218467]
Abstract
Emotion recognition, or the ability of computers to interpret people's emotional states, is a very active research area with vast applications to improve people's lives. However, most image-based emotion recognition techniques are flawed, as humans can intentionally hide their emotions by changing facial expressions. Consequently, brain signals are being used to detect human emotions with improved accuracy, but most proposed systems demonstrate poor performance because EEG signals are difficult to classify using standard machine learning and deep learning techniques. This paper proposes two convolutional neural network (CNN) models (M1: a heavily parameterized CNN model; M2: a lightly parameterized CNN model) coupled with elegant feature extraction methods for effective recognition. In this study, the most popular EEG benchmark dataset, DEAP, is utilized with two of its labels, valence and arousal, for binary classification. We use the Fast Fourier Transform to extract frequency-domain features, convolutional layers for deep features, and complementary features to represent the dataset. The M1 and M2 CNN models achieve nearly perfect accuracies of 99.89% and 99.22%, respectively, outperforming every previous state-of-the-art model. We empirically demonstrate that the M2 model requires only 2 seconds of EEG signal for 99.22% accuracy, and it can achieve over 96% accuracy with only 125 milliseconds of EEG data for valence classification. Moreover, the proposed M2 model achieves 96.8% accuracy on valence using only 10% of the training dataset, demonstrating the proposed system's effectiveness. Documented implementation codes for every experiment are published for reproducibility.
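The FFT-based frequency-domain feature extraction this abstract mentions typically amounts to computing power per standard EEG band from short signal windows. The sketch below is a generic illustration, not the paper's pipeline; `band_powers` and the exact band boundaries are assumptions chosen to match common EEG conventions.

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Mean FFT power per EEG frequency band (illustrative helper).

    signal: 1-D array of samples from one channel.
    fs:     sampling rate in Hz.
    """
    if bands is None:
        bands = {"theta": (4, 8), "alpha": (8, 13),
                 "beta": (13, 30), "gamma": (30, 45)}
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

Feeding a window dominated by, say, 10 Hz activity yields a clearly larger alpha-band value than the other bands, which is exactly the kind of feature map a CNN front end can consume.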
Affiliation(s)
- Sumya Akter
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Rumman Ahmed Prodhan
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Tanmoy Sarkar Pias
- Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, USA
- David Eisenberg
- Department of Information Systems, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ 07102, USA
14
EEG Emotion Classification Network Based on Attention Fusion of Multi-Channel Band Features. Sensors (Basel) 2022; 22:5252. [PMID: 35890933 PMCID: PMC9318779 DOI: 10.3390/s22145252]
Abstract
Understanding learners’ emotions can help optimize instruction and further enable effective learning interventions. Most existing studies on student emotion recognition are based on multiple manifestations of external behavior and do not fully use physiological signals. In this context, on the one hand, a learning-emotion EEG dataset (LE-EEG) is constructed, which captures physiological signals reflecting the emotions of boredom, neutrality, and engagement during learning; on the other hand, an EEG emotion classification network based on attention fusion (ECN-AF) is proposed. Specifically, on the basis of key frequency band and channel selection, multi-channel band features are first extracted (using a multi-channel backbone network) and then fused (using attention units). To verify performance, the proposed model is tested on the open-access SEED dataset (N = 15) and the self-collected LE-EEG dataset (N = 45). The experimental results using five-fold cross-validation show the following: (i) on the SEED dataset, the proposed model achieves the highest accuracy of 96.45%, a 1.37% increase over the baseline models; and (ii) on the LE-EEG dataset, it achieves the highest accuracy of 95.87%, a 21.49% increase over the baseline models.
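The attention-fusion step this abstract describes — combining per-band feature vectors with learned attention weights — can be sketched as a softmax-weighted sum. This is a generic illustration under assumed shapes, not the ECN-AF implementation; `attention_fuse` and `query` are hypothetical names for whatever parameterization the attention units actually use.

```python
import numpy as np

def attention_fuse(band_feats, query):
    """Softmax attention over per-band feature vectors (illustrative sketch).

    band_feats: (n_bands, d) matrix, one feature vector per frequency band.
    query:      (d,) learned scoring vector.
    Returns the fused (d,) vector and the (n_bands,) attention weights.
    """
    scores = band_feats @ query
    scores = scores - scores.max()            # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    fused = weights @ band_feats              # convex combination of bands
    return fused, weights
```

Because the weights form a convex combination, the fused vector stays in the span of the band features while letting the network emphasize whichever bands (e.g. alpha, beta) carry the most emotional information.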