Chen Q, Mao X, Song Y, Wang K. An EEG-based emotion recognition method by fusing multi-frequency-spatial features under multi-frequency bands. J Neurosci Methods 2025;415:110360. PMID: 39778774. DOI: 10.1016/j.jneumeth.2025.110360.
Received: 06/30/2024. Revised: 12/23/2024. Accepted: 01/03/2025. Indexed: 01/11/2025.
Abstract
BACKGROUND
Recognizing changes in emotion is of great significance to a person's physical and mental health. At present, EEG-based emotion recognition methods focus mainly on the time or frequency domain and rarely exploit spatial information. The goal of this study is therefore to improve emotion recognition performance by integrating frequency-domain and spatial-domain information across multiple frequency bands.
NEW METHODS
First, EEG signals in four frequency bands are extracted, and three frequency-spatial features, differential entropy (DE), symmetric difference (SD), and symmetric quotient (SQ), are calculated separately. Second, according to the spatial distribution of the EEG electrodes, a series of brain maps is constructed from the three frequency-spatial features for each frequency band. Third, a Multi-Parallel-Input Convolutional Neural Network (MPICNN) is trained on the constructed brain maps to obtain the emotion recognition model. Finally, subject-dependent experiments are conducted on the DEAP and SEED-IV datasets.
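The abstract does not give implementation details, so the Python sketch below only illustrates the kind of frequency-spatial feature extraction and brain-map construction described above. The band limits (theta/alpha/beta/gamma), the 128 Hz sampling rate, the Gaussian-based DE formula, the symmetric left/right electrode pairing, and the 9x9 map grid are assumptions for illustration, not necessarily the authors' exact choices.

```python
# Minimal sketch of per-band DE/SD/SQ feature extraction and brain-map construction.
# Band ranges, sampling rate, electrode pairing, and the 9x9 grid are illustrative
# assumptions; the paper's exact settings may differ.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate (DEAP signals are commonly downsampled to 128 Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 45)}

def bandpass(x, low, high, fs=FS, order=4):
    # Zero-phase band-pass filter applied along the time axis.
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def differential_entropy(x):
    # For an approximately Gaussian band-limited signal, DE reduces to
    # 0.5 * log(2 * pi * e * variance), computed per channel.
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=-1))

def frequency_spatial_features(eeg, left_idx, right_idx):
    """eeg: (n_channels, n_samples). Returns {band: (DE, SD, SQ)} where SD and SQ
    compare symmetric left/right electrode pairs given by left_idx / right_idx."""
    feats = {}
    for band, (lo, hi) in BANDS.items():
        de = differential_entropy(bandpass(eeg, lo, hi))   # one value per channel
        sd = de[left_idx] - de[right_idx]                   # symmetric difference
        sq = de[left_idx] / de[right_idx]                   # symmetric quotient
        feats[band] = (de, sd, sq)
    return feats

def to_brain_map(values, positions, grid=(9, 9)):
    # Scatter per-channel (or per-pair) feature values onto a sparse 2-D map
    # according to assumed (row, col) electrode positions.
    m = np.zeros(grid)
    for v, (r, c) in zip(values, positions):
        m[r, c] = v
    return m
```

Under these assumptions, each per-band DE, SD, and SQ vector would be scattered onto its own brain map, and the resulting stack of maps would form the parallel inputs consumed by a network such as MPICNN.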
RESULTS
On the DEAP dataset, the average accuracy of four-class emotion recognition (high-valence high-arousal, high-valence low-arousal, low-valence high-arousal, and low-valence low-arousal) reaches 98.71 %. On the SEED-IV dataset, the average accuracy of four-class emotion recognition (happy, sad, neutral, and fear) reaches 92.55 %.
COMPARISON WITH EXISTING METHODS
The proposed method achieves the best classification performance compared with state-of-the-art methods on both four-class emotion recognition datasets.
CONCLUSIONS
The proposed EEG-based emotion recognition method fuses multi-frequency-spatial features across multiple frequency bands and effectively improves recognition performance compared with existing methods.