Fu B, Yu X, Jiang G, Sun N, Liu Y. Enhancing local representation learning through global-local integration with functional connectivity for EEG-based emotion recognition. Comput Biol Med 2024;179:108857. [PMID: 39018882] [DOI: 10.1016/j.compbiomed.2024.108857]
[Received: 04/22/2024] [Revised: 06/21/2024] [Accepted: 07/06/2024] [Indexed: 07/19/2024]
Abstract
Emotion recognition based on electroencephalogram (EEG) signals is crucial for understanding human affective states. Existing methods are limited in their ability to extract local features: the resulting local representations are weak and fail to capture emotional information comprehensively. In this study, a novel approach is proposed that enhances local representation learning through global-local integration with functional connectivity for EEG-based emotion recognition. By leveraging the functional connectivity of brain regions, EEG signals are divided into global embeddings, which represent comprehensive brain connectivity patterns over the entire recording, and local embeddings, which reflect dynamic interactions within specific brain functional networks at particular moments. First, a convolutional feature-extraction branch based on a residual network is designed to extract local features from the global embedding, and a multidimensional collaborative attention (MCA) module is introduced to further improve the representational power and accuracy of these local features. Second, the local features and the patch-embedded local embeddings are integrated in a feature coupling module (FCM), which uses hierarchical connections and enhanced cross-attention to couple region-level features, thereby strengthening local representation learning. Experimental results on three public datasets show that, compared with other methods, the proposed approach improves accuracy by 4.92% on DEAP, 1.11% on SEED, and 7.76% on SEED-IV, demonstrating superior performance on emotion recognition tasks.
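The cross-attention coupling described in the abstract can be illustrated with a brief sketch. The following is a minimal, hypothetical PyTorch example in which convolutional local features (queries) attend to patch-embedded local embeddings (keys/values); it is not the authors' implementation, and the module name, dimensions, head count, and single-layer residual structure are illustrative assumptions.

import torch
import torch.nn as nn

class CrossAttentionCoupling(nn.Module):
    """Hypothetical sketch of FCM-style cross-attention coupling (not the authors' code)."""
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        # Queries come from the convolutional branch on the global embedding;
        # keys/values come from the patch-embedded local embeddings.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, conv_feats, local_patches):
        # conv_feats:    (batch, n_regions, dim) local features per brain region
        # local_patches: (batch, n_patches, dim) patch-embedded local embeddings
        q = self.norm_q(conv_feats)
        kv = self.norm_kv(local_patches)
        coupled, _ = self.attn(q, kv, kv)   # cross-attention: queries attend to patches
        return conv_feats + coupled          # residual coupling of region-level features

# Toy usage with random tensors standing in for EEG-derived embeddings.
fcm = CrossAttentionCoupling(dim=64, num_heads=4)
conv_feats = torch.randn(2, 9, 64)       # e.g. 9 functional brain regions (assumed)
local_patches = torch.randn(2, 16, 64)   # e.g. 16 temporal patches (assumed)
out = fcm(conv_feats, local_patches)
print(out.shape)  # torch.Size([2, 9, 64])

In this sketch the residual addition keeps the original convolutional features while injecting information attended from the local embeddings, which is one plausible reading of "coupling region-level features" in the abstract.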