Jia X. Music Emotion Classification Method Based on Deep Learning and Explicit Sparse Attention Network. Computational Intelligence and Neuroscience 2022;2022:3920663. [PMID: 35774442; PMCID: PMC9239758; DOI: 10.1155/2022/3920663]
[Received: 04/20/2022] [Revised: 05/20/2022] [Accepted: 05/26/2022] [Indexed: 11/18/2022]
Abstract
To improve the accuracy of music emotion recognition and classification, this study combines an explicit sparse attention network with deep learning and proposes an effective emotion recognition and classification method for complex music data sets. First, the sample data set is preprocessed with fine-grained segmentation and related methods, providing a high-quality input sample set for the classification model. An explicit sparse attention network is then introduced into the deep learning network to reduce the influence of irrelevant information on the recognition results and to improve emotion classification performance on the music data set. Simulation experiments were conducted on a real-world data set. The results show that the proposed method achieves a recognition accuracy of 0.71 for happy emotions and 0.688 for sad emotions, demonstrating good music emotion recognition and classification ability.
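The abstract does not give the network's details, but explicit sparse attention is commonly realized by keeping only the top-k attention scores per query and masking the rest to negative infinity before the softmax, so irrelevant positions receive exactly zero weight. The sketch below illustrates that mechanism under those assumptions; the function name, shapes, and the choice of k are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def explicit_sparse_attention(Q, K, V, k=8):
    """Scaled dot-product attention that keeps only the top-k scores
    per query row (a common form of explicit sparse attention).
    All other positions are masked to -inf before the softmax, so
    irrelevant keys contribute zero weight to the output.
    Returns (output, attention_weights)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)               # (n_q, n_k) attention logits
    if k < scores.shape[-1]:
        # Threshold each row at its k-th largest score.
        kth = np.sort(scores, axis=-1)[:, -k][:, None]
        scores = np.where(scores >= kth, scores, -np.inf)
    weights = softmax(scores)                   # rows sum to 1, <= k nonzeros
    return weights @ V, weights

# Toy usage: 4 query frames attending over 10 key/value frames.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 16))
K = rng.standard_normal((10, 16))
V = rng.standard_normal((10, 16))
out, w = explicit_sparse_attention(Q, K, V, k=3)
```

With `k=3`, each query attends to at most three keys, which mirrors the paper's stated goal of suppressing irrelevant information before classification.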