51. Li D, Xie L, Chai B, Wang Z, Yang H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108740]
52. Goshvarpour A, Goshvarpour A. Innovative Poincaré's plot asymmetry descriptors for EEG emotion recognition. Cogn Neurodyn 2022; 16:545-559. [PMID: 35603058] [PMCID: PMC9120274] [DOI: 10.1007/s11571-021-09735-5]
Abstract
Given the importance of emotion recognition in both medical and non-medical applications, designing an automatic system has captured the attention of several scholars. Currently, EEG-based emotion recognition holds a special position, but it has not yet reached the desired accuracy rates. This study introduces novel EEG asymmetry measures to improve emotion recognition rates. Four emotional states were classified using the k-nearest neighbor (kNN), support vector machine, and Naïve Bayes classifiers. Feature selection was performed, and the effect of employing different numbers of top-ranked features on emotion recognition rates was assessed. To validate the efficiency of the proposed scheme, two public databases, the SJTU Emotion EEG Dataset-IV (SEED-IV) and the Database for Emotion Analysis using Physiological signals (DEAP), were evaluated. The experimental results indicated that kNN outperformed the other classifiers, with maximum accuracies of 95.49% and 98.63% on the SEED-IV and DEAP datasets, respectively. In conclusion, the results of the proposed novel EEG-asymmetry measures make the framework superior to state-of-the-art EEG emotion recognition approaches.
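As an illustrative note on the pipeline this abstract describes (rank features, keep the top ones, classify with kNN), here is a minimal sketch using scikit-learn. It is not the authors' code: the feature matrix, labels, and the number of retained features are placeholders.

```python
# Hedged sketch: top-k feature ranking followed by kNN classification,
# loosely mirroring the pipeline described in the abstract (not the authors' code).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 60))   # placeholder: trials x asymmetry features
y = rng.integers(0, 4, size=120)     # placeholder: four emotional states

pipe = Pipeline([
    ("rank", SelectKBest(score_func=f_classif, k=20)),  # keep top-ranked features
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```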
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Rezvan Campus, Phalestine Sq., Mashhad, Razavi Khorasan, Iran
53. Ji Y, Li F, Fu B, Li Y, Zhou Y, Niu Y, Zhang L, Chen Y, Shi G. Spatial-temporal Network for Fine-grained-level Emotion EEG Recognition. J Neural Eng 2022; 19. [PMID: 35523129] [DOI: 10.1088/1741-2552/ac6d7d]
Abstract
Electroencephalogram (EEG)-based affective computing brain-computer interfaces provide the capability for machines to understand human intentions. In practice, people are more concerned with the strength of a certain emotional state over a short period of time, which is referred to as fine-grained-level emotion in this paper. In this study, we built a fine-grained-level emotion EEG dataset that contains two coarse-grained emotions and four corresponding fine-grained-level emotions. To fully extract the features of the EEG signals, we propose a corresponding fine-grained emotion EEG network (FG-emotionNet) for spatial-temporal feature extraction. Each feature extraction layer is linked to the raw EEG signals to alleviate overfitting and ensure that the spatial features of each scale can be extracted from the raw signals. Moreover, all previous scale features are fused before the current spatial-feature layer to enhance the scale features in the spatial block. Additionally, long short-term memory is adopted as the temporal block to extract the temporal features based on the spatial features and to classify the category of fine-grained emotions. Subject-dependent and cross-session experiments demonstrated that the proposed method outperforms representative emotion recognition methods and methods with similar structures.
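For readers unfamiliar with the spatial-plus-temporal decomposition the abstract mentions, the sketch below shows a generic convolutional spatial block followed by an LSTM temporal block in PyTorch. It is not FG-emotionNet; channel count, classes, and input length are placeholders.

```python
# Hedged sketch of a generic spatial-temporal EEG model (Conv1d spatial block
# followed by an LSTM temporal block); NOT FG-emotionNet, only an illustration
# of the spatial + temporal split described above.
import torch
import torch.nn as nn

class SpatialTemporalNet(nn.Module):
    def __init__(self, n_channels=62, n_classes=4, hidden=64):
        super().__init__()
        # spatial block: mix EEG channels at each time step
        self.spatial = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=1),
            nn.BatchNorm1d(32),
            nn.ELU(),
        )
        # temporal block: LSTM over the time axis of the spatial features
        self.temporal = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        feats = self.spatial(x)              # (batch, 32, time)
        feats = feats.permute(0, 2, 1)       # (batch, time, 32)
        _, (h_n, _) = self.temporal(feats)   # last hidden state
        return self.classifier(h_n[-1])

logits = SpatialTemporalNet()(torch.randn(8, 62, 256))  # placeholder batch
print(logits.shape)                                      # torch.Size([8, 4])
```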
Affiliation(s)
- Youshuo Ji
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Fu Li
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Boxun Fu
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yang Li
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- YiJin Zhou
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yi Niu
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Lijian Zhang
- Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
- Yuanfang Chen
- Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
54. An A, Hoang H, Trang L, Vo Q, Tran L, Le T, Le A, McCormick A, Du Old K, Williams NS, Mackellar G, Nguyen E, Luong T, Nguyen V, Nguyen K, Ha H. Investigating the effect of Mindfulness-Based Stress Reduction on stress level and brain activity of college students. IBRO Neurosci Rep 2022; 12:399-410. [PMID: 35601693] [PMCID: PMC9121238] [DOI: 10.1016/j.ibneur.2022.05.004]
Abstract
Financial constraints usually hinder students, especially those in low- and middle-income countries (LMICs), from seeking mental health interventions. Hence, it is necessary to identify effective, affordable, and sustainable counter-stress measures for college students in the LMIC context. This study examines the sustained effects of mindfulness practice on the psychological outcomes and brain activity of students, especially when they are exposed to stressful situations. Here, we combined psychological and electrophysiological (EEG) methods to investigate the sustained effects of an 8-week standardized Mindfulness-Based Stress Reduction (MBSR) intervention on the brain activity of college students. We found that the Test group showed a decrease in negative emotional states after the intervention, whereas the Control group showed no statistically significant change, as indicated by the Perceived Stress Scale (PSS) (33% reduction in the negative score) and the Depression, Anxiety, Stress Scale (DASS-42) scores (nearly 40% reduction across the three subscale scores). Spectral analysis of the EEG data showed that this intervention is longitudinally associated with increased frontal and occipital lobe alpha band power. Additionally, the increase in alpha power was more prevalent when stress was induced in the Test group by cognitive tasks, suggesting that practicing MBSR might enhance the practitioners’ tolerance of negative emotional states. In conclusion, the MBSR intervention led to a sustained reduction of negative emotional states as measured by both psychological and electrophysiological metrics, which supports the adoption of MBSR as an effective and sustainable stress-countering approach for students in LMICs.
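As a note on the spectral analysis mentioned in this abstract, the snippet below estimates alpha-band (8-12 Hz) power from a single EEG channel with Welch's method. It is only a sketch under assumed parameters; the sampling rate and signal are placeholders, not the study's data.

```python
# Hedged sketch: alpha-band power of one EEG channel via Welch's method,
# one ingredient of the spectral analysis described above (placeholder data).
import numpy as np
from scipy.signal import welch

fs = 256                                   # placeholder sampling rate (Hz)
eeg = np.random.randn(fs * 60)             # placeholder: 60 s of one channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[alpha], freqs[alpha])   # integrate PSD over the band
print(f"alpha-band power: {alpha_power:.4f}")
```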
55. Effects of facial expression and gaze interaction on brain dynamics during a working memory task in preschool children. PLoS One 2022; 17:e0266713. [PMID: 35482742] [PMCID: PMC9049575] [DOI: 10.1371/journal.pone.0266713]
Abstract
Executive functioning in preschool children is important for building social relationships during the early stages of development. We investigated the brain dynamics of preschool children during an attention-shifting task involving congruent and incongruent gaze directions in emotional facial expressions (neutral, angry, and happy faces). While ignoring distracting stimuli (gaze direction and expression), participants (17 preschool children and 17 young adults) were required to detect and memorize the location (left or right) of a target symbol as a simple working memory task (i.e., no general priming paradigm in which a target appears after a cue stimulus). For the preschool children, the frontal late positive response and the central and parietal P3 responses increased for angry faces. In addition, parietal midline α (Pmα) power, which indexes shifts in attention level, decreased mainly during the encoding of a target for angry faces, possibly explaining the absence of a congruency effect on reaction times (i.e., no faster response in the congruent than in the incongruent gaze condition). For the adults, the parietal P3 response and frontal midline θ (Fmθ) power increased mainly during the encoding period for incongruent gaze shifts in happy faces. Pmα power for happy faces decreased for incongruent gaze during the encoding period and increased for congruent gaze during the first retention period. These results suggest that adults can quickly shift attention to a target in happy faces, allocating sufficient attentional resources to ignore incongruent gazes and detect a target, which can attenuate a congruency effect on reaction times. By contrast, possibly because of underdeveloped brain activity, preschool children did not show the happy-face superiority effect and may be more responsive to angry faces. These observations point to a key consideration for building better relationships between developing preschoolers and their parents and educators: incorporating nonverbal communication into social and emotional learning.
56. Fusion of EEG-Based Activation, Spatial, and Connection Patterns for Fear Emotion Recognition. Comput Intell Neurosci 2022; 2022:3854513. [PMID: 35463262] [PMCID: PMC9020909] [DOI: 10.1155/2022/3854513]
Abstract
At present, emotion recognition based on electroencephalograms (EEGs) has attracted increasing attention. Current studies of affective brain-computer interfaces (BCIs) focus on the recognition of happiness and sadness using brain activation patterns. Fear recognition, which involves brain activity across different spatial distributions and different brain functional networks, has scarcely been investigated. In this study, we propose a multifeature fusion method combining energy activation, spatial distribution, and brain functional connection network (BFCN) features for fear emotion recognition. The affective brain pattern was identified not only by the power activation features of differential entropy (DE) but also by the spatial distribution features of the common spatial pattern (CSP) and the EEG phase synchronization features of the phase locking value (PLV). A total of 15 healthy subjects took part in the experiment, and the average accuracy rate was 85.00% ± 8.13%. The experimental results showed that the fear emotions of the subjects were fully stimulated and effectively identified. The proposed fusion method for fear recognition was thus validated and is of great significance for the development of effective emotional BCI systems.
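To make the PLV connectivity feature mentioned above concrete, here is a minimal sketch of the phase locking value between two channels in one band. It is an illustration only; the band edges, sampling rate, and signals are assumptions, not the study's settings.

```python
# Hedged sketch: phase locking value (PLV) between two EEG channels in a band,
# one of the connectivity features named in the abstract (placeholder signals).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(4.0, 8.0)):
    """Phase locking value between two 1-D signals within a frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 200
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(f"theta-band PLV: {plv(x, y, fs):.3f}")   # near 1 for strongly locked signals
```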
57. Balconi M, Cassioli F. "We will be in touch". A neuroscientific assessment of remote vs. face-to-face job interviews via EEG hyperscanning. Soc Neurosci 2022; 17:209-224. [PMID: 35395918] [DOI: 10.1080/17470919.2022.2064910]
Abstract
In recent decades, improving remote communication in companies has been a compelling issue. With the spread of the SARS-CoV-2 pandemic, this phenomenon has accelerated. Despite this, little to no research considering neurocognitive and emotional systems has been conducted on job interviews, a critical organizational phase that contributes significantly to a company's long-term success. In this study, we aimed to explore the emotional and cognitive processes related to different phases of a job interview (introductory, attitudinal, technical, and conclusion) under two conditions, face-to-face and remote, by simultaneously gathering EEG (frequency bands: alpha, beta, delta, and theta) and autonomic data (skin conductance level, SCL; skin conductance response, SCR; and heart rate, HR) in both candidates and recruiters. The data highlighted a generalized alpha desynchronization during the job interview interaction. Recruiters showed increased frontal theta activity, which is connected to socio-emotional situations and emotional processing. In addition, the results showed that the face-to-face condition was related to increased SCL and theta power over the central brain area, associated with learning processes via the mid-brain dopamine system and the anterior cingulate cortex. Furthermore, we found a higher HR in the candidates. The present results call for re-examining the impact of information technology in organizations and open up translational opportunities.
Affiliation(s)
- Michela Balconi
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123, Milano, Italy; Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123, Milano, Italy
- Federico Cassioli
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123, Milano, Italy; Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123, Milano, Italy
58. Chang H, Zong Y, Zheng W, Tang C, Zhu J, Li X. Depression Assessment Method: An EEG Emotion Recognition Framework Based on Spatiotemporal Neural Network. Front Psychiatry 2022; 12:837149. [PMID: 35368726] [PMCID: PMC8967371] [DOI: 10.3389/fpsyt.2021.837149]
Abstract
The main characteristic of depression is emotional dysfunction, manifested by increased levels of negative emotions and decreased levels of positive emotions. Therefore, accurate emotion recognition is an effective way to assess depression. Among the various signals used for emotion recognition, the electroencephalogram (EEG) signal has attracted widespread attention due to its multiple advantages, such as the rich spatiotemporal information in multi-channel EEG signals. First, we use filtering and Euclidean alignment for data preprocessing. For feature extraction, we use the short-time Fourier transform and the Hilbert-Huang transform to extract time-frequency features, and convolutional neural networks to extract spatial features. Finally, a bidirectional long short-term memory network explores the temporal relationships. Before performing the convolution operation, the EEG features are converted into 3D tensors according to the unique topology of the EEG channels. This study achieved good results on two emotion databases: SEED and the Emotional BCI dataset of the 2020 World Robot Competition. We applied this method to EEG-based depression recognition and achieved a recognition rate of more than 70% under five-fold cross-validation. In addition, the subject-independent protocol on the SEED data achieved a state-of-the-art recognition rate, which exceeds existing research methods. We propose a novel EEG emotion recognition framework for depression detection, which provides a robust algorithm for real-time clinical depression detection based on EEG.
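The Euclidean alignment step named in this abstract is commonly implemented by whitening each trial with the inverse square root of the session-mean covariance. The sketch below shows that idea under assumed array shapes; it is not the authors' preprocessing code.

```python
# Hedged sketch of Euclidean alignment (EA): each trial is multiplied by the
# inverse square root of the mean trial covariance (placeholder epochs).
import numpy as np

def inv_sqrtm(mat):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def euclidean_align(trials):
    """trials: array of shape (n_trials, n_channels, n_times)."""
    covs = np.array([X @ X.T / X.shape[1] for X in trials])   # per-trial covariance
    r_inv_sqrt = inv_sqrtm(covs.mean(axis=0))                  # reference matrix
    return np.array([r_inv_sqrt @ X for X in trials])

trials = np.random.randn(40, 32, 512)        # placeholder EEG epochs
aligned = euclidean_align(trials)
print(aligned.shape)                          # (40, 32, 512)
```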
Affiliation(s)
- Hongli Chang
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- School of Information Science and Engineering, Southeast University, Nanjing, China
- Yuan Zong
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Wenming Zheng
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Chuangao Tang
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Jie Zhu
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- School of Information Science and Engineering, Southeast University, Nanjing, China
- Xuejun Li
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
59. Two-dimensional CNN-based distinction of human emotions from EEG channels selected by multi-objective evolutionary algorithm. Sci Rep 2022; 12:3523. [PMID: 35241745] [PMCID: PMC8894479] [DOI: 10.1038/s41598-022-07517-5]
Abstract
In this study we explore how different levels of emotional intensity (Arousal) and pleasantness (Valence) are reflected in electroencephalographic (EEG) signals. We performed the experiments on EEG data of 32 subjects from the DEAP public dataset, where the subjects were stimulated using 60 s videos to elicit different levels of Arousal/Valence and then self-reported the rating from 1 to 9 using the Self-Assessment Manikin (SAM). The EEG data were preprocessed and used as input to a convolutional neural network (CNN). First, the 32 EEG channels were used to compute the maximum accuracy level obtainable for each subject as well as to create a single model using data from all the subjects. The experiment was repeated using one channel at a time, to see whether specific channels contain more information to discriminate between low vs high arousal/valence. The results indicate that using one channel yields lower accuracy than using all 32 channels. An optimization process for EEG channel selection was then designed with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), with the objective of obtaining optimal channel combinations with high recognition accuracy. The genetic algorithm evaluates candidate combinations using a chromosome representation of all 32 channels, and the EEG data from each chromosome in the different populations are tested iteratively while solving two unconstrained objectives: maximizing classification accuracy and reducing the number of EEG channels required for classification. The best combinations obtained from the Pareto front suggest that as few as 8–10 channels can fulfill this condition and provide the basis for a lighter design of EEG systems for emotion recognition. In the best case, the results show accuracies of up to 1.00 for low vs high arousal using eight EEG channels, and 1.00 for low vs high valence using only two EEG channels. These results are encouraging for research and healthcare applications that will require automatic emotion recognition with wearable EEG.
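The two-objective channel-selection idea (maximize accuracy, minimize channel count) can be illustrated with a much simpler search than NSGA-II. The sketch below scores random channel subsets with a kNN stand-in classifier and keeps the non-dominated ones; everything here (data, classifier, subset generation) is a placeholder rather than the paper's method.

```python
# Hedged, simplified illustration of two-objective channel selection: random
# subsets + a kNN scorer stand in for the NSGA-II search and CNN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_channels = 32
X = rng.standard_normal((200, n_channels))      # placeholder: one feature per channel
y = rng.integers(0, 2, size=200)                # placeholder: low vs high arousal

def evaluate(mask):
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc, int(mask.sum())

candidates = [m for m in (rng.random(n_channels) < 0.3 for _ in range(50)) if m.any()]
results = [evaluate(m) for m in candidates]

# keep the non-dominated (Pareto-optimal) subsets: higher accuracy, fewer channels
pareto = [
    (acc, k) for acc, k in results
    if not any(a2 >= acc and k2 <= k and (a2, k2) != (acc, k) for a2, k2 in results)
]
print(sorted(pareto, key=lambda p: p[1]))
```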
60. Smit EA, Milne AJ, Escudero P. Music Perception Abilities and Ambiguous Word Learning: Is There Cross-Domain Transfer in Nonmusicians? Front Psychol 2022; 13:801263. [PMID: 35401340] [PMCID: PMC8984940] [DOI: 10.3389/fpsyg.2022.801263]
Abstract
Perception of music and speech is based on similar auditory skills, and it is often suggested that those with enhanced music perception skills may perceive and learn novel words more easily. The current study tested whether music perception abilities are associated with novel word learning in an ambiguous learning scenario. Using a cross-situational word learning (CSWL) task, nonmusician adults were exposed to word-object pairings between eight novel words and visual referents. Novel words were either non-minimal pairs differing in all sounds or minimal pairs differing in their initial consonant or vowel. In order to be successful in this task, learners need to be able to correctly encode the phonological details of the novel words and have sufficient auditory working memory to remember the correct word-object pairings. Using the Mistuning Perception Test (MPT) and the Melodic Discrimination Test (MDT), we measured learners’ pitch perception and auditory working memory. We predicted that those with higher MPT and MDT values would perform better in the CSWL task and in particular for novel words with high phonological overlap (i.e., minimal pairs). We found that higher musical perception skills led to higher accuracy for non-minimal pairs and minimal pairs differing in their initial consonant. Interestingly, this was not the case for vowel minimal pairs. We discuss the results in relation to theories of second language word learning such as the Second Language Perception model (L2LP).
Affiliation(s)
- Eline A. Smit
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- Andrew J. Milne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
61. Chabin T, Gabriel D, Comte A, Haffen E, Moulin T, Pazart L. Interbrain emotional connection during music performances is driven by physical proximity and individual traits. Ann N Y Acad Sci 2021; 1508:178-195. [PMID: 34750828] [DOI: 10.1111/nyas.14711]
Abstract
How musical emotions and the pleasure derived from music, regardless of musical valence, can be shared between individuals is a fascinating question, and investigating it can shed light on the function of musical reward. We carried out our investigations in a natural setting during an international competition for orchestra conductors. Participants (n = 15) used a dedicated smartphone app to report their subjective emotional experiences in real time while we recorded their cerebral activity using electroencephalography and their electrodermal activity. The overall real-time behavioral ratings suggest a possible social influence on reported and felt pleasure: the physically closer the participants, the more similar their reported pleasure. By calculating interindividual cerebral coherence (n = 21 pairs), we showed that when people simultaneously reported either high or low pleasure, their cerebral activities were closer than for simultaneous neutral pleasure reports. Participants' skin conductance levels were also more coupled when they simultaneously reported higher emotional degrees. More importantly, the participants who were physically closer had higher cerebral coherence, but only when they simultaneously reported a high level of pleasure. We propose that emotional contagion and/or emotional resonance mechanisms could explain why a form of "emotional connecting force" arises between people during shared appraisal situations.
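As an illustrative note on the inter-individual coherence analysis this abstract refers to, the snippet below computes magnitude-squared coherence between one channel from each of two participants. It is a simplified stand-in under assumed parameters, not the study's pipeline.

```python
# Hedged sketch: magnitude-squared coherence between two participants' channels
# as a simple proxy for inter-brain coherence (placeholder signals and rate).
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 20, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)                 # common 10 Hz component
eeg_a = shared + 0.8 * np.random.randn(t.size)      # participant A channel
eeg_b = shared + 0.8 * np.random.randn(t.size)      # participant B channel

freqs, coh = coherence(eeg_a, eeg_b, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 12)
print(f"mean alpha-band inter-brain coherence: {coh[alpha].mean():.3f}")
```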
Affiliation(s)
- Thibault Chabin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
- Damien Gabriel
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
- Alexandre Comte
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
- Emmanuel Haffen
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
- Thierry Moulin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
- Lionel Pazart
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France; Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Besançon, France; Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
62. Apicella A, Arpaia P, Mastrati G, Moccaldi N. EEG-based detection of emotional valence towards a reproducible measurement of emotions. Sci Rep 2021; 11:21615. [PMID: 34732756] [PMCID: PMC8566577] [DOI: 10.1038/s41598-021-00812-7]
Abstract
A methodological contribution to a reproducible Measurement of Emotions for an EEG-based system is proposed. Emotional Valence detection is the suggested use case. Valence detection occurs along the interval scale theorized by the Circumplex Model of emotions. The binary choice, positive valence vs negative valence, represents a first step towards the adoption of a metric scale with a finer resolution. EEG signals were acquired through an 8-channel dry electrode cap. An implicit, more controlled EEG paradigm was employed to elicit emotional valence through the passive viewing of standardized visual stimuli (i.e., the Oasis dataset) in 25 volunteers without depressive disorders. Results from the Self-Assessment Manikin questionnaire confirmed the compatibility of the experimental sample with that of Oasis. Two different strategies for feature extraction were compared: (i) based on a priori knowledge (i.e., Hemispheric Asymmetry Theories), and (ii) automated (i.e., a pipeline of a custom 12-band Filter Bank and Common Spatial Pattern). An average within-subject accuracy of 96.1% was obtained by a shallow Artificial Neural Network, while k-Nearest Neighbors achieved a cross-subject accuracy of 80.2%.
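In the spirit of the "automated" filter-bank-plus-CSP strategy described above, the sketch below band-passes epochs in a few sub-bands and applies CSP per band before concatenating the log-variance features. The bands, epoch shapes, and labels are placeholders, and this is not the authors' 12-band implementation.

```python
# Hedged sketch of a filter-bank + CSP feature extractor (placeholder data;
# mne is assumed to be installed for its CSP implementation).
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP

rng = np.random.default_rng(0)
fs = 128
epochs = rng.standard_normal((60, 8, fs * 4))   # placeholder: trials x channels x time
labels = rng.integers(0, 2, size=60)            # positive vs negative valence

bands = [(4, 8), (8, 12), (12, 16), (16, 20)]   # placeholder sub-bands
features = []
for lo, hi in bands:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    csp = CSP(n_components=4, log=True)         # log-variance of CSP components
    features.append(csp.fit_transform(filtered, labels))

X = np.concatenate(features, axis=1)            # filter-bank CSP feature matrix
print(X.shape)                                   # (60, 16)
```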
Affiliation(s)
- Andrea Apicella
- Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Pasquale Arpaia
- Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Interdepartmental Center for Research on Management and Innovation in Healthcare (CIRMIS), University of Naples Federico II, Naples, Italy
- Giovanna Mastrati
- Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Nicola Moccaldi
- Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
63. Xie L, Lu C, Liu Z, Yan L, Xu T. Study of Auditory Brain Cognition Laws-Based Recognition Method of Automobile Sound Quality. Front Hum Neurosci 2021; 15:663049. [PMID: 34690716] [PMCID: PMC8533456] [DOI: 10.3389/fnhum.2021.663049]
Abstract
Research shows that subjective feelings such as emotion and fatigue can be objectively reflected in electroencephalography (EEG) physiological signals. Thus, an EEG-based evaluation method for exploring auditory brain cognition laws is introduced in this study. The brain cognition laws are summarized by analyzing EEG power topographic maps under the stimulation of three kinds of automobile sound quality, namely comfort, powerfulness, and acceleration. Then, the EEG features of the subjects are classified with a machine learning algorithm, by which the recognition of diversified automobile sounds is realized. In addition, Kalman smoothing and the minimal redundancy maximal relevance (mRMR) algorithm are used to improve recognition accuracy. The results show that there are differences in the neural characteristics of diversified automobile sound quality, with a positive correlation between EEG energy and sound intensity. Furthermore, by using Kalman smoothing and the mRMR algorithm, recognition accuracy is improved and the amount of computation is reduced. This study provides a novel idea and method for exploring the cognitive laws of automobile sound quality from the field of brain-computer interface technology.
Affiliation(s)
- Liping Xie
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, China; Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, China
- Chihua Lu
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, China; Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, China
- Zhien Liu
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, China; Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, China
- Lirong Yan
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, China; Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, China
- Tao Xu
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, China; Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, China
64. Different theta connectivity patterns underlie pleasantness evoked by familiar and unfamiliar music. Sci Rep 2021; 11:18523. [PMID: 34535731] [PMCID: PMC8448873] [DOI: 10.1038/s41598-021-98033-5]
Abstract
Music-evoked pleasantness has been extensively reported to be modulated by familiarity. Nevertheless, while the brain temporal dynamics underlying the process of giving value to music are beginning to be understood, little is known about how familiarity might modulate the oscillatory activity associated with music-evoked pleasantness. The goal of the present experiment was to study the influence of familiarity in the relation between theta phase synchronization and music-evoked pleasantness. EEG was recorded from 22 healthy participants while they were listening to both familiar and unfamiliar music and rating the experienced degree of evoked pleasantness. By exploring interactions, we found that right fronto-temporal theta synchronization was positively associated with music-evoked pleasantness when listening to unfamiliar music. On the contrary, inter-hemispheric temporo-parietal theta synchronization was positively associated with music-evoked pleasantness when listening to familiar music. These results shed some light on the possible oscillatory mechanisms underlying fronto-temporal and temporo-parietal connectivity and their relationship with music-evoked pleasantness and familiarity.
65. Musical components important for the Mozart K448 effect in epilepsy. Sci Rep 2021; 11:16490. [PMID: 34531410] [PMCID: PMC8446029] [DOI: 10.1038/s41598-021-95922-7]
Abstract
There is growing evidence for the efficacy of music, specifically Mozart’s Sonata for Two Pianos in D Major (K448), at reducing ictal and interictal epileptiform activity. Nonetheless, little is known about the mechanism underlying this beneficial “Mozart K448 effect” for persons with epilepsy. Here, we measured the influence that K448 had on intracranial interictal epileptiform discharges (IEDs) in sixteen subjects undergoing intracranial monitoring for refractory focal epilepsy. We found reduced IEDs during the original version of K448 after at least 30 s of exposure. Nonsignificant IED rate reductions were witnessed in all brain regions apart from the bilateral frontal cortices, where we observed increased frontal theta power during transitions from prolonged musical segments. All other presented musical stimuli were associated with nonsignificant IED alterations. These results suggest that the “Mozart K448 effect” is dependent on the duration of exposure and may preferentially modulate activity in frontal emotional networks, providing insight into the mechanism underlying this response. Our findings encourage the continued evaluation of Mozart’s K448 as a noninvasive, non-pharmacological intervention for refractory epilepsy.
66. Spence C. Musical Scents: On the Surprising Absence of Scented Musical/Auditory Events, Entertainments, and Experiences. Iperception 2021; 12:20416695211038747. [PMID: 34589196] [PMCID: PMC8474342] [DOI: 10.1177/20416695211038747]
Abstract
The matching of scents with music is both one of the most natural (or intuitive) of crossmodal correspondences and, at the same time, one of the least frequently explored combinations of senses in an entertainment and multisensory experiential design context. This narrative review highlights the various occasions over the last century or two when scents and sounds have coincided, and the various motivations behind those who have chosen to bring these senses together: this has included everything from the masking of malodour to the matching of the semantic meaning or arousal potential of the two senses, through to the longstanding and recently re-emerging interest in the crossmodal correspondences (now that they have been distinguished from the superficially similar phenomenon of synaesthesia, with which they were previously often confused). As such, there exist a number of ways in which these two senses can be incorporated into meaningful multisensory experiences that can potentially resonate with the public. Having explored the deliberate combination of scent and music (or sound) in everything from "scent-sory" marketing through to fragrant discos and olfactory storytelling, I end by summarizing some of the opportunities around translating such unusual multisensory experiences from the public to the private sphere. This will likely be via the widespread dissemination of sensory apps that promise to convert (or translate) from one sense (likely scent) to another (e.g., music), as has, for example, already started to occur in the world of music selections to match the flavour of specific wines.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
67. Rahman MA, Anjum A, Milu MMH, Khanam F, Uddin MS, Mollah MN. Emotion recognition from EEG-based relative power spectral topography using convolutional neural network. Array 2021. [DOI: 10.1016/j.array.2021.100072]
68. Islam MR, Islam MM, Rahman MM, Mondal C, Singha SK, Ahmad M, Awal A, Islam MS, Moni MA. EEG Channel Correlation Based Model for Emotion Recognition. Comput Biol Med 2021; 136:104757. [PMID: 34416570] [DOI: 10.1016/j.compbiomed.2021.104757]
Abstract
Emotion recognition using Artificial Intelligence (AI) is a fundamental prerequisite for improving Human-Computer Interaction (HCI). Recognizing emotion from the Electroencephalogram (EEG) has been globally accepted in many applications such as intelligent thinking, decision-making, social communication, feeling detection, affective computing, etc. Nevertheless, because the EEG signal shows very low amplitude variation over time, proper recognition of emotion from this signal is challenging. Usually, considerable effort is required to identify the proper feature or feature set for an effective feature-based emotion recognition system. To reduce the manual effort of feature extraction, we propose a deep machine-learning-based model with a Convolutional Neural Network (CNN). First, the one-dimensional EEG data were converted into Pearson's Correlation Coefficient (PCC) featured images representing the channel correlations of the EEG sub-bands. The images were then fed into the CNN model to recognize emotion. Two protocols were conducted: protocol-1 to identify two levels, and protocol-2 to recognize three levels, of valence and arousal. We found that using only the upper triangular portion of the PCC featured images reduced the computational complexity and memory requirements without hampering the model accuracy. Maximum accuracies of 78.22% on valence and 74.92% on arousal were obtained on the internationally authorized DEAP dataset.
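To make the PCC feature construction concrete, here is a minimal sketch that band-filters multichannel EEG, builds the channel-by-channel correlation matrix, and keeps only its upper triangle. The band edges, channel count, and data are assumptions for illustration, not the authors' settings.

```python
# Hedged sketch: channel-correlation "image" of band-filtered EEG and its
# upper-triangle feature vector, mirroring the PCC features described above.
import numpy as np
from scipy.signal import butter, filtfilt

fs, n_channels = 128, 32
eeg = np.random.randn(n_channels, fs * 60)            # placeholder: channels x time

def band_pcc_image(data, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, data, axis=-1)
    return np.corrcoef(filtered)                       # channels x channels PCC matrix

alpha_img = band_pcc_image(eeg, fs, 8, 12)
upper = alpha_img[np.triu_indices(n_channels, k=1)]    # upper-triangle feature vector
print(alpha_img.shape, upper.shape)                    # (32, 32) (496,)
```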
Affiliation(s)
- Md Rabiul Islam
- Electrical and Electronic Engineering, Bangladesh Army University of Engineering & Technology, Natore, 6431, Bangladesh; Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Md Milon Islam
- Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Md Mustafizur Rahman
- Electrical and Electronic Engineering, Jashore University of Science and Technology, Jashore, 7408, Bangladesh.
- Chayan Mondal
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Suvojit Kumar Singha
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Mohiuddin Ahmad
- Electrical and Electronic Engineering, Khulna University of Engineering & Technology, Khulna, 9203, Bangladesh.
- Abdul Awal
- Electronics and Communication Engineering, Khulna University, Khulna, 9208, Bangladesh.
- Md Saiful Islam
- School of Information and Communication Technology, Griffith University, Gold Coast, Australia.
- Mohammad Ali Moni
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia.
69. Li Y, Fu B, Li F, Shi G, Zheng W. A novel transferability attention neural network model for EEG emotion recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.02.048]
70. Tian Z, Huang D, Zhou S, Zhao Z, Jiang D. Personality first in emotion: a deep neural network based on electroencephalogram channel attention for cross-subject emotion recognition. R Soc Open Sci 2021; 8:201976. [PMID: 34457321] [PMCID: PMC8371362] [DOI: 10.1098/rsos.201976]
Abstract
In recent years, more and more researchers have focused on emotion recognition methods based on electroencephalogram (EEG) signals. However, most studies only consider the spatio-temporal characteristics of the EEG and build models on those features, without considering personality factors, let alone the potential correlations between different subjects. Given the particularity of emotions, different individuals may have different subjective responses to the same physical stimulus. Therefore, emotion recognition methods based on EEG signals should tend to be personalized. This paper models personalized EEG emotion recognition at the macro and micro levels. At the macro level, we use personality characteristics to classify individuals' personalities from the perspective of 'birds of a feather flock together'. At the micro level, we employ deep learning models to extract the spatio-temporal feature information of the EEG. To evaluate the effectiveness of our method, we conducted an EEG emotion recognition experiment on the ASCERTAIN dataset. Our experimental results demonstrate that the recognition accuracy of the proposed method is 72.4% and 75.9% on valence and arousal, respectively, which is 10.2% and 9.1% higher than without personalization.
Affiliation(s)
- Zhihang Tian
- Department of Computer Science, School of Engineering, Shantou University, Shantou 515063, People’s Republic of China
- Key Laboratory of Intelligent Manufacturing Technology (Ministry of Education), Shantou University, Shantou 515063, People’s Republic of China
- Dongmin Huang
- Department of Computer Science, School of Engineering, Shantou University, Shantou 515063, People’s Republic of China
- Key Laboratory of Intelligent Manufacturing Technology (Ministry of Education), Shantou University, Shantou 515063, People’s Republic of China
- Sijin Zhou
- Department of Computer Science, School of Engineering, Shantou University, Shantou 515063, People’s Republic of China
- Key Laboratory of Intelligent Manufacturing Technology (Ministry of Education), Shantou University, Shantou 515063, People’s Republic of China
- Zhidan Zhao
- Department of Computer Science, School of Engineering, Shantou University, Shantou 515063, People’s Republic of China
- Key Laboratory of Intelligent Manufacturing Technology (Ministry of Education), Shantou University, Shantou 515063, People’s Republic of China
- Dazhi Jiang
- Department of Computer Science, School of Engineering, Shantou University, Shantou 515063, People’s Republic of China
- Key Laboratory of Intelligent Manufacturing Technology (Ministry of Education), Shantou University, Shantou 515063, People’s Republic of China
71. Zhu L, Wu Y. Love Your Country: EEG Evidence of Actor Preferences of Audiences in Patriotic Movies. Front Psychol 2021; 12:717025. [PMID: 34335430] [PMCID: PMC8322844] [DOI: 10.3389/fpsyg.2021.717025]
Abstract
Movie watching is one of the common ways to spark love for the country. A good patriotic movie can arouse love and pride, encourage people to stand by their countries, and reinforce a sense of national belonging. To evoke audience emotion and enhance patriotism, the choice of actors is fundamental and is a dilemma for film producers. In this exploratory study, an electroencephalogram (EEG) with a rating task was used to investigate how actor types (i.e., skilled vs. publicity) in patriotic movies modulate the willingness of audiences to watch a film and their emotional responses. Behavioral results showed that audiences are more willing to watch patriotic movies starring skilled actors than patriotic movies starring publicity actors. Furthermore, brain results indicated that smaller P3 and late positive potential (LPP) responses were elicited by skilled actors than by publicity actors in patriotic movies. A larger theta oscillation was also observed with skilled actors than with publicity actors. These findings demonstrate that the willingness of audiences to watch a movie is deeply affected by actor types in patriotic films. Specifically, skilled actors engage audiences emotionally, more so than publicity actors, and increase the popularity of patriotic movies. This study is the first to employ neuroscientific technology to study movie casting, which advances film studies with careful scientific measurements and a possible new direction. "The first of virtues is devotion to the fatherland." (Napoléon Bonaparte)
Affiliation(s)
- Lian Zhu
- School of Journalism and Communication, Shanghai International Studies University, Shanghai, China
72. Wang W. Brain network features based on theta-gamma cross-frequency coupling connections in EEG for emotion recognition. Neurosci Lett 2021; 761:136106. [PMID: 34252515] [DOI: 10.1016/j.neulet.2021.136106]
Abstract
Emotion recognition is a hot topic in the fields of cognitive neuroscience and interpersonal interaction, and EEG feature selection is an important part of the classification technology. At present, the mainstream approach to EEG feature selection is to extract non-interactive, per-channel features such as power spectral density, or correlation features among local groups of channels. With the application of complex-network graph theory, connection networks between multiple brain regions are gradually being included in feature selection. However, in the process of brain network construction, most current connections adopt simple signal phase or amplitude synchronization. In recent years, it has been found that during emotion, memory, learning, and other advanced cognitive processes, large-scale connection and communication between brain regions are mainly accomplished through cross-frequency coupling (CFC) between the low-frequency phase and the high-frequency amplitude of neural oscillations. Based on this, we use CFC to define the connections, reconstruct the brain network, and extract features for emotion recognition research. Our results show that the EEG network based on CFC performs better than other EEG synchronization networks in emotion classification. Moreover, the combination of global and local brain network features, as well as dynamic network features computed over continuous time windows, can effectively improve the accuracy of emotion recognition. This study provides a new idea of network connection for follow-up studies of emotion recognition and other advanced cognitive activities, and makes a pioneering exploration toward further research on feature selection for emotion recognition and related neural circuits at the brain-network level of functional connectivity.
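As an illustration of the theta-gamma coupling used to define connections above, the sketch below computes a cross-sensor phase-amplitude coupling value with the mean-vector-length measure (theta phase from one channel, gamma amplitude from another). All parameters are placeholders rather than the study's settings, and other PAC estimators (e.g., Tort's modulation index) could be substituted.

```python
# Hedged sketch: theta-gamma phase-amplitude coupling between two sensors via
# the mean-vector-length measure (placeholder signals and bands).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def cross_sensor_pac(phase_sig, amp_sig, fs, phase_band=(4, 8), amp_band=(30, 80)):
    theta_phase = np.angle(hilbert(bandpass(phase_sig, fs, *phase_band)))
    gamma_amp = np.abs(hilbert(bandpass(amp_sig, fs, *amp_band)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))  # mean vector length

fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)   # gamma amplitude modulated by theta
noisy_theta = theta + 0.1 * np.random.randn(t.size)
print(f"theta-gamma PAC (mean vector length): {cross_sensor_pac(noisy_theta, gamma, fs):.3f}")
```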
Affiliation(s)
- Wenjing Wang
- College of Education and Sports Sciences, Yangtze University, Hubei 434023, China.
73. Fuentes-Sánchez N, Pastor R, Escrig MA, Elipe-Miravet M, Pastor MC. Emotion elicitation during music listening: Subjective self-reports, facial expression, and autonomic reactivity. Psychophysiology 2021; 58:e13884. [PMID: 34145586] [DOI: 10.1111/psyp.13884]
Abstract
The use of music as an emotional stimulus in experimental studies has grown in recent years. However, prior studies have mainly focused on self-reports and central measures, with only a few works exploring the time course of psychophysiological correlates. Moreover, most previous research has been carried out either from the dimensional or the categorical model, but not combining both approaches to emotions. This study aimed to investigate subjective and physiological correlates of emotion elicitation through music, following both the three-dimensional and the discrete emotion models. A sample of 50 healthy volunteers (25 women) took part in this experiment by listening to 42 film music excerpts (14 pleasant, 14 unpleasant, 14 neutral) presented for 8 s, while peripheral measures were continuously recorded. After music offset, affective dimensions (valence, energy arousal, and tension arousal) as well as discrete emotions (happiness, sadness, tenderness, fear, and anger) were collected using a 9-point scale. Results showed an effect of music category on subjective and psychophysiological measures. In peripheral physiology, greater electrodermal activity, heart rate acceleration, and zygomatic responses, together with lower corrugator amplitude, were observed for pleasant excerpts in comparison to neutral and unpleasant music, from 2 s after stimulus onset until the end of its duration. Overall, our results add evidence for the efficacy of standardized film music excerpts in evoking powerful emotions in laboratory settings, thus opening a path to explore music-based interventions in pathologies with underlying emotion dysregulation processes.
Affiliation(s)
- Nieves Fuentes-Sánchez
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Raúl Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Miguel A Escrig
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- Marcel Elipe-Miravet
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
- M Carmen Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain
74. Chan HL, Low I, Chen LF, Chen YS, Chu IT, Hsieh JC. A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. [PMID: 33691295] [DOI: 10.1088/1741-2552/abed83]
Abstract
Objective. Neural communication or the interactions of brain regions play a key role in the formation of functional neural networks. A type of neural communication can be measured in the form of phase-amplitude coupling (PAC), which is the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions. Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation. Main results. In evaluations using simulation data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions. Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates. The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active and attended task-driven processing.
Collapse
Affiliation(s)
- Hui-Ling Chan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Intan Low
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan.,Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Yong-Sheng Chen
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
| | - Ian-Ting Chu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan.,Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
| |
Collapse
|
75
|
Plourde-Kelly AD, Saroka KS, Dotta BT. The impact of emotionally valenced music on emotional state and EEG profile: Convergence of self-report and quantitative data. Neurosci Lett 2021; 758:136009. [PMID: 34098026 DOI: 10.1016/j.neulet.2021.136009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 05/17/2021] [Accepted: 05/31/2021] [Indexed: 11/25/2022]
Abstract
Musical stimuli can induce a variety of emotions in individuals. We sought to determine whether different valenced music would induce EEG profile changes and self-reported emotional states in individuals following the viewing of a complex video with a concrete narrative and emotional ambivalence. We used a five-minute video titled "El Empleo", coupled with either joyful, fearful, or no music. EEG recordings were taken throughout the duration of the experiment and a self-reported questionnaire on emotional state was administered after viewing of the video. We found self-reported measures of happiness increased following viewing of the video paired with joyful music, while EEG data demonstrated that the following brain regions displayed significant changes in activity following both fearful and joyful music: the right inferior parietal lobule, left uncus, and left insula. Additionally, we found that anxiety self-report scores correlated negatively with average gamma activity in the insula within each group. The convergence of self-reported data and quantitative EEG data was consistent across 27 participants. These data indicate that different valenced music can alter EEG activity in emotion-specific regions, reflected in participants' perceived emotional state.
Collapse
Affiliation(s)
- Adam D Plourde-Kelly
- Behavioural Neuroscience Program, Laurentian University, Canada; Department of Biology, Laurentian University, Canada
| | - Kevin S Saroka
- Behavioural Neuroscience Program, Laurentian University, Canada; Department of Psychology, Laurentian University, Canada
| | - Blake T Dotta
- Behavioural Neuroscience Program, Laurentian University, Canada; Department of Biology, Laurentian University, Canada; Department of Psychology, Laurentian University, Canada.
| |
Collapse
|
76
|
Bakas S, Adamos DA, Laskaris N. On the estimate of music appraisal from surface EEG: a dynamic-network approach based on cross-sensor PAC measurements. J Neural Eng 2021; 18. [PMID: 33975291 DOI: 10.1088/1741-2552/abffe6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Accepted: 05/11/2021] [Indexed: 11/11/2022]
Abstract
Objective. The aesthetic evaluation of music is strongly dependent on the listener and reflects manifold brain processes that go well beyond the perception of incident sound. Being a high-level cognitive reaction, it is difficult to predict merely from the acoustic features of the audio signal and this poses serious challenges to contemporary music recommendation systems. We attempted to decode music appraisal from brain activity, recorded via wearable EEG, during music listening. Approach. To comply with the dynamic nature of music stimuli, cross-frequency coupling measurements were employed in a time-evolving manner to capture the evolving interactions between distinct brain-rhythms during music listening. Brain response to music was first represented as a continuous flow of functional couplings referring to both regional and inter-regional brain dynamics and then modelled as an ensemble of time-varying (sub)networks. Dynamic graph centrality measures were derived, next, as the final feature-engineering step and, lastly, a support-vector machine was trained to decode the subjective music appraisal. A carefully designed experimental paradigm provided the labeled brain signals. Main results. Using data from 20 subjects, dynamic programming to tailor the decoder to each subject individually and cross-validation, we demonstrated highly satisfactory performance (MAE = 0.948, R2 = 0.63) that can be attributed, mostly, to interactions of left frontal gamma rhythm. In addition, our music-appraisal decoder was also employed in a part of the DEAP dataset with similar success. Finally, even a generic version of the decoder (common for all subjects) was found to perform sufficiently. Significance. A novel brain signal decoding scheme was introduced and validated empirically on suitable experimental data. It requires simple operations and leaves room for real-time implementation. Both the code and the experimental data are publicly available.
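A minimal sketch of the final stage described in the abstract: turning time-evolving cross-sensor coupling matrices into dynamic graph-centrality features and regressing appraisal ratings with a support-vector machine. The coupling values below are random placeholders, and the window count, sensor count, centrality measure and cross-validation scheme are assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_windows, n_sensors = 60, 20, 14

# Placeholder: time-evolving coupling matrices (e.g., cross-sensor PAC per window).
coupling = rng.random((n_trials, n_windows, n_sensors, n_sensors))
coupling = (coupling + coupling.transpose(0, 1, 3, 2)) / 2   # symmetrise

# Dynamic degree centrality: summed coupling per sensor, tracked over windows.
degree = coupling.sum(axis=-1)                # (trials, windows, sensors)
features = degree.reshape(n_trials, -1)       # flatten the dynamic profile

ratings = rng.random(n_trials)                # placeholder appraisal ratings
decoder = SVR(kernel="rbf", C=1.0)
scores = cross_val_score(decoder, features, ratings, cv=5,
                         scoring="neg_mean_absolute_error")
print("CV MAE:", -scores.mean())
```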
Collapse
Affiliation(s)
- Stylianos Bakas
- Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece.,Neuroinformatics GRoup, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Dimitrios A Adamos
- School of Music Studies, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece.,Department of Computing, Imperial College London, SW7 2AZ London, United Kingdom.,Neuroinformatics GRoup, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Nikolaos Laskaris
- Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece.,Neuroinformatics GRoup, Aristotle University of Thessaloniki, Thessaloniki, Greece
| |
Collapse
|
77
|
Maheshwari D, Ghosh SK, Tripathy RK, Sharma M, Acharya UR. Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals. Comput Biol Med 2021; 134:104428. [PMID: 33984749 DOI: 10.1016/j.compbiomed.2021.104428] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 04/15/2021] [Accepted: 04/19/2021] [Indexed: 10/21/2022]
Abstract
Emotion is interpreted as a psycho-physiological process, and it is associated with personality, behavior, motivation, and character of a person. The objective of affective computing is to recognize different types of emotions for human-computer interaction (HCI) applications. The spatiotemporal brain electrical activity is measured using multi-channel electroencephalogram (EEG) signals. Automated emotion recognition using multi-channel EEG signals is an exciting research topic in cognitive neuroscience and affective computing. This paper proposes a rhythm-specific multi-channel convolutional neural network (CNN) based approach for automated emotion recognition using multi-channel EEG signals. The delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) rhythms of the EEG signal for each channel are evaluated using band-pass filters. The EEG rhythms from the selected channels coupled with a deep CNN are used for emotion classification tasks such as low-valence (LV) vs. high-valence (HV), low-arousal (LA) vs. high-arousal (HA), and low-dominance (LD) vs. high-dominance (HD), respectively. The deep CNN architecture considered in the proposed work has eight convolutional, three average-pooling, four batch-normalization, three spatial-dropout, two dropout, one global average-pooling, and three dense layers. We have validated our developed model using three publicly available databases: DEAP, DREAMER, and DASPS. The results reveal that the proposed multivariate deep CNN approach coupled with the β-rhythm has obtained accuracy values of 98.91%, 98.45%, and 98.69% for the LV vs. HV, LA vs. HA, and LD vs. HD emotion classification strategies, respectively, using the DEAP database with a 10-fold cross-validation (CV) scheme. Similarly, accuracy values of 98.56%, 98.82%, and 98.99% are obtained for the LV vs. HV, LA vs. HA, and LD vs. HD classification schemes, respectively, using the deep CNN and θ-rhythm. The proposed multi-channel rhythm-specific deep CNN classification model has obtained an average accuracy value of 57.14% using the α-rhythm and trial-specific CV on the DASPS database. Moreover, for the 8-quadrant-based emotion classification strategy, the deep CNN-based classifier has obtained an overall accuracy value of 24.37% using the γ-rhythms of multi-channel EEG signals. Our developed deep CNN model can be used for real-time automated emotion recognition applications.
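The rhythm-specific front end described here amounts to band-pass filtering each EEG channel into the five classical bands before the rhythm-wise signals are passed to the CNN. A minimal sketch of that filtering step follows; the band edges, sampling rate and array shapes are conventional assumptions rather than the paper's exact values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional band edges (Hz); the paper's exact cut-offs may differ.
RHYTHMS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
           "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythms(eeg, fs):
    """Split a (channels, samples) EEG array into rhythm-specific copies."""
    out = {}
    for name, (lo, hi) in RHYTHMS.items():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, eeg, axis=-1)
    return out

fs = 128                                # DEAP's preprocessed sampling rate
eeg = np.random.randn(32, fs * 60)      # placeholder 60 s, 32-channel trial
rhythms = extract_rhythms(eeg, fs)
print({k: v.shape for k, v in rhythms.items()})
```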
Collapse
Affiliation(s)
- Daksh Maheshwari
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
| | - S K Ghosh
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
| | - R K Tripathy
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India.
| | - Manish Sharma
- Department of Electrical and Computer Science Engineering, IITRAM, Ahmedabad, India
| | - U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; International Research Organization for Advanced Science and Technology, Kumamoto University, Kumamoto, Japan
| |
Collapse
|
78
|
Tseng KC. Electrophysiological Correlation Underlying the Effects of Music Preference on the Prefrontal Cortex Using a Brain-Computer Interface. SENSORS 2021; 21:s21062161. [PMID: 33808786 PMCID: PMC8003564 DOI: 10.3390/s21062161] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 02/28/2021] [Accepted: 03/09/2021] [Indexed: 11/16/2022]
Abstract
This study aims to research the task of recognising brain activities in the prefrontal cortex that correspond to music at different preference levels. Since task performance regarding the effects of the subjects’ favourite music can lead to better outcomes, we focus on the physical interpretation of electroencephalography (EEG) bands underlying the preference level for music. The experiment was implemented using a continuous response digital interface for the preference classification of three types of musical stimuli. The results showed that favourite songs more significantly evoked frontal theta than did the music of low and moderate preference levels. Additionally, correlations of frontal theta with cognitive state indicated that the frontal theta is associated not only with the cognitive state but also with emotional processing. These findings demonstrate that favourite songs can have more positive effects on listeners than less favourable music and suggest that theta and lower alpha in the frontal cortex are good indicators of both cognitive state and emotion.
Collapse
Affiliation(s)
- Kevin C Tseng
- Product Design and Development Laboratory, Department of Industrial Design, National Taipei University of Technology, Taipei City 106344, Taiwan
| |
Collapse
|
79
|
Giordano V, Goeral K, Schrage-Leitner L, Berger A, Olischar M. The Effect of Music on aEEG Cyclicity in Preterm Neonates. CHILDREN-BASEL 2021; 8:children8030208. [PMID: 33803493 PMCID: PMC8000223 DOI: 10.3390/children8030208] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 02/26/2021] [Accepted: 03/05/2021] [Indexed: 12/25/2022]
Abstract
Several methods can be used in the neonatal intensive care unit (NICU) to reduce stress and optimize the quality of life during this period of hospitalization. Among these, music could play an important role. We investigated the effect of different kinds of music therapies on the brain activity of very preterm infants using amplitude-integrated EEG. Sixty-four patients were included and randomly assigned to three different groups: live music group, recorded music group, and control group. In both intervention groups, music was started after the appearance of the first quiet-sleep phase, with a subsequent duration of 20 min. Changes between the first and second quiet-sleep epochs were analyzed using the amplitude-integrated EEG. When looking at single parameters of the amplitude-integrated EEG trace, no differences could be found between the groups when comparing their first and second quiet-sleep phases regarding the parameters of change from baseline, quality of the quiet-sleep epoch, and duration. However, when looking at the total cyclicity score of the second quiet-sleep phase, a difference between both intervention groups and the control group could be found (live music therapy vs. control, p = 0.003; recorded music therapy vs. control, p = 0.006). Improvements within the first and second quiet-sleep epochs were detected in both music groups, but not in the control group. We concluded that our study added evidence of the beneficial effect of music on amplitude-integrated EEG activity in preterm infants.
Collapse
Affiliation(s)
- Vito Giordano
- Department of Pediatrics and Adolescent Medicine, Division of Neonatology, Pediatric Intensive Care and Neuropediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, 1090 Vienna, Austria; (K.G.); (A.B.); (M.O.)
- Correspondence: ; Tel.: +43-40400-3232 or +43-69918-186496; Fax: +43-40400-2929
| | - Katharina Goeral
- Department of Pediatrics and Adolescent Medicine, Division of Neonatology, Pediatric Intensive Care and Neuropediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, 1090 Vienna, Austria; (K.G.); (A.B.); (M.O.)
| | - Leslie Schrage-Leitner
- Department of Music Therapy, University of Music and Performing Arts, Seilerstätte 26, 1010 Vienna, Austria;
| | - Angelika Berger
- Department of Pediatrics and Adolescent Medicine, Division of Neonatology, Pediatric Intensive Care and Neuropediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, 1090 Vienna, Austria; (K.G.); (A.B.); (M.O.)
| | - Monika Olischar
- Department of Pediatrics and Adolescent Medicine, Division of Neonatology, Pediatric Intensive Care and Neuropediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, 1090 Vienna, Austria; (K.G.); (A.B.); (M.O.)
| |
Collapse
|
80
|
Stomp M, d’Ingeo S, Henry S, Cousillas H, Hausberger M. Brain activity reflects (chronic) welfare state: Evidence from individual electroencephalography profiles in an animal model. Appl Anim Behav Sci 2021. [DOI: 10.1016/j.applanim.2021.105271] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
|
81
|
Hasanzadeh F, Annabestani M, Moghimi S. Continuous emotion recognition during music listening using EEG signals: A fuzzy parallel cascades model. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.107028] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
82
|
Hu W, Huang G, Li L, Zhang L, Zhang Z, Liang Z. Video‐triggered EEG‐emotion public databases and current methods: A survey. BRAIN SCIENCE ADVANCES 2021. [DOI: 10.26599/bsa.2020.9050026] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Emotions, formed in the process of perceiving the external environment, directly affect human daily life, such as social interaction, work efficiency, physical wellness, and mental health. In recent decades, emotion recognition has become a promising research direction with significant application values. Taking advantage of electroencephalogram (EEG) signals (i.e., high time resolution) and video-based external emotion evoking (i.e., rich media information), video-triggered emotion recognition with EEG signals has been proven a useful tool for conducting emotion-related studies in a laboratory environment, which provides constructive technical support for establishing real-time emotion interaction systems. In this paper, we will focus on video-triggered EEG-based emotion recognition and present a systematic introduction of the currently available video-triggered EEG-based emotion databases with the corresponding analysis methods. First, current video-triggered EEG databases for emotion recognition (e.g., DEAP, MAHNOB-HCI, SEED series databases) will be presented in full detail. Then, the commonly used EEG feature extraction, feature selection, and modeling methods in video-triggered EEG-based emotion recognition will be systematically summarized, and a brief review of the current state of video-triggered EEG-based emotion studies will be provided. Finally, the limitations and possible prospects of the existing video-triggered EEG-emotion databases will be fully discussed.
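Among the feature-extraction methods commonly reported for DEAP/SEED-style databases of the kind surveyed here, band-wise differential entropy (DE) is one of the most frequent. The sketch below shows the usual Gaussian closed form, DE = 0.5 ln(2πeσ²), per channel and band; the band list, segment length and sampling rate are assumptions for illustration, not details drawn from this survey.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def differential_entropy(eeg, fs, bands=BANDS):
    """Per-channel, per-band DE under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * variance of the band-filtered signal)."""
    feats = {}
    for name, (lo, hi) in bands.items():
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=-1)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=-1))
    return feats  # dict of (n_channels,) feature vectors

fs = 128
segment = np.random.randn(32, fs * 4)   # placeholder 4 s, 32-channel segment
de = differential_entropy(segment, fs)
print({k: v.shape for k, v in de.items()})
```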
Collapse
Affiliation(s)
- Wanrou Hu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
| | - Gan Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
| | - Linling Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
| | - Li Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
| | - Zhiguo Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Peng Cheng Laboratory, Shenzhen 518055, Guangdong, China
| | - Zhen Liang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
| |
Collapse
|
83
|
|
84
|
Amezcua-Gutiérrez C, Hernández-González M, Guasti AF, Aguilar MAC, Guevara MA. Observing Erotic Videos with Heterosexual Content Induces Different Cerebral Responses in Homosexual and Heterosexual Men. JOURNAL OF HOMOSEXUALITY 2021; 68:138-156. [PMID: 31430230 DOI: 10.1080/00918369.2019.1648079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
To determine the cerebral functionality associated with the perception and processing of erotic stimuli in men with different sexual orientation, this work evaluated the electroencephalographic activity (EEG) from several cortical areas, as well as subjective arousal in homosexual and heterosexual men during observation of an erotic film with heterosexual content. The heterosexual men rated the erotic video with higher general and sexual arousal than the homosexual participants. During observation of the neutral and erotic videos, both groups showed a decreased amplitude of the alpha band in prefrontal and parietal cortices, indicating increased attention. When watching the erotic video, the homosexual men showed an increased amplitude of the theta and fast bands only in the prefrontal cortex, which could be related to the cognitive processing of the erotic stimulus. These EEG results should broaden our knowledge of the cortical mechanisms related to the different perception and processing of erotic stimuli in men with different sexual orientations.
Collapse
Affiliation(s)
| | | | | | - Manuel Alejandro Cruz Aguilar
- National Institute of Psychiatry, "Ramón de la Fuente Muñiz", Neuroscience Research Direction, Chronobiology and Sleep Laboratory , Mexico, México
| | - Miguel Angel Guevara
- Institute of Neuroscience, CUCBA, University of Guadalajara , Guadalajara, Jalisco, México
| |
Collapse
|
85
|
Wind J, Horst F, Rizzi N, John A, Schöllhorn WI. Electrical Brain Activity and Its Functional Connectivity in the Physical Execution of Modern Jazz Dance. Front Psychol 2021; 11:586076. [PMID: 33384641 PMCID: PMC7769774 DOI: 10.3389/fpsyg.2020.586076] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 11/02/2020] [Indexed: 11/16/2022] Open
Abstract
Besides the pure pleasure of watching a dance performance, dance as a whole-body movement is becoming increasingly popular for health-related interventions. However, the science-based evidence for improvements in health or well-being through dance is still ambiguous and little is known about the underlying neurophysiological mechanisms. This may be partly related to the fact that previous studies mostly examined the neurophysiological effects of imagination and observation of dance rather than the physical execution itself. The objective of this pilot study was to investigate acute effects of a physically executed dance with its different components (recalling the choreography and physical activity to music) on the electrical brain activity and its functional connectivity using electroencephalographic (EEG) analysis. Eleven dance-inexperienced female participants first learned a Modern Jazz Dance (MJD) choreography over three weeks (1 h sessions per week). Afterwards, the acute effects on the EEG brain activity were compared between four different test conditions: physically executing the MJD choreography with music, physically executing the choreography without music, imagining the choreography with music, and imagining the choreography without music. Every participant passed each test condition in a randomized order within a single day. EEG rest-measurements were conducted before and after each test condition. Considering time effects, the physically executed dance without music revealed the most increases in alpha-band brain activity and, in the functional connectivity analysis, increases in all frequency bands. In comparison, physically executed dance with music as well as imagined dance with music led to fewer increases, and imagined dance without music provoked noteworthy decreases in brain activity and connectivity in all frequency bands. Differences between the test conditions were found in the alpha and beta frequencies between the physically executed dance and the imagined dance without music, as well as in the alpha frequency between the physically executed dance with and without music. The study highlights different effects of a physically executed dance compared to an imagined dance on many brain areas for all measured frequency bands. These findings provide first insights into the still widely unexplored field of neurological effects of dance and encourage further research in this direction.
Collapse
Affiliation(s)
- Johanna Wind
- Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany
| | - Fabian Horst
- Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany
| | - Nikolas Rizzi
- Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany
| | - Alexander John
- Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany
| | - Wolfgang I Schöllhorn
- Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany
| |
Collapse
|
86
|
Are the new mobile wireless EEG headsets reliable for the evaluation of musical pleasure? PLoS One 2021; 15:e0244820. [PMID: 33382801 PMCID: PMC7775075 DOI: 10.1371/journal.pone.0244820] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 12/16/2020] [Indexed: 11/19/2022] Open
Abstract
Since the beginning of the 20th century, electroencephalography (EEG) has been used in a wide variety of applications, both for medical needs and for the study of various cerebral processes. With the rapid development of the technique, more and more precise and advanced tools have emerged for research purposes. However, the main constraints of these devices have often been the high price and, for some devices, the low transportability and the long set-up time. Nevertheless, a broad range of wireless EEG devices have emerged on the market without these constraints, but with a lower signal quality. The development of EEG recording on multiple participants simultaneously, together with new technological solutions, provides further possibilities to understand the cerebral emotional dynamics of a group. A great number of studies have compared and tested many mobile devices, but have provided contradictory results. It is therefore important to test the reliability of specific wireless devices in a specific research context before developing a large-scale study. The aim of this study was to assess the reliability of two wireless devices (g.tech Nautilus SAHARA electrodes and Emotiv™ Epoc +) for the detection of musical emotions, in comparison with a gold standard EEG device. Sixteen participants reported feeling emotional pleasure (from low pleasure up to musical chills) when listening to their favorite chill-inducing musical excerpts. In terms of emotion detection, our results show statistically significant concordance between Epoc + and the gold standard device in the left prefrontal and left temporal areas in the alpha frequency band. We validated the use of the Emotiv™ Epoc + for research into musical emotion. We did not find any significant concordance between g.tech and the gold standard. This suggests that the Emotiv Epoc + is more appropriate for musical emotion investigations in natural settings.
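The comparison at the core of this study is a concordance check between band-limited measures from a mobile headset and a reference system. The sketch below computes alpha-band power for two simultaneously recorded epoch series and Lin's concordance correlation coefficient between them; the sampling rate, epoch length, band edges and the choice of CCC as the concordance statistic are assumptions for illustration, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8, 13)):
    # Mean Welch PSD in the alpha band.
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def lin_ccc(a, b):
    """Lin's concordance correlation coefficient between two measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.cov(a, b, bias=True)[0, 1]
    return 2 * cov / (a.var() + b.var() + (a.mean() - b.mean()) ** 2)

fs, n_epochs = 256, 30
rng = np.random.default_rng(1)
ref = [alpha_power(rng.standard_normal(fs * 4), fs) for _ in range(n_epochs)]
mobile = [r * 0.9 + rng.normal(0, 0.01) for r in ref]   # placeholder mobile-device values
print("alpha-band CCC:", lin_ccc(ref, mobile))
```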
Collapse
|
87
|
Fritz TH, Liebau G, Löhle M, Hartjen B, Czech P, Schneider L, Sehm B, Kotz SA, Ziemssen T, Storch A, Villringer A. Dissonance in Music Impairs Spatial Gait Parameters in Patients with Parkinson's Disease. JOURNAL OF PARKINSONS DISEASE 2020; 11:363-372. [PMID: 33285641 DOI: 10.3233/jpd-202413] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND It is known that music influences gait parameters in Parkinson's disease (PD). However, it remains unclear whether this effect is merely due to temporal aspects of music (rhythm and tempo) or other musical parameters. OBJECTIVE To examine the influence of pleasant and unpleasant music on spatiotemporal gait parameters in PD, while controlling for rhythmic aspects of the musical signal. METHODS We measured spatiotemporal gait parameters of 18 patients suffering from mild PD (50% men, mean±SD age of 64±6 years; mean disease duration of 6±5 years; mean Unified PD Rating scale [UPDRS] motor score of 15±7) who listened to eight different pieces of music. Music pieces varied in harmonic consonance/dissonance to create the experience of pleasant/unpleasant feelings. To measure gait parameters, we used an established analysis of spatiotemporal gait, which consists of a walkway containing pressure-receptive sensors (GAITRite®). Repeated measures analyses of variance were used to evaluate effects of auditory stimuli. In addition, linear regression was used to evaluate effects of valence on gait. RESULTS Sensory dissonance modulated spatiotemporal and spatial gait parameters, namely velocity and stride length, while temporal gait parameters (cadence, swing duration) were not affected. In contrast, valence in music as perceived by patients was not associated with gait parameters. Motor and musical abilities did not relevantly influence the modulation of gait by auditory stimuli. CONCLUSION Our observations suggest that dissonant music negatively affects particularly spatial gait parameters in PD by yet unknown mechanisms, but putatively through increased cognitive interference reducing attention in auditory cueing.
Collapse
Affiliation(s)
- Thomas H Fritz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Nuclear Medicine, University of Leipzig, Leipzig, Germany.,Institute for Psychoacoustics and Electronic Music (IPEM), Gent, Belgium
| | - Gefion Liebau
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Matthias Löhle
- Division of Neurodegenerative Diseases, Department of Neurology, Technische Universität Dresden, Dresden, Germany.,Department of Neurology, University of Rostock, Rostock, Germany
| | - Berit Hartjen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Psychology, University of Leipzig, Leipzig, Germany
| | - Phillip Czech
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Lydia Schneider
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Bernhard Sehm
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Neurology, Martin-Luther-University of Halle-Wittenberg, Halle (Saale), Germany
| | - Sonja A Kotz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| | - Tjalf Ziemssen
- Department of Neurology, Technische Universität Dresden, Dresden, Germany
| | - Alexander Storch
- Division of Neurodegenerative Diseases, Department of Neurology, Technische Universität Dresden, Dresden, Germany.,Department of Neurology, University of Rostock, Rostock, Germany.,German Centre for Neurodegenerative Diseases (DZNE) Rostock/Greifswald, Rostock, Germany
| | - Arno Villringer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
88
|
Park KS, Hass CJ, Patel B, Janelle CM. Musical pleasure beneficially alters stride and arm swing amplitude during rhythmically-cued walking in people with Parkinson's disease. Hum Mov Sci 2020; 74:102718. [DOI: 10.1016/j.humov.2020.102718] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 11/09/2020] [Accepted: 11/11/2020] [Indexed: 11/16/2022]
|
89
|
Chabin T, Gabriel D, Chansophonkul T, Michelant L, Joucla C, Haffen E, Moulin T, Comte A, Pazart L. Cortical Patterns of Pleasurable Musical Chills Revealed by High-Density EEG. Front Neurosci 2020; 14:565815. [PMID: 33224021 PMCID: PMC7670092 DOI: 10.3389/fnins.2020.565815] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Accepted: 09/29/2020] [Indexed: 01/02/2023] Open
Abstract
Music has the capacity to elicit strong positive feelings in humans by activating the brain's reward system. Because group emotional dynamics is a central concern of social neurosciences, the study of emotion in natural/ecological conditions is gaining interest. This study aimed to show that high-density EEG (HD-EEG) is able to reveal patterns of cerebral activities previously identified by fMRI or PET scans when the subject experiences pleasurable musical chills. We used HD-EEG to record participants (11 female, 7 male) while listening to their favorite pleasurable chill-inducing musical excerpts; they reported their subjective emotional state from low pleasure up to chills. HD-EEG results showed an increase of theta activity in the prefrontal cortex when arousal and emotional ratings increased, which are associated with orbitofrontal cortex activation localized using source localization algorithms. In addition, we identified two specific patterns of chills: a decreased theta activity in the right central region, which could reflect supplementary motor area activation during chills and may be related to rhythmic anticipation processing, and a decreased theta activity in the right temporal region, which may be related to musical appreciation and could reflect the right superior temporal gyrus activity. The alpha frontal/prefrontal asymmetry did not reflect the felt emotional pleasure, but the increased frontal beta to alpha ratio (measure of arousal) corresponded to increased emotional ratings. These results suggest that EEG may be a reliable method and a promising tool for the investigation of group musical pleasure through musical reward processing.
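One of the scalp measures reported here, the frontal beta-to-alpha power ratio used as an arousal index, is straightforward to reproduce at the sensor level. The sketch below computes it from a single frontal channel with a Welch PSD; the channel choice, band edges and sampling rate are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    # Integrated Welch PSD over a frequency band.
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].sum() * (f[1] - f[0])

def beta_alpha_ratio(frontal_channel, fs):
    """Frontal arousal index: beta power divided by alpha power."""
    beta = band_power(frontal_channel, fs, (13, 30))
    alpha = band_power(frontal_channel, fs, (8, 13))
    return beta / alpha

fs = 250
x = np.random.randn(fs * 30)          # placeholder 30 s frontal EEG segment
print("beta/alpha arousal index:", beta_alpha_ratio(x, fs))
```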
Collapse
Affiliation(s)
- Thibault Chabin
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Damien Gabriel
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Tanawat Chansophonkul
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
| | - Lisa Michelant
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Coralie Joucla
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Emmanuel Haffen
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Thierry Moulin
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Alexandre Comte
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Lionel Pazart
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| |
Collapse
|
90
|
Li W, Hu X, Long X, Tang L, Chen J, Wang F, Zhang D. EEG responses to emotional videos can quantitatively predict big-five personality traits. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.123] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
91
|
Merrill J, Omigie D, Wald-Fuhrmann M. Locus of emotion influences psychophysiological reactions to music. PLoS One 2020; 15:e0237641. [PMID: 32841260 PMCID: PMC7447055 DOI: 10.1371/journal.pone.0237641] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2020] [Accepted: 07/30/2020] [Indexed: 11/24/2022] Open
Abstract
It is now widely accepted that the perception of emotional expression in music can be vastly different from the feelings evoked by it. However, less understood is how the locus of emotion affects the experience of music, that is how the act of perceiving the emotion in music compares with the act of assessing the emotion induced in the listener by the music. In the current study, we compared these two emotion loci based on the psychophysiological response of 40 participants listening to 32 musical excerpts taken from movie soundtracks. Facial electromyography, skin conductance, respiration and heart rate were continuously measured while participants were required to assess either the emotion expressed by, or the emotion they felt in response to the music. Using linear mixed effects models, we found a higher mean response in psychophysiological measures for the “perceived” than the “felt” task. This result suggested that the focus on one’s self distracts from the music, leading to weaker bodily reactions during the “felt” task. In contrast, paying attention to the expression of the music and consequently to changes in timbre, loudness and harmonic progression enhances bodily reactions. This study has methodological implications for emotion induction research using psychophysiology and the conceptualization of emotion loci. Firstly, different tasks can elicit different psychophysiological responses to the same stimulus and secondly, both tasks elicit bodily responses to music. The latter finding questions the possibility of a listener taking on a purely cognitive mode when evaluating emotion expression.
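The statistical approach used here, a linear mixed-effects model with task (perceived vs. felt) as a fixed effect and participants as a grouping factor, can be fitted along the following lines with statsmodels. The column names, the single random-intercept structure and the simulated data are assumptions, not the authors' exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_participants, n_excerpts = 40, 32

# Placeholder long-format data: one physiological response per participant x excerpt.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_excerpts),
    "task": np.tile(np.repeat(["perceived", "felt"], n_excerpts // 2), n_participants),
    "scl": rng.normal(0, 1, n_participants * n_excerpts),   # e.g., skin conductance level
})
df.loc[df.task == "perceived", "scl"] += 0.3                 # simulated task effect

# Random intercept per participant, task as a fixed effect.
model = smf.mixedlm("scl ~ task", df, groups=df["participant"]).fit()
print(model.summary())
```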
Collapse
Affiliation(s)
- Julia Merrill
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Institute of Music, University of Kassel, Kassel, Germany
- * E-mail:
| | - Diana Omigie
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Goldsmiths University of London, London, United Kingdom
| | | |
Collapse
|
92
|
DE-CNN: An Improved Identity Recognition Algorithm Based on the Emotional Electroencephalography. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:7574531. [PMID: 32849910 PMCID: PMC7439782 DOI: 10.1155/2020/7574531] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2019] [Accepted: 02/05/2020] [Indexed: 11/17/2022]
Abstract
In the past few decades, identification recognition based on electroencephalography (EEG) has received extensive attention to resolve the security problems of conventional biometric systems. In the present study, a novel EEG-based identification system with different entropy and a continuous convolution neural network (CNN) classifier is proposed. The performance of the proposed method is experimentally evaluated through the emotional EEG data. The conducted experiment shows that the proposed method approaches the stunning accuracy (ACC) of 99.7% on average and can rapidly train and update the DE-CNN model. Then, the effects of different emotions and the impact of different time intervals on the identification performance are investigated. Obtained results show that different emotions affect the identification accuracy, where the negative and neutral mood EEG has a better robustness than positive emotions. For a video signal as the EEG stimulant, it is found that the proposed method with 0–75 Hz is more robust than a single band, while the 15–32 Hz band presents overfitting and reduces the accuracy of the cross-emotion test. It is concluded that time interval reduces the accuracy and the 15–32 Hz band has the best compatibility in terms of the attenuation.
Collapse
|
93
|
Neural and physiological data from participants listening to affective music. Sci Data 2020; 7:177. [PMID: 32541806 PMCID: PMC7295758 DOI: 10.1038/s41597-020-0507-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Accepted: 05/07/2020] [Indexed: 11/09/2022] Open
Abstract
Music provides a means of communicating affective meaning. However, the neurological mechanisms by which music induces affect are not fully understood. Our project sought to investigate this through a series of experiments into how humans react to affective musical stimuli and how physiological and neurological signals recorded from those participants change in accordance with self-reported changes in affect. In this paper, the datasets recorded over the course of this project are presented, including details of the musical stimuli, participant reports of their felt changes in affective states as they listened to the music, and concomitant recordings of physiological and neurological activity. We also include non-identifying meta data on our participant populations for purposes of further exploratory analysis. This data provides a large and valuable novel resource for researchers investigating emotion, music, and how they affect our neural and physiological activity.
Collapse
|
94
|
Wang F, Wu S, Zhang W, Xu Z, Zhang Y, Wu C, Coleman S. Emotion recognition with convolutional neural network and EEG-based EFDMs. Neuropsychologia 2020; 146:107506. [PMID: 32497532 DOI: 10.1016/j.neuropsychologia.2020.107506] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2020] [Revised: 05/23/2020] [Accepted: 05/26/2020] [Indexed: 01/02/2023]
Abstract
Electroencephalogram (EEG), as a direct response to brain activity, can be used to detect mental states and physical conditions. Among various EEG-based emotion recognition studies, due to the non-linearity, non-stationarity and individual differences of EEG signals, traditional recognition methods still have the disadvantages of complicated feature extraction and low recognition rates. Thus, this paper first proposes a novel concept of electrode-frequency distribution maps (EFDMs) based on the short-time Fourier transform (STFT). A residual-block-based deep convolutional neural network (CNN) is proposed for automatic feature extraction and emotion classification with EFDMs. Aiming at the shortcomings of the small amount of EEG samples and the challenge of individual differences in emotions, which make it difficult to construct a universal model, this paper proposes a cross-dataset emotion recognition method based on deep model transfer learning. Experiments were carried out on two publicly available datasets. The proposed method achieved an average classification score of 90.59% based on a short length of EEG data on SEED, which is 4.51% higher than the baseline method. Then, the pre-trained model was applied to DEAP through deep model transfer learning with a few samples, resulting in an average accuracy of 82.84%. Finally, this paper adopts gradient-weighted class activation mapping (Grad-CAM) to get a glimpse of what features the CNN has learned from EFDMs during training and concludes that the high-frequency bands are more favorable for emotion recognition.
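The electrode-frequency distribution map (EFDM) input described here is, in essence, a 2-D image with electrodes on one axis and STFT-derived spectral power on the other. A minimal sketch of that construction follows; the FFT length, frequency range, log scaling and averaging over time frames are assumptions about details the abstract does not specify.

```python
import numpy as np
from scipy.signal import stft

def efdm(eeg, fs, fmax=45.0, nperseg=256):
    """Build an electrode x frequency map from short-time Fourier transforms.
    Power is averaged over time frames and log-scaled."""
    f, _, Z = stft(eeg, fs=fs, nperseg=nperseg, axis=-1)   # (channels, freqs, frames)
    power = np.abs(Z) ** 2
    keep = f <= fmax
    return np.log1p(power[:, keep, :].mean(axis=-1))       # (channels, freqs <= fmax)

fs = 200                               # SEED's downsampled sampling rate
eeg = np.random.randn(62, fs * 4)      # placeholder 4 s, 62-channel segment
emap = efdm(eeg, fs)
print("EFDM shape (electrodes x frequency bins):", emap.shape)
```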
Collapse
Affiliation(s)
- Fei Wang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110169, China.
| | - Shichao Wu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110169, China
| | - Weiwei Zhang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110169, China
| | - Zongfeng Xu
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China
| | - Yahui Zhang
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China
| | - Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110169, China
| | - Sonya Coleman
- Intelligent Systems Research Centre, Ulster University, Londonderry, United Kingdom
| |
Collapse
|
95
|
Aguirre RMH, González MH, Hernández MP, Gutiérrez CDCA, Guevara MÁ. Observing baby or sexual videos changes the functional synchronization between the prefrontal and parietal cortices in mothers in different postpartum periods. Soc Neurosci 2020; 15:489-504. [PMID: 32402224 DOI: 10.1080/17470919.2020.1761447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
During the postpartum period (PP), mothers are more sensitive to sensory stimuli related to babies and less sensitive to those with sexual significance. The processing of emotional stimuli requires synchronization among different cerebral areas. This study characterized the cortical electroencephalographic (EEG) correlation in mothers from 1½ to 3 months (PP1), 4 to 5½ months (PP2) and over 6½ months, postpartum (PP3) while observing two videos: one of a baby (BV) and one with sexual content (SV). EEGs were recorded from the frontopolar, dorsolateral and parietal cortices. All three groups rated the BV as pleasant, but only PP3 reported higher sexual arousal with the SV. While watching the BV, PP1 showed a higher correlation among all cortical areas; PP2 manifested a decreased correlation between the prefrontal and parietal cortices, likely associated with the lower emotional modulation of the BV; and PP3 presented a higher synchronization among fewer cortical areas, probably related to longer maternal experience. These cortical synchronization patterns could represent adaptive mechanisms that enable the adequate processing of baby stimuli in new mothers. These data increase our knowledge of the cerebral processes associated with distinct sensitivities to the emotional stimuli that mothers experience during the PP.
Collapse
Affiliation(s)
- Rosa María Hidalgo Aguirre
- Laboratorio de Neuropsicología, Centro Universitario de los Valles, Universidad de Guadalajara , Ameca, México.,Instituto de Neurociencias, Universidad de Guadalajara , Guadalajara, México
| | | | - Marai Pérez Hernández
- Laboratorio de Neurociencias, Centro Universitario del Norte, Universidad de Guadalajara , Guadalajara, Mexico
| | | | | |
Collapse
|
96
|
Fronto-temporal theta phase-synchronization underlies music-evoked pleasantness. Neuroimage 2020; 212:116665. [PMID: 32087373 DOI: 10.1016/j.neuroimage.2020.116665] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 02/12/2020] [Accepted: 02/17/2020] [Indexed: 01/08/2023] Open
Abstract
Listening to pleasant music engages a complex distributed network including pivotal areas for auditory, reward, emotional and memory processing. On the other hand, frontal theta rhythms appear to be relevant in the process of giving value to music. However, it is not clear to which extent this oscillatory mechanism underlies the brain interactions that characterize music-evoked pleasantness and its related processes. The goal of the present experiment was to study brain synchronization in this oscillatory band as a function of music-evoked pleasantness. EEG was recorded from 25 healthy subjects while they were listening to music and rating the experienced degree of induced pleasantness. By using a multilevel Bayesian approach we found that phase synchronization in the theta band between right temporal and frontal signals increased with the degree of pleasure experienced by participants. These results show that slow fronto-temporal loops play a key role in music-evoked pleasantness.
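The synchronization measure behind this result, theta-band phase coupling between a right temporal and a frontal signal, can be approximated at the sensor level with the phase-locking value (PLV). The sketch below shows that computation on synthetic data; the specific connectivity metric, channel pair and band edges used by the authors are not given here and are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x, y, fs, band=(4, 8)):
    """Phase-locking value between two channels in the theta band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

fs = 500
t = np.arange(0, 20, 1 / fs)
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
temporal = np.sin(2 * np.pi * 6 * t + 0.4) + 0.5 * np.random.randn(t.size)  # phase-lagged copy
print("fronto-temporal theta PLV:", theta_plv(frontal, temporal, fs))
```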
Collapse
|
97
|
Wankhade SB, Doye DD. Deep Learning of Empirical Mean Curve Decomposition-Wavelet Decomposed EEG Signal for Emotion Recognition. INT J UNCERTAIN FUZZ 2020. [DOI: 10.1142/s0218488520500075] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Recently, emotional state recognition of humans via electroencephalogram (EEG) has become one of the emerging topics attracting the attention of researchers. This EEG-based recognition is normally an effective model for many real-time applications, especially for disabled people. A number of researchers are working to make the recognition model more effective in terms of accurate emotion recognition; however, the accuracy achieved so far is not yet satisfactory. Hence, this paper intends to recognize human emotional states or affects through EEG signals by adopting advanced features and classifier models. In the first stage of the recognition procedure, this paper exploits Empirical Mean Curve Decomposition (EMCD) and Wavelet Transformation to represent the EEG signal in a low-dimensional and descriptive form. By EMCD, the EEG redundancy can be neglected, and the significant information can be extracted. Classification is then performed using the extracted features with the aid of a classifier named Deep Belief Network (DBN). The performance of the proposed Wavelet-EMCD (WE) approach is analyzed in terms of measures such as Accuracy, Sensitivity, Specificity, Precision, False Positive Rate (FPR), False Negative Rate (FNR), Negative Predictive Value (NPV), False Discovery Rate (FDR), F1-score and Matthews correlation coefficient (MCC), proving the superiority of the proposed work in recognizing emotions more accurately.
Collapse
Affiliation(s)
- Sujata Bhimrao Wankhade
- Department of Computer Science & Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Vishnupuri, Nanded, Maharashtra 431 606, India
| | - Dharmpal Dronacharya Doye
- Department of Electronic and Telecommunication Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Vishnupuri, Nanded, Maharashtra 431 606, India
| |
Collapse
|
98
|
Ahuja S, Gupta RK, Damodharan D, Philip M, Venkatasubramanian G, Keshavan MS, Hegde S. Effect of music listening on P300 event-related potential in patients with schizophrenia: A pilot study. Schizophr Res 2020; 216:85-96. [PMID: 31924375 PMCID: PMC7613152 DOI: 10.1016/j.schres.2019.12.026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Revised: 11/30/2019] [Accepted: 12/20/2019] [Indexed: 02/05/2023]
Abstract
Reduced amplitude and increased latency of the P300 auditory event-related potential (ERP) in patients with schizophrenia (SZ) indicate impairment in attention. Overall arousal level can determine the amount of processing capacity required for attention allocation. Music evokes strong emotions and regulates arousability. Music has been used to modulate the P300, especially in healthy individuals. This exploratory study examined the effect of music listening on the amplitude and latency of the P300 in SZ patients. EEG/ERP was recorded (32 channels) while SZ patients (n = 20; 18-45 years) performed an auditory oddball P300 task after an eyes-closed rest condition (Condition-A) and a ten-minute music listening condition (Condition-B), as per a complete counterbalancing design (AB-BA). Patients listened for ten minutes to a researcher-chosen instrumental presentation of raag Bhoopali from North Indian Classical Music. All patients rated the music excerpt as relaxing and positively valenced. A significant increase in accuracy score and reaction time during the oddball task after music listening was noted. There was an increase in amplitude at TP7. A trend of increased amplitude was noted across all electrodes in the music condition compared to the rest condition. Mean amplitude in an a priori defined time window of interest (250 to 750 ms) showed significant changes at the frontal and central electrode sites. Power spectral analysis indicated a slight increase in frontal and central alpha and theta activity during music listening. However, this was not statistically significant. Findings add further impetus to examine the effect of music in chronic psychiatric conditions. The need for systematic studies on a larger cohort is underscored.
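The ERP measure analysed here, the mean P300 amplitude in an a priori 250-750 ms window, reduces to averaging baseline-corrected target epochs and taking the mean over that window. A plain-NumPy sketch is given below; the epoch layout, baseline window and sampling rate are assumptions rather than the study's recording parameters.

```python
import numpy as np

def p300_mean_amplitude(epochs, fs, tmin=-0.2, window=(0.25, 0.75), baseline=(-0.2, 0.0)):
    """Mean amplitude in the P300 window from (n_epochs, n_samples) target epochs,
    each starting at tmin seconds relative to stimulus onset."""
    times = tmin + np.arange(epochs.shape[1]) / fs
    base = epochs[:, (times >= baseline[0]) & (times < baseline[1])].mean(axis=1, keepdims=True)
    corrected = epochs - base                      # baseline correction per epoch
    erp = corrected.mean(axis=0)                   # average over target epochs
    win = (times >= window[0]) & (times <= window[1])
    return erp[win].mean()

fs = 250
n_epochs, n_samples = 40, int(1.0 * fs) + int(0.2 * fs)   # epochs from -200 ms to +1000 ms
epochs = np.random.randn(n_epochs, n_samples) * 5         # placeholder microvolt-scale data
print("mean P300 amplitude (a.u.):", p300_mean_amplitude(epochs, fs))
```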
Collapse
Affiliation(s)
- Shikha Ahuja
- Clinical Neuropsychology & Cognitive Neuroscience Centre and Music Cognition Laboratory, Department of Clinical Psychology, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India
| | - Rajnish Kumar Gupta
- Clinical Neuropsychology & Cognitive Neuroscience Centre, Department of Clinical Psychology, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India
| | - Dinakaran Damodharan
- Translational Psychiatry Laboratory, Department of Psychiatry, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India
| | - Mariamma Philip
- Department of Biostatistics, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India
| | - Ganesan Venkatasubramanian
- Translational Psychiatry Laboratory, Department of Psychiatry, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India
| | - Matcheri S. Keshavan
- Department of Psychiatry, Beth Israel Deaconess Medical Center Massachusetts Mental Health Center, Harvard Medical School, 75 Fenwood Rd., Boston, MA 02115, USA
| | - Shantala Hegde
- Clinical Neuropsychology & Cognitive Neuroscience Centre and Music Cognition Laboratory, Department of Clinical Psychology, National Institute of Mental Health & Neuro Sciences (NIMHANS), Bengaluru, KA, India.
| |
Collapse
|
99
|
Wang X, Liu W, Toiviainen P, Ristaniemi T, Cong F. Group analysis of ongoing EEG data based on fast double-coupled nonnegative tensor decomposition. J Neurosci Methods 2020; 330:108502. [PMID: 31730873 DOI: 10.1016/j.jneumeth.2019.108502] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Revised: 10/31/2019] [Accepted: 10/31/2019] [Indexed: 11/18/2022]
Abstract
BACKGROUND Ongoing EEG data are recorded as mixtures of stimulus-elicited EEG, spontaneous EEG and noises, which require advanced signal processing techniques for separation and analysis. Existing methods cannot simultaneously consider common and individual characteristics among/within subjects when extracting stimulus-elicited brain activities from ongoing EEG elicited by 512-s long modern tango music. NEW METHOD Aiming to discover the commonly music-elicited brain activities among subjects, we provide a comprehensive framework based on fast double-coupled nonnegative tensor decomposition (FDC-NTD) algorithm. The proposed algorithm with a generalized model is capable of simultaneously decomposing EEG tensors into common and individual components. RESULTS With the proposed framework, the brain activities can be effectively extracted and sorted into the clusters of interest. The proposed algorithm based on the generalized model achieved higher fittings and stronger robustness. In addition to the distribution of centro-parietal and occipito-parietal regions with theta and alpha oscillations, the music-elicited brain activities were also located in the frontal region and distributed in the 4∼11 Hz band. COMPARISON WITH EXISTING METHOD(S) The present study, by providing a solution of how to separate common stimulus-elicited brain activities using coupled tensor decomposition, has shed new light on the processing and analysis of ongoing EEG data in multi-subject level. It can also reveal more links between brain responses and the continuous musical stimulus. CONCLUSIONS The proposed framework based on coupled tensor decomposition can be successfully applied to group analysis of ongoing EEG data, as it can be reliably inferred that those brain activities we obtained are associated with musical stimulus.
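The FDC-NTD algorithm itself is, to the best of our knowledge, not available in standard packages; the snippet below only illustrates its uncoupled building block, a nonnegative CP/PARAFAC decomposition of a subject x channel x frequency EEG tensor, using TensorLy. The tensor shape, rank and iteration count are assumptions for illustration.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Placeholder group tensor: subjects x channels x frequency bins of spectral power.
rng = np.random.default_rng(3)
X = rng.random((10, 64, 40))

# Uncoupled nonnegative CP decomposition (the coupled FDC-NTD variant additionally
# shares some factor matrices across subjects, which is not shown here).
cp = non_negative_parafac(tl.tensor(X), rank=4, n_iter_max=200, init="random")
weights, factors = cp
subject_factor, channel_factor, freq_factor = factors
print(subject_factor.shape, channel_factor.shape, freq_factor.shape)  # (10, 4) (64, 4) (40, 4)
```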
Collapse
Affiliation(s)
- Xiulin Wang
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China; Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.
| | - Wenya Liu
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.
| | - Petri Toiviainen
- Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Jyväskylä, Finland.
| | - Tapani Ristaniemi
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.
| | - Fengyu Cong
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China; Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.
| |
Collapse
|
100
|
Van Criekinge T, D'Août K, O'Brien J, Coutinho E. Effect of music listening on hypertonia in neurologically impaired patients-systematic review. PeerJ 2019; 7:e8228. [PMID: 31875154 PMCID: PMC6925946 DOI: 10.7717/peerj.8228] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Accepted: 11/18/2019] [Indexed: 11/20/2022] Open
Abstract
Background As music listening is able to induce self-perceived and physiological signs of relaxation, it might be an interesting tool to induce muscle relaxation in patients with hypertonia. To date, effective non-pharmacological rehabilitation strategies to treat hypertonia in neurologically impaired patients are lacking. Therefore, the aim is to investigate the effectiveness of music listening on muscle activity and relaxation. Methodology The search strategy followed the PRISMA guidelines and was registered in the PROSPERO database (no. 42019128511). Seven databases were systematically searched until March 2019. Six of the 1,684 studies met the eligibility criteria and were included in this review. Risk of bias was assessed by the PEDro scale. In total, 171 patients with a variety of neurological conditions were included, assessing hypertonia with both clinical and biomechanical measures. Results The analysis showed that there was a large treatment effect of music listening on muscle performance (SMD 0.96, 95% CI [0.29–1.63], I2 = 10%, Z = 2.82, p = 0.005). Music can be used as either background music during rehabilitation (dual-task) or during rest (single-task), and musical preferences seem to play a major role in the observed treatment effect. Conclusions Although music listening is able to induce muscle relaxation, several gaps in the available literature were acknowledged. Future research is in need of an accurate and objective assessment of hypertonia.
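The pooled effect reported here is a standardized mean difference with a 95% CI. The short sketch below shows the usual Hedges' g computation for a single study from group means, SDs and sample sizes, with an approximate confidence interval; the numbers are placeholders, not data from the review.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) with an approximate 95% CI."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))  # pooled SD
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample correction factor
    g = j * d
    se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Placeholder study: muscle performance with vs. without music listening.
g, ci = hedges_g(m1=22.0, sd1=4.0, n1=15, m2=18.5, sd2=4.5, n2=15)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```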
Collapse
Affiliation(s)
- Tamaya Van Criekinge
- Department of Rehabilitation Sciences and Physiotherapy, REVAKI/MOVANT, Universiteit Antwerpen, Antwerp, Belgium
| | - Kristiaan D'Août
- Department of Musculoskeletal Biology, University of Liverpool, Liverpool, United Kingdom
| | - Jonathon O'Brien
- School of Health Sciences, University of Liverpool, Liverpool, United Kingdom
| | - Eduardo Coutinho
- Applied Music Research Lab, Department of Music, University of Liverpool, Liverpool, United Kingdom
| |
Collapse
|