1
Alí Diez I, Fàbrega-Camps G, Parra-Tíjaro J, Marco-Pallarés J. Anticipatory and consummatory neural correlates of monetary and music rewarding stimuli. Brain Cogn 2024; 179:106186. PMID: 38843763. DOI: 10.1016/j.bandc.2024.106186.
Abstract
Most of the literature on the neural bases of human reward and punishment processing has used monetary gains and losses, but less is known about the neurophysiological mechanisms underlying the anticipation and consumption of other types of rewarding stimuli. In the present study, EEG was recorded from 19 participants who completed a modified version of the Monetary Incentive Delay (MID) task. During the task, cues providing information about potential future outcomes were presented to the participants. They then had to respond rapidly to a target stimulus to win money or listen to pleasant music, or to avoid losing money or listening to unpleasant music. Results revealed similar responses for monetary and music cues, with increased activity for cues indicating potential gains compared to losses. However, differences between money and music emerged in the outcome phase. Monetary outcomes showed an interaction between cue type and outcome in the Feedback-Related Negativity and Fb-P3 ERPs, and theta activity increased for negative feedback. In contrast, music outcomes showed significant interactions in the Fb-P3 and theta activities. These findings suggest similar neurophysiological mechanisms in processing cues for potential positive or negative outcomes across these two types of stimuli.
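The FRN and Fb-P3 effects described above are conventionally quantified as mean amplitudes of the feedback-locked average waveform inside fixed latency windows. The sketch below illustrates that generic step on simulated epochs; the sampling rate, window bounds, and deflection size are illustrative assumptions, not values from the study.

```python
import numpy as np

FS = 250                                  # sampling rate in Hz (assumed)
T = np.arange(-0.2, 0.8, 1 / FS)          # epoch time axis: -200..800 ms around feedback

def mean_amplitude(epochs, t, win):
    """Average epochs into an ERP, then take the mean voltage inside a latency window."""
    erp = epochs.mean(axis=0)             # grand average across trials
    mask = (t >= win[0]) & (t < win[1])
    return erp[mask].mean()

# Simulate 100 feedback-locked trials: a -5 µV deflection at 200-350 ms plus trial noise.
rng = np.random.default_rng(0)
true_wave = np.where((T >= 0.2) & (T < 0.35), -5.0, 0.0)
epochs = true_wave + rng.normal(0.0, 2.0, size=(100, T.size))

frn = mean_amplitude(epochs, T, (0.2, 0.35))   # hypothetical FRN window
```

Averaging over trials and then over the window suppresses the single-trial noise, which is why the recovered mean amplitude sits close to the simulated -5 µV deflection.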
Affiliation(s)
- Italo Alí Diez
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Spain; Bellvitge Biomedical Research Institute (IDIBELL), Spain; Department of Psychology, University of La Frontera, Chile
- Gemma Fàbrega-Camps
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Spain; Bellvitge Biomedical Research Institute (IDIBELL), Spain
- Jeison Parra-Tíjaro
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Spain; Bellvitge Biomedical Research Institute (IDIBELL), Spain
- Josep Marco-Pallarés
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Spain; Bellvitge Biomedical Research Institute (IDIBELL), Spain.
2
Earl EH, Goyal M, Mishra S, Kannan B, Mishra A, Chowdhury N, Mishra P. EEG based functional connectivity in resting and emotional states may identify major depressive disorder using machine learning. Clin Neurophysiol 2024; 164:130-137. PMID: 38870669. DOI: 10.1016/j.clinph.2024.05.017.
Abstract
OBJECTIVE Disrupted brain network connectivity underlies major depressive disorder (MDD). Altered EEG-based functional connectivity (FC) during emotional stimulation, in addition to resting-state FC, may help improve the diagnostic accuracy of machine learning classification models. We explored the potential of EEG-based FC during resting state and emotional processing for diagnosing MDD using a machine learning approach. METHODS EEG was recorded during resting state and while watching emotionally contagious happy and sad videos in 24 drug-naïve MDD patients and 25 healthy controls. FC was quantified using the Phase Lag Index. Three Random Forest classifier models were constructed to classify MDD patients and healthy controls: Model-I incorporated FC features from the resting state, and Model-II and Model-III incorporated FC features recorded while watching happy and sad videos, respectively. RESULTS The features distinguishing MDD patients from healthy controls came from all frequency bands and represented functional connectivity between fronto-temporal, fronto-parietal and fronto-occipital regions. The cross-validation accuracies for Model-I, Model-II and Model-III were 92.3%, 94.9% and 89.7%, and the test accuracies were 60%, 80% and 70%, respectively. Incorporating emotionally contagious videos improved the classification accuracies. CONCLUSION The findings support that EEG FC patterns during resting state and emotional processing, combined with machine learning, can be used to diagnose MDD. Future research should focus on replicating and validating these results. SIGNIFICANCE EEG FC patterns combined with machine learning may assist in diagnosing MDD.
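The Phase Lag Index (PLI) used here to quantify FC measures how consistently one channel's instantaneous phase leads or lags another's (0 = no consistent lag, 1 = perfectly consistent lag). A minimal numpy-only sketch of the metric, with a hand-rolled analytic-signal helper standing in for scipy.signal.hilbert; the synthetic signals are illustrative, not the study's data:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def pli(x, y):
    """Phase Lag Index: consistency of the sign of the instantaneous phase difference."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return np.abs(np.mean(np.sign(np.sin(dphi))))

t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)               # 10 Hz "alpha" channel
y = np.sin(2 * np.pi * 10 * t - np.pi / 4)   # same rhythm with a consistent phase lag
```

Because phase differences of exactly zero contribute nothing to the index, zero-lag (volume-conduction) coupling is suppressed, which is the usual motivation for choosing PLI over coherence.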
Affiliation(s)
- Estelle Havilla Earl
- Department of Physiology, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Manish Goyal
- Department of Physiology, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Shree Mishra
- Department of Psychiatry, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Balakrishnan Kannan
- Department of Physiology, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Anushree Mishra
- Department of Psychiatry, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
- Nilotpal Chowdhury
- Department of Pathology, All India Institute of Medical Sciences, Rishikesh, Uttarakhand, India
- Priyadarshini Mishra
- Department of Physiology, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India.
3
Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179. DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research showed that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how the adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the musical piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for the audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta-band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. The results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, which furthers our understanding of the perception and cognition of musical structure.
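The dependent variable above, the theta envelope, is commonly obtained by band-limiting the signal to roughly 4-8 Hz and taking the magnitude of its analytic signal. A self-contained numpy sketch using a crude FFT brick-wall filter (real MEG pipelines use proper FIR filtering and source reconstruction first; the test signal is synthetic):

```python
import numpy as np

def band_envelope(x, fs, lo, hi):
    """Band-limit x to [lo, hi] Hz, then return the analytic-signal (Hilbert) envelope."""
    n = x.size
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1 / fs)
    X[(np.abs(f) < lo) | (np.abs(f) > hi)] = 0   # crude brick-wall band-pass
    h = np.zeros(n)                               # analytic-signal weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))             # envelope = magnitude of analytic signal

fs = 200
t = np.arange(0, 2, 1 / fs)
# 6 Hz "theta" component of amplitude 1 buried under a 40 Hz distractor
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
env = band_envelope(x, fs, 4.0, 8.0)
```

The recovered envelope sits near 1 throughout, the amplitude of the embedded theta component, while the 40 Hz distractor is filtered away.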
Affiliation(s)
- Steffen A Herff
- Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
4
Roh H, Kim W, Hwang SY, Lee MS, Kim JH. Altered pattern of theta and gamma oscillation to visual stimuli in patients with postconcussion syndrome. J Neurophysiol 2024; 131:1240-1249. PMID: 38691013. DOI: 10.1152/jn.00253.2023.
Abstract
Although many patients with mild traumatic brain injury (mTBI) suffer from postconcussional syndrome (PCS), including abnormal emotional responses, most conventional imaging studies fail to detect any causative brain lesion. We hypothesized that event-related electroencephalography (EEG) recordings with time-frequency analysis would show a distinguishable pattern in patients with mTBI with PCS compared with normal healthy controls. EEG signals were collected from a total of 18 subjects: eight patients with mTBI with PCS and 10 healthy control subjects. The signals were recorded while the subjects were presented with affective visual stimuli, including neutral, pleasant, and unpleasant emotional cues. Event-related spectral perturbation analysis was performed to calculate frontal midline theta activity and posterior midline gamma activity, followed by statistical analysis to identify whether patients with mTBI with PCS have distinct patterns of theta or gamma oscillations in response to affective stimuli. Compared with the healthy control group, patients with mTBI with PCS did not show a significant increase in the power of frontal theta activity in response to the pleasant stimuli, indicating reduced sensitivity to pleasant cues. Moreover, the patient group showed attenuated gamma oscillatory activity, with no clear alteration in gamma oscillations in response to either pleasant or unpleasant cues. This study demonstrates that patients with mTBI with PCS exhibited altered patterns of oscillatory activities in the theta and gamma bands in response to affective visual stimuli compared with the normal control group. The current findings suggest that these distinguishable patterns of brain oscillation may represent the mechanism behind various psychiatric symptoms in patients with mTBI. NEW & NOTEWORTHY Patients with mild traumatic brain injury (mTBI) with postconcussional syndrome (PCS) exhibited altered patterns of oscillatory activities in the theta and gamma bands in response to visual affective stimuli. These distinguishable patterns of brain oscillation may represent the mechanism behind various psychiatric symptoms in patients with mTBI.
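Event-related spectral perturbation, the analysis named above, expresses time-frequency power as change relative to a pre-stimulus baseline, typically in dB. A minimal sketch of that normalization step on a precomputed power array; the wavelet or multitaper decomposition that produces the array is omitted, and the baseline window is an assumption:

```python
import numpy as np

def ersp_db(power, times, baseline=(-0.3, 0.0)):
    """power: (n_freqs, n_times) spectral power; return dB change versus baseline."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[:, mask].mean(axis=1, keepdims=True)  # mean baseline power per frequency
    return 10 * np.log10(power / base)

times = np.linspace(-0.3, 0.7, 100)
power = np.ones((5, 100))
power[2, times >= 0.2] = 2.0   # one frequency doubles its power post-stimulus
ersp = ersp_db(power, times)
```

A frequency that doubles its power appears as a +3 dB perturbation, while frequencies that stay at baseline read 0 dB; this per-frequency normalization is what makes power changes comparable across bands.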
Affiliation(s)
- Haewon Roh
- The Department of Neurosurgery, Guro Hospital, Korea University of Medicine, Seoul, Korea
- Won Kim
- The Department of Neurosurgery, Guro Hospital, Korea University of Medicine, Seoul, Korea
- Soon-Young Hwang
- The Department of Biostatistics, Korea University of Medicine, Seoul, Korea
- Moon Soo Lee
- The Department of Psychiatry, Guro Hospital, Korea University of Medicine, Seoul, Korea
- Jong Hyun Kim
- The Department of Neurosurgery, Guro Hospital, Korea University of Medicine, Seoul, Korea
5
Spaccavento S, Carraturo G, Brattico E, Matarrelli B, Rivolta D, Montenegro F, Picciola E, Haumann NT, Jespersen KV, Vuust P, Losavio E. Musical and electrical stimulation as intervention in disorder of consciousness (DOC) patients: A randomised cross-over trial. PLoS One 2024; 19:e0304642. PMID: 38820520. PMCID: PMC11142721. DOI: 10.1371/journal.pone.0304642.
Abstract
BACKGROUND Disorders of consciousness (DOC), i.e., unresponsive wakefulness syndrome (UWS) or vegetative state (VS) and minimally conscious state (MCS), are conditions that can arise from severe brain injury, inducing widespread functional changes. Given the damaging implications of these conditions, there is an increasing need for rehabilitation treatments aimed at enhancing the level of consciousness and the quality of life, and at creating new recovery perspectives for the patients. Music may represent an additional rehabilitative tool in contexts where cognition and language are severely compromised, such as among DOC patients. A further class of rehabilitation strategies for DOC patients consists of non-invasive brain stimulation (NIBS) techniques, including transcranial electrical stimulation (tES), which affect neural excitability and promote brain plasticity. OBJECTIVE We here propose a novel rehabilitation protocol for DOC patients that combines a music-based intervention and NIBS. The main objectives are (i) to assess the residual neuroplastic processes in DOC patients exposed to music, (ii) to determine the putative neural modulation and the clinical outcome in DOC patients of non-pharmacological strategies, i.e., tES (control condition) and music stimulation, and (iii) to evaluate the putative positive impact of this intervention on caregivers' burden and psychological distress. METHODS This is a randomised cross-over trial in which a total of 30 participants will be randomly allocated to one of three different combinations of conditions: (i) Music only, (ii) tES only (control condition), (iii) Music + tES. The music intervention will consist of listening to an individually tailored playlist including familiar and self-relevant music together with fixed songs; concerning NIBS, tES will be applied for 20 minutes every day, 5 times a week, for two weeks. These stimulations will be followed by two weeks of placebo treatment, with sham stimulation combined with noise. The primary outcomes will be clinical, i.e., based on the differences in the scores obtained on neuropsychological tests, such as the Coma Recovery Scale-Revised, and on neurophysiological measures such as EEG, collected pre-intervention, post-intervention and post-placebo. DISCUSSION This study proposes a novel rehabilitation protocol for patients with DOC comprising a combined intervention of music and NIBS. Considering the need for rigorous longitudinal randomised controlled trials for people with severe brain injury, the results of this study will be highly informative for highlighting and implementing the putative beneficial role of music and NIBS in rehabilitation treatments. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT05706831, registered on January 30, 2023.
Affiliation(s)
- Simona Spaccavento
- Istituti Clinici Scientifici Maugeri IRCCS, Institute of Bari, Bari, Italy
- Giulio Carraturo
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Elvira Brattico
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Aarhus/Aalborg, Aarhus, Denmark
- Benedetta Matarrelli
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Davide Rivolta
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Fabiana Montenegro
- Istituti Clinici Scientifici Maugeri IRCCS, Institute of Bari, Bari, Italy
- Emilia Picciola
- Istituti Clinici Scientifici Maugeri IRCCS, Institute of Bari, Bari, Italy
- Niels Trusbak Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Aarhus/Aalborg, Aarhus, Denmark
- Kira Vibe Jespersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Aarhus/Aalborg, Aarhus, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Aarhus/Aalborg, Aarhus, Denmark
- Ernesto Losavio
- Istituti Clinici Scientifici Maugeri IRCCS, Institute of Bari, Bari, Italy
6
Wang D, Lian J, Cheng H, Zhou Y. Music-evoked emotions classification using vision transformer in EEG signals. Front Psychol 2024; 15:1275142. PMID: 38638516. PMCID: PMC11024288. DOI: 10.3389/fpsyg.2024.1275142.
Abstract
Introduction The field of electroencephalogram (EEG)-based emotion identification has received significant attention and has been widely utilized in both human-computer interaction and therapeutic settings. Manually analyzing electroencephalogram signals is time-consuming and labor-intensive. While machine learning methods have shown promising results in classifying emotions based on EEG data, extracting distinct characteristics from these signals remains a considerable challenge. Methods In this study, we propose a deep learning model that incorporates an attention mechanism to effectively extract spatial and temporal information from emotion EEG recordings, addressing this gap in the field. Emotion EEG classification is implemented using a global average pooling layer and a fully connected layer, which are employed to leverage the discernible characteristics. To assess the effectiveness of the proposed methodology, we first gathered a dataset of EEG recordings related to music-induced emotions. Experiments Subsequently, we ran comparative tests between state-of-the-art algorithms and the method presented in this study, utilizing this proprietary dataset. Furthermore, a publicly accessible dataset was included in the subsequent comparative trials. Discussion The experimental findings provide evidence that the proposed methodology outperforms existing approaches in the categorization of emotion EEG signals, in both binary (positive and negative) and ternary (positive, negative, and neutral) scenarios.
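At the core of attention-based models such as the one proposed here is scaled dot-product attention: a softmax-weighted mixture of value vectors. A numpy sketch of a single self-attention head over EEG feature tokens; the token count and feature dimension are illustrative, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))   # (n_tokens, n_tokens) mixing matrix
    return weights @ v, weights

rng = np.random.default_rng(1)
tokens = rng.normal(size=(8, 16))             # e.g., 8 EEG segments x 16 features
out, w = attention(tokens, tokens, tokens)    # self-attention: Q = K = V = tokens
```

Each row of the weight matrix is a probability distribution over tokens, so every output token is a convex combination of the inputs; a full transformer would add learned Q/K/V projections, multiple heads, and residual connections around this core.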
Affiliation(s)
- Dong Wang
- School of Information Science and Electrical Engineering, Shandong Jiaotong University, Jinan, China
- School of Intelligence Engineering, Shandong Management University, Jinan, China
- Jian Lian
- School of Intelligence Engineering, Shandong Management University, Jinan, China
- Hebin Cheng
- School of Intelligence Engineering, Shandong Management University, Jinan, China
- Yanan Zhou
- School of Arts, Beijing Foreign Studies University, Beijing, China
7
Song L, Zhang G, Wang X, Ma L, Silvennoinen J, Cong F. Does artistic training affect color perception? A study of ERPs and EROs in experiencing colors of different brightness. Biol Psychol 2024; 188:108787. PMID: 38552832. DOI: 10.1016/j.biopsycho.2024.108787.
Abstract
Color is a visual cue that can convey emotions and attract attention, and there is no doubt that brightness is an important element of color differentiation. To examine the impact of art training on color perception, 44 participants were assigned to two groups, one with and one without art training, in an EEG experiment. While their electroencephalographic data were recorded, the participants scored their emotional responses to color stimuli of different brightness levels based on the Munsell color system. The behavioral results revealed that in both groups, high-brightness colors were rated more positively than low-brightness colors. Furthermore, event-related potential results for the artist group showed that high-brightness colors enhanced P2 and P3 amplitudes. Moreover, non-artists had longer N2 latency than artists, and there was a significant Group × Brightness interaction separately for the N2 and P3 components. Simple-effect analysis showed that N2 and P3 amplitudes were substantially higher for high-brightness stimuli than for low-brightness stimuli in the artist group, but not in the non-artist group. Additionally, evoked event-related oscillation results showed that in both groups, high-brightness stimuli also elicited large delta, theta, and alpha as well as low-gamma responses. These results indicate that high-brightness color stimuli elicit more positive emotions and stronger neurological reactions, and that artistic training may have a positive effect on top-down visual perception.
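N2 latency, one of the measures that differed between groups, is typically read off as the time of the most negative point of the averaged waveform within a search window. A small sketch on a simulated ERP; the 200-350 ms window and the peak parameters are assumptions for illustration, not the study's values:

```python
import numpy as np

def peak_latency(erp, t, win, polarity=-1):
    """Latency of the largest peak of the given polarity within win (seconds)."""
    mask = (t >= win[0]) & (t < win[1])
    idx = np.argmax(polarity * erp[mask])   # most negative sample when polarity = -1
    return t[mask][idx]

fs = 500
t = np.arange(-0.1, 0.6, 1 / fs)
# Negative Gaussian deflection peaking at 260 ms, mimicking an N2 component
erp = -4.0 * np.exp(-((t - 0.26) ** 2) / (2 * 0.02 ** 2))

n2_lat = peak_latency(erp, t, (0.2, 0.35))   # assumed N2 search window
```

Restricting the search to a window keeps unrelated deflections (e.g., a later P3) from being mistaken for the component of interest.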
Affiliation(s)
- Liting Song
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, Liaoning Province, China; Faculty of Information Technology, University of Jyväskylä, Jyväskylä 40014, Finland
- Guanghui Zhang
- Center for Mind and Brain, University of California-Davis, Davis 95618, CA, USA.
- Xiaoshuang Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, Liaoning Province, China
- Lan Ma
- Department of Industrial Design, School of Architecture and Fine Art, Dalian University of Technology, Dalian 116014, Liaoning Province, China
- Johanna Silvennoinen
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä 40014, Finland
- Fengyu Cong
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, Liaoning Province, China; Faculty of Information Technology, University of Jyväskylä, Jyväskylä 40014, Finland; School of Artificial Intelligence, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116014, Liaoning Province, China; Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian 116024, Liaoning Province, China
8
Ren Y, Brown TI. Beyond the ears: A review exploring the interconnected brain behind the hierarchical memory of music. Psychon Bull Rev 2024; 31:507-530. PMID: 37723336. DOI: 10.3758/s13423-023-02376-1.
Abstract
Music is a ubiquitous element of daily life. Understanding how music memory is represented and expressed in the brain is key to understanding how music can influence human daily cognitive tasks. The current music-memory literature is built on data from very heterogeneous tasks for measuring memory, and the neural correlates appear to differ depending on the form of memory function targeted. Such heterogeneity leaves many exceptions and conflicts in the data underexplained (e.g., hippocampal involvement in music memory is debated). This review provides an overview of existing neuroimaging results from music-memory-related studies and concludes that although music is a special class of event in our lives, the memory systems behind it do in fact share neural mechanisms with memories from other modalities. We suggest that dividing music memory into different levels of a hierarchy (structural level and semantic level) helps explain the overlap and divergence in the neural networks involved. This is grounded in the fact that memorizing a piece of music recruits brain clusters that separately support functions including, but not limited to, syntax storage and retrieval, temporal processing, prediction-versus-reality comparison, stimulus feature integration, personal memory associations, and emotion perception. The cross-talk between frontal-parietal music structural processing centers and the subcortical emotion and context encoding areas explains why music is not only so easily memorable but can also serve as strong contextual information for encoding and retrieving nonmusic information in our lives.
Affiliation(s)
- Yiren Ren
- Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA.
- Thackery I Brown
- Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA
9
Strauss H, Vigl J, Jacobsen PO, Bayer M, Talamini F, Vigl W, Zangerle E, Zentner M. The Emotion-to-Music Mapping Atlas (EMMA): A systematically organized online database of emotionally evocative music excerpts. Behav Res Methods 2024; 56:3560-3577. PMID: 38286947. PMCID: PMC11133078. DOI: 10.3758/s13428-024-02336-0.
Abstract
Selecting appropriate musical stimuli to induce specific emotions represents a recurring challenge in music and emotion research. Most existing stimuli have been categorized according to taxonomies derived from general emotion models (e.g., basic emotions, affective circumplex), have been rated for perceived emotions, and are rarely defined in terms of interrater agreement. To redress these limitations, we present research that served in the development of a new interactive online database, including an initial set of 364 music excerpts from three different genres (classical, pop, and hip-hop) that were rated for felt emotion using the Geneva Emotion Music Scale (GEMS), a music-specific emotion scale. The sample comprised 517 English- and German-speaking participants, and each excerpt was rated by an average of 28.76 participants (SD = 7.99). Data analyses focused on research questions that are of particular relevance for musical database development, notably the number of raters required to obtain stable estimates of the emotional effects of music and the adequacy of the GEMS as a tool for describing music-evoked emotions across three prominent music genres. Overall, our findings suggest that 10-20 raters are sufficient to obtain stable estimates of the emotional effects of music excerpts in most cases, and that the GEMS shows promise as a valid and comprehensive annotation tool for music databases.
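The rater-count question can be probed by resampling: draw k raters with replacement, average their ratings, and track how far the subsample mean strays from the full-sample mean as k grows. A toy sketch of that bootstrap logic on simulated ratings (not the authors' exact analysis; the rating distribution is invented):

```python
import numpy as np

def mean_abs_error(ratings, k, n_boot=2000, seed=0):
    """Average |mean of k resampled raters - full-sample mean| over bootstrap draws."""
    rng = np.random.default_rng(seed)
    full = ratings.mean()
    draws = rng.choice(ratings, size=(n_boot, k), replace=True)
    return np.abs(draws.mean(axis=1) - full).mean()

rng = np.random.default_rng(42)
ratings = rng.normal(3.5, 1.0, size=200)       # one excerpt, 200 simulated ratings
err_5 = mean_abs_error(ratings, 5, seed=1)     # few raters: noisy estimate
err_20 = mean_abs_error(ratings, 20, seed=2)   # more raters: estimate stabilizes
```

The error of the subsample mean shrinks roughly with the square root of the number of raters, which is why estimates tend to stabilize somewhere in the 10-20 rater range rather than requiring the full pool.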
Affiliation(s)
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Julia Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Peer-Ole Jacobsen
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Martin Bayer
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Francesca Talamini
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Wolfgang Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Eva Zangerle
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
10
Ma X, Qi Y, Xu C, Weng Y, Yu J, Sun X, Yu Y, Wu Y, Gao J, Li J, Shu Y, Duan S, Luo B, Pan G. How well do neural signatures of resting-state EEG detect consciousness? A large-scale clinical study. Hum Brain Mapp 2024; 45:e26586. PMID: 38433651. PMCID: PMC10910334. DOI: 10.1002/hbm.26586.
Abstract
The assessment of consciousness states, especially distinguishing minimally conscious states (MCS) from unresponsive wakefulness states (UWS), plays a pivotal role in clinical therapies. Although numerous neural signatures of consciousness have been proposed, their effectiveness and reliability for clinical consciousness assessment remain intensely debated. A comprehensive review of the literature reveals inconsistent findings about the effectiveness of diverse neural signatures. Notably, the majority of existing studies have evaluated neural signatures on a limited number of subjects (usually below 30), which may result in uncertain conclusions due to small-data bias. This study presents a systematic evaluation of neural signatures with large-scale clinical resting-state electroencephalography (EEG) signals containing 99 UWS, 129 MCS, 36 emergence from the minimally conscious state, and 32 healthy subjects (296 total) collected over 3 years. A total of 380 EEG-based metrics for consciousness detection, including spectrum features, nonlinear measures, functional connectivity, and graph-based measures, are summarized and evaluated. To further mitigate the effect of data bias, the evaluation is performed with bootstrap sampling so that reliable measures can be obtained. The results of this study suggest that relative power in alpha and delta serves as a dependable indicator of consciousness. In the MCS group, phase lag index-related connectivity measures are notably increased and functional connectivity between brain regions is enhanced in comparison to the UWS group. A combination of features enables the development of an automatic detector of conscious states.
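Relative band power, the signature the study finds most dependable, is simply the fraction of spectral power falling inside a band. A numpy periodogram sketch over the canonical bands (clinical pipelines would use Welch averaging over artifact-free segments, and band edges vary across labs; the test signal is synthetic):

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(x, fs):
    """Fraction of 1-30 Hz periodogram power in each canonical EEG band."""
    f = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2          # raw periodogram (unnormalized)
    keep = (f >= 1) & (f < 30)
    total = psd[keep].sum()
    return {name: psd[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

fs = 256
t = np.arange(0, 4, 1 / fs)
# Alpha-dominant toy "EEG": strong 10 Hz rhythm plus a weak 2 Hz delta component
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 2 * t)
rel = relative_band_power(eeg, fs)
```

Because the normalization is per-recording, relative power is robust to overall amplitude differences between patients, one reason it generalizes better across a heterogeneous clinical cohort than absolute power.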
Affiliation(s)
- Xiulin Ma
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- NHC and CAMS Key Laboratory of Medical Neurobiology, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, and the Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University, Hangzhou, China
- Yu Qi
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, and the Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Chuan Xu
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Sir Run Run Shaw Hospital, Hangzhou, China
- Yijie Weng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Jie Yu
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xuyun Sun
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yamei Yu
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Sir Run Run Shaw Hospital, Hangzhou, China
- Yuehao Wu
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Jian Gao
- Department of Rehabilitation, Hangzhou Mingzhou Brain Rehabilitation Hospital, Hangzhou, China
- Jingqi Li
- Department of Rehabilitation, Hangzhou Mingzhou Brain Rehabilitation Hospital, Hangzhou, China
- Yousheng Shu
- Department of Neurosurgery, Jinshan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, Institute for Translational Brain Research, Fudan University, Shanghai, China
- Shumin Duan
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- NHC and CAMS Key Laboratory of Medical Neurobiology, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, and the Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University, Hangzhou, China
| | - Benyan Luo
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, and the Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
| | - Gang Pan
- Department of Neurobiology and Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, and the Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| |
Collapse
|
11
|
Lin D, Zhu T, Wang Y. Emotion contagion and physiological synchrony: The more intimate relationships, the more contagion of positive emotions. Physiol Behav 2024; 275:114434. [PMID: 38092069] [DOI: 10.1016/j.physbeh.2023.114434]
Abstract
The study aimed to explore how interpersonal closeness (friends vs. strangers) and emotion type (positive vs. negative) influenced emotion contagion and physiological synchrony between interacting partners. Twenty-eight friend dyads (n = 56) and 29 stranger dyads (n = 58) participated in an emotion contagion laboratory task. In each dyad, one participant, the 'sender', was randomly asked to watch a film clip (neutral, positive, or negative), while their partner, the 'observer', passively observed the sender's facial expressions. Participants' electrocardiogram (ECG) and facial electromyography (EMG) signals were recorded using the BIOPAC system. Results revealed that observing the sender's facial expressions led to the observer's spontaneous mimicry and emotional contagion, accompanied by enhanced physiological synchrony between interacting partners. In the positive emotion condition, the observers reported more positive emotions and displayed stronger zygomaticus major activity in friend dyads than in stranger dyads. Greater physiological synchrony (heart rate and heart rate variability) between interacting partners was also observed in friend dyads than in stranger dyads in the positive emotion condition. These results indicate that, compared with negative emotion contagion, positive emotion contagion is more likely to occur between close partners.
Affiliation(s)
- Daichun Lin
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Tongtong Zhu
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yanmei Wang
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China.

12
Hassan A, Deshun Z. Nature's therapeutic power: a study on the psychophysiological effects of touching ornamental grass in Chinese women. J Health Popul Nutr 2024; 43:23. [PMID: 38310320] [PMCID: PMC10838459] [DOI: 10.1186/s41043-024-00514-6]
Abstract
The health of city residents is at risk due to the high rate of urbanization and the extensive use of electronics. In the context of urbanization, individuals have become increasingly disconnected from nature, resulting in elevated stress levels among adults. The goal of this study was to investigate the physical and psychological benefits of spending time in nature. The effects of touching real grass or artificial turf (the control activity) outdoors with the palm of the hand for five minutes were measured. Blood pressure, electroencephalography (EEG), State-Trait Anxiety Inventory (STAI) scores, and the semantic differential method (SDM) were used to investigate psychophysiological responses. Touching real grass was associated with significant changes in brainwave rhythms and a reduction in both systolic and diastolic blood pressure compared to touching artificial turf. In addition, SDM scores revealed that touching real grass increased relaxation, comfort, and a sense of naturalness while decreasing anxiety levels. Compared to the control group, the experimental group had higher mean scores for both meditation and attentiveness. Our findings indicate that contact with real grass may reduce physiological and psychological stress in adults.
Affiliation(s)
- Ahmad Hassan
- College of Architecture and Urban Planning, Tongji University, 1239 Siping Rd, Shanghai, People's Republic of China.
- Zhang Deshun
- College of Architecture and Urban Planning, Tongji University, 1239 Siping Rd, Shanghai, People's Republic of China.

13
Martínez-Saez MC, Ros L, López-Cano M, Nieto M, Navarro B, Latorre JM. Effect of popular songs from the reminiscence bump as autobiographical memory cues in aging: a preliminary study using EEG. Front Neurosci 2024; 17:1300751. [PMID: 38264494] [PMCID: PMC10803499] [DOI: 10.3389/fnins.2023.1300751]
Abstract
Introduction Music has the capacity to evoke emotions and memories. This capacity is influenced by whether or not the music is from the reminiscence bump (RB) period. However, research on the neural correlates of the processes of evoking autobiographical memories through songs is scant. The aim of this study was to analyze the differences in frequency band activation in two situations: (1) whether or not the song is able to generate a memory; and (2) whether or not the song is from the RB period. Methods A total of 35 older adults (22 women, age range: 61-73 years) listened to 10 thirty-second musical clips that coincided with the period of their RB and 10 from the immediately subsequent 5 years (non-RB). To record the EEG signal, a brain-computer interface (BCI) with 14 channels was used. The signal was recorded during the 30 seconds of listening to each music clip. Results The results showed differences in the activation levels of the frequency bands in the frontal and temporal regions. It was also found that the non-retrieval of a memory in response to a song clip was associated with greater activation of low-frequency waves in the frontal region, compared to the trials that did generate a memory. Discussion These results suggest the importance of analyzing not only brain activation, but also neuronal functional connectivity, at older ages, in order to better understand cognitive and emotional functions in aging.
Affiliation(s)
- Maria Cruz Martínez-Saez
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Laura Ros
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain
- Marco López-Cano
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Marta Nieto
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain
- Beatriz Navarro
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain
- Jose Miguel Latorre
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain

14
Yano H, Takiguchi T, Nakagawa S. Magnetic cortical oscillations associated with subjective auditory coolness during paired comparison of time-varying HVAC sounds. Neuroreport 2024; 35:1-8. [PMID: 37942702] [DOI: 10.1097/wnr.0000000000001969]
Abstract
The impressions of heating, ventilation, and air conditioning (HVAC) sounds are important for the comfort people experience in their living spaces. Revealing the neural substrates of the impressions induced by HVAC sounds can help to develop neurophysiological indices of the comfort of HVAC sounds. There have been numerous studies on the brain activities associated with the pleasantness of sounds, but few on the brain activities associated with the thermal impressions of HVAC sounds. Seven time-varying HVAC sounds were synthesized as stimuli using amplitude modulation. Six participants took part in subjective evaluation tests and magnetoencephalography (MEG) measurements. The subjective coolness of the HVAC sounds was measured using the paired comparison method. MEG measurements were carried out while participants listened to and compared the time-varying HVAC sounds. Time-frequency analysis and cluster-based analysis were performed on the MEG data. The subjective evaluation tests showed that the subjective coolness of the amplitude-modulated HVAC sounds was affected by the modulation frequency, and that there were individual differences in subjective coolness. A cluster-based analysis of the MEG data revealed that the brain activities of two participants significantly differed when they listened to cooler versus less cool HVAC sounds. Frontal low-theta (4-5 Hz) and temporal alpha (8-13 Hz) activities were observed, and these activities may be associated with the coolness of HVAC sound. This result suggests that the comfort level of HVAC sound can be evaluated and individually designed using neurophysiological measurements.
Affiliation(s)
- Hajime Yano
- Graduate School of System Informatics, Kobe University, Kobe
- Biomedical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Ikeda
- Seiji Nakagawa
- Biomedical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Ikeda
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan

15
Lin K, Zhang L, Cai J, Sun J, Cui W, Liu G. DSE-Mixer: A pure multilayer perceptron network for emotion recognition from EEG feature maps. J Neurosci Methods 2024; 401:110008. [PMID: 37967671] [DOI: 10.1016/j.jneumeth.2023.110008]
Abstract
BACKGROUND Decoding emotions from brain maps is a challenging task. Convolutional neural networks (CNNs) are commonly used for EEG feature maps. However, due to its local bias, a CNN cannot efficiently utilize the global spatial information of EEG signals, which limits the accuracy of emotion recognition. NEW METHODS We design the Dual-scale EEG-Mixer (DSE-Mixer) model for EEG feature map processing. Its brain-region mixer layer and electrode mixer layer are designed to fuse EEG information at different spatial scales. For each mixer layer, the structure of alternately mixing the rows and columns of the input table enables cross-regional and cross-channel communication of EEG information. In addition, a channel attention mechanism is introduced to adaptively learn the importance of each channel. RESULTS On the DEAP dataset, the DSE-Mixer model achieved a binary classification accuracy of 95.19% for arousal and 95.22% for valence. For the four-class classification across valence and arousal, the accuracies were HVHA: 92.12%, HVLA: 89.77%, LVLA: 93.35%, and LVHA: 92.63%. On the SEED dataset, the average recognition accuracy for the three emotions (positive, negative, and neutral) is 93.69%. COMPARISON WITH EXISTING METHODS In emotion recognition research based on the DEAP and SEED datasets, DSE-Mixer achieved a high ranking. Compared to two models commonly used in the computer vision field, the CNN and the Vision Transformer (ViT), DSE-Mixer achieved significantly higher classification accuracy while requiring much less computational complexity. CONCLUSIONS DSE-Mixer provides a novel brain-map processing model with a small size, demonstrating outstanding performance in emotion recognition.
Affiliation(s)
- Kai Lin
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.
- Linhang Zhang
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.
- Jing Cai
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.
- Jiaqi Sun
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.
- Wenjie Cui
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.
- Guangda Liu
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, 130000, Jilin, China.

16
Merrill J, Ackermann TI, Czepiel A. Effects of disliked music on psychophysiology. Sci Rep 2023; 13:20641. [PMID: 38001083] [PMCID: PMC10674009] [DOI: 10.1038/s41598-023-46963-7]
Abstract
While previous research has shown the positive effects of listening to one's favorite music, the negative effects of one's most disliked music have not received much attention. In the current study, participants listened to three self-selected disliked musical pieces which evoked highly unpleasant feelings. As a contrast, three musical pieces were individually selected for each participant based on neutral liking ratings they had provided for other participants' disliked music. During music listening, real-time ratings of subjective (dis)pleasure and simultaneous recordings of peripheral measures were obtained. Results showed that compared to neutral music, listening to disliked music evoked physiological reactions reflecting higher arousal (heart rate, skin conductance response, body temperature), disgust (levator labii muscle), anger (corrugator supercilii muscle), and distress and grimacing (zygomaticus major muscle). The differences between conditions were most prominent during "very unpleasant" real-time ratings, showing peak responses for the disliked music. Hence, disliked music has a straining effect, as shown in strong physiological arousal responses and facial expressions, reflecting the listener's attitude toward the music.
Affiliation(s)
- Julia Merrill
- Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322, Frankfurt am Main, Germany.
- Institute of Music, University of Kassel, Kassel, Germany.
- Taren-Ida Ackermann
- Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322, Frankfurt am Main, Germany
- Anna Czepiel
- Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322, Frankfurt am Main, Germany

17
Czepiel A, Fink LK, Seibert C, Scharinger M, Kotz SA. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition 2023; 239:105537. [PMID: 37487303] [DOI: 10.1016/j.cognition.2023.105537]
Abstract
Compared to audio-only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiments 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a heart rate variability measure that reflects activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures enables a meaningful assessment of AE in naturalistic music performance settings.
Affiliation(s)
- Anna Czepiel
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
- Lauren K Fink
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Christoph Seibert
- Institute for Music Informatics and Musicology, University of Music Karlsruhe, Karlsruhe, Germany
- Mathias Scharinger
- Research Group Phonetics, Department of German Linguistics, University of Marburg, Marburg, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

18
Xu J, Hu L, Qiao R, Hu Y, Tian Y. Music-emotion EEG coupling effects based on representational similarity. J Neurosci Methods 2023; 398:109959. [PMID: 37661055] [DOI: 10.1016/j.jneumeth.2023.109959]
Abstract
BACKGROUND Music can evoke intense emotions, and music emotion is a complex cognitive process. However, we know little about the cognitive mechanisms underlying these processes, and there are significant individual differences in the emotional responses to the same musical stimuli. NEW METHOD We used the inter-subject representational similarity analysis (IS-RSA) method to investigate the music emotion responses shared across multiple participants. In addition, we extended IS-RSA to estimate the group-level cross-frequency coupling effects of music emotion. Based on the cross-frequency coupling IS-RSA, we analyzed the differences in cross-frequency coupling patterns under different music emotions using the modulation index (MI). COMPARISON WITH EXISTING METHODS Most current IS-RSA analyses focus on within-frequency-band analysis. However, the cognitive processing of music emotion involves not only differences in activation and brain network connections within frequency bands but also information communication between frequency bands. RESULTS The results of the within-frequency-band IS-RSA analysis showed that the theta and gamma frequency bands play important roles in the inter-participant consistency of music emotion. The inter-frequency-band IS-RSA analysis showed that the theta-beta coupling pattern exhibited stronger inter-participant consistency than the theta-gamma coupling pattern, and the theta-beta coupling had significantly consistent representation across the various music conditions. Based on the regions with significant cross-frequency coupling representational similarity, we performed phase-amplitude coupling analysis on the FC4-C6 and FC4-Pz connections. For the theta-beta coupling pattern, we found that the MI of these two connections exhibited different coupling patterns under different music conditions and showed a significant decrease compared to the baseline period.
Affiliation(s)
- Jiayang Xu
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Liangliang Hu
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; West China Institute of Children's Brain and Cognition, Chongqing University of Education, Chongqing 400065, China
- Rui Qiao
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yilin Hu
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yin Tian
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; Chongqing Institute for Brain and Intelligence, Guangyang Bay Laboratory, Chongqing 400064, China.

19
Gueguen L, Henry S, Delbos M, Lemasson A, Hausberger M. Selected Acoustic Frequencies Have a Positive Impact on Behavioural and Physiological Welfare Indicators in Thoroughbred Racehorses. Animals (Basel) 2023; 13:2970. [PMID: 37760370] [PMCID: PMC10525862] [DOI: 10.3390/ani13182970]
Abstract
(1) Background: Since antiquity, sounds have been thought to influence human emotional states and health. Acoustic enrichment has also been proposed for domestic animals. However, in both humans and animals, effects vary according to the type of sound. Human studies suggest that frequencies, more than melodies, play a key role. Low and high frequencies, the music tuning frequency, and even the EEG slow waves used for 'neurofeedback' produce effects. (2) Methods: We tested the possible impact of such pure frequencies on racehorses' behavior and physiology. A commercial non-audible acoustic stimulus, composed of an array of the above-mentioned frequencies, was broadcast twice daily for three weeks to 12 thoroughbred horses in their home stalls. (3) Results: The results show a decrease in stereotypic behaviors and other indicators such as yawning or vacuum chewing, an increase in the time spent in recumbent resting and foraging, and better hematological measures during and after the playback phase for 4 of the 10 physiological parameters measured. (4) Conclusions: These results open new lines of research on possible ways of alleviating the stress related to housing and training conditions in racehorses and of improving physical recovery.
Affiliation(s)
- Léa Gueguen
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, 35000 Rennes, France
- UMR 8002 Integrative Neuroscience and Cognition Center, CNRS, Université Paris-Cité, 75006 Paris, France
- Séverine Henry
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, 35000 Rennes, France
- Maëlle Delbos
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, 35000 Rennes, France
- Alban Lemasson
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, 35000 Rennes, France
- Institut Universitaire de France, 75005 Paris, France
- Martine Hausberger
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie Animale et Humaine)—UMR 6552, 35000 Rennes, France
- UMR 8002 Integrative Neuroscience and Cognition Center, CNRS, Université Paris-Cité, 75006 Paris, France

20
Proverbio AM. Listening to dissonant and atonal music induces psychological tension and anxiety: Comment on "Consonance and dissonance perception. A critical review" by Di Stefano et al. Phys Life Rev 2023; 46:69-70. [PMID: 37285665] [DOI: 10.1016/j.plrev.2023.05.010]
Affiliation(s)
- Alice Mado Proverbio
- Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo, 1, 20162, Milan, Italy.
21
Huang Z, Ma Y, Su J, Shi H, Jia S, Yuan B, Li W, Geng J, Yang T. CDBA: a novel multi-branch feature fusion model for EEG-based emotion recognition. Front Physiol 2023; 14:1200656. [PMID: 37546532] [PMCID: PMC10399240] [DOI: 10.3389/fphys.2023.1200656]
Abstract
EEG-based emotion recognition through artificial intelligence is a major area of biomedical and machine learning research, playing a key role in understanding brain activity and developing decision-making systems. However, traditional EEG-based emotion recognition relies on a single feature input mode, which cannot capture multiple kinds of feature information and cannot meet the requirements of intelligent, highly real-time brain-computer interfaces. Moreover, because the EEG signal is nonlinear, traditional time-domain or frequency-domain methods alone are not suitable. In this paper, a CNN-DSC-Bi-LSTM-Attention (CDBA) model based on EEG signals for automatic emotion recognition is presented, which contains three feature-extraction channels. The normalized EEG signals are used as input; features are extracted by multiple branches and then concatenated, and each channel's feature weight is assigned through an attention-mechanism layer. Finally, Softmax is used to classify the EEG signals. To evaluate the performance of the proposed CDBA model, experiments were performed separately on the SEED and DREAMER datasets. The validation results show that the proposed CDBA model is effective in classifying EEG emotions. For three-category (positive, neutral and negative) and four-category (happiness, sadness, fear and neutrality) classification, the accuracies were 99.44% and 99.99%, respectively, on the SEED dataset. For five-category classification (Valence 1-Valence 5) on the DREAMER dataset, the accuracy was 84.49%. To further verify the model's accuracy and credibility, multi-classification experiments based on ten-fold cross-validation were conducted, with all evaluation indexes higher than those of other models. The results show that the multi-branch feature-fusion deep learning model based on an attention mechanism has strong fitting and generalization ability and can solve nonlinear modeling problems, making it an effective emotion recognition method. It is therefore potentially helpful for the diagnosis and treatment of nervous-system diseases and is expected to be applied in emotion-based brain-computer interface systems.
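The attention-weighted fusion stage this abstract describes (branch features concatenated after per-channel attention weighting) can be sketched compactly. The snippet below is an illustrative numpy reconstruction, not the authors' CDBA code; the branch count, feature dimension, and scalar scoring layer are assumptions for demonstration:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(branches, w, b=0.0):
    """Fuse per-branch feature vectors with attention weights.

    branches : (n_branches, d) feature vectors, e.g. from CNN, DSC and
               Bi-LSTM branches (hypothetical shapes).
    w, b     : parameters of a scoring layer mapping each branch vector
               to a scalar relevance score.
    Returns the concatenation of the attention-reweighted branches and
    the attention weights themselves.
    """
    scores = branches @ w + b            # (n_branches,) relevance scores
    alpha = softmax(scores)              # attention weights sum to 1
    weighted = alpha[:, None] * branches # reweight each branch
    return weighted.reshape(-1), alpha

rng = np.random.default_rng(0)
branches = rng.normal(size=(3, 8))       # three branches, 8-dim features each
w = rng.normal(size=8)
fused, alpha = attention_fuse(branches, w)
assert fused.shape == (24,) and np.isclose(alpha.sum(), 1.0)
```

In a real model the scoring parameters would be learned jointly with the branches and the fused vector passed to a Softmax classifier; here they are fixed random values purely to show the data flow.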
Affiliation(s)
- Zhentao Huang
- School of Electronic Information, Xijing University, Xi’an, China
- Yahong Ma
- School of Electronic Information, Xijing University, Xi’an, China
- Jianyun Su
- Department of Neurosurgery, Affiliated Children’s Hospital of Xi’an Jiaotong University, Xi’an, China
- Hangyu Shi
- Department of Neurosurgery, Affiliated Children’s Hospital of Xi’an Jiaotong University, Xi’an, China
- Shanshan Jia
- Department of Neurology, Affiliated Children’s Hospital of Xi’an Jiaotong University, Xi’an, China
- Baoxi Yuan
- School of Electronic Information, Xijing University, Xi’an, China
- Weisu Li
- School of Electronic Information, Xijing University, Xi’an, China
- Jingzhi Geng
- Graduate Student Institute of Xi’an Medical University, Xi’an, Shaanxi Province, China
- Tingting Yang
- Graduate Student Institute of Xi’an Medical University, Xi’an, Shaanxi Province, China

22
Zhou Y, Lian J. Identification of emotions evoked by music via spatial-temporal transformer in multi-channel EEG signals. Front Neurosci 2023; 17:1188696. [PMID: 37483354 PMCID: PMC10358766 DOI: 10.3389/fnins.2023.1188696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 06/20/2023] [Indexed: 07/25/2023] Open
Abstract
Introduction Emotion plays a vital role in understanding human activities and associations. Because EEG is non-invasive, many experts have employed EEG signals as a reliable technique for emotion recognition. Identifying emotions from multi-channel EEG signals is becoming a crucial task for diagnosing emotional disorders in neuroscience. One challenge in automated emotion recognition from EEG signals is extracting and selecting discriminative features that classify different emotions accurately. Methods In this study, we proposed a novel Transformer model for identifying emotions from multi-channel EEG signals. Note that we fed the raw EEG signal directly into the proposed Transformer, which aims to eliminate the issues caused by the local receptive fields of convolutional neural networks. The presented deep learning model consists of two separate channels that address the spatial and temporal information in the EEG signals, respectively. Results In the experiments, we first collected EEG recordings from 20 subjects while they listened to music. Experimental results of the proposed approach for binary emotion classification (positive and negative) and ternary emotion classification (positive, negative, and neutral) showed accuracies of 97.3% and 97.1%, respectively. We conducted comparison experiments on the same dataset using the proposed method and state-of-the-art techniques, and achieved a promising outcome relative to these approaches. Discussion Given its performance, the proposed approach can be a potentially valuable instrument for human-computer interface systems.
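The core operation of a Transformer channel over raw EEG is scaled dot-product self-attention applied to a token matrix. As a minimal numpy sketch (not the authors' two-stream model; the channel count, token dimension, and random projections are assumptions), the spatial stream can be viewed as attention over channel tokens:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (numpy sketch).

    X : (n_tokens, d) token matrix. For a spatial stream, each row is one
        EEG channel's raw time series projected to d dimensions; for a
        temporal stream, each row would be one time step instead.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])         # (n_tokens, n_tokens)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)           # row-wise softmax
    return A @ V                                   # attended token features

rng = np.random.default_rng(1)
tokens = rng.normal(size=(32, 16))   # e.g. 32 channels as spatial tokens
W = [rng.normal(size=(16, 16)) * 0.1 for _ in range(3)]
out = self_attention(tokens, *W)
assert out.shape == (32, 16)
```

Unlike a CNN's local receptive field, every output token here attends to every channel at once, which is the property the abstract invokes when motivating the Transformer over convolutional baselines.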
Affiliation(s)
- Yanan Zhou
- School of Arts, Beijing Foreign Studies University, Beijing, China
- Jian Lian
- School of Intelligence Engineering, Shandong Management University, Jinan, China

23
Zong J, Xiong X, Zhou J, Ji Y, Zhou D, Zhang Q. FCAN-XGBoost: A Novel Hybrid Model for EEG Emotion Recognition. SENSORS (BASEL, SWITZERLAND) 2023; 23:5680. [PMID: 37420845 DOI: 10.3390/s23125680] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 06/03/2023] [Accepted: 06/15/2023] [Indexed: 07/09/2023]
Abstract
In recent years, artificial intelligence (AI) technology has promoted the development of electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room for improvement in the accuracy of EEG emotion recognition. In this study, we propose a novel EEG emotion recognition algorithm called FCAN-XGBoost, which is a fusion of two algorithms, FCAN and XGBoost. The FCAN module is a feature attention network (FANet) that we have proposed for the first time, which processes the differential entropy (DE) and power spectral density (PSD) features extracted from the four frequency bands of the EEG signal and performs feature fusion and deep feature extraction. Finally, the deep features are fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify the four emotions. We evaluated the proposed method on the DEAP and DREAMER datasets and achieved a four-category emotion recognition accuracy of 95.26% and 94.05%, respectively. Additionally, our proposed method reduces the computational cost of EEG emotion recognition by at least 75.45% for computation time and 67.51% for memory occupation. The performance of FCAN-XGBoost outperforms the state-of-the-art four-category model and reduces computational costs without losing classification performance compared with other models.
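The differential entropy (DE) and power spectral density (PSD) features that the FCAN module consumes are standard EEG sub-band features. A minimal sketch of extracting them for one channel follows (sampling rate, band edges, and filter order are assumptions, and the downstream attention network and XGBoost classifier are omitted); under a Gaussian assumption, DE reduces to 0.5·ln(2πe·var):

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 128                        # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_de_psd(x, fs=FS):
    """Differential entropy and mean PSD per band for one channel."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        # band-pass filter, then DE of the (approximately Gaussian) band signal
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xb = filtfilt(b, a, x)
        de = 0.5 * np.log(2 * np.pi * np.e * np.var(xb))
        # mean Welch PSD over the same band
        f, pxx = welch(x, fs=fs, nperseg=fs)
        mask = (f >= lo) & (f < hi)
        feats[name] = (de, pxx[mask].mean())
    return feats

rng = np.random.default_rng(0)
feats = band_de_psd(rng.normal(size=4 * FS))   # 4 s of synthetic noise
assert set(feats) == set(BANDS)
```

In a pipeline like the one described, such per-channel, per-band pairs would be stacked across electrodes, fused, and passed to the gradient-boosted classifier.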
Affiliation(s)
- Jing Zong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Xin Xiong
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Jianhua Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Ying Ji
- Graduate School, Kunming Medical University, Kunming 650500, China
- Diao Zhou
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Qi Zhang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China

24
Vempati R, Sharma LD. EEG rhythm based emotion recognition using multivariate decomposition and ensemble machine learning classifier. J Neurosci Methods 2023; 393:109879. [PMID: 37182604 DOI: 10.1016/j.jneumeth.2023.109879] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/26/2023] [Accepted: 05/08/2023] [Indexed: 05/16/2023]
Abstract
Recently, electroencephalogram (EEG) signals have shown great potential for recognizing human emotions. The goal of affective computing is to help computers understand various types of emotions via human-computer interaction (HCI). Multichannel EEG signals measure the electrical activity of the brain in space and time. Automated emotion recognition using multichannel EEG signals is an interesting area of cognitive neuroscience and affective computing research. This research proposes EEG multichannel rhythmic features and ensemble machine learning (EML) classifiers with leave-one-subject-out cross-validation (LOSOCV) for automatic emotion classification from multichannel EEG recordings. Multivariate fast iterative filtering (MvFIF) is used to extract the EEG rhythm sequences. The EEG rhythms delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) are separated based on the mean frequency of each rhythm sequence. Three Hjorth parameters and nine entropy features were extracted from the multichannel EEG rhythms, and the extracted features were selected using the minimum redundancy maximum relevance (mRMR) approach. The experiments were performed on two emotional datasets (GAMEEMO and DREAMER). Validation showed that gamma-rhythm multichannel features with an EML-based subspace K-nearest neighbor (SS-KNN) classifier achieved high classification accuracies of 93.5%-99.8%. Comparisons of the δ, θ, α, β, and γ rhythms with EML, support vector machine (SVM), and artificial neural network (ANN) classifiers were performed. We also analyzed multi-class emotions (HVHA, HVLA, LVHA, LVLA) with an ensemble-based bagging tree on the gamma rhythm. The work provides a novel solution for multichannel rhythm-specific features in EEG data analysis.
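The mRMR selection step mentioned here has a simple greedy form: repeatedly pick the feature with the highest relevance to the label minus its redundancy with the features already chosen. Below is an illustrative sketch (not the study's implementation; it scores relevance with mutual information and redundancy with mean absolute correlation, a common but not unique choice):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance selection (sketch)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    chosen = [int(np.argmax(relevance))]          # most relevant feature first
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            # redundancy: mean |correlation| with already-chosen features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, c])[0, 1])
                           for c in chosen])
            score = relevance[j] - red
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 10))
X[:, 3] += y                       # make feature 3 label-informative
sel = mrmr(X, y, k=3)
assert 3 in sel and len(sel) == 3
```

The selected feature subset would then feed the ensemble classifier (e.g. subspace KNN) under leave-one-subject-out cross-validation.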
Affiliation(s)
- Lakhan Dev Sharma
- School of Electronics Engineering, VIT-AP University, Andhra Pradesh, 522237, India

25
Plate RC, Jones C, Zhao S, Flum MW, Steinberg J, Daley G, Corbett N, Neumann C, Waller R. "But not the music": psychopathic traits and difficulties recognising and resonating with the emotion in music. Cogn Emot 2023; 37:748-762. [PMID: 37104122 DOI: 10.1080/02699931.2023.2205105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 12/23/2022] [Accepted: 04/05/2023] [Indexed: 04/28/2023]
Abstract
Recognising and responding appropriately to emotions is critical to adaptive psychological functioning. Psychopathic traits (e.g. callous, manipulative, impulsive, antisocial) are related to differences in recognition and response when emotion is conveyed through facial expressions and language. Use of emotional music stimuli represents a promising approach to improve our understanding of the specific emotion processing difficulties underlying psychopathic traits because it decouples recognition of emotion from cues directly conveyed by other people (e.g. facial signals). In Experiment 1, participants listened to clips of emotional music and identified the emotional content (Sample 1, N = 196) or reported on their feelings elicited by the music (Sample 2, N = 197). Participants accurately recognised (t(195) = 32.78, p < .001, d = 4.69) and reported feelings consistent with (t(196) = 7.84, p < .001, d = 1.12) the emotion conveyed in the music. However, psychopathic traits were associated with reduced emotion recognition accuracy (F(1, 191) = 19.39, p < .001) and reduced likelihood of feeling the emotion (F(1, 193) = 35.45, p < .001), particularly for fearful music. In Experiment 2, we replicated findings for broad difficulties with emotion recognition (Sample 3, N = 179) and emotional resonance (Sample 4, N = 199) associated with psychopathic traits. Results offer new insight into emotion recognition and response difficulties that are associated with psychopathic traits.
Affiliation(s)
- R C Plate
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Jones
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- S Zhao
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- M W Flum
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- J Steinberg
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- G Daley
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- N Corbett
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Neumann
- Department of Psychology, University of North Texas, Denton, TX, USA
- R Waller
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA

26
Lee J, Han JH, Lee HJ. Development of Novel Musical Stimuli to Investigate the Perception of Musical Emotions in Individuals With Hearing Loss. J Korean Med Sci 2023; 38:e82. [PMID: 36974396 PMCID: PMC10042730 DOI: 10.3346/jkms.2023.38.e82] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 12/15/2022] [Indexed: 03/29/2023] Open
Abstract
BACKGROUND Many studies have examined the perception of musical emotion using excerpts from familiar music that express emotions strongly enough to classify emotional choices. However, using familiar music to study musical emotions in people with acquired hearing loss could produce ambiguous results, as it is unclear whether the perceived emotion derives from previous experience or from listening to the current musical stimuli. To overcome this limitation, we developed new musical stimuli for studying emotional perception without the effects of episodic memory. METHODS A musician was instructed to compose five melodies with pitches evenly distributed around 1 kHz. The melodies were created to express the emotions happy, sad, angry, tender, and neutral. To evaluate whether these melodies expressed the intended emotions, two methods were applied. First, we classified the expressed emotions of the melodies using musical features selected from a set of 60 by genetic algorithm-based k-nearest neighbors. Second, forty-four people with normal hearing participated in an online survey on the emotional perception of music, based on dimensional and discrete approaches, to evaluate the stimulus set. RESULTS Twenty-four selected musical features classified the intended emotions with an accuracy of 76%. The online survey in the normal-hearing (NH) group showed that the intended emotions were selected significantly more often than the others. K-means clustering analysis revealed that the melodies' arousal and valence ratings fell in the expected quadrants of interest. Additionally, the applicability of the stimuli was tested in 4 individuals with high-frequency hearing loss. CONCLUSION In individuals with NH, the musical stimuli classified the expressed emotions with high accuracy. These results confirm that the stimulus set can be used to study perceived emotion in music, demonstrating its validity independent of innate musical biases such as those due to episodic memory. Furthermore, the stimuli could be helpful for further study of perceived musical emotion in people with hearing loss because the pitch is controlled for each emotion.
Affiliation(s)
- Jihyun Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Department of Otorhinolaryngology, Hallym University College of Medicine, Chuncheon, Korea

27
Kusunoki S, Fukuda T, Maeda S, Yao C, Hasegawa T, Akamatsu T, Yoshimura H. Relationships between feeding behaviors and emotions: an electroencephalogram (EEG) frequency analysis study. J Physiol Sci 2023; 73:2. [PMID: 36869303 DOI: 10.1186/s12576-022-00858-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Accepted: 12/13/2022] [Indexed: 03/05/2023]
Abstract
Feeding behaviors may be easily affected by emotions, as both are based on brain activity; however, the relationships between them have not been explicitly defined. In this study, we investigated how emotional environments modulate subjective feelings, brain activity, and feeding behaviors. Electroencephalogram (EEG) recordings were obtained from healthy participants in virtual comfortable space (CS) and uncomfortable space (UCS) conditions while they ate chocolate, and the time required to eat it was measured. We found that the more comfortable participants tended to feel in the CS, the longer they took to eat in the UCS. However, the EEG emergence patterns in the two virtual spaces varied across individuals. Focusing on the theta and low-beta bands, the strength of the mental condition and the eating times were found to be guided by these frequency bands. The results indicate that the theta and low-beta bands are likely important and relevant waves for feeding behaviors under emotional circumstances, following alterations in mental condition.
Affiliation(s)
- Shintaro Kusunoki
- Field of Food Science & Technology, Graduate School of Technology, Industrial & Social Sciences, Tokushima University Graduate School, 2-1, Minami-josanjima-cho, Tokushima, 770-8513, Japan
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan
- Takako Fukuda
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan
- Saori Maeda
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan
- Chenjuan Yao
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan
- Takahiro Hasegawa
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan
- Tetsuya Akamatsu
- Field of Food Science & Technology, Graduate School of Technology, Industrial & Social Sciences, Tokushima University Graduate School, 2-1, Minami-josanjima-cho, Tokushima, 770-8513, Japan
- Hiroshi Yoshimura
- Department of Molecular Oral Physiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto, Tokushima, 770-8504, Japan

28
Do H, Hoang H, Nguyen N, An A, Chau H, Khuu Q, Tran L, Le T, Le A, Nguyen K, Vo T, Ha H. Intermediate effects of mindfulness practice on the brain activity of college students: an EEG study. IBRO Neurosci Rep 2023. [DOI: 10.1016/j.ibneur.2023.03.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023] Open
29
Musical tempo affects EEG spectral dynamics during subsequent time estimation. Biol Psychol 2023; 178:108517. [PMID: 36801434 DOI: 10.1016/j.biopsycho.2023.108517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 01/24/2023] [Accepted: 02/12/2023] [Indexed: 02/19/2023]
Abstract
The perception of time depends on the rhythmicity of internal and external synchronizers. One external synchronizer that affects time estimation is music. This study aimed to analyze the effects of musical tempi on EEG spectral dynamics during subsequent time estimation. Participants performed a time production task after (i) silence and (ii) listening to music at different tempi (90, 120, and 150 bpm) while EEG activity was recorded. While listening, there was an increase in alpha power at all tempi compared to the resting state and an increase of beta at the fastest tempo. The beta increase persisted during the subsequent time estimations, with higher beta power during the task after listening to music at the fastest tempo than task performance without music. Spectral dynamics in frontal regions showed lower alpha activity in the final stages of time estimations after listening to music at 90 and 120 bpm than in the silence condition and higher beta in the early stages at 150 bpm. Behaviorally, the 120 bpm musical tempo produced slight improvements. Listening to music modified tonic EEG activity that subsequently affected EEG dynamics during time production. Music at a more optimal rate could have benefited temporal expectation and anticipation. The fastest musical tempo may have generated an over-activated state that affected subsequent time estimations. These results emphasize the importance of music as an external stimulus that can affect brain functional organization during time perception even after listening.
30
Ghodousi M, Pousson JE, Bernhofs V, Griškova-Bulanova I. Assessment of Different Feature Extraction Methods for Discriminating Expressed Emotions during Music Performance towards BCMI Application. SENSORS (BASEL, SWITZERLAND) 2023; 23:2252. [PMID: 36850850 PMCID: PMC9967688 DOI: 10.3390/s23042252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 02/07/2023] [Accepted: 02/15/2023] [Indexed: 06/18/2023]
Abstract
A Brain-Computer Music Interface (BCMI) system may be designed to harness electroencephalography (EEG) signals for control over musical outputs in the context of emotionally expressive performance. To develop a real-time BCMI system, accurate and computationally efficient emotional biomarkers should first be identified. In the current study, we evaluated the ability of various features to discriminate between emotions expressed during music performance, with the aim of developing a BCMI system. EEG data were recorded while subjects performed simple piano music with contrasting emotional cues and rated their success in communicating the intended emotion. Power spectra and connectivity features (Magnitude Squared Coherence (MSC) and Granger Causality (GC)) were extracted from the signals. Two approaches to feature selection were used to assess the contribution of neutral baselines to detection accuracy: (1) utilizing the baselines to normalize the features, and (2) not taking them into account (non-normalized features). Finally, a Support Vector Machine (SVM) was used to evaluate and compare the capability of the various features for emotion detection. The best detection accuracies were obtained from the non-normalized MSC-based features: 85.57 ± 2.34, 84.93 ± 1.67, and 87.16 ± 0.55 for arousal, valence, and emotional conditions, respectively, while the power-based features had the lowest accuracies. Both connectivity features show acceptable accuracy while requiring short processing time and are thus potential candidates for the development of a real-time BCMI system.
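The MSC features this study found most discriminative can be computed directly with standard signal-processing tools. The sketch below is an illustrative reconstruction (not the study's code; the sampling rate, frequency band, and segment length are assumptions), producing one mean-coherence value per channel pair, which would then feed an SVM:

```python
import numpy as np
from scipy.signal import coherence

FS = 256  # sampling rate in Hz, assumed

def msc_features(trial, fs=FS, band=(8, 13)):
    """Mean magnitude-squared coherence in one band for every channel pair.

    trial : (n_channels, n_samples) EEG segment. Returns the upper-triangle
    MSC values as a flat connectivity feature vector.
    """
    n = trial.shape[0]
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(trial[i], trial[j], fs=fs, nperseg=fs // 2)
            mask = (f >= band[0]) & (f <= band[1])
            feats.append(cxy[mask].mean())
    return np.array(feats)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
trial = np.vstack([x, x, rng.normal(size=1024)])   # channels 0 and 1 identical
feats = msc_features(trial)
assert feats.shape == (3,)      # 3 channels -> 3 pairs
assert feats[0] > 0.99          # identical channels: coherence ~ 1
```

Because MSC is bounded in [0, 1] and cheap to compute per window, it suits the real-time constraint the abstract emphasizes.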
Affiliation(s)
- Mahrad Ghodousi
- Department of Neurobiology and Biophysics, Vilnius University, 10257 Vilnius, Lithuania

31
Zhu X, Liu G, Zhao L, Rong W, Sun J, Liu R. Emotion Classification from Multi-Band Electroencephalogram Data Using Dynamic Simplifying Graph Convolutional Network and Channel Style Recalibration Module. SENSORS (BASEL, SWITZERLAND) 2023; 23:1917. [PMID: 36850512 PMCID: PMC9964605 DOI: 10.3390/s23041917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 01/16/2023] [Accepted: 02/03/2023] [Indexed: 06/18/2023]
Abstract
Because of its ability to objectively reflect people's emotional states, electroencephalogram (EEG) has been attracting increasing research attention for emotion classification. The classification method based on spatial-domain analysis is one of the research hotspots. However, most previous studies ignored the complementarity of information between different frequency bands, and the information in a single frequency band is not fully mined, which increases the computational time and the difficulty of improving classification accuracy. To address the above problems, this study proposes an emotion classification method based on dynamic simplifying graph convolutional (SGC) networks and a style recalibration module (SRM) for channels, termed SGC-SRM, with multi-band EEG data as input. Specifically, first, the graph structure is constructed using the differential entropy characteristics of each sub-band and the internal relationship between different channels is dynamically learned through SGC networks. Second, a convolution layer based on the SRM is introduced to recalibrate channel features to extract more emotion-related features. Third, the extracted sub-band features are fused at the feature level and classified. In addition, to reduce the redundant information between EEG channels and the computational time, (1) we adopt only 12 channels that are suitable for emotion classification to optimize the recognition algorithm, which can save approximately 90.5% of the time cost compared with using all channels; (2) we adopt information in the θ, α, β, and γ bands, consequently saving 23.3% of the time consumed compared with that in the full bands while maintaining almost the same level of classification accuracy. Finally, a subject-independent experiment is conducted on the public SEED dataset using the leave-one-subject-out cross-validation strategy. According to experimental results, SGC-SRM improves classification accuracy by 5.51-15.43% compared with existing methods.
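The simplifying graph convolution (SGC) underlying this method drops per-layer nonlinearities and simply propagates features K times over a normalized electrode adjacency before a single linear classifier. A minimal numpy sketch of that propagation step (the channel count, feature dimension, adjacency, and K are assumptions, and the learned SRM recalibration is omitted):

```python
import numpy as np

def sgc_propagate(X, A, K=2):
    """K-step simplified graph convolution propagation (sketch).

    X : (n_channels, d) per-electrode features, e.g. differential entropy
        per sub-band.
    A : (n_channels, n_channels) symmetric adjacency between electrodes.
    """
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization
    H = X
    for _ in range(K):                           # repeated smoothing over graph
        H = S @ H
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))       # 12 selected channels, 5 sub-band features
A = (rng.random((12, 12)) > 0.7).astype(float)
A = np.maximum(A, A.T)             # make adjacency symmetric
H = sgc_propagate(X, A)
assert H.shape == (12, 5)
```

In the full model the adjacency would be learned dynamically rather than fixed, and the propagated features would pass through the SRM before fusion and classification.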
Affiliation(s)
- Xiaoliang Zhu
- National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
- Gendong Liu
- National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
- Liang Zhao
- National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
- Wenting Rong
- National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
- Junyi Sun
- National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan 430079, China
- Ran Liu
- Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore

32
Extracting a Novel Emotional EEG Topographic Map Based on a Stacked Autoencoder Network. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:9223599. [PMID: 36714412 PMCID: PMC9879679 DOI: 10.1155/2023/9223599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Revised: 11/02/2022] [Accepted: 12/23/2022] [Indexed: 01/21/2023]
Abstract
Emotion recognition based on brain signals has become increasingly attractive for evaluating humans' internal emotional states. Conventional emotion recognition studies focus on developing machine learning methods and classifiers. However, most of these methods do not provide information on the involvement of different areas of the brain in emotions. Brain mapping is considered one of the most distinctive methods of showing the involvement of different brain areas in performing an activity. Most mapping techniques rely on projecting and visualizing only one of the electroencephalogram (EEG) sub-band features onto brain regions. The present study aims to develop a new EEG-based brain mapping that combines several features to provide more complete and useful information on a single map instead of multiple conventional maps. In this study, the optimal combination of EEG features for each channel was extracted using a stacked autoencoder (SAE) network and visualized as a topographic map. The research hypothesis is that autoencoders can extract optimal features for quantitative EEG (QEEG) brain mapping. The DEAP EEG database was employed to extract the topographic maps. The accuracy of a convolutional neural network (CNN) image classifier was used as a criterion for evaluating how well the maps obtained by the stacked autoencoder topographic map (SAETM) method distinguish different emotions. Average classification accuracies of 0.8173 and 0.8037 were obtained in the valence and arousal dimensions, respectively. The extracted maps were also ranked by a team of experts in comparison with conventional maps. The results of the quantitative and qualitative evaluations showed that the map obtained by SAETM carries more information than conventional maps.
33
Xu G, Guo W, Wang Y. Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture. Med Biol Eng Comput 2023; 61:61-73. [PMID: 36322243 DOI: 10.1007/s11517-022-02686-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 10/02/2022] [Indexed: 11/07/2022]
Abstract
Recently, various deep learning frameworks have shown excellent performance in decoding electroencephalogram (EEG) signals, especially in human emotion recognition. However, most of them focus only on temporal features and ignore features in the spatial dimension. The traditional gated recurrent unit (GRU) model performs well in processing time-series data, while a convolutional neural network (CNN) can obtain spatial characteristics from input data. Therefore, this paper introduces a hybrid GRU and CNN deep learning framework named GRU-Conv to fully leverage the advantages of both. Contrary to most previous GRU architectures, however, we retain the output information of all GRU units, so the GRU-Conv model can extract crucial spatio-temporal features from EEG data. More specifically, the proposed model acquires the multi-dimensional features of multiple units after temporal processing in the GRU and then uses the CNN to extract spatial information from these temporal features. In this way, EEG signals with different characteristics can be classified more accurately. Finally, subject-independent experiments show that our model performs well on the SEED and DEAP databases. The average accuracy on the former is 87.04%; the mean accuracies on the latter are 70.07% for arousal and 67.36% for valence.
Affiliation(s)
- Guixun Xu
- College of Control Science and Engineering, China University of Petroleum (East China), Qingdao, 266580, Shandong Province, People's Republic of China
- Wenhui Guo
- College of Control Science and Engineering, China University of Petroleum (East China), Qingdao, 266580, Shandong Province, People's Republic of China
- Yanjiang Wang
- College of Control Science and Engineering, China University of Petroleum (East China), Qingdao, 266580, Shandong Province, People's Republic of China

34
Sorinas J, Troyano JCF, Ferrández JM, Fernandez E. Unraveling the Development of an Algorithm for Recognizing Primary Emotions Through Electroencephalography. Int J Neural Syst 2023; 33:2250057. [PMID: 36495049 DOI: 10.1142/s0129065722500575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
The large range of potential applications of affective brain-computer interfaces (aBCI), for patients as well as healthy people, makes the need for a commonly accepted protocol for real-time EEG-based emotion recognition ever more pressing. Using wavelet packets for spectral feature extraction, in keeping with the nature of the EEG signal, we have specified some of the main parameters needed to implement robust positive- and negative-emotion classification. Twelve seconds emerged as the most appropriate sliding-window size; from that, a set of 20 target frequency-location variables was proposed as the most relevant features carrying the emotional information. Lastly, QDA and KNN classifiers and a population rating criterion for stimulus labeling were suggested as the most suitable approaches for EEG-based emotion recognition. The proposed model reached mean accuracies of 98% (s.d. 1.4) and 98.96% (s.d. 1.28) in a subject-dependent (SD) approach for the QDA and KNN classifiers, respectively. This new model represents a step toward real-time classification. Moreover, new insights regarding the subject-independent (SI) approximation are discussed, although the results were not conclusive.
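The 12-second sliding window and QDA classifier this abstract settles on translate directly into a simple processing loop. The sketch below is illustrative only (the sampling rate, 50% overlap, per-window features, and labels are assumptions; the study's wavelet-packet features are replaced by toy statistics to keep the example short):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

FS = 128  # sampling rate in Hz, assumed

def sliding_windows(x, fs=FS, win_s=12, step_s=6):
    """Segment a 1-D signal into overlapping windows; the 12-second window
    follows the abstract, the 50% overlap is a hypothetical choice."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(x) - win + 1, step)
    return np.stack([x[s:s + win] for s in starts])

rng = np.random.default_rng(0)
segs = sliding_windows(rng.normal(size=60 * FS))   # 1 minute of signal
assert segs.shape == (9, 12 * FS)

# Toy QDA on simple per-window features (variance, mean absolute value);
# labels are dummy values purely to exercise the classifier interface.
X = np.column_stack([segs.var(axis=1), np.abs(segs).mean(axis=1)])
y = np.array([0, 1] * 4 + [0])
clf = QuadraticDiscriminantAnalysis().fit(X, y)
assert clf.predict(X).shape == (9,)
```

In the study's setting, each window would instead yield the 20 wavelet-packet frequency-location features before QDA or KNN classification.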
Affiliation(s)
- Jennifer Sorinas
- Institute of Bioengineering, University Miguel Hernandez and CIBER BBN, Elche 03202, Spain
- Juan C Fernandez Troyano
- Department of Electronics and Computer Technology, University of Cartagena, Cartagena 30202, Spain
- Jose Manuel Ferrández
- Department of Electronics and Computer Technology, University of Cartagena, Cartagena 30202, Spain
- Eduardo Fernandez
- Institute of Bioengineering, University Miguel Hernandez and CIBER BBN, Elche 03202, Spain
35
Kannan MA, Ab Aziz NA, Ab Rani NS, Abdullah MW, Mohd Rashid MH, Shab MS, Ismail NI, Ab Ghani MA, Reza F, Muzaimi M. A review of the holy Quran listening and its neural correlation for its potential as a psycho-spiritual therapy. Heliyon 2022; 8:e12308. [PMID: 36578419] [PMCID: PMC9791337] [DOI: 10.1016/j.heliyon.2022.e12308]
Abstract
Since its revelation over 14 centuries ago, the Holy Quran has been considered the scriptural divine word of Islam, and it is believed to confer psycho-spiritual therapeutic benefits on its reciter and/or listener. In this context, listening to rhythmic Quranic verses among Muslims is often viewed as a form of unconventional melodic vocals, accompanied by anecdotal claims of the pleasing effect of 'Quranic chills'. However, compared with music, rhythm, and meditation therapy, information on the neural basis of the anecdotal healing effects of the Quran remains largely unexplored. Current studies in this area take their lead from low-frequency neuronal oscillations (i.e., alpha and theta) as the neural correlates, mainly using electroencephalography (EEG) and/or magnetoencephalography (MEG). In this narrative review, we present and discuss recent work on these neural correlates, highlight several methodological issues, and propose recommendations to advance this emerging transdisciplinary research. Collectively, the evidence suggests that listening to rhythmic Quranic verses activates brain regions similar to, and elicits therapeutic effects comparable to, those reported for music and rhythmic therapy. Notwithstanding, further research with more concise and standardized study designs is warranted to substantiate these findings and to open an avenue for listening to Quranic verses as an effective complementary psycho-spiritual therapy.
Affiliation(s)
- Mohammed Abdalla Kannan
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia; Department of Anatomy, Faculty of Medicine, Al Neelain University, Khartoum, 11111, Sudan
- Nurfaizatul Aisyah Ab Aziz
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Nur Syairah Ab Rani
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Mohd Waqiyuddin Abdullah
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Muhammad Hakimi Mohd Rashid
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia; Department of Basic Medical Sciences, Kuliyyah of Pharmacy, International Islamic University Malaysia, 25200, Kuantan, Pahang, Malaysia
- Mas Syazwanee Shab
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Nurul Iman Ismail
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Muhammad Amiri Ab Ghani
- Department of Quran and Hadith, Sultan Ismail Petra International College, Nilam Puri, Kelantan, 15730, Malaysia
- Faruque Reza
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia
- Mustapha Muzaimi
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian, Kelantan, 16150, Malaysia; Corresponding author.
36
Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. [PMID: 36372030] [DOI: 10.1016/j.plrev.2022.10.004]
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical or neurobiological frameworks, without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). By illustrating the findings related to each hypothesis, we highlight their major conceptual, methodological, and terminological shortcomings. To provide a unitary framework for understanding C/D, we bring together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and, therefore, elicits defensive behavioral reactions and neural responses that indicate aversion. We therefore stress the primacy of vocality and roughness as key factors in the explanation of the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms that are triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research on C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
37
Xie Z, Pan J, Li S, Ren J, Qian S, Ye Y, Bao W. Musical Emotions Recognition Using Entropy Features and Channel Optimization Based on EEG. Entropy (Basel) 2022; 24:1735. [PMID: 36554139] [PMCID: PMC9777832] [DOI: 10.3390/e24121735]
Abstract
The dynamics of music are an important factor in arousing emotional experience, but current research mainly uses short-term artificial stimulus materials, which cannot effectively evoke complex emotions or reflect their dynamic brain response. In this paper, we used three long-term stimulus materials containing many dynamic emotions: "Waltz No. 2", containing pleasure and excitement; "No. 14 Couplets", containing excitement, briskness, and nervousness; and the first movement of "Symphony No. 5 in C minor", containing passion, relaxation, cheerfulness, and nervousness. Approximate entropy (ApEn) and sample entropy (SampEn) were applied to extract non-linear features of electroencephalogram (EEG) signals under long-term dynamic stimulation, and the K-Nearest Neighbor (KNN) method was used to recognize emotions. Further, a supervised feature-vector dimensionality-reduction method was proposed. First, the optimal channel set for each subject was obtained using a particle swarm optimization (PSO) algorithm; then the number of times each channel was selected across the optimal channel sets of all subjects was counted, and any channel selected at least a threshold number of times was taken as a common channel suitable for all subjects. Recognition based on the optimal channel sets showed that accuracies for the two emotion categories of "Waltz No. 2" and the three categories of "No. 14 Couplets" were generally above 80%, while accuracy for the four categories of the first movement of "Symphony No. 5 in C minor" was about 70%. Recognition accuracy based on the common channel set was about 10% lower than that based on the optimal channel sets, but not much different from that based on the whole channel set. This result suggests that the common channels can broadly capture features shared across subjects while reducing feature dimensionality.
The common channels were mainly distributed in the frontal lobe, central region, parietal lobe, occipital lobe, and temporal lobe. More channels fell in the frontal lobe than in any other region, indicating that the frontal lobe was the main emotional response region. Brain topographic maps based on the common channel set showed differences in entropy intensity both between brain regions for the same emotion and within one region across emotions. Counting channel selections across the optimal channel sets of all 30 subjects, the principal component channels representing the five brain regions were Fp1/F3 in the frontal lobe, CP5 in the central region, Pz in the parietal lobe, O2 in the occipital lobe, and T8 in the temporal lobe.
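Sample entropy, one of the two regularity features used above, follows directly from its definition: the negative log of the ratio between matched templates of length m+1 and length m. A minimal stdlib sketch, not the authors' implementation (tolerance r defaults to 0.2 × the signal's standard deviation, a common convention; the template counting is one common variant):

```python
import math

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A / B), where B counts pairs of length-m templates
    within Chebyshev tolerance r and A does the same for length m + 1.
    Self-matches are excluded by only pairing i < j."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular signal yields a value near zero, while an irregular one yields a large (possibly infinite) value, which is exactly why it discriminates EEG responses to different musical passages.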
Affiliation(s)
- Zun Xie
- Department of Arts and Design, Anhui University of Technology, Ma’anshan 243002, China
- Jianwei Pan
- Department of Arts and Design, Anhui University of Technology, Ma’anshan 243002, China
- Songjie Li
- Department of Management Science and Engineering, Anhui University of Technology, Ma’anshan 243002, China
- Jing Ren
- Department of Management Science and Engineering, Anhui University of Technology, Ma’anshan 243002, China
- Shao Qian
- Department of Management Science and Engineering, Anhui University of Technology, Ma’anshan 243002, China
- Ye Ye
- Department of Mechanical Engineering, Anhui University of Technology, Ma’anshan 243002, China
- Wei Bao
- Department of Management Science and Engineering, Anhui University of Technology, Ma’anshan 243002, China
38
Wang JG, Shao HM, Yao Y, Liu JL, Sun HP, Ma SW. Electroencephalograph-based emotion recognition using convolutional neural network without manual feature extraction. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109534]
39
Guo X, Zhu T, Wu C, Bao Z, Liu Y. Emotional Activity Is Negatively Associated With Cognitive Load in Multimedia Learning: A Case Study With EEG Signals. Front Psychol 2022; 13:889427. [PMID: 35769742] [PMCID: PMC9236132] [DOI: 10.3389/fpsyg.2022.889427]
Abstract
We aimed to investigate the relationship between emotional activity and cognitive load during multimedia learning from an emotion-dynamics perspective using electroencephalography (EEG) signals. Using a between-subjects design, 42 university students were randomly assigned to one of two video lecture conditions (color-coded vs. grayscale). While the participants watched the assigned video, their EEG signals were recorded. After processing the EEG signals, we employed the correlation-based feature selector (CFS) method to identify emotion-related, subject-independent features. We then fed these features into an Isomap model to obtain a one-dimensional trajectory of emotional changes, and used the zero-crossing rate (ZCR) of this trajectory as the quantitative characterization of emotional changes (ZCR_EC). Meanwhile, we extracted cognitive-load-related features to compute a cognitive load index (CLI). We employed linear regression to study the relationship between ZCR_EC and CLI, from two perspectives: a frequency-domain method (wavelet features) and a non-linear dynamics method (entropy features). The results indicate that emotional activity is negatively associated with cognitive load. These findings have practical implications for designing video lectures for multimedia learning: learning material should reduce learners' cognitive load so as to keep their emotional experience at optimal levels and enhance learning.
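The two quantities being related above, a zero-crossing rate summarizing a one-dimensional emotion trajectory and a linear fit against a cognitive-load index, are simple to state concretely. A hedged sketch (the CFS/Isomap steps that produce the trajectory are not reproduced, and the trajectory's sign convention is a placeholder):

```python
def zero_crossing_rate(trajectory):
    """Fraction of consecutive sample pairs whose signs differ: a scalar
    summary of how often a 1-D emotion trajectory flips polarity (ZCR_EC)."""
    crossings = sum(
        1 for a, b in zip(trajectory, trajectory[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(trajectory) - 1)

def ols_slope(xs, ys):
    """Least-squares slope of ys on xs: a negative slope for (ZCR_EC, CLI)
    pairs is what 'negatively associated' means here."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Each subject contributes one (ZCR_EC, CLI) pair, and the fitted slope's sign carries the paper's headline result.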
Affiliation(s)
- Yang Liu
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou, China
40
Research on the Effects of Soundscapes on Human Psychological Health in an Old Community of a Cold Region. Int J Environ Res Public Health 2022; 19:7212. [PMID: 35742461] [PMCID: PMC9223413] [DOI: 10.3390/ijerph19127212]
Abstract
The acoustic environment of residential areas is critical to the health of the residents. To reveal the impact of the acoustic environment on people's mental health and create a satisfactory acoustic setting, this study took a typical old residential area in Harbin as an example, conducted a field measurement and questionnaire survey on it, and took typical acoustic sources as the research object for human body index measurement. The relationship between heart rate (HR), skin conductivity level (SCL), physiological indicators, semantic differences (SD), and psychological indicators was studied. The sound distribution in the old community was obtained, determining that gender, age, and education level are significant factors producing different sound source evaluations. Music can alleviate residents' psychological depression, while traffic sounds and residents' psychological state can affect the satisfaction evaluation of the sound environment. There is a significant correlation between the physiological and psychological changes produced by different sounds. Pleasant sounds increase a person's HR and decrease skin conductivity. The subjects' HR increased 3.24 times per minute on average, and SCL decreased 1.65 times per minute on average in relation to hearing various sound sources. The SD evaluation showed that lively, pleasant, and attractive birdsongs and music produced the greatest HR and SCL changes, and that the sound barrier works best when placed 8 m and 18 m from the road.
41
Li D, Xie L, Chai B, Wang Z, Yang H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108740]
42
Goshvarpour A, Goshvarpour A. Innovative Poincare's plot asymmetry descriptors for EEG emotion recognition. Cogn Neurodyn 2022; 16:545-559. [PMID: 35603058] [PMCID: PMC9120274] [DOI: 10.1007/s11571-021-09735-5]
Abstract
Given the importance of emotion recognition in both medical and non-medical applications, designing an automatic system has captured the attention of several scholars. EEG-based emotion recognition currently holds a special position, although it has not yet reached the desired accuracy rates. This experiment introduces novel EEG asymmetry measures intended to improve emotion recognition rates. Four emotional states were classified using the k-nearest neighbor (kNN), support vector machine, and Naïve Bayes classifiers. Feature selection was performed, and the effect of employing different numbers of top-ranked features on emotion recognition rates was assessed. To validate the efficiency of the proposed scheme, two public databases were evaluated: the SJTU Emotion EEG Dataset-IV (SEED-IV) and the Database for Emotion Analysis using Physiological signals (DEAP). The experimental results indicated that kNN outperformed the other classifiers, with maximum accuracies of 95.49% and 98.63% on the SEED-IV and DEAP datasets, respectively. In conclusion, the proposed novel EEG-asymmetry measures make the framework superior to state-of-the-art EEG emotion recognition approaches.
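The asymmetry descriptors proposed in this paper build on the Poincaré plot, which scatters a signal against its one-sample-delayed copy. Only the classic SD1/SD2 pair is sketched below; the paper's novel asymmetry measures themselves are not reproduced here.

```python
import math
import statistics

def poincare_sd1_sd2(x):
    """Classic Poincare-plot descriptors for a 1-D series: SD1 is the spread
    perpendicular to the identity line (short-term variability), SD2 the
    spread along it (long-term variability), via a 45-degree axis rotation."""
    diffs = [(b - a) / math.sqrt(2) for a, b in zip(x, x[1:])]
    sums = [(b + a) / math.sqrt(2) for a, b in zip(x, x[1:])]
    return statistics.pstdev(diffs), statistics.pstdev(sums)
```

Asymmetry descriptors then compare how points above and below the identity line contribute to these spreads; a linear ramp, for instance, has zero SD1 but nonzero SD2.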
Affiliation(s)
- Atefeh Goshvarpour
- Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour
- Department of Biomedical Engineering, Imam Reza International University, Rezvan Campus, Phalestine Sq., Mashhad, Razavi Khorasan, Iran
43
Ji Y, Li F, Fu B, Li Y, Zhou Y, Niu Y, Zhang L, Chen Y, Shi G. Spatial-temporal Network for Fine-grained-level Emotion EEG Recognition. J Neural Eng 2022; 19. [PMID: 35523129] [DOI: 10.1088/1741-2552/ac6d7d]
Abstract
Electroencephalogram (EEG)-based affective brain-computer interfaces give machines the capability to understand human intentions. In practice, people are often more concerned with the strength of a certain emotional state over a short period of time, referred to in this paper as fine-grained emotion. In this study, we built a fine-grained emotion EEG dataset that contains two coarse-grained emotions and four corresponding fine-grained emotions. To fully extract the features of the EEG signals, we propose a fine-grained emotion EEG network (FG-emotionNet) for spatial-temporal feature extraction. Each feature-extraction layer is linked to the raw EEG signals to alleviate overfitting and ensure that spatial features at each scale can be extracted from the raw signals. Moreover, all previous scale features are fused before the current spatial-feature layer to enhance the scale features in the spatial block. A long short-term memory network serves as the temporal block, extracting temporal features from the spatial features and classifying the fine-grained emotion category. Subject-dependent and cross-session experiments demonstrated that the proposed method outperforms both representative emotion-recognition methods and similarly structured networks.
Affiliation(s)
- Youshuo Ji
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Fu Li
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Boxun Fu
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yang Li
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- YiJin Zhou
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yi Niu
- Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Lijian Zhang
- Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
- Yuanfang Chen
- Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
44
An A, Hoang H, Trang L, Vo Q, Tran L, Le T, Le A, McCormick A, Du Old K, Williams NS, Mackellar G, Nguyen E, Luong T, Nguyen V, Nguyen K, Ha H. Investigating the effect of Mindfulness-Based Stress Reduction on stress level and brain activity of college students. IBRO Neurosci Rep 2022; 12:399-410. [PMID: 35601693] [PMCID: PMC9121238] [DOI: 10.1016/j.ibneur.2022.05.004]
Abstract
Financial constraints usually hinder students, especially those in low- and middle-income countries (LMICs), from seeking mental health interventions. Hence, it is necessary to identify effective, affordable, and sustainable counter-stress measures for college students in the LMIC context. This study examines the sustained effects of mindfulness practice on students' psychological outcomes and brain activity, especially when they are exposed to stressful situations. We combined psychological measures and electroencephalography (EEG) to investigate the sustained effects of an 8-week standardized Mindfulness-Based Stress Reduction (MBSR) intervention on the brain activity of college students. The Test group showed a decrease in negative emotional states after the intervention, whereas the Control group showed no statistically significant change, as indicated by Perceived Stress Scale (PSS) scores (a 33% reduction in the negative score) and Depression, Anxiety, Stress Scale (DASS-42) scores (nearly 40% reductions across the three subscale scores). Spectral analysis of the EEG data showed that the intervention is longitudinally associated with increased frontal and occipital alpha-band power. Additionally, the increase in alpha power was more prevalent when the Test group was stress-induced by cognitive tasks, suggesting that practicing MBSR might enhance practitioners' tolerance of negative emotional states. In conclusion, the MBSR intervention led to a sustained reduction of negative emotional states as measured by both psychological and electrophysiological metrics, supporting the adoption of MBSR as an effective and sustainable stress-countering approach for students in LMICs.
45
Effects of facial expression and gaze interaction on brain dynamics during a working memory task in preschool children. PLoS One 2022; 17:e0266713. [PMID: 35482742] [PMCID: PMC9049575] [DOI: 10.1371/journal.pone.0266713]
Abstract
Executive functioning in preschool children is important for building social relationships during the early stages of development. We investigated the brain dynamics of preschool children during an attention-shifting task involving congruent and incongruent gaze directions in emotional facial expressions (neutral, angry, and happy faces). Ignoring distracting stimuli (gaze direction and expression), participants (17 preschool children and 17 young adults) were required to detect and memorize the location (left or right) of a target symbol as a simple working memory task (i.e., no general priming paradigm in which a target appears after a cue stimulus). For the preschool children, the frontal late positive response and the central and parietal P3 responses increased for angry faces. In addition, parietal midline α (Pmα) power, which indexes changes in attention level, decreased mainly during target encoding for angry faces, possibly explaining the absence of a congruency effect on reaction times (i.e., no faster response in the congruent than the incongruent gaze condition). For the adults, the parietal P3 response and frontal midline θ (Fmθ) power increased mainly during the encoding period for incongruent gaze shifts in happy faces. Pmα power for happy faces decreased for incongruent gaze during the encoding period and increased for congruent gaze during the first retention period. These results suggest that adults can quickly shift attention to a target in happy faces, allocating sufficient attentional resources to ignore incongruent gazes and detect the target, which can attenuate any congruency effect on reaction times. By contrast, possibly because of still-developing brain activity, preschool children did not show the happy-face superiority effect and may be more responsive to angry faces.
These observations point to a crucial consideration for building better relationships between developing preschoolers and their parents and educators: incorporating nonverbal communication into social and emotional learning.
46
Fusion of EEG-Based Activation, Spatial, and Connection Patterns for Fear Emotion Recognition. Comput Intell Neurosci 2022; 2022:3854513. [PMID: 35463262] [PMCID: PMC9020909] [DOI: 10.1155/2022/3854513]
Abstract
Emotion recognition based on electroencephalograms (EEGs) has recently attracted increasing attention. Current studies of affective brain-computer interfaces (BCIs) focus on recognizing happiness and sadness from brain activation patterns; fear recognition, which involves brain activity across different spatial distributions and different brain functional networks, has scarcely been investigated. In this study, we propose a multifeature fusion method combining energy activation, spatial distribution, and brain functional connection network (BFCN) features for fear emotion recognition. The affective brain pattern was identified not only by the power activation features of differential entropy (DE) but also by the spatial distribution features of the common spatial pattern (CSP) and the EEG phase synchronization features of the phase lock value (PLV). A total of 15 healthy subjects took part in the experiment, and the average accuracy rate was 85.00% ± 8.13%. The experimental results showed that the subjects' fear emotions were fully stimulated and effectively identified. The proposed fusion method for fear recognition was thus validated and is of great significance to the development of effective emotional BCI systems.
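Two of the three fused feature families have compact closed forms: differential entropy under a Gaussian assumption (the usual basis of DE features) and the phase-lock value between two channels. A stdlib sketch, with the CSP spatial filtering and the phase-extraction step (normally a Hilbert transform or wavelet) omitted and the phases taken as given:

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """PLV = |mean over time of exp(i * (phi_a - phi_b))|.
    1.0 means a constant phase difference (perfect locking); values near 0
    mean the phase difference is uniformly scattered."""
    n = len(phases_a)
    acc = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(acc) / n

def differential_entropy_gaussian(samples):
    """DE of a signal under a Gaussian assumption:
    0.5 * log(2 * pi * e * variance) -- the closed form behind DE features."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)
```

In a BFCN, the PLV is computed for every channel pair within a band, producing a connectivity matrix that is fused with the per-channel DE values.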
47
Balconi M, Cassioli F. "We will be in touch". A neuroscientific assessment of remote vs. face-to-face job interviews via EEG hyperscanning. Soc Neurosci 2022; 17:209-224. [PMID: 35395918] [DOI: 10.1080/17470919.2022.2064910]
Abstract
In recent decades, improving remote communication in companies has been a compelling issue, and the SARS-CoV-2 pandemic has accelerated this phenomenon. Despite this, little to no research considering neurocognitive and emotional systems has been conducted on the job interview, a critical organizational phase that contributes significantly to a company's long-term success. In this study, we aimed to explore the emotional and cognitive processes related to different phases of a job interview (introductory, attitudinal, technical, and conclusion) under two conditions, face-to-face and remote, by simultaneously gathering EEG (alpha, beta, delta, and theta frequency bands) and autonomic data (skin conductance level, SCL; skin conductance response, SCR; and heart rate, HR) from both candidates and recruiters. The data highlighted a generalized alpha desynchronization during the job interview interaction. Recruiters showed increased frontal theta activity, which is connected to socio-emotional situations and emotional processing. In addition, the face-to-face condition was associated with increased SCL and increased theta power over central brain areas, linked to learning processes via the midbrain dopamine system and the anterior cingulate cortex. Furthermore, we found higher HR in the candidates. The present results call for re-examining the impact of information technology in organizations and open up translational opportunities.
Affiliation(s)
- Michela Balconi
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123 Milano, Italy; Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123 Milano, Italy
- Federico Cassioli
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123 Milano, Italy; Research Unit in Affective and Social Neuroscience, Department of Psychology, Università Cattolica del Sacro Cuore, Largo A. Gemelli 1, 20123 Milano, Italy
48
Chang H, Zong Y, Zheng W, Tang C, Zhu J, Li X. Depression Assessment Method: An EEG Emotion Recognition Framework Based on Spatiotemporal Neural Network. Front Psychiatry 2022; 12:837149. [PMID: 35368726] [PMCID: PMC8967371] [DOI: 10.3389/fpsyt.2021.837149]
Abstract
The main characteristic of depression is emotional dysfunction, manifested as increased levels of negative emotions and decreased levels of positive emotions; accurate emotion recognition is therefore an effective way to assess depression. Among the various signals used for emotion recognition, the electroencephalogram (EEG) has attracted widespread attention due to its multiple advantages, such as the rich spatiotemporal information in multi-channel EEG signals. First, we use filtering and Euclidean alignment for data preprocessing. For feature extraction, we use the short-time Fourier transform and the Hilbert-Huang transform to extract time-frequency features, and convolutional neural networks to extract spatial features; a bidirectional long short-term memory network then captures temporal dependencies. Before the convolution operation, the EEG features are converted into 3D tensors according to the unique topology of the EEG channels. This study achieved good results on two emotion databases: SEED and the Emotional BCI dataset of the 2020 World Robot Competition. We applied this method to EEG-based depression recognition and achieved a recognition rate of more than 70% under five-fold cross-validation. In addition, the subject-independent protocol on the SEED data achieved a state-of-the-art recognition rate, exceeding existing research methods. We propose a novel EEG emotion recognition framework for depression detection, which provides a robust algorithm for real-time clinical depression detection based on EEG.
Affiliation(s)
- Hongli Chang
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- School of Information Science and Engineering, Southeast University, Nanjing, China
- Yuan Zong
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Wenming Zheng
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Chuangao Tang
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- Jie Zhu
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
- School of Information Science and Engineering, Southeast University, Nanjing, China
- Xuejun Li
- Key Laboratory of Child Development and Learning Science, Ministry of Education, Southeast University, Nanjing, China
49
Two-dimensional CNN-based distinction of human emotions from EEG channels selected by multi-objective evolutionary algorithm. Sci Rep 2022; 12:3523. [PMID: 35241745 PMCID: PMC8894479 DOI: 10.1038/s41598-022-07517-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 10/05/2021] [Accepted: 02/21/2022] [Indexed: 01/17/2023] Open
Abstract
In this study we explore how different levels of emotional intensity (arousal) and pleasantness (valence) are reflected in electroencephalographic (EEG) signals. We performed the experiments on EEG data from 32 subjects in the public DEAP dataset, in which subjects were stimulated with 60-s videos to elicit different levels of arousal/valence and then self-reported ratings from 1 to 9 using the Self-Assessment Manikin (SAM). The EEG data were pre-processed and used as input to a convolutional neural network (CNN). First, all 32 EEG channels were used to compute the maximum accuracy obtainable for each subject, as well as to build a single model using data from all subjects. The experiment was then repeated using one channel at a time, to test whether specific channels carry more information for discriminating low vs. high arousal/valence. The results indicate that using a single channel yields lower accuracy than using all 32 channels. An optimization process for EEG channel selection was then designed with the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to obtain channel combinations with high recognition accuracy. The genetic algorithm encodes all 32 channels as a chromosome, and the EEG data selected by each chromosome in successive populations are evaluated iteratively against two unconstrained objectives: maximizing classification accuracy and minimizing the number of EEG channels required for classification. The best combinations on the resulting Pareto front suggest that as few as 8-10 channels fulfill this condition and provide the basis for a lighter design of EEG systems for emotion recognition. In the best case, the results show accuracies of up to 1.00 for low vs. high arousal using eight EEG channels, and 1.00 for low vs. high valence using only two EEG channels. These results are encouraging for research and healthcare applications that require automatic emotion recognition with wearable EEG.
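The selection principle above rests on the Pareto-dominance test at the core of NSGA-II: one channel subset dominates another if it is at least as accurate with no more channels, and strictly better on one of the two counts. A minimal sketch of that test and the binary-mask chromosome encoding follows; the fitness values, names, and toy population are illustrative, not from the paper.

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb for objectives (accuracy: maximize, n_channels: minimize)."""
    acc_a, n_a = fa
    acc_b, n_b = fb
    return acc_a >= acc_b and n_a <= n_b and (acc_a > acc_b or n_a < n_b)

def pareto_front(population):
    """population: list of (chromosome, (accuracy, n_channels)) pairs.
    Returns the non-dominated individuals (the Pareto front)."""
    return [
        (chrom, fit)
        for i, (chrom, fit) in enumerate(population)
        if not any(dominates(other_fit, fit)
                   for j, (_, other_fit) in enumerate(population) if j != i)
    ]

# A chromosome is a binary mask over the 32 EEG channels, e.g.:
mask = np.zeros(32, dtype=int)
mask[[2, 7, 15]] = 1                      # three channels selected
selected_channels = np.flatnonzero(mask)  # indices passed to the classifier

# Toy population with made-up (accuracy, n_channels) fitness values.
population = [
    ("A", (0.90, 10)),
    ("B", (0.95, 8)),
    ("C", (0.90, 12)),
    ("D", (0.80, 2)),
]
front = pareto_front(population)  # B dominates A and C; D survives on channel count
```

Full NSGA-II adds non-dominated sorting into ranked fronts plus crowding-distance selection, but every trade-off the abstract reports (e.g. 1.00 accuracy with only two channels) is a point surviving exactly this dominance test.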
50
Smit EA, Milne AJ, Escudero P. Music Perception Abilities and Ambiguous Word Learning: Is There Cross-Domain Transfer in Nonmusicians? Front Psychol 2022; 13:801263. [PMID: 35401340 PMCID: PMC8984940 DOI: 10.3389/fpsyg.2022.801263] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 10/25/2021] [Accepted: 02/08/2022] [Indexed: 11/14/2022] Open
Abstract
Perception of music and speech draws on similar auditory skills, and it is often suggested that those with enhanced music perception skills may perceive and learn novel words more easily. The current study tested whether music perception abilities are associated with novel word learning in an ambiguous learning scenario. Using a cross-situational word learning (CSWL) task, nonmusician adults were exposed to word-object pairings between eight novel words and visual referents. Novel words were either non-minimal pairs differing in all sounds or minimal pairs differing in their initial consonant or vowel. To succeed in this task, learners must correctly encode the phonological details of the novel words and have sufficient auditory working memory to remember the correct word-object pairings. Using the Mistuning Perception Test (MPT) and the Melodic Discrimination Test (MDT), we measured learners' pitch perception and auditory working memory. We predicted that those with higher MPT and MDT scores would perform better in the CSWL task, particularly for novel words with high phonological overlap (i.e., minimal pairs). We found that higher music perception skills led to higher accuracy for non-minimal pairs and for minimal pairs differing in their initial consonant. Interestingly, this was not the case for vowel minimal pairs. We discuss the results in relation to theories of second language word learning, such as the Second Language Linguistic Perception (L2LP) model.
Affiliation(s)
- Eline A. Smit
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- Andrew J. Milne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia