1
Malekmohammadi A, Cheng G. Music Familiarization Elicits Functional Connectivity Between Right Frontal/Temporal and Parietal Areas in the Theta and Alpha Bands. Brain Topogr 2024; 38:2. PMID: 39367155; PMCID: PMC11452474; DOI: 10.1007/s10548-024-01081-z.
Abstract
Frequent listening to unfamiliar music excerpts forms functional connectivity in the brain as the music becomes familiar and memorable. However, where these connections spectrally arise in the cerebral cortex during music familiarization has yet to be determined. This study investigates electrophysiological changes in phase-based functional connectivity recorded with electroencephalography (EEG) from twenty participants during three repetitions of passive listening to initially unknown classical music excerpts. Functional connectivity is evaluated by measuring phase synchronization with the weighted phase lag index (WPLI) in different frequency bands, between all pairwise combinations of EEG electrodes, both across all repetitions (via repeated-measures ANOVA) and between every pair of repetitions. The results indicate increased phase synchronization during gradual short-term familiarization between the right frontal and right parietal areas in the theta and alpha bands. Increased phase synchronization is also observed between the right temporal and right parietal areas in the theta band. Overall, this study explores the effects of short-term music familiarization on neural responses by revealing that repetitions form phasic coupling in the theta and alpha bands in the right hemisphere during passive listening.
Affiliation(s)
- Alireza Malekmohammadi
- Electrical Engineering, Institute for Cognitive Systems, Technical University of Munich, 80333, Munich, Germany.
- Gordon Cheng
- Electrical Engineering, Institute for Cognitive Systems, Technical University of Munich, 80333, Munich, Germany.
2
Strauss H, Vigl J, Jacobsen PO, Bayer M, Talamini F, Vigl W, Zangerle E, Zentner M. The Emotion-to-Music Mapping Atlas (EMMA): A systematically organized online database of emotionally evocative music excerpts. Behav Res Methods 2024; 56:3560-3577. PMID: 38286947; PMCID: PMC11133078; DOI: 10.3758/s13428-024-02336-0.
Abstract
Selecting appropriate musical stimuli to induce specific emotions represents a recurring challenge in music and emotion research. Most existing stimuli have been categorized according to taxonomies derived from general emotion models (e.g., basic emotions, affective circumplex), have been rated for perceived emotions, and are rarely characterized in terms of interrater agreement. To redress these limitations, we present the research behind a new interactive online database, including an initial set of 364 music excerpts from three genres (classical, pop, and hip-hop) that were rated for felt emotion using the Geneva Emotional Music Scale (GEMS), a music-specific emotion scale. The sample comprised 517 English- and German-speaking participants, and each excerpt was rated by an average of 28.76 participants (SD = 7.99). Data analyses focused on research questions of particular relevance for musical database development, notably the number of raters required to obtain stable estimates of the emotional effects of music and the adequacy of the GEMS as a tool for describing music-evoked emotions across three prominent genres. Overall, our findings suggest that 10-20 raters are sufficient to obtain stable estimates of the emotional effects of music excerpts in most cases, and that the GEMS shows promise as a valid and comprehensive annotation tool for music databases.
Affiliation(s)
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Julia Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Peer-Ole Jacobsen
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria.
- Martin Bayer
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria.
- Francesca Talamini
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Wolfgang Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Eva Zangerle
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria.
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
3
Yang L, Tang Q, Chen Z, Zhang S, Mu Y, Yan Y, Xu P, Yao D, Li F, Li C. EEG based emotion recognition by hierarchical Bayesian spectral regression framework. J Neurosci Methods 2024; 402:110015. PMID: 38000636; DOI: 10.1016/j.jneumeth.2023.110015.
Abstract
Spectral regression (SR), a graph-based learning regression model, can be used to extract features from graphs to realize efficient dimensionality reduction. However, because the SR method remains a regularized least-squares problem defined in L2-norm space, it cannot efficiently resist the effect of artifacts in EEG signals. In this work, to further improve the robustness of graph-based regression models, we utilize prior distribution estimation in the Bayesian framework and develop a robust hierarchical Bayesian spectral regression framework (HB-SR), designed with hierarchical Bayesian ensemble strategies. In the proposed HB-SR, the impact of noise can be effectively reduced by adaptively adjusting model parameters in a data-driven manner. Specifically, three different distributions have been elaborately designed to enhance the universality of the proposed HB-SR: the Gaussian, Laplace, and Student-t distributions. To objectively evaluate the performance of the HB-SR framework, we conducted both simulation studies and emotion recognition experiments based on emotional EEG signals. Experimental results have consistently indicated that, compared with other existing spectral regression methods, the proposed HB-SR can effectively suppress the influence of noise and achieve robust EEG emotion recognition.
Affiliation(s)
- Lei Yang
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Qi Tang
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Zhaojin Chen
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Shuhan Zhang
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Yufeng Mu
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Ye Yan
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Peng Xu
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Dezhong Yao
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Fali Li
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
- Cunbo Li
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China.
4
Aydın S, Onbaşı L. Graph theoretical brain connectivity measures to investigate neural correlates of music rhythms associated with fear and anger. Cogn Neurodyn 2024; 18:49-66. PMID: 38406195; PMCID: PMC10881947; DOI: 10.1007/s11571-023-09931-5.
Abstract
The present study tests the hypothesis that the emotions of fear and anger are associated with distinct psychophysiological and neural circuitry, as posited by the discrete emotion model on account of their contrasting neurotransmitter activities, despite being grouped together in many studies because of their similar arousal-valence scores in emotion models. EEG data were downloaded from the OpenNeuro platform (accession number ds002721). Brain connectivity estimates were obtained using both functional and effective connectivity estimators in the analysis of short (2 s) and long (6 s) EEG segments across the cortex. In tests, discrete emotions and resting states were identified by frequency-band-specific brain network measures, and contrasting emotional states were then classified with 5-fold cross-validated Long Short-Term Memory networks. Logistic regression modeling was also examined to provide robust performance criteria. Overall, the best results were obtained using Partial Directed Coherence (PDC) in the Gamma (31.5-60.5 Hz) sub-band of short EEG segments. In particular, fear and anger were classified with an accuracy of 91.79%, supporting our hypothesis. Anger was found to be characterized by increased transitivity, decreased local efficiency, and lower modularity in the Gamma band compared with fear. Local efficiency reflects functional brain segregation, arising from the brain's ability to exchange information locally. Transitivity reflects the overall probability that adjacent neural populations in the brain are interconnected, revealing the existence of tightly connected cortical regions. Modularity quantifies how well the brain can be partitioned into functional cortical regions. In conclusion, PDC is proposed for graph-theoretical analysis of short EEG epochs, providing robust emotional indicators sensitive to the perception of affective sounds.
Affiliation(s)
- Serap Aydın
- Department of Biophysics, Faculty of Medicine, Hacettepe University, Sıhhiye, Ankara, Turkey.
- Lara Onbaşı
- School of Medicine, Hacettepe University, Sıhhiye, Ankara, Turkey.
5
Martínez-Saez MC, Ros L, López-Cano M, Nieto M, Navarro B, Latorre JM. Effect of popular songs from the reminiscence bump as autobiographical memory cues in aging: a preliminary study using EEG. Front Neurosci 2024; 17:1300751. PMID: 38264494; PMCID: PMC10803499; DOI: 10.3389/fnins.2023.1300751.
Abstract
Introduction: Music has the capacity to evoke emotions and memories. This capacity is influenced by whether or not the music is from the reminiscence bump (RB) period. However, research on the neural correlates of the process of evoking autobiographical memories through songs is scant. The aim of this study was to analyze differences in frequency band activation in two situations: (1) whether or not a song is able to generate a memory; and (2) whether or not a song is from the RB period.
Methods: A total of 35 older adults (22 women, age range: 61-73 years) listened to 10 thirty-second musical clips from the period of their RB and 10 from the immediately subsequent 5 years (non-RB). The EEG signal was recorded with a 14-channel brain-computer interface (BCI) during the 30 seconds of listening to each music clip.
Results: The results showed differences in the activation levels of the frequency bands in the frontal and temporal regions. Trials in which a song clip did not retrieve a memory showed greater activation of low-frequency waves in the frontal region compared with trials that did generate a memory.
Discussion: These results suggest the importance of analyzing not only brain activation but also neuronal functional connectivity at older ages, in order to better understand cognitive and emotional functions in aging.
Affiliation(s)
- Maria Cruz Martínez-Saez
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Laura Ros
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain.
- Marco López-Cano
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Marta Nieto
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain.
- Beatriz Navarro
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain.
- Jose Miguel Latorre
- Department of Psychology, Faculty of Medicine, University of Castilla La Mancha, Albacete, Spain.
- Applied Cognitive Psychology Laboratory, Research Institute for Neurological Disabilities, University of Castilla La Mancha, Albacete, Spain.
6
Tervaniemi M. The neuroscience of music – towards ecological validity. Trends Neurosci 2023; 46:355-364. PMID: 37012175; DOI: 10.1016/j.tins.2023.03.001.
Abstract
Studies in the neuroscience of music gained momentum in the 1990s as an integrated part of the well-controlled experimental research tradition. However, during the past two decades, these studies have moved toward more naturalistic, ecologically valid paradigms. Here, I introduce this move in three frameworks: (i) sound stimulation and empirical paradigms, (ii) study participants, and (iii) methods and contexts of data acquisition. I wish to provide a narrative historical overview of the development of the field and, in parallel, to stimulate innovative thinking to further advance the ecological validity of the studies without overlooking experimental rigor.
Affiliation(s)
- Mari Tervaniemi
- Centre of Excellence in Music, Mind, Body, and Brain, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
7
Kubińska K, Michałowska S, Samochowiec A. Does music heal? Opera and the mood of people over 50 years of age. Curr Psychol 2022. DOI: 10.1007/s12144-022-03612-y.
Abstract
The authors of this work, noting that opera is a combination of music and theater, examined the relationship between listening to opera music and mood changes in people over 50 years of age. The study took the form of a quasi-experiment. Participants were invited to a previously prepared room, where audiovisual material, a recording of the opera "La Traviata", was first presented. This was preceded by the respondents completing the SUPIN C30 and S30 questionnaires and a short survey devised by the authors. After the presentation of the stimulus, the subjects again completed the SUPIN S30 questionnaire and the GEMS scale. The procedure was carried out twice, unchanged except for the audiovisual material: the second time, the participants were presented with a recording of the opera "The Barber of Seville". Thirty people participated in the study. In the studied group, there were no significant changes in emotional states in response to "La Traviata". "The Barber of Seville", in turn, had no effect on positive emotional states but caused a statistically significant change in the level of negative emotional states. The results of this study are largely consistent with other studies examining the relationship between music and mood, although there are limitations: only two pieces of opera music were used and no control group was included. The research showed that opera, as a specific musical genre, despite its peculiar form, affects mood and emotions.
8
Ren H, Jiang X, Meng L, Lu C, Wang L, Dai C, Chen W. fNIRS-Based Dynamic Functional Connectivity Reveals the Innate Musical Sensing Brain Networks in Preterm Infants. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1806-1816. PMID: 35617179; DOI: 10.1109/tnsre.2022.3178078.
Abstract
Humans have the ability to appreciate and create music. However, why and how humans have this distinctive ability to perceive music remains unclear. Additionally, the investigation of innate perceiving skills in humans is confounded by the fact that we have been actively and passively exposed to auditory stimuli, or have systematically learnt music, after birth. Therefore, to explore innate musical perceiving ability, preterm infants may be the most suitable population. In this study, auditory brain networks were explored using dynamic functional connectivity-based reliable component analysis (RCA) in preterm infants during music listening. Brain activation was captured by portable functional near-infrared spectroscopy (fNIRS) to simulate a natural environment for the preterm infants. The components with the maximum inter-subject correlation were extracted. The generated spatial filters identified the spatial structural features of functional brain connectivity shared across subjects while listening to the same music, exhibiting functional synchronization between the right temporal region and the frontal and motor cortices, and between the bilateral temporal regions. This specific pattern supports functions involved in music comprehension, emotion generation, language processing, memory, and sensory processing. The fluctuation of the extracted components and the phase variation demonstrate the interactions between the extracted brain networks in encoding musical information. These results are critically important for our understanding of the mechanisms underlying innate perceiving skills early in human life during naturalistic music listening.
9
Ding L, Duan W, Wang Y, Lei X. Test-retest reproducibility comparison in resting and the mental task states: A sensor and source-level EEG spectral analysis. Int J Psychophysiol 2022; 173:20-28. PMID: 35017028; DOI: 10.1016/j.ijpsycho.2022.01.003.
Abstract
Previous test-retest analyses of EEG have mostly focused on the eyes-open and eyes-closed resting states. Less attention has been paid to EEG during subject-driven mental imagery task states. In the current study, we compared the test-retest reproducibility of the EEG spectrum in three mental imagery task states (performing mental arithmetic, recalling the events of the day, and silently singing lyrics) and two resting states (eyes open and eyes closed) across three EEG sessions. At the sensor level, the eyes-closed resting state had the highest reproducibility of EEG spectral features, while the eyes-open resting state had the lowest. At the source level, however, the eyes-open state ranked higher among the five states. Moreover, the mental arithmetic state had the highest reproducibility of the three task states, and its reproducibility in certain rhythms (e.g., theta and gamma) was higher than that of the resting states. The reproducibility of the EEG spectrum was also investigated from the perspective of large-scale brain networks: the dorsal attention network showed the highest reproducibility across a wide frequency range spanning the alpha and beta rhythms. Our study highlights the importance of task selection based on the target brain region and the target frequency band, which may help future researchers choose appropriate experimental paradigms and provide a guideline for EEG studies in basic and clinical applications.
Affiliation(s)
- Lihong Ding
- Sleep and NeuroImaging Center, Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality of Ministry of Education, Chongqing 400715, China.
- Wei Duan
- Sleep and NeuroImaging Center, Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality of Ministry of Education, Chongqing 400715, China.
- Yulin Wang
- Sleep and NeuroImaging Center, Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality of Ministry of Education, Chongqing 400715, China.
- Xu Lei
- Sleep and NeuroImaging Center, Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality of Ministry of Education, Chongqing 400715, China.
10
Fuentes-Sánchez N, Pastor R, Escrig MA, Elipe-Miravet M, Pastor MC. Emotion elicitation during music listening: Subjective self-reports, facial expression, and autonomic reactivity. Psychophysiology 2021; 58:e13884. PMID: 34145586; DOI: 10.1111/psyp.13884.
Abstract
The use of music as emotional stimuli in experimental studies has grown in recent years. However, prior studies have mainly focused on self-reports and central measures, with only a few works exploring the time course of psychophysiological correlates. Moreover, most previous research has been carried out from either the dimensional or the categorical model, but not by combining both approaches to emotion. This study aimed to investigate subjective and physiological correlates of emotion elicitation through music, following both the three-dimensional and the discrete emotion models. A sample of 50 healthy volunteers (25 women) took part in this experiment by listening to 42 film music excerpts (14 pleasant, 14 unpleasant, 14 neutral), each presented for 8 s, while peripheral measures were continuously recorded. After music offset, affective dimensions (valence, energy arousal, and tension arousal) as well as discrete emotions (happiness, sadness, tenderness, fear, and anger) were rated on a 9-point scale. Results showed an effect of music category on subjective and psychophysiological measures. In peripheral physiology, greater electrodermal activity, heart rate acceleration, and zygomatic responses, along with lower corrugator amplitude, were observed for pleasant excerpts compared with neutral and unpleasant music, from 2 s after stimulus onset until the end of its duration. Overall, our results add evidence for the efficacy of standardized film music excerpts in evoking powerful emotions in laboratory settings, opening a path to explore music-based interventions in pathologies with underlying emotion dysregulation processes.
Affiliation(s)
- Nieves Fuentes-Sánchez
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain.
- Raúl Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain.
- Miguel A Escrig
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain.
- Marcel Elipe-Miravet
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain.
- M Carmen Pastor
- Facultad de Ciencias de la Salud, Departamento de Psicología Básica, Clínica y Psicobiología, Universitat Jaume I, Castelló de la Plana, Castellón, Spain.
11
The Effect of Music Tempo and Volume on Acoustic Perceptions under the Noise Environment. Sustainability 2021. DOI: 10.3390/su13074055.
Abstract
This study aimed to investigate the distracting or masking effects of music tempo and volume, based on subjective evaluation under noise conditions. Two experiments were conducted with 32 participants. In the first, the experimental conditions were set as follows: (1) music sound pressure levels of 45 dB, 60 dB, and 75 dB; (2) music tempos of 70 beats per minute (BPM), 110 BPM, and 150 BPM; (3) noise sound pressure levels of 45 dB, 60 dB, and 75 dB; and (4) noise types of talkers' babble, traffic noise, and construction noise. The effects of all conditions on human acoustic perception were analyzed with an orthogonal experimental design. Building on the first part, a second experiment was conducted in which noise sound pressure levels (50 dB, 60 dB, and 70 dB), music sound pressure levels (50 dB, 60 dB, and 70 dB), and music tempo (70 BPM, 110 BPM, and 150 BPM) were assessed by subjective evaluation. The results showed that although different types of noise had different effects on human perceptions, noise type had a small effect on acoustic comfort once music was superimposed. Music can improve the acoustic environment. Sound pressure levels had significant effects on acoustic sensation, whereas music tempo affected acoustic sensation insignificantly. The sound pressure level of the noise, the music tempo, and the sound pressure level of the music all significantly affected acoustic comfort. The best acoustic environment in this study was 70 BPM, 60 dB music superimposed on a 50 dB noise environment. These results suggest that music can enable new strategies to improve indoor environmental satisfaction, and the effect of music on acoustic perception under noise should be taken into account when aiming to enhance comfort in noisy environments.
13
Modifications in the Topological Structure of EEG Functional Connectivity Networks during Listening Tonal and Atonal Concert Music in Musicians and Non-Musicians. Brain Sci 2021; 11:159. PMID: 33530384; PMCID: PMC7910933; DOI: 10.3390/brainsci11020159.
Abstract
The present work aims to test the hypothesis that atonal music modifies the topological structure of electroencephalographic (EEG) connectivity networks relative to tonal music. To this end, monopolar EEG recordings were taken from musicians and non-musicians while listening to tonal, atonal, and pink-noise sound excerpts. EEG functional connectivity (FC) among channels was computed with a phase synchronization index, thresholded beforehand using a surrogate data test. The effects of the sounds on the topological structure of graph-based networks assembled from the EEG-FCs in different frequency bands were analyzed using graph metrics and network-based statistics (NBS). Normalized local and global efficiency measurements (NLE and NGE, normalized against random networks), which assess network information exchange, were able to discriminate the two musical styles irrespective of group and frequency band. During tonal audition, NLE and NGE values in the beta-band network approached those of a small-world network, whereas during atonal audition, and even more during noise, the structure moved away from small-worldness. These effects were attributed to the different timbre characteristics (the sounds' spectral centroid and entropy) and different musical structures. Topographic network maps of node strength and NLE, together with the FC subnetworks obtained from the NBS, made it possible to discriminate the musical styles and to verify the differences in strength, NLE, and FC between musicians and non-musicians.
14
He JX, Zhou L, Liu ZT, Hu XY. Digital Empirical Research of Influencing Factors of Musical Emotion Classification Based on Pleasure-Arousal Musical Emotion Fuzzy Model. J Adv Comput Intell Intell Inform 2020. DOI: 10.20965/jaciii.2020.p0872.
Abstract
In recent years, with further breakthroughs in artificial intelligence theory and technology and the continued expansion of the Internet, the recognition of human emotions and the satisfaction of human psychological needs, in addition to the accomplishment of physical tasks, have been highlighted in future artificial intelligence development. Musical emotion classification is an important research topic in artificial intelligence, and the key premise for realizing it is to construct a musical emotion model that conforms to the characteristics of music emotion recognition. Currently, three types of music emotion classification models are available: discrete category models, continuous dimensional models, and music emotion-specific models. The pleasure-arousal music emotion fuzzy model, which covers a wide range of emotions compared with other models, is selected as the emotional classification system in this study to investigate the influencing factors of musical emotion classification. Two representative emotional attributes, speed and strength, are used as variables. Based on test experiments involving music and non-music majors, combined with questionnaire results, the relationship between music properties and emotional changes under the pleasure-arousal model is revealed quantitatively.
|
15
|
Wagener GL, Berning M, Costa AP, Steffgen G, Melzer A. Effects of Emotional Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder (ASD). J Autism Dev Disord 2020; 51:3256-3265. [PMID: 33201423] [DOI: 10.1007/s10803-020-04781-0] [Accepted: 11/04/2020] [Indexed: 01/02/2023]
Abstract
Impaired facial emotion recognition in children with Autism Spectrum Disorder (ASD) is in contrast to their intact emotional music recognition. This study tested whether emotion congruent music enhances facial emotion recognition. Accuracy and reaction times were assessed for 19 children with ASD and 31 controls in a recognition task with angry, happy, or sad faces. Stimuli were shown with either emotionally congruent or incongruent music or no music. Although children with ASD had higher reaction times than controls, accuracy only differed when incongruent or no music was played, indicating that congruent emotional music can boost facial emotion recognition in children with ASD. Emotion congruent music may support emotion recognition in children with ASD, and thus may improve their social skills.
Affiliation(s)
- Gary L Wagener
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg.
| | - Madeleine Berning
- Institute of Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany
| | - Andreia P Costa
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| | - Georges Steffgen
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| | - André Melzer
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| |
|
16
|
Chabin T, Gabriel D, Chansophonkul T, Michelant L, Joucla C, Haffen E, Moulin T, Comte A, Pazart L. Cortical Patterns of Pleasurable Musical Chills Revealed by High-Density EEG. Front Neurosci 2020; 14:565815. [PMID: 33224021] [PMCID: PMC7670092] [DOI: 10.3389/fnins.2020.565815] [Received: 05/26/2020] [Accepted: 09/29/2020] [Indexed: 01/02/2023]
Abstract
Music has the capacity to elicit strong positive feelings in humans by activating the brain's reward system. Because group emotional dynamics are a central concern of social neuroscience, the study of emotion in natural/ecological conditions is gaining interest. This study aimed to show that high-density EEG (HD-EEG) can reveal patterns of cerebral activity previously identified by fMRI or PET when subjects experience pleasurable musical chills. We used HD-EEG to record participants (11 female, 7 male) while they listened to their favorite pleasurable chill-inducing musical excerpts; they reported their subjective emotional state, from low pleasure up to chills. HD-EEG results showed an increase of theta activity in the prefrontal cortex as arousal and emotional ratings increased, which source-localization algorithms associated with orbitofrontal cortex activation. In addition, we identified two patterns specific to chills: a decreased theta activity in the right central region, which could reflect supplementary motor area activation during chills and may be related to rhythmic anticipation processing, and a decreased theta activity in the right temporal region, which may be related to musical appreciation and could reflect right superior temporal gyrus activity. The alpha frontal/prefrontal asymmetry did not reflect the felt emotional pleasure, but the increased frontal beta-to-alpha ratio (a measure of arousal) corresponded to increased emotional ratings. These results suggest that EEG is a reliable method and a promising tool for investigating group musical pleasure through musical reward processing.
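The frontal beta-to-alpha ratio used in this study as an arousal index is straightforward to compute from a single channel's spectrum. The sketch below uses a plain periodogram and conventional band limits (alpha 8-13 Hz, beta 13-30 Hz); the band edges, the lack of windowing, and the single-channel setup are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def band_power(signal, fs, fmin, fmax):
    """Mean periodogram power of `signal` within [fmin, fmax) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= fmin) & (freqs < fmax)
    return psd[mask].mean()

def beta_alpha_ratio(signal, fs):
    """Beta/alpha power ratio, a coarse arousal index
    (alpha: 8-13 Hz, beta: 13-30 Hz by convention)."""
    alpha = band_power(signal, fs, 8.0, 13.0)
    beta = band_power(signal, fs, 13.0, 30.0)
    return beta / alpha
```

A signal dominated by a 10 Hz oscillation yields a ratio below 1, while a 20 Hz-dominated signal yields a ratio above 1, matching the direction of the arousal effect the abstract describes.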
Affiliation(s)
- Thibault Chabin
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Damien Gabriel
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Tanawat Chansophonkul
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
| | - Lisa Michelant
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Coralie Joucla
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
| | - Emmanuel Haffen
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Thierry Moulin
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Alexandre Comte
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| | - Lionel Pazart
- Laboratoire de Neurosciences Intégratives et Cliniques, EA 481, Université Bourgogne Franche-Comté, Besançon, France
- INSERM CIC 1431, Centre d’Investigation Clinique de Besançon, Centre Hospitalier Universitaire de Besançon, Besançon, France
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation – Neuraxess, Centre Hospitalier Universitaire de Besançon, Université Bourgogne Franche-Comté, Besançon, France
| |
|
17
|
Abstract
Background: In this study we measured the affective appraisal of sounds and video clips using a newly developed graphical self-report tool: the EmojiGrid. The EmojiGrid is a square grid labeled with emoji that express different degrees of valence and arousal; users rate the valence and arousal of a given stimulus by simply clicking on the grid. Methods: In Experiment I, observers (N=150, 74 males, mean age=25.2±3.5) used the EmojiGrid to rate their affective appraisal of 77 validated sound clips from nine semantic categories covering a large area of the affective space. In Experiment II, observers (N=60, 32 males, mean age=24.5±3.3) used the EmojiGrid to rate their affective appraisal of 50 validated film fragments varying in positive and negative affect (20 positive, 20 negative, 10 neutral). Results: For both sound and video, the agreement between the mean ratings obtained with the EmojiGrid and those obtained in previous studies with alternative, validated affective rating tools is excellent for valence and good for arousal. Our results also show the typical U-shaped relation between mean valence and arousal that is commonly observed for affective sensory stimuli, both for sound and video. Conclusions: We conclude that the EmojiGrid can be used as an affective self-report tool for the assessment of sound- and video-evoked emotions.
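The EmojiGrid's click-to-rating mapping is essentially an affine transform from pixel coordinates to the valence-arousal plane. The sketch below is a hypothetical reconstruction, not the tool's actual implementation: it assumes a square widget with the origin at the top-left (the usual screen convention), valence increasing left to right, arousal increasing bottom to top, and outputs scaled to [-1, 1].

```python
def grid_to_affect(x_px, y_px, size_px=500):
    """Map a click at pixel (x_px, y_px) on a size_px-by-size_px grid
    to (valence, arousal) in [-1, 1].  Assumed convention: origin at the
    top-left, valence grows rightward, arousal grows upward."""
    valence = 2.0 * x_px / size_px - 1.0
    arousal = 1.0 - 2.0 * y_px / size_px
    return valence, arousal
```

Under these assumptions the grid center maps to neutral affect (0, 0), the top-right corner to maximally pleasant and arousing (1, 1), and the bottom-left corner to (-1, -1).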
Affiliation(s)
- Alexander Toet
- Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, 3584 CS, The Netherlands
| | - Jan B. F. van Erp
- Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands
- Research Group Human Media Interaction, University of Twente, Enschede, 7522 NH, The Netherlands
| |
|
19
|
Pralus A, Belfi A, Hirel C, Lévêque Y, Fornoni L, Bigand E, Jung J, Tranel D, Nighoghossian N, Tillmann B, Caclin A. Recognition of musical emotions and their perceived intensity after unilateral brain damage. Cortex 2020; 130:78-93. [PMID: 32645502] [DOI: 10.1016/j.cortex.2020.05.015] [Received: 12/30/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 10/24/2022]
Abstract
For the hemispheric laterality of emotion processing in the brain, two competing hypotheses are still debated. The first suggests a greater involvement of the right hemisphere in emotion perception, whereas the second suggests differential involvement of each hemisphere as a function of the valence of the emotion. These hypotheses are based on findings for facial and prosodic emotion perception. Investigating emotion perception for other stimuli, such as music, should provide further insight and potentially help to disentangle these two hypotheses. The present study investigated musical emotion perception in patients with unilateral right brain damage (RBD, n = 16) or left brain damage (LBD, n = 16), as well as in matched healthy comparison participants (n = 28). The experimental task required explicit recognition of musical emotions as well as ratings of the perceived intensity of the emotion. Compared to matched comparison participants, musical emotion recognition was impaired only in LBD participants, suggesting a potential specificity of the left hemisphere for explicit emotion recognition in musical material. In contrast, intensity ratings of musical emotions revealed that RBD patients underestimated the intensity of negative emotions relative to positive emotions, while LBD patients and comparisons did not show this pattern. To control for a potential generalized emotion deficit for other types of stimuli, we also tested facial emotion recognition in the same patients and their matched healthy comparisons. This revealed that emotion recognition after brain damage might depend on the stimulus category or modality used. These results are in line with the hypothesis of a deficit of emotion perception depending on lesion laterality and valence in brain-damaged participants. The present findings provide critical information to disentangle the currently debated competing hypotheses and thus allow a better characterization of the involvement of each hemisphere in explicit emotion recognition and perceived intensity.
Affiliation(s)
- Agathe Pralus
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France.
| | - Amy Belfi
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO, USA
| | - Catherine Hirel
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France
| | - Yohana Lévêque
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
| | - Lesly Fornoni
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
| | - Emmanuel Bigand
- LEAD, CNRS, UMR 5022, University of Bourgogne, Dijon, France
| | - Julien Jung
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France
| | - Daniel Tranel
- Department of Neurology, University of Iowa, Iowa City, IA, USA
| | - Norbert Nighoghossian
- University Lyon 1, Lyon, France; Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, Bron, France; CREATIS, CNRS, UMR5220, INSERM, U1044, University Lyon 1, France
| | - Barbara Tillmann
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
| | - Anne Caclin
- Lyon Neuroscience Research Center; CNRS, UMR5292; INSERM, U1028; Lyon, France; University Lyon 1, Lyon, France
| |
|
20
|
Sharma S, Sasidharan A, Marigowda V, Vijay M, Sharma S, Mukundan CS, Pandit L, Masthi NRR. Indian classical music with incremental variation in tempo and octave promotes better anxiety reduction and controlled mind wandering: A randomised controlled EEG study. Explore (NY) 2020; 17:115-121. [PMID: 32249198] [DOI: 10.1016/j.explore.2020.02.013] [Received: 08/20/2019] [Revised: 01/12/2020] [Accepted: 02/20/2020] [Indexed: 11/26/2022]
Abstract
Studies have reported the benefits of music listening for stress reduction using musical pieces of a specific scale or 'Raaga', but the influence of lower-level musical properties (such as tempo, octave, and timbre) lacks research backing. Carnatic music concerts use incremental modulations in tempo and octave (e.g., 'Ragam-Tanam-Pallavi') to elevate the mood of audiences. The current study therefore examined the anxiolytic effect of this musical property in a randomised controlled cross-over design with 21 male undergraduate medical students. Eleven participants listened to 'Varying music' (VM: instrumental music with incremental variations in tempo and octave) and 10 listened to 'Stable music' (SM: instrumental music without such variations), thrice daily for 6 days; both clips were recorded in Raaga Kaapi, and silence served as the control intervention. Electroencephalography (EEG) and electrocardiography (for heart rate variability, HRV) were performed on all 6 days, and the Beck Anxiety Inventory and State-Trait Anxiety Inventory were administered on Day 1 and Day 6. A significant reduction in anxiety scores was seen only with VM. VM showed a marked decrease in lower-frequency EEG power in bilateral temporo-parieto-occipital regions compared to silence, whereas SM showed an increase in higher frequencies. Relative to SM, VM showed more midline power reduction (i.e., lower default mode network or DMN activity), while SM showed greater left-dominant alpha/beta asymmetry (i.e., greater right-brain activation). During both music interventions HRV remained stable, unlike during the silence intervention. We speculate that the gradual transition between the lower-slower and higher-faster portions of VM induces a 'controlled mind-wandering' state involving balanced switching between heightened mind wandering ('attention to self') and reduced mind wandering ('attention to music'). Music selection therefore has a remarkable influence on stress management and warrants further research.
Affiliation(s)
- Sushma Sharma
- Kempegowda Institute of Medical Sciences (KIMS), Bengaluru, Karnataka, India
| | - Arun Sasidharan
- Axxonet Brain Research Laboratory (ABRL), Axxonet System Technologies Pvt. Ltd., Bengaluru, Karnataka, India.
| | - Vrinda Marigowda
- Axxonet Brain Research Laboratory (ABRL), Axxonet System Technologies Pvt. Ltd., Bengaluru, Karnataka, India
| | - Mohini Vijay
- Axxonet Brain Research Laboratory (ABRL), Axxonet System Technologies Pvt. Ltd., Bengaluru, Karnataka, India
| | - Sumit Sharma
- Axxonet Brain Research Laboratory (ABRL), Axxonet System Technologies Pvt. Ltd., Bengaluru, Karnataka, India
| | - Chetan Satyajit Mukundan
- Axxonet Brain Research Laboratory (ABRL), Axxonet System Technologies Pvt. Ltd., Bengaluru, Karnataka, India
| | - Lakshmi Pandit
- Kempegowda Institute of Medical Sciences (KIMS), Bengaluru, Karnataka, India
| | - N R Ramesh Masthi
- Kempegowda Institute of Medical Sciences (KIMS), Bengaluru, Karnataka, India
| |
|
21
|
Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020; 23:100933. [PMID: 32151976] [PMCID: PMC7063241] [DOI: 10.1016/j.isci.2020.100933] [Received: 10/11/2019] [Revised: 12/20/2019] [Accepted: 02/19/2020] [Indexed: 11/01/2022]
Abstract
How is auditory emotional information processed? The study's aim was to compare cerebral responses to emotionally positive and negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. P450 and late positivity were enhanced by positive content, whereas anterior negativity was larger for negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. swLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated right temporo/parietal areas (BA40, BA20/21), whereas positive speech activated the left homologous and inferior frontal areas.
Affiliation(s)
- Alice Mado Proverbio
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy.
| | - Sacha Santoni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| | - Roberta Adorni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| |
|
22
|
Haslbeck FB, Jakab A, Held U, Bassler D, Bucher HU, Hagmann C. Creative music therapy to promote brain function and brain structure in preterm infants: A randomized controlled pilot study. Neuroimage Clin 2020; 25:102171. [PMID: 31972397] [PMCID: PMC6974781] [DOI: 10.1016/j.nicl.2020.102171] [Received: 10/28/2019] [Revised: 12/18/2019] [Accepted: 01/10/2020] [Indexed: 01/17/2023]
Abstract
Cognitive and neurobehavioral problems are among the most severe adverse outcomes in very preterm infants. Such neurodevelopmental impairments may be mitigated through nonpharmacological interventions such as creative music therapy (CMT), an interactive, resource- and needs-oriented approach that provides individual social contact and musical stimulation. The aim was to test the feasibility of a study investigating the role of CMT and to measure the short- and medium-term effects of CMT on structural and functional brain connectivity with MRI. In this randomized, controlled clinical pilot feasibility trial, 82 infants were randomized to either CMT or standard care. A specially trained music therapist provided CMT via infant-directed humming and singing in lullaby style. To test the short-term effects of CMT on brain structure and function, diffusion tensor imaging and resting-state functional imaging data were acquired. Clinical feasibility was achieved despite moderate parental refusal, mainly in the control group after randomization; 40 infants remained as the final cohort for the MRI analysis. Structural brain connectivity appeared only moderately affected by CMT: structural connectomic analysis revealed increased integration in the posterior cingulate cortex only. Lagged resting-state MRI analysis showed lower thalamocortical processing delay, stronger functional networks, and higher functional integration in predominantly left prefrontal, supplementary motor, and inferior temporal brain regions in infants treated with CMT. This trial provides unique evidence that CMT has beneficial effects on functional brain activity and connectivity in networks underlying higher-order cognitive, socio-emotional, and motor functions in preterm infants. Our results indicate the potential of CMT to improve long-term neurodevelopmental outcomes in children born very preterm.
Affiliation(s)
- Friederike Barbara Haslbeck
- Department of Neonatology, University Hospital Zurich and University Zurich, Frauenklinikstrasse 10, 8091 Zürich, Switzerland.
| | - Andras Jakab
- MR Research Center, University Children's Hospital Zurich, Steinwiesstrasse 75, 8032 Zürich, Switzerland
| | - Ulrike Held
- Department of Biostatistics Epidemiology, Biostatistics and Prevention Institute UZH, Hirschengraben 84, 8001 Zürich, Switzerland
| | - Dirk Bassler
- Department of Neonatology, University Hospital Zurich and University Zurich, Frauenklinikstrasse 10, 8091 Zürich, Switzerland
| | - Hans-Ulrich Bucher
- Department of Neonatology, University Hospital Zurich and University Zurich, Frauenklinikstrasse 10, 8091 Zürich, Switzerland
| | - Cornelia Hagmann
- Department of Neonatology and Pediatric Intensive Care, Children's University Hospital of Zurich, Steinwiesstrasse 75, 8032 Zürich, Switzerland; Children's Research Center, University Children's Hospital Zurich, Steinwiesstrasse 75, 8032 Zürich, Switzerland
| |
|
23
|
Sachs ME, Habibi A, Damasio A, Kaplan JT. Dynamic intersubject neural synchronization reflects affective responses to sad music. Neuroimage 2019; 218:116512. [PMID: 31901418] [DOI: 10.1016/j.neuroimage.2019.116512] [Received: 06/19/2019] [Revised: 11/14/2019] [Accepted: 12/31/2019] [Indexed: 12/30/2022]
Abstract
Psychological theories of emotion often highlight the dynamic quality of the affective experience, yet neuroimaging studies of affect have traditionally relied on static stimuli that lack ecological validity. Consequently, the brain regions that represent emotions and feelings as they unfold remain unclear. Recently, dynamic, model-free analytical techniques have been employed with naturalistic stimuli to better capture time-varying patterns of activity in the brain; yet, few studies have focused on relating these patterns to changes in subjective feelings. Here, we address this gap, using intersubject correlation and phase synchronization to assess how stimulus-driven changes in brain activity and connectivity are related to two aspects of emotional experience: emotional intensity and enjoyment. During fMRI scanning, healthy volunteers listened to a full-length piece of music selected to induce sadness. After scanning, participants listened to the piece twice while simultaneously rating the intensity of felt sadness or felt enjoyment. Activity in the auditory cortex, insula, and inferior frontal gyrus was significantly synchronized across participants. Synchronization in auditory, visual, and prefrontal regions was significantly greater in participants with higher measures of a subscale of trait empathy related to feeling emotions in response to music. When assessed dynamically, continuous enjoyment ratings positively predicted a moment-to-moment measure of intersubject synchronization in auditory, default mode, and striatal networks, as well as the orbitofrontal cortex, whereas sadness predicted intersubject synchronization in limbic and striatal networks. The results suggest that stimulus-driven patterns of neural communication in emotional processing and high-level cortical regions carry meaningful information with regard to our feelings in response to a naturalistic stimulus.
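Leave-one-out intersubject correlation, one common variant of the analysis named in this abstract, can be sketched in a few lines of NumPy. This toy version correlates each subject's static response time course with the mean of the others; the study's actual moment-to-moment, phase-based measures are more involved, so treat this purely as an illustration of the core idea.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out ISC: Pearson correlation of each subject's time course
    with the average of all remaining subjects.
    data: array of shape (n_subjects, n_timepoints)."""
    data = np.asarray(data, dtype=float)
    n_subj = data.shape[0]
    iscs = np.empty(n_subj)
    for i in range(n_subj):
        others = np.delete(data, i, axis=0).mean(axis=0)
        iscs[i] = np.corrcoef(data[i], others)[0, 1]
    return iscs
```

A shared stimulus-driven component pushes ISC toward 1, while purely idiosyncratic activity keeps it near 0, which is what makes the measure sensitive to regions "synchronized across participants."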
Affiliation(s)
- Matthew E Sachs
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA; Center for Science and Society, Columbia University in the City of New York, 1180 Amsterdam Avenue, New York, NY, 10027, USA.
| | - Assal Habibi
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
| | - Antonio Damasio
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
| | - Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
| |
|
24
|
Mohammad Alipour Z, Mohammadkhani S, Khosrowabadi R. Alteration of perceived emotion and brain functional connectivity by changing the musical rhythmic pattern. Exp Brain Res 2019; 237:2607-2619. [PMID: 31372689] [DOI: 10.1007/s00221-019-05616-w] [Received: 09/09/2018] [Accepted: 07/26/2019] [Indexed: 02/04/2023]
Abstract
The arrangement of musical notes and their time intervals, known as musical rhythm, is one of the core elements of music. Nevertheless, the cognitive processes and neural mechanisms of the human brain that underlie the perception of musical rhythm are poorly understood. In this study, we hypothesized that changes in musical rhythmic patterns alter the emotional content expressed by music and the way it is perceived, which presumably causes specific changes in the brain's functional connectome. Eighteen male children aged 10-14 years were recruited and exposed to 12 musical excerpts while their brains' electrical activity was recorded with a 32-channel EEG recorder. The musical rhythmic patterns were changed by manipulating only the note values in beats, while keeping the time signature and other elements fixed. The experienced emotions were assessed using a 2-dimensional self-assessment manikin questionnaire. The behavioral data showed that an increase in the complexity of musical rhythmic patterns significantly enhances perceived valence and arousal levels. In addition, the pattern of brain functional connectivity was estimated using the weighted phase lag index, and its association with behavioral changes was calculated. Interestingly, the behavioral changes were mainly associated with alteration of brain functional connectivity in the alpha band in fronto-central connections. These results emphasize the important role of fronto-central connections over motor cortical sites in the perception of musical rhythmic patterns, and may improve understanding of the brain mechanisms underlying the perception of musical rhythm.
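The weighted phase lag index (WPLI) used in this study can be sketched for a single electrode pair. This is a simplified, assumption-laden toy: it forms per-sample cross-spectra from FFT-based analytic signals (equivalent to a Hilbert transform), whereas the published estimator is typically computed per frequency band from epoched, band-filtered data.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (matches scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def wpli(x, y):
    """Weighted phase lag index: |E[Im S_xy]| / E[|Im S_xy|], where S_xy is
    the per-sample cross-spectrum of the analytic signals.  Near 1 for a
    consistent non-zero phase lag, near 0 for inconsistent phase relations."""
    sxy = analytic(x) * np.conj(analytic(y))
    im = np.imag(sxy)
    denom = np.mean(np.abs(im))
    return 0.0 if denom == 0 else abs(np.mean(im)) / denom
```

By discounting near-zero-lag contributions, WPLI is less sensitive to volume conduction than plain phase-locking measures, which is why it is a common choice for EEG connectome estimates like the one described above.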
Affiliation(s)
- Zhaleh Mohammad Alipour
- Department of Clinical Psychology, Kharazmi University, Tehran, Iran.,Institute for Cognitive and Brain Science, Shahid Beheshti University, Evin Sq., 19839-63113, Tehran, Iran
| | | | - Reza Khosrowabadi
- Institute for Cognitive and Brain Science, Shahid Beheshti University, Evin Sq., 19839-63113, Tehran, Iran.
| |
|
25
|
Fachner JC, Maidhof C, Grocke D, Nygaard Pedersen I, Trondalen G, Tucek G, Bonde LO. "Telling me not to worry…" Hyperscanning and Neural Dynamics of Emotion Processing During Guided Imagery and Music. Front Psychol 2019; 10:1561. [PMID: 31402880] [PMCID: PMC6673756] [DOI: 10.3389/fpsyg.2019.01561] [Received: 01/31/2019] [Accepted: 06/20/2019] [Indexed: 12/15/2022]
Abstract
To analyze how emotions and imagery are shared, processed and recognized in Guided Imagery and Music, we measured the brain activity of an experienced therapist (“Guide”) and client (“Traveler”) with dual-EEG in a real therapy session about potential death of family members. Synchronously with the EEG, the session was video-taped and then micro-analyzed. Four raters identified therapeutically important moments of interest (MOI) and no-interest (MONI) which were transcribed and annotated. Several indices of emotion- and imagery-related processing were analyzed: frontal and parietal alpha asymmetry, frontal midline theta, and occipital alpha activity. Session ratings showed overlaps across all raters, confirming the importance of these MOIs, which showed different cortical activity in visual areas compared to resting-state. MOI 1 was a pivotal moment including an important imagery with a message of hope from a close family member, while in the second MOI the Traveler sent a message to an unborn baby. Generally, results seemed to indicate that the emotions of Traveler and Guide during important moments were not positive, pleasurably or relaxed when compared to resting-state, confirming both were dealing with negative emotions and anxiety that had to be contained in the interpersonal process. However, the temporal dynamics of emotion-related markers suggested shifts in emotional valence and intensity during these important, personally meaningful moments; for example, during receiving the message of hope, an increase of frontal alpha asymmetry was observed, reflecting increased positive emotional processing. EEG source localization during the message suggested a peak activation in left middle temporal gyrus. 
Interestingly, peaks in emotional markers in the Guide partly paralleled the Traveler's peaks; for example, during the Guide's strong feeling of mutuality in MOI 2, the time series of frontal alpha asymmetries showed a significant cross-correlation, indicating similar emotional processing in Traveler and Guide. Investigating the moment-to-moment interaction in music therapy showed how asymmetry peaks align with the situated cognition of Traveler and Guide along the emotional contour of the music, representing the highs and lows during the therapy process. Combining dual-EEG with detailed audiovisual and qualitative data seems to be a promising approach for further research into music therapy.
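The two computations described above, a frontal alpha asymmetry index per person and the cross-correlation of the Traveler's and Guide's asymmetry time series, can be sketched in a few lines. The following is a minimal NumPy illustration (not the authors' code); the function names are hypothetical, and per-hemisphere alpha-band power series are assumed as inputs:

```python
import numpy as np

def frontal_alpha_asymmetry(alpha_left, alpha_right):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Because alpha power is inversely related to cortical activation,
    higher values indicate relatively greater left-frontal activity,
    conventionally linked to positive/approach-related processing."""
    return np.log(alpha_right) - np.log(alpha_left)

def max_cross_correlation(x, y, max_lag):
    """Normalized cross-correlation of two asymmetry time series,
    scanned over lags -max_lag..+max_lag; returns (best_lag, best_r)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            r = np.mean(x[:lag] * y[-lag:])   # pair x[t] with y[t - lag]
        elif lag > 0:
            r = np.mean(x[lag:] * y[:-lag])   # pair x[t + lag] with y[t]
        else:
            r = np.mean(x * y)
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```

With this sign convention, a negative best lag means the second series is a delayed copy of the first.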
Affiliation(s)
- Jörg C Fachner
- Cambridge Institute for Music Therapy Research, Anglia Ruskin University, Cambridge, United Kingdom; Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Clemens Maidhof
- Cambridge Institute for Music Therapy Research, Anglia Ruskin University, Cambridge, United Kingdom; Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Denise Grocke
- Melbourne Conservatorium of Music, University of Melbourne, Melbourne, VIC, Australia
- Inge Nygaard Pedersen
- Department of Communication and Psychology, The Faculty of Humanities, Aalborg University, Aalborg, Denmark
- Gro Trondalen
- Centre for Research in Music and Health, Norwegian Academy of Music, Oslo, Norway
- Gerhard Tucek
- Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Lars O Bonde
- Department of Communication and Psychology, The Faculty of Humanities, Aalborg University, Aalborg, Denmark; Centre for Research in Music and Health, Norwegian Academy of Music, Oslo, Norway
26
Improving the accuracy of EEG emotion recognition by combining valence lateralization and ensemble learning with tuning parameters. Cogn Process 2019; 20:405-417. [PMID: 31338704 DOI: 10.1007/s10339-019-00924-z] [Citation(s) in RCA: 12]
Abstract
For emotion recognition using EEG signals, the main challenge is improving accuracy. This study proposes strategies that combine emotion lateralization and an ensemble learning approach to enhance the accuracy of EEG-based emotion recognition. We obtained EEG signals from a public EEG-based emotion dataset with four classes (happy, sad, angry, and relaxed). EEG signals were acquired from asymmetric channel pairs in the left and right hemispheres. EEG features were extracted using hybrid feature extraction from three domains, namely time, frequency, and wavelet. To demonstrate the lateralization, we performed a set of four experimental scenarios: without lateralization, right-/left-dominance lateralization, valence lateralization, and other lateralizations. For emotion classification, we used random forest (RF), a well-established ensemble learning classifier. Parameters of the RF model were tuned by grid search optimization. As a comparison to RF, we employed two algorithms prevalent in EEG research, namely SVM and LDA. Emotion classification accuracy increased significantly from no lateralization to valence lateralization using three pairs of asymmetry channels, i.e. T7-T8, C3-C4 and O1-O2. For the classification, the RF method provided the highest accuracy of 75.6%, compared to 69.8% for SVM and 60.4% for LDA. In addition, energy-entropy features from the wavelet domain were important for EEG emotion recognition. This study yields a significant performance improvement in EEG-based emotion recognition through valence emotion lateralization. It indicates that happy and relaxed emotions are dominant in the left hemisphere, while angry and sad emotions are better recognized from the right hemisphere.
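The pair-wise lateralization features above, one differential value per homologous left/right channel pair, can be sketched as follows. This is an illustration only: a plain FFT periodogram stands in for the paper's hybrid time/frequency/wavelet features, the channel pairs follow the abstract, and all function names and band cut-offs are assumptions:

```python
import numpy as np

# Channel pairs named in the abstract (left, right); band edges are illustrative.
PAIRS = [("T7", "T8"), ("C3", "C4"), ("O1", "O2")]

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi) Hz band via a plain periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def lateralization_features(trial, channels, fs, bands=((4, 8), (8, 13), (13, 30))):
    """One differential (left minus right) band-power feature per pair and band."""
    idx = {ch: i for i, ch in enumerate(channels)}
    feats = []
    for left, right in PAIRS:
        for lo, hi in bands:
            feats.append(band_power(trial[idx[left]], fs, lo, hi)
                         - band_power(trial[idx[right]], fs, lo, hi))
    return np.array(feats)
```

The resulting feature vector (here 3 pairs x 3 bands) would then feed a classifier such as the grid-searched random forest the study describes.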
27
Abstract
Most studies examining the neural underpinnings of music listening give no specific instruction on how to process the presented musical pieces. In this study, we explicitly manipulated the participants' focus of attention while they listened to the musical pieces. We used an ecologically valid experimental setting by presenting the musical stimuli simultaneously with naturalistic film sequences. In one condition, the participants were instructed to focus their attention on the musical piece (attentive listening), whereas in the second condition, the participants directed their attention to the film sequence (passive listening). We used two instrumental musical pieces: an electronic pop song, which was a major hit at the time of testing, and a classical musical piece. During music presentation, we measured electroencephalographic oscillations and responses from the autonomic nervous system (heart rate and high-frequency heart rate variability). During passive listening to the pop song, we found strong event-related synchronizations in all analyzed frequency bands (theta, lower alpha, upper alpha, lower beta, and upper beta). The neurophysiological responses during attentive listening to the pop song were similar to those of the classical musical piece during both listening conditions. Thus, the focus of attention had a strong influence on the neurophysiological responses to the pop song, but not on the responses to the classical musical piece. The electroencephalographic responses during passive listening to the pop song are interpreted as a neurophysiological and psychological state typically observed when the participants are 'drawn into the music'.
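The event-related synchronization (ERS) quantity implied above, a percent change in band power relative to a reference interval, can be sketched as follows. The five band labels follow the abstract, but the exact cut-off frequencies and all names here are assumptions:

```python
import numpy as np

# Illustrative band edges in Hz; the study's exact boundaries may differ.
BANDS = {"theta": (4, 8), "lower_alpha": (8, 10), "upper_alpha": (10, 12),
         "lower_beta": (13, 20), "upper_beta": (20, 30)}

def band_powers(signal, fs):
    """Mean periodogram power per named frequency band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def ers_percent(p_event, p_ref):
    """ERS/ERD as percent band-power change from a reference interval;
    positive values = synchronization (power increase)."""
    return {b: 100.0 * (p_event[b] - p_ref[b]) / p_ref[b] for b in p_event}
```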
28
O'Hara SJ, Worsley HK. A cost-effective, simple measure of emotional response in the brain for use by behavioral biologists. Biol Futur 2019; 70:143-148. [PMID: 34554416 DOI: 10.1556/019.70.2019.18] [Citation(s) in RCA: 1]
Abstract
BACKGROUND AND AIMS Studies combining brain activity measures with behavior have the potential to reveal more about animal cognition than either on their own. However, brain measurement procedures in animal studies are often practically challenging and cost-prohibitive. Therefore, we test whether a simple measure of ear temperature, taken with a handheld thermoscanner, can be used to index hemispheric brain activation. Cortisol levels are correlated with the activation of the right cortical region, implying that, when stressful situations are experienced, increased right hemisphere activation occurs. This leads to corresponding locally detectable increases in ipsilateral ear temperature. METHODS We compared right- and left-ear temperatures of 32 domestic dogs under non-stressful and partially stressful conditions. RESULTS We detected significant elevations in right-ear temperature, but not left-ear temperature, relative to baseline readings in the partially stressful condition that were not detected in the non-stressful condition. DISCUSSION These findings provide encouraging support for the notion that tympanic membrane temperature readings can provide a simple index of canine hemispheric brain activation, which can be combined with data on behavioral decision-making, expectancy violations, or other measures of emotional processing. The devices are cheap, simple to use, portable, and only minimally invasive, providing a means for real-time brain and behavior measurements in real-world settings.
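The within-subject comparison above (right-ear temperature under partial stress vs. baseline) is, at its simplest, a paired t test. A minimal sketch; the function name and the example data are hypothetical, and the study's actual statistics may differ:

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic for within-subject comparisons, e.g. right-ear
    temperature in the stress condition vs. baseline (df = n - 1)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```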
Affiliation(s)
- Sean J O'Hara
- School of Science, Engineering and Environment, University of Salford, Manchester, UK.
- Hannah K Worsley
- School of Science, Engineering and Environment, University of Salford, Manchester, UK
29
Bowling DL, Graf Ancochea P, Hove MJ, Fitch WT. Pupillometry of Groove: Evidence for Noradrenergic Arousal in the Link Between Music and Movement. Front Neurosci 2019; 12:1039. [PMID: 30686994 PMCID: PMC6335267 DOI: 10.3389/fnins.2018.01039] [Citation(s) in RCA: 15]
Abstract
The capacity to entrain motor action to rhythmic auditory stimulation is highly developed in humans and extremely limited in our closest relatives. An important aspect of auditory-motor entrainment is that not all forms of rhythmic stimulation motivate movement to the same degree. This variation is captured by the concept of musical groove: high-groove music stimulates a strong desire for movement, whereas low-groove music does not. Here, we utilize this difference to investigate the neurophysiological basis of our capacity for auditory-motor entrainment. In a series of three experiments we examine pupillary responses to musical stimuli varying in groove. Our results show stronger pupil dilation in response to (1) high- vs. low-groove music, (2) high vs. low spectral content, and (3) syncopated vs. straight drum patterns. We additionally report evidence for consistent sex differences in music-induced pupillary responses, with males exhibiting larger differences between responses, but females exhibiting stronger responses overall. These results imply that the biological link between movement and auditory rhythms in our species is supported by the capacity of high-groove music to stimulate arousal in the central and peripheral nervous system, presumably via highly conserved noradrenergic mechanisms.
Affiliation(s)
- Daniel L. Bowling
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Michael J. Hove
- Department of Psychological Science, Fitchburg State University, Fitchburg, MA, United States
- W. Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
30
Rieck TM, Lee JR, Ferguson JA, Peterson LA, Martin Lillie CM, Clark MM, Limburg PJ, Bauer BA. A Randomized Controlled Trial in the Evaluation of a Novel Stress Management Tool: A Lounge Chair Experience. Glob Adv Health Med 2019; 8:2164956119892597. [PMID: 31827983 PMCID: PMC6886274 DOI: 10.1177/2164956119892597] [Citation(s) in RCA: 0]
Abstract
Objectives The aim of this study was to compare the stress reduction effects of spending 25 minutes reclining in a SolTec™ Lounge between 2 intervention groups. Group 1 experienced the Lounge with multilayered music on an external speaker, while group 2 experienced the Lounge with multilayered music and synchronous vibration and magnetic stimulation from within the chair. Subjects In total, 110 participants with a self-reported stress level of 4 or higher on a 0- to 10-point scale were recruited from the local community, including employees. Participants were randomized to receive 1 of the 2 interventions. There were no significant differences between the groups' average stress levels prior to the interventions. Interventions Both groups received a 25-minute session in a dimly lit, quiet area on the Lounge with multilayered music. The second group also received vibration and magnetic stimulation that were synchronized with the music. Design Current stress level, as well as ratings of feelings of anxiety, tenseness, energy, focus, happiness, relaxation, nervousness, creativeness, and being rested, were recorded before and after the session. Results Both groups of participants reported equivalent decreased feelings of stress after using the Lounge. Participants receiving the synchronous multilayered music, vibration, and magnetic stimulation did report feeling significantly less tense, more relaxed, and more creative when compared with the group that received music only. Conclusion Spending 25 minutes in the SolTec™ Lounge with multilayered music is an effective way to reduce self-reported stress in individuals who self-report having a high stress level. If confirmed by future studies, including synchronous vibration and magnetic stimulation with the multilayered music might be an effective stress reduction strategy.
Affiliation(s)
- Thomas M Rieck
- Mayo Clinic Healthy Living Program, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Jennifer R Lee
- Mayo Clinic Healthy Living Program, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Jennifer A Ferguson
- Mayo Clinic Healthy Living Program, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Laura A Peterson
- Mayo Clinic Healthy Living Program, Department of Medicine, Mayo Clinic, Rochester, Minnesota
- Matthew M Clark
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota
- Paul J Limburg
- Department of Gastroenterology and Hepatology, Mayo Clinic, Rochester, Minnesota
- Brent A Bauer
- Division of General Internal Medicine, Mayo Clinic, Rochester, Minnesota
31
Tandle AL, Joshi MS, Dharmadhikari AS, Jaiswal SV. Mental state and emotion detection from musically stimulated EEG. Brain Inform 2018; 5:14. [PMID: 30499008 PMCID: PMC6429168 DOI: 10.1186/s40708-018-0092-z] [Citation(s) in RCA: 16]
Abstract
This literature survey clarifies the different approaches taken to study the impact of musical stimuli on the human brain using the EEG modality. It surveys the field through various aspects of such studies: the experimental protocol, the EEG machine, the number of channels investigated, the features extracted, the categories of emotions, the brain areas, the brainwaves, the statistical tests, and the machine learning algorithms used for classification and validation of the developed models. This article comments on how these different approaches have particular weaknesses and strengths. Ultimately, the review concludes with a suitable method for studying the impact of musical stimuli on the brain and the implications of such studies.
Affiliation(s)
- Suyog V Jaiswal
- H.B.T. Medical College and Dr. R.N. Cooper Mun. Gen. Hospital, Mumbai, India
32
Jin L, Zhang M, Xu J, Xia D, Zhang C, Wang J, Wang S. Music stimuli lead to increased levels of nitrite in unstimulated mixed saliva. Sci China Life Sci 2018; 61:1099-1106. [PMID: 29934916 DOI: 10.1007/s11427-018-9309-3] [Citation(s) in RCA: 3]
Abstract
The concentration of salivary nitrate is approximately 10-fold that of serum. Many circumstances, such as acute stress, can promote salivary nitrate secretion and nitrite formation. However, whether other conditions can also act as regulators of salivary nitrate/nitrite has not yet been explored. The present study was designed to determine the influence of exposure to different music on the salivary flow rate, nitrate secretion, and nitrite formation. Twenty-four undergraduate students (12 females and 12 males) were exposed to silence, rock music, classical music, or white noise on four consecutive mornings. The unstimulated and stimulated salivary flow rates were measured. Salivary ionic (Na+, Ca2+, Cl-, and PO43-) content and nitrate/nitrite levels were detected. The unstimulated salivary flow rate was significantly increased after classical music exposure compared to that after silence. Salivary nitrite levels were significantly higher upon classical music and white noise stimulation than under silence in females, whereas males were sensitive only to white noise with regard to the nitrite increase. In conclusion, this study demonstrated that classical music stimulation promotes salivary nitrite formation and increases saliva volume. These observations may play an important role in regulating oral function.
Affiliation(s)
- Luyuan Jin
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Department of General Dentistry and Emergency Care, Capital Medical University School of Stomatology, Beijing, 100050, China
- Mengbi Zhang
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Junji Xu
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Dengsheng Xia
- Department of General Dentistry and Emergency Care, Capital Medical University School of Stomatology, Beijing, 100050, China
- Chunmei Zhang
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Jingsong Wang
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Songlin Wang
- Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Capital Medical University School of Stomatology, Beijing, 100050, China
- Department of Biochemistry and Molecular Biology, Capital Medical University School of Basic Medical Sciences, Beijing, 100069, China
33
Schmidt B, Warns L, Hellmer M, Ulrich N, Hewig J. What Makes Us Feel Good or Bad. J Individ Differ 2018. [DOI: 10.1027/1614-0001/a000258] [Citation(s) in RCA: 3]
Abstract
Affective science calls for methods to induce mood in an engaging and ecologically valid way. We present a method employing a naturally occurring scenario that fits these criteria: a job interview. Participants received positive or negative feedback from a fictitious expert to induce positive or negative mood. After mood induction, we assessed participants’ decision-making behavior in the so-called information sampling task (IST). Results show that our mood induction successfully changed valence, dominance, and state self-esteem ratings, while there were no differences in arousal ratings. Decision making in the IST was not influenced by the induced mood. Effect sizes of mood induction were equally high for positive and negative mood concerning valence ratings (d = .8), with participants scoring high on self-control showing smaller mood induction effects. We conclude that our mood induction technique is an effective and natural way to induce mood in the laboratory, meeting current criteria of affective science.
Affiliation(s)
- Barbara Schmidt
- Department of Psychology, University of Jena, Germany
- Department of Psychology, University of Würzburg, Germany
- Luise Warns
- Department of Psychology, University of Würzburg, Germany
- Meike Hellmer
- Department of Psychology, University of Würzburg, Germany
- Natalie Ulrich
- Department of Psychology, University of Würzburg, Germany
- Department of Psychology, University of Osnabrück, Germany
- Johannes Hewig
- Department of Psychology, University of Würzburg, Germany
34
Maki H, Sakti S, Tanaka H, Nakamura S. Quality prediction of synthesized speech based on tensor structured EEG signals. PLoS One 2018; 13:e0193521. [PMID: 29902169 PMCID: PMC6002021 DOI: 10.1371/journal.pone.0193521] [Citation(s) in RCA: 1]
Abstract
This study investigates quality prediction methods for synthesized speech using EEG. Training a predictive model on EEG is challenging due to the small number of training trials, a low signal-to-noise ratio, and high correlation among independent variables. When a predictive model is trained with a machine learning algorithm, the features extracted from multi-channel EEG signals are usually organized as a vector, and their structure is ignored even though they are highly structured signals. This study predicts the subjective rating scores of synthesized speech samples, including their overall impression, valence, and arousal, by creating tensor-structured features instead of vectorized ones to exploit the structure of the features. We extracted various features to construct a tensor feature that maintained their structure. Vectorized and tensorial features were used to predict the rating scales, and the experimental results showed that prediction with tensorial features achieved better predictive performance. Among the features, those from the alpha and beta bands were particularly more effective for prediction than other features, which agrees with previous neurophysiological studies.
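A minimal sketch of the tensor-structured idea: organizing per-channel time-frequency features as a trial x channel x frequency x time tensor instead of flattening them into a vector. This is illustrative only; the paper's actual feature set and tensor layout are not reproduced here, and the function name is hypothetical:

```python
import numpy as np

def stft_tensor(trials, win, hop):
    """Build a 4-way tensor: trial x channel x frequency x time, from a
    windowed FFT per channel. Flattening with .reshape(len(trials), -1)
    recovers the usual vectorized features, discarding this structure."""
    out = []
    for trial in trials:                          # trial: (channels, samples)
        chans = []
        for sig in trial:
            frames = np.array([sig[i:i + win]
                               for i in range(0, len(sig) - win + 1, hop)])
            spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
            chans.append(spec.T)                  # (freq, time)
        out.append(chans)
    return np.array(out)
```

A tensor-aware learner can then regularize along each mode separately, which is the structural information a flat vector throws away.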
Affiliation(s)
- Hayato Maki
- Graduate School of Information Sciences, Nara Institute of Science and Technology, Ikoma, Nara, Japan
- Sakriani Sakti
- Graduate School of Information Sciences, Nara Institute of Science and Technology, Ikoma, Nara, Japan
- Hiroki Tanaka
- Graduate School of Information Sciences, Nara Institute of Science and Technology, Ikoma, Nara, Japan
- Satoshi Nakamura
- Graduate School of Information Sciences, Nara Institute of Science and Technology, Ikoma, Nara, Japan
35
Methods of Neuromarketing and Implication of the Frontal Theta Asymmetry induced due to musical stimulus as choice modeling. Procedia Comput Sci 2018. [DOI: 10.1016/j.procs.2018.05.059] [Citation(s) in RCA: 11]
36
Proverbio AM, De Benedetto F. Auditory enhancement of visual memory encoding is driven by emotional content of the auditory material and mediated by superior frontal cortex. Biol Psychol 2017; 132:164-175. [PMID: 29292233 DOI: 10.1016/j.biopsycho.2017.12.003] [Citation(s) in RCA: 17]
Abstract
BACKGROUND The aim of the present study was to investigate how auditory background interacts with learning and memory. Both facilitatory (e.g., the "Mozart effect") and interfering effects of background have been reported, depending on the type of auditory stimulation and on the concurrent cognitive tasks. METHOD Here we recorded event-related potentials (ERPs) during face encoding, followed by an old/new memory test, to investigate the effect of listening to classical music (a dramatic piece by Čajkovskij), environmental sounds (rain), or silence on learning. Participants were 15 healthy non-musician university students. Almost 400 (previously unknown) faces of women and men of various ages were presented. RESULTS Listening to music during study led to better encoding of faces, as indexed by an increased Anterior Negativity. The FN400 response recorded during the memory test showed a gradient in its amplitude reflecting face familiarity. FN400 was larger to new than to old faces, and to faces studied while listening to the rain sound or silence than while listening to music. CONCLUSION The results indicate that listening to music enhances memory recollection of faces by merging with visual information. A swLORETA analysis showed the main involvement of the Superior Temporal Gyrus (STG) and medial frontal gyrus in the integration of audio-visual information.
Affiliation(s)
- A M Proverbio
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy.
- F De Benedetto
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy
37
Arjmand HA, Hohagen J, Paton B, Rickard NS. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change. Front Psychol 2017; 8:2044. [PMID: 29255434 PMCID: PMC5723012 DOI: 10.3389/fpsyg.2017.02044] [Citation(s) in RCA: 16]
Abstract
Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features (intensity and tempo), and the enhanced tension and anticipation it produces, is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18-38 years) who listened to four stimuli: pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music, in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first presentation, while participants continuously rated the stimuli subjectively during the second. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across conditions confirmed that participants rated their own music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed at the frontal electrode pair FC3-FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research.
Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance, the introduction of a new motif, an instrument change, or a change in low-level acoustic factors such as pitch, dynamics, or texture. These findings provide empirical support for the proposal that change in basic musical features is a fundamental trigger of emotional responses in listeners.
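Identifying "peak periods" of left bias to align with musical events can be sketched as a simple threshold on the FA series relative to a baseline interval. The threshold rule (baseline mean plus k baseline standard deviations) and all names here are assumptions, not the authors' procedure:

```python
import numpy as np

def left_bias_peaks(fa, baseline, k=2.0):
    """Indices where the frontal-asymmetry series exceeds the baseline mean
    by k baseline standard deviations, i.e. candidate peak periods of
    left-biased (positive-affect) activity to compare against the onsets
    of musical events such as a new motif or an instrument change."""
    thresh = baseline.mean() + k * baseline.std()
    return np.flatnonzero(fa > thresh)
```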
Affiliation(s)
- Jesper Hohagen
- Institute for Systematic Musicology, University of Hamburg, Hamburg, Germany
- Bryan Paton
- Monash Biomedical Imaging, Monash University, University of Newcastle, Newcastle, NSW, Australia
- Nikki S. Rickard
- School of Psychological Sciences, Monash University, Melbourne, VIC, Australia
- Centre for Positive Psychology, Graduate School of Education, University of Melbourne, Melbourne, VIC, Australia
38
Brown DR, Cavanagh JF. The sound and the fury: Late positive potential is sensitive to sound affect. Psychophysiology 2017; 54:1812-1825. [PMID: 28726287 DOI: 10.1111/psyp.12959] [Citation(s) in RCA: 13]
Abstract
Emotion is an emergent construct of multiple distinct neural processes. EEG is uniquely sensitive to real-time neural computations, and thus is a promising tool to study the construction of emotion. This series of studies aimed to probe the mechanistic contribution of the late positive potential (LPP) to multimodal emotion perception. Experiment 1 revealed that LPP amplitudes for visual images, sounds, and visual images paired with sounds were larger for negatively rated stimuli than for neutrally rated stimuli. Experiment 2 manipulated this audiovisual enhancement by altering the valence pairings with congruent (e.g., positive audio + positive visual) or conflicting emotional pairs (e.g., positive audio + negative visual). Negative visual stimuli evoked larger early LPP amplitudes than positive visual stimuli, regardless of sound pairing. However, time frequency analyses revealed significant midfrontal theta-band power differences for conflicting over congruent stimuli pairs, suggesting very early (∼500 ms) realization of thematic fidelity violations. Interestingly, late LPP modulations were reflective of the opposite pattern of congruency, whereby congruent over conflicting pairs had larger LPP amplitudes. Together, these findings suggest that enhanced parietal activity for affective valence is modality independent and sensitive to complex affective processes. Furthermore, these findings suggest that altered neural activities for affective visual stimuli are enhanced by concurrent affective sounds, paving the way toward an understanding of the construction of multimodal affective experience.
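LPP amplitude is conventionally measured as the mean ERP voltage in a late post-stimulus window at parietal sites. A minimal sketch; the 400-800 ms window and the function name are assumptions and may differ from the study's actual analysis:

```python
import numpy as np

def lpp_amplitude(erp, times, t0=0.4, t1=0.8):
    """Mean ERP amplitude in a late time window [t0, t1) seconds
    post-stimulus; larger values for negative vs. neutral stimuli
    would correspond to the LPP effects described above."""
    mask = (times >= t0) & (times < t1)
    return erp[mask].mean()
```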
Affiliation(s)
- Darin R Brown
- Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
- James F Cavanagh
- Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
39
Markovic A, Kühnis J, Jäncke L. Task Context Influences Brain Activation during Music Listening. Front Hum Neurosci 2017; 11:342. [PMID: 28706480 PMCID: PMC5489556 DOI: 10.3389/fnhum.2017.00342] [Citation(s) in RCA: 11]
Abstract
In this paper, we examined brain activation in subjects during two music listening conditions: listening while simultaneously rating the musical piece being played [Listening and Rating (LR)] and listening to the musical pieces unconstrained [Listening (L)]. Using these two conditions, we tested whether the order in which the two conditions were completed influenced the brain activation observable during the L condition (LR → L or L → LR). We recorded high-density EEG in two subject groups while four well-known, positively experienced soundtracks were played. One group started with the L condition and continued with the LR condition (L → LR); the second group performed the experiment in reversed order (LR → L). From the recorded EEG, we computed the power for different frequency bands (theta, lower alpha, upper alpha, lower beta, and upper beta). Statistical analysis revealed that the power in all examined frequency bands increased during the L condition, but only when the subjects had not had previous experience with the LR condition (i.e., L → LR). For the subjects who began with the LR condition, there were no power increases during the L condition. Thus, previous experience with the LR condition prevented subjects from developing the particular mental state associated with the typical power increase in all frequency bands. The subjects without previous experience of the LR condition listened to the musical pieces in an unconstrained and undisturbed manner and showed a general power increase in all frequency bands. We interpret the fact that unconstrained music listening was associated with increased power in all examined frequency bands as a neural indicator of a mental state that can best be described as a mind-wandering state during which the subjects are “drawn into” the music.
Affiliation(s)
- Andjela Markovic
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Jürg Kühnis
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center, University of Zurich, Zurich, Switzerland; University Research Priority Program "Dynamic of Healthy Aging", University of Zurich, Zurich, Switzerland
40
Leder H, Goller J, Forster M, Schlageter L, Paul MA. Face inversion increases attractiveness. Acta Psychol (Amst) 2017; 178:25-31. [PMID: 28554156 DOI: 10.1016/j.actpsy.2017.05.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Received: 11/21/2016] [Revised: 03/22/2017] [Accepted: 05/17/2017] [Indexed: 11/19/2022] Open
Abstract
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode, and study how facial attractiveness is assessed. Faces rotated by 90° (tilting to either side) and by 180° were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness.
Affiliation(s)
- Helmut Leder
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
- Juergen Goller
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
- Michael Forster
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
- Lena Schlageter
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
- Matthew A Paul
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
41
Dziembowska I, Izdebski P, Rasmus A, Brudny J, Grzelczak M, Cysewski P. Effects of Heart Rate Variability Biofeedback on EEG Alpha Asymmetry and Anxiety Symptoms in Male Athletes: A Pilot Study. Appl Psychophysiol Biofeedback 2016; 41:141-50. [PMID: 26459346 DOI: 10.1007/s10484-015-9319-4] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Indexed: 11/27/2022]
Abstract
Heart rate variability biofeedback (HRV-BFB) has been shown to be a useful tool for managing stress in various populations. The present study was designed to investigate whether a biofeedback-based stress management tool consisting of rhythmic breathing, actively self-generated positive emotions, and a portable biofeedback device induces changes in athletes' HRV, EEG patterns, and self-reported anxiety and self-esteem. The study involved 41 healthy male athletes, aged 16-21 (mean 18.34 ± 1.36) years. Participants were randomly divided into two groups: biofeedback and control. Athletes in the biofeedback group received HRV biofeedback training; athletes in the control group did not receive any intervention. During the randomized controlled trial (days 0-21), the mean anxiety score declined significantly for the intervention group (change: -4, p < 0.001) but not for the control group (p = 0.817). In addition, compared to the control group, athletes in the biofeedback group showed substantial and statistically significant improvement in heart rate variability indices, changes in the power spectra of both theta and alpha brain waves, and changes in alpha asymmetry. These changes suggest better self-control in the central nervous system and better flexibility of the autonomic nervous system in the group that received biofeedback training. An HRV biofeedback-based stress management tool may be beneficial for stress reduction in young male athletes.
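The "heart rate variability indices" referenced above are typically time-domain measures such as SDNN and RMSSD computed from RR (inter-beat) intervals. A minimal sketch; the RR-interval values are invented for illustration, not data from the study:

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: standard deviation of the RR (inter-beat) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Invented RR series (ms): resting vs. slow-paced biofeedback breathing,
# where respiratory sinus arrhythmia makes beat-to-beat swings larger.
rr_rest = np.array([810.0, 820, 795, 805, 815, 800, 825, 790])
rr_breathing = np.array([780.0, 860, 770, 870, 760, 880, 765, 875])
```

On these toy series, both indices come out higher in the paced-breathing condition, which is the direction of change biofeedback training aims for.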
Affiliation(s)
- Inga Dziembowska
- Department of Pathophysiology, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Toruń, Toruń, Poland; Institute of Psychology, Kazimierz Wielki University in Bydgoszcz, Bydgoszcz, Poland
- Paweł Izdebski
- Institute of Psychology, Kazimierz Wielki University in Bydgoszcz, Bydgoszcz, Poland
- Anna Rasmus
- Institute of Psychology, Kazimierz Wielki University in Bydgoszcz, Bydgoszcz, Poland
- Janina Brudny
- Institute of Psychology, Kazimierz Wielki University in Bydgoszcz, Bydgoszcz, Poland; Department of Gastroenterology and Eating Disorders, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Toruń, Toruń, Poland
- Marta Grzelczak
- Department of Health, Physical Education and Sport, the University of Economy in Bydgoszcz, Bydgoszcz, Poland
- Piotr Cysewski
- Chair and Department of Physical Chemistry, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Toruń, Toruń, Poland
42
Brown LS. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children. J Music Ther 2016; 54:55-79. [DOI: 10.1093/jmt/thw017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Indexed: 11/13/2022]
43
Gebuza G, Dombrowska A, Kaźmierczak M, Gierszewska M, Mieczkowska E. The effect of music therapy on the cardiac activity parameters of a fetus in a cardiotocographic examination. J Matern Fetal Neonatal Med 2016; 30:2440-2445. [DOI: 10.1080/14767058.2016.1253056] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Indexed: 10/20/2022]
Affiliation(s)
- Grażyna Gebuza
- Faculty of Health Sciences, Nicolaus Copernicus University in Toruń, Collegium Medicum in Bydgoszcz, Bydgoszcz, Poland
- Agnieszka Dombrowska
- Faculty of Health Sciences, Nicolaus Copernicus University in Toruń, Collegium Medicum in Bydgoszcz, Bydgoszcz, Poland
- Marzena Kaźmierczak
- Faculty of Health Sciences, Nicolaus Copernicus University in Toruń, Collegium Medicum in Bydgoszcz, Bydgoszcz, Poland
- Małgorzata Gierszewska
- Faculty of Health Sciences, Nicolaus Copernicus University in Toruń, Collegium Medicum in Bydgoszcz, Bydgoszcz, Poland
- Estera Mieczkowska
- Faculty of Health Sciences, Nicolaus Copernicus University in Toruń, Collegium Medicum in Bydgoszcz, Bydgoszcz, Poland
44
Virtue S, Schutzenhofer M, Tomkins B. Hemispheric processing of predictive inferences during reading: The influence of negatively emotional valenced stimuli. Laterality 2016; 22:455-472. [PMID: 27530829 DOI: 10.1080/1357650x.2016.1218890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
Abstract
Although a left hemisphere advantage is usually evident during language processing, the right hemisphere is highly involved during the processing of weakly constrained inferences. However, currently little is known about how the emotional valence of environmental stimuli influences the hemispheric processing of these inferences. In the current study, participants read texts promoting either strongly or weakly constrained predictive inferences and performed a lexical decision task on inference-related targets presented to the left visual field-right hemisphere or the right visual field-left hemisphere. While reading these texts, participants either listened to dissonant music (i.e., the music condition) or did not listen to music (i.e., the no music condition). In the no music condition, the left hemisphere showed an advantage for strongly constrained inferences compared to weakly constrained inferences, whereas the right hemisphere showed high facilitation for both strongly and weakly constrained inferences. In the music condition, both hemispheres showed greater facilitation for strongly constrained inferences than for weakly constrained inferences. These results suggest that negatively valenced stimuli (such as dissonant music) selectively influence the right hemisphere's processing of weakly constrained inferences during reading.
Affiliation(s)
- Sandra Virtue
- Department of Psychology, DePaul University, Chicago, IL, USA
- Blaine Tomkins
- Department of Psychology, DePaul University, Chicago, IL, USA
45
Rickard NS, Toukhsati SR, Field SE. The Effect of Music on Cognitive Performance: Insight From Neurobiological and Animal Studies. Behav Cogn Neurosci Rev 2005; 4:235-61. [PMID: 16585799 DOI: 10.1177/1534582305285869] [Citation(s) in RCA: 67] [Impact Index Per Article: 8.4] [Indexed: 11/15/2022]
Abstract
The past 50 years have seen numerous claims that music exposure enhances human cognitive performance. Critical evaluation of studies across a variety of contexts, however, reveals important methodological weaknesses. The current article argues that an interdisciplinary approach is required to advance this research. A case is made for the use of appropriate animal models to avoid many confounds associated with human music research. Although such research has validity limitations for humans, reductionist methodology enables a more controlled exploration of music's elementary effects. This article also explores candidate mechanisms for this putative effect. A review of neurobiological evidence from human and comparative animal studies confirms that musical stimuli modify autonomic and neurochemical arousal indices, and may also modify synaptic plasticity. It is proposed that understanding how music affects animals provides a valuable conjunct to human research and may be vital in uncovering how music might be used to enhance cognitive performance.
Affiliation(s)
- Nikki S Rickard
- School of Psychology, Psychiatry and Psychological Medicine, Monash University, Australia
46
Towards the bio-personalization of music recommendation systems: A single-sensor EEG biomarker of subjective music preference. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2016.01.005] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Indexed: 11/22/2022]
47
Toward automatic detection of brain responses to emotional music through analysis of EEG effective connectivity. Comput Human Behav 2016. [DOI: 10.1016/j.chb.2016.01.005] [Citation(s) in RCA: 68] [Impact Index Per Article: 8.5] [Indexed: 11/22/2022]
48
Rogenmoser L, Zollinger N, Elmer S, Jäncke L. Independent component processes underlying emotions during natural music listening. Soc Cogn Affect Neurosci 2016; 11:1428-39. [PMID: 27217116 DOI: 10.1093/scan/nsw048] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Received: 09/13/2016] [Accepted: 03/31/2016] [Indexed: 12/12/2022] Open
Abstract
The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis calculated on the basis of a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectra) characteristics revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as at frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital and occipital areas. This analysis revealed that arousal was associated with a suppression of power in the alpha frequency range. On the other hand, valence was associated with an increase in theta frequency power in response to excerpts inducing happiness compared to sadness. These findings are partly compatible with the model proposed by Heller, arguing that the frontal lobe is involved in modulating valenced experiences (the left frontal hemisphere for positive emotions) whereas the right parieto-temporal region contributes to the emotional arousal.
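The pipeline described above (ICA decomposition of the EEG, then k-means clustering of the components) can be sketched on toy data. Everything here is illustrative: the two-channel mixture, the mixing matrix, and the single spectral feature used for clustering stand in for the paper's high-density data and combined spatial-plus-spectral feature set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Two synthetic "brain processes": a 6 Hz (theta) and a 10 Hz (alpha) source.
sources = np.column_stack([np.sin(2 * np.pi * 6 * t),
                           np.sin(2 * np.pi * 10 * t)])
mixing = np.array([[1.0, 0.5], [0.4, 1.2]])  # assumed forward model
eeg = sources @ mixing.T + 0.05 * rng.normal(size=(2000, 2))

# Decompose the two-channel "EEG" into independent components.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg)

# Cluster components by a simple spectral feature (dominant frequency),
# standing in here for the paper's spatial + spectral characteristics.
spectra = np.abs(np.fft.rfft(components, axis=0))
freqs = np.fft.rfftfreq(components.shape[0], d=t[1] - t[0])
dom_freq = freqs[spectra.argmax(axis=0)]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    dom_freq.reshape(-1, 1))
```

On this toy mixture, ICA recovers the theta- and alpha-dominated processes and k-means assigns them to separate clusters, mirroring the grouping of functionally distinct components in the study.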
Affiliation(s)
- Lars Rogenmoser
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland; Neuroimaging and Stroke Recovery Laboratory, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, 02215 Boston, MA, USA; Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8050 Zurich, Switzerland
- Nina Zollinger
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Stefan Elmer
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Lutz Jäncke
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, 8050 Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, 8050 Zurich, Switzerland; University Research Priority Program (URPP) "Dynamic of Healthy Aging", University of Zurich, 8050 Zurich, Switzerland; Department of Special Education, King Abdulaziz University, 21589 Jeddah, Saudi Arabia
49
Hausmann M, Hodgetts S, Eerola T. Music-induced changes in functional cerebral asymmetries. Brain Cogn 2016; 104:58-71. [PMID: 26970942 DOI: 10.1016/j.bandc.2016.03.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Received: 11/03/2015] [Revised: 02/22/2016] [Accepted: 03/01/2016] [Indexed: 11/17/2022]
Abstract
After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants listened to one of the emotional excerpts or sat in silence before completing an emotional chimeric faces task (Experiment 1), a visual line bisection task (Experiment 2), and a dichotic listening task (Experiments 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and an increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4, in which the tempo of the happy excerpt was manipulated while controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in emotional state.
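The dichotic-listening lateralization effects in Experiments 3 and 4 are conventionally quantified with a laterality index computed from correct reports per ear. A minimal sketch; the counts are invented for illustration, not the study's data:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Dichotic-listening laterality index in [-100, 100]; positive values
    indicate a right-ear (i.e., left-hemisphere) advantage."""
    total = right_correct + left_correct
    return 100.0 * (right_correct - left_correct) / total

# Invented correct-report counts per ear (not the study's data):
baseline = laterality_index(right_correct=34, left_correct=22)  # silence
happy = laterality_index(right_correct=40, left_correct=18)     # happy music
```

An increase from `baseline` to `happy` would correspond to the stronger left-hemispheric language bias the study reports after happy music.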
Affiliation(s)
- Markus Hausmann
- Department of Psychology, Durham University, Durham, United Kingdom
- Sophie Hodgetts
- Department of Psychology, Durham University, Durham, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
50
Sandler H, Tamm S, Fendel U, Rose M, Klapp BF, Bösel R. Positive Emotional Experience, Induced by Vibroacoustic Stimulation Using a Body Monochord in Patients with Psychosomatic Disorders, Is Associated with an Increase in EEG-Theta and a Decrease in EEG-Alpha Power. Brain Topogr 2016; 29:524-38. [PMID: 26936595 DOI: 10.1007/s10548-016-0480-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Received: 06/06/2014] [Accepted: 02/19/2016] [Indexed: 10/22/2022]
Abstract
Relaxation and meditation techniques are generally characterized by focusing attention, which is associated with an increase of frontal EEG Theta. Some studies on music perception suggest an activation of Frontal Midline Theta during emotionally positive attribution; others display a lateralization of electrocortical processes in the attribution of music-induced emotion of different valence. The present study examined the effects of vibroacoustic stimulation using a Body Monochord and of conventional relaxation music from an audio CD on the spontaneous EEG of patients suffering from psychosomatic disorders (N = 60). Each treatment took about 20 min and was presented to the patients in random order. Subjective experience was recorded via a self-rating scale. EEG power spectra of the Theta, Alpha-1 and Alpha-2 bands were analysed and compared between the two treatment conditions. There was no lateralization of electrocortical activity in terms of the emotional experience of the musical pieces. A reduction in Alpha-2 power occurred during both treatments. An emotionally positive attribution of the experience of the vibroacoustically induced relaxation state is characterized by a more pronounced release of control. In the context of focused attention this is interpreted as a flow experience. The spontaneous EEG showed an increase in Theta power, particularly in the frontal medial and central medial area, and a greater reduction in Alpha-2 power. The intensity of positive emotional feelings during the CD music showed no significant effect on the increase in Theta power.
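Comparing band power between the two treatment conditions within the same patients calls for a paired test. A sketch on simulated values; the numbers, effect size, and units are invented, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 60  # patients, matching the study's sample size

# Hypothetical frontal-midline theta power (arbitrary units) per patient
# under the two treatments; simulated so the Body Monochord runs higher.
theta_cd = rng.normal(loc=5.0, scale=1.0, size=n)
theta_monochord = theta_cd + rng.normal(loc=1.0, scale=0.8, size=n)

# Within-subject (paired) comparison across the two treatment conditions.
t_stat, p_value = ttest_rel(theta_monochord, theta_cd)
```

The paired design removes between-patient variability in baseline power, which is why each patient receives both treatments in random order.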
Affiliation(s)
- H Sandler
- Department for General Internal and Psychosomatic Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- S Tamm
- Center of Applied Neuroscience, Freie Universität Berlin, Berlin, Germany
- U Fendel
- Department for General Internal and Psychosomatic Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- M Rose
- Department for General Internal and Psychosomatic Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- B F Klapp
- Department for General Internal and Psychosomatic Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany
- R Bösel
- International Psychoanalytic University Berlin, Berlin, Germany; Department of Cognitive Neuroscience, Freie Universität Berlin, Berlin, Germany