101
Kim SG, Mueller K, Lepsien J, Mildner T, Fritz TH. Brain networks underlying aesthetic appreciation as modulated by interaction of the spectral and temporal organisations of music. Sci Rep 2019;9:19446. PMID: 31857651; PMCID: PMC6923468; DOI: 10.1038/s41598-019-55781-9.
Abstract
Music is organised both spectrally and temporally, and this organisation determines musical structures such as scale, harmony, and the sequential rules of chord progressions. A number of human neuroimaging studies have investigated the neural processes associated with emotional responses to music by examining the influence of musical valence (pleasantness/unpleasantness), comparing responses to original music with responses to unpleasantly manipulated counterparts in which harmony and sequential rules were varied. However, interactions between these alterations to harmony and sequential rules, in terms of emotional experience and the corresponding neural activity, have not been systematically studied, although such interactions are at the core of how music affects the listener. The current study investigates the interaction between alterations in harmony and sequential rules using data sets from two functional magnetic resonance imaging (fMRI) experiments. While replicating previous findings, we found a significant interaction between the spectral and temporal alterations in the fronto-limbic system, including the ventromedial prefrontal cortex (vmPFC), nucleus accumbens, caudate nucleus, and putamen. We further revealed that functional connectivity between the vmPFC and the right inferior frontal gyrus (IFG) was reduced when listening to excerpts altered in both domains compared to the original music. As the vmPFC has been suggested to operate as a pivotal point mediating between the limbic system and the frontal cortex in reward-related processing, we propose that this fronto-limbic interaction might reflect the involvement of cognitive processes in the emotional appreciation of music.
Affiliation(s)
- Seung-Goo Kim: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology and Neuroscience, Duke University, Durham, NC, United States
- Karsten Mueller: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jöran Lepsien: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Toralf Mildner: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas Hans Fritz: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute for Psychoacoustics and Electronic Music, University of Ghent, Ghent, Belgium
102
Shen YW, Lin YP. Challenge for Affective Brain-Computer Interfaces: Non-stationary Spatio-spectral EEG Oscillations of Emotional Responses. Front Hum Neurosci 2019;13:366. PMID: 31736727; PMCID: PMC6831623; DOI: 10.3389/fnhum.2019.00366.
Abstract
Electroencephalogram (EEG)-based affective brain-computer interfaces (aBCIs) have been attracting ever-growing interest and research resources. Whereas most previous neuroscience studies have focused on single-day/-session recordings and sensor-level analysis, less effort has been invested in assessing the fundamental nature of the non-stationary EEG oscillations underlying emotional responses across days and individuals. This work thus aimed to use a data-driven blind source separation method, independent component analysis (ICA), to derive emotion-relevant spatio-spectral EEG source oscillations and assess the extent of their non-stationarity. To this end, an 8-day music-listening experiment (spread over roughly 2 months) was conducted, recording whole-scalp 30-channel EEG data from 10 subjects. Across this large data set (80 sessions in total), EEG non-stationarity was clearly revealed in the numbers and locations of the brain sources of interest as well as in their spectral modulation by the emotional responses. Fewer than half of the subjects (two to four) showed the same relatively day-stationary (source reproducibility >6 days) spatio-spectral tendency towards one of the binary valence and arousal states. This work substantially advances previous work by exploiting intra- and inter-individual EEG variability in an ecological multiday scenario. Such EEG non-stationarity may inevitably present a great challenge for the development of an accurate, robust, and generalized emotion-classification model.
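For orientation, the ICA decomposition step described above can be sketched in a few lines with MNE-Python. The snippet below is a minimal illustration on synthetic 30-channel data; the channel names, sampling rate, filter settings, and number of components are assumptions, not the authors' settings.

```python
# Minimal ICA sketch for emotion-related spatio-spectral EEG oscillations,
# loosely following the approach described above (not the authors' code).
# A synthetic 30-channel recording stands in for real data.
import numpy as np
import mne
from mne.preprocessing import ICA

sfreq, n_ch = 250.0, 30
rng = np.random.default_rng(0)
data = rng.standard_normal((n_ch, int(sfreq) * 120)) * 1e-5   # 2 min of "EEG" in volts
info = mne.create_info([f"EEG{i:02d}" for i in range(n_ch)], sfreq, ch_types="eeg")
raw = mne.io.RawArray(data, info)
raw.filter(l_freq=1.0, h_freq=45.0)                           # band-pass before ICA

ica = ICA(n_components=20, method="infomax", random_state=42)
ica.fit(raw)

# Spectral profile of each independent component; in a real analysis the
# emotion-relevant components would be screened by scalp map and band power.
sources = ica.get_sources(raw).get_data()
psds, freqs = mne.time_frequency.psd_array_welch(sources, sfreq=sfreq, fmin=1.0, fmax=45.0)
alpha = psds[:, (freqs >= 8) & (freqs <= 13)].mean(axis=1)
print("alpha-band power per independent component:", alpha)
```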
Affiliation(s)
- Yi-Wei Shen: Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan
- Yuan-Pin Lin: Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan
103
Yang F, Zhao X, Jiang W, Gao P, Liu G. Multi-method Fusion of Cross-Subject Emotion Recognition Based on High-Dimensional EEG Features. Front Comput Neurosci 2019;13:53. PMID: 31507396; PMCID: PMC6714862; DOI: 10.3389/fncom.2019.00053.
Abstract
Emotion recognition using electroencephalogram (EEG) signals has attracted significant research attention; however, it remains difficult to improve recognition performance across subjects. To address this difficulty, multiple features were extracted in this study to form a high-dimensional feature set. Based on these high-dimensional features, an effective method for cross-subject emotion recognition was developed that integrates a significance test, sequential backward selection, and a support vector machine (ST-SBSSVM). The effectiveness of the ST-SBSSVM was validated on the Database for Emotion Analysis using Physiological Signals (DEAP) and the SJTU Emotion EEG Dataset (SEED). With respect to the high-dimensional features, the ST-SBSSVM improved the average accuracy of cross-subject emotion recognition by 12.4% on DEAP and 26.5% on SEED when compared with common emotion recognition methods. The recognition accuracy obtained using ST-SBSSVM was as high as that obtained using sequential backward selection (SBS) on the DEAP dataset, whereas on the SEED dataset ST-SBSSVM increased recognition accuracy by ~6% over SBS. Moreover, ST-SBSSVM eliminated ~97% (DEAP) and ~91% (SEED) of the program runtime required by SBS. Compared with recent similar works, the method developed here for cross-subject emotion recognition was found to be effective, with accuracies of 72% (DEAP) and 89% (SEED).
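A minimal sketch of the ST-SBSSVM idea, assuming a significance-test screening step followed by scikit-learn's backward sequential feature selection wrapped around an SVM; the synthetic data, p-value threshold, and selection fraction are illustrative, not the authors' implementation.

```python
# Sketch of significance-test screening + sequential backward selection with
# an SVM (the ST-SBSSVM idea), demonstrated on synthetic stand-in data.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # binary emotion labels (synthetic)
X = rng.standard_normal((200, 60))                # 200 trials x 60 EEG features
X[:, :10] += y[:, None] * 0.8                     # first 10 features carry class information

# 1) Significance test: keep features whose class means differ (p < 0.05)
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
keep = p < 0.05
X_st = X[:, keep] if keep.sum() > 1 else X

# 2) Sequential backward selection wrapped around an SVM
svm = SVC(kernel="rbf", C=1.0)
sbs = SequentialFeatureSelector(svm, n_features_to_select=0.5, direction="backward", cv=5)
X_sel = sbs.fit_transform(X_st, y)

# 3) Cross-validated accuracy of the final SVM on the selected features
print("CV accuracy:", cross_val_score(svm, X_sel, y, cv=5).mean())
```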
Affiliation(s)
- Fu Yang: College of Electronic Information and Engineering, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Xingcong Zhao: College of Electronic Information and Engineering, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Wenge Jiang: College of Electronic Information and Engineering, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Pengfei Gao: College of Electronic Information and Engineering, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
- Guangyuan Liu: College of Electronic Information and Engineering, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuit and Intelligent Information Processing, Chongqing, China
104
Fachner JC, Maidhof C, Grocke D, Nygaard Pedersen I, Trondalen G, Tucek G, Bonde LO. "Telling me not to worry…" Hyperscanning and Neural Dynamics of Emotion Processing During Guided Imagery and Music. Front Psychol 2019;10:1561. PMID: 31402880; PMCID: PMC6673756; DOI: 10.3389/fpsyg.2019.01561.
Abstract
To analyze how emotions and imagery are shared, processed and recognized in Guided Imagery and Music, we measured the brain activity of an experienced therapist (“Guide”) and client (“Traveler”) with dual-EEG in a real therapy session about the potential death of family members. Synchronously with the EEG, the session was videotaped and then micro-analyzed. Four raters identified therapeutically important moments of interest (MOIs) and moments of no interest (MONIs), which were transcribed and annotated. Several indices of emotion- and imagery-related processing were analyzed: frontal and parietal alpha asymmetry, frontal midline theta, and occipital alpha activity. Session ratings showed overlaps across all raters, confirming the importance of these MOIs, which showed different cortical activity in visual areas compared to resting-state. MOI 1 was a pivotal moment that included important imagery with a message of hope from a close family member, while in the second MOI the Traveler sent a message to an unborn baby. Generally, the results seemed to indicate that the emotions of Traveler and Guide during important moments were not positive, pleasurable or relaxed when compared to resting-state, confirming that both were dealing with negative emotions and anxiety that had to be contained in the interpersonal process. However, the temporal dynamics of emotion-related markers suggested shifts in emotional valence and intensity during these important, personally meaningful moments; for example, while receiving the message of hope, an increase in frontal alpha asymmetry was observed, reflecting increased positive emotional processing. EEG source localization during the message suggested a peak activation in the left middle temporal gyrus. Interestingly, peaks in emotional markers in the Guide partly paralleled the Traveler's peaks; for example, during the Guide's strong feeling of mutuality in MOI 2, the time series of frontal alpha asymmetries showed a significant cross-correlation, indicating similar emotional processing in Traveler and Guide. Investigating the moment-to-moment interaction in music therapy showed how asymmetry peaks align with the situated cognition of Traveler and Guide along the emotional contour of the music, representing the highs and lows of the therapy process. Combining dual-EEG with detailed audiovisual and qualitative data seems to be a promising approach for further research into music therapy.
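The frontal alpha asymmetry and cross-correlation measures used above can be illustrated roughly as follows; the F3/F4 channel pair, window length, and synthetic signals are assumptions for the sketch, not the study's pipeline.

```python
# Sketch of windowed frontal alpha asymmetry (ln right minus ln left alpha
# power) for two recordings, then correlated across people, as a simplified
# analogue of the hyperscanning analysis described above.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs, win = 250, 2 * 250                              # 250 Hz, 2-s windows

def alpha_asymmetry(left, right, fs, win):
    """Windowed ln(alpha power, right) - ln(alpha power, left)."""
    asym = []
    for start in range(0, len(left) - win + 1, win):
        f, p_l = welch(left[start:start + win], fs=fs, nperseg=win)
        _, p_r = welch(right[start:start + win], fs=fs, nperseg=win)
        band = (f >= 8) & (f <= 13)
        asym.append(np.log(p_r[band].mean()) - np.log(p_l[band].mean()))
    return np.array(asym)

rng = np.random.default_rng(1)
traveler = {"F3": rng.standard_normal(fs * 60), "F4": rng.standard_normal(fs * 60)}
guide = {"F3": rng.standard_normal(fs * 60), "F4": rng.standard_normal(fs * 60)}

a_t = alpha_asymmetry(traveler["F3"], traveler["F4"], fs, win)
a_g = alpha_asymmetry(guide["F3"], guide["F4"], fs, win)
r, p = pearsonr(a_t, a_g)                           # zero-lag coupling of the two series
print(f"Traveler-Guide asymmetry correlation: r={r:.2f}, p={p:.3f}")
```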
Affiliation(s)
- Jörg C Fachner: Cambridge Institute for Music Therapy Research, Anglia Ruskin University, Cambridge, United Kingdom; Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Clemens Maidhof: Cambridge Institute for Music Therapy Research, Anglia Ruskin University, Cambridge, United Kingdom; Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Denise Grocke: Melbourne Conservatorium of Music, University of Melbourne, Melbourne, VIC, Australia
- Inge Nygaard Pedersen: Department of Communication and Psychology, The Faculty of Humanities, Aalborg University, Aalborg, Denmark
- Gro Trondalen: Centre for Research in Music and Health, Norwegian Academy of Music, Oslo, Norway
- Gerhard Tucek: Josef Ressel Centre for Personalised Music Therapy, IMC University of Applied Sciences Krems, Krems an der Donau, Austria
- Lars O Bonde: Department of Communication and Psychology, The Faculty of Humanities, Aalborg University, Aalborg, Denmark; Centre for Research in Music and Health, Norwegian Academy of Music, Oslo, Norway
105
Matsuo M, Masuda F, Sumi Y, Takahashi M, Yoshimura A, Yamada N, Kadotani H. Background Music Dependent Reduction of Aversive Perception and Its Relation to P3 Amplitude Reduction and Increased Heart Rate. Front Hum Neurosci 2019;13:184. PMID: 31316359; PMCID: PMC6610262; DOI: 10.3389/fnhum.2019.00184.
Abstract
Music is commonly used to modify mood and has attracted attention as a potential therapeutic intervention. Despite the well-recognized effects of music on mood, changes in affective perception due to music remain largely unknown. Here, we examined whether the perception of aversive stimuli could be altered by mood-changing background music. Using subjective scoring data from 17 healthy volunteers, we assessed the effect of relaxing background music (RelaxBGM), busy background music (BusyBGM), or no background music (NoBGM) on responses to aversive white noise stimulation. Interestingly, the affective response to the white noise was selectively alleviated, and the white-noise-related P3 component amplitude was reduced, in the BusyBGM condition. However, affective responses and P3 amplitudes to reference pure-tone stimuli were similar regardless of background music condition. Interestingly, heart rate (HR) increased in BusyBGM, whereas no increase in HR was found in the similarly distressing NoBGM condition. These findings suggest that the increase in HR during BusyBGM exposure may be a characteristic of music that ameliorates the affective response to aversive stimuli, possibly through a selective reduction in neurophysiological responses.
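As a rough illustration of how a P3 mean amplitude might be compared between background-music conditions, the sketch below uses synthetic epoched data; the 250-500 ms window, sampling rate, and simulated amplitudes are conventional assumptions rather than the paper's actual pipeline.

```python
# Sketch of a P3 mean-amplitude comparison between two conditions, on
# synthetic epoched data (trials x time) at a single midline parietal channel.
import numpy as np
from scipy.stats import ttest_ind

fs = 500
t = np.arange(-0.2, 0.8, 1 / fs)                    # epoch time axis in seconds
rng = np.random.default_rng(9)

def simulate_epochs(n_trials, p3_gain):
    p3 = p3_gain * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))   # Gaussian "P3"
    return p3 + rng.standard_normal((n_trials, t.size)) * 2.0

busy_bgm = simulate_epochs(60, p3_gain=3.0)         # white noise during busy background music
no_bgm = simulate_epochs(60, p3_gain=5.0)           # white noise without background music

win = (t >= 0.25) & (t <= 0.5)
p3_busy = busy_bgm[:, win].mean(axis=1)             # mean amplitude per trial
p3_none = no_bgm[:, win].mean(axis=1)

t_stat, p = ttest_ind(p3_busy, p3_none)
print(f"P3 busy={p3_busy.mean():.2f} µV, no-music={p3_none.mean():.2f} µV, p={p:.4f}")
```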
Affiliation(s)
- Masahiro Matsuo: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan
- Fumi Masuda: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan; Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Yukiyoshi Sumi: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan
- Masahiro Takahashi: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan
- Atsushi Yoshimura: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan
- Naoto Yamada: Department of Psychiatry, Shiga University of Medical Science, Otsu, Japan
- Hiroshi Kadotani: Department of Sleep and Behavioral Science, Shiga University of Medical Science, Otsu, Japan
106
Daly I, Williams D, Hwang F, Kirke A, Miranda ER, Nasuto SJ. Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music. Sci Rep 2019;9:9415. PMID: 31263113; PMCID: PMC6603018; DOI: 10.1038/s41598-019-45105-2.
Abstract
The ability of music to evoke activity changes in the core brain structures that underlie the experience of emotion suggests that it has the potential to be used in therapies for emotion disorders. A large volume of research has identified a network of sub-cortical brain regions underlying music-induced emotions. Additionally, separate evidence from electroencephalography (EEG) studies suggests that prefrontal asymmetry in the EEG reflects the approach-withdrawal response to music-induced emotion. However, fMRI and EEG measure quite different brain processes and we do not have a detailed understanding of the functional relationships between them in relation to music-induced emotion. We employ a joint EEG – fMRI paradigm to explore how EEG-based neural correlates of the approach-withdrawal response to music reflect activity changes in the sub-cortical emotional response network. The neural correlates examined are asymmetry in the prefrontal EEG, and the degree of disorder in that asymmetry over time, as measured by entropy. Participants’ EEG and fMRI were recorded simultaneously while the participants listened to music that had been specifically generated to target the elicitation of a wide range of affective states. While listening to this music, participants also continuously reported their felt affective states. Here we report on co-variations in the dynamics of these self-reports, the EEG, and the sub-cortical brain activity. We find that a set of sub-cortical brain regions in the emotional response network exhibits activity that significantly relates to prefrontal EEG asymmetry. Specifically, EEG in the pre-frontal cortex reflects not only cortical activity, but also changes in activity in the amygdala, posterior temporal cortex, and cerebellum. We also find that, while the magnitude of the asymmetry reflects activity in parts of the limbic and paralimbic systems, the entropy of that asymmetry reflects activity in parts of the autonomic response network such as the auditory cortex. This suggests that asymmetry magnitude reflects affective responses to music, while asymmetry entropy reflects autonomic responses to music. Thus, we demonstrate that it is possible to infer activity in the limbic and paralimbic systems from pre-frontal EEG asymmetry. These results show how EEG can be used to measure and monitor changes in the limbic and paralimbic systems. Specifically, they suggest that EEG asymmetry acts as an indicator of sub-cortical changes in activity induced by music. This shows that EEG may be used as a measure of the effectiveness of music therapy to evoke changes in activity in the sub-cortical emotion response network. This is also the first time that the activity of sub-cortical regions, normally considered “invisible” to EEG, has been shown to be characterisable directly from EEG dynamics measured during music listening.
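The two EEG markers discussed here, asymmetry magnitude and the entropy of the asymmetry over time, can be sketched as below; Shannon entropy of the binned asymmetry values is used as a simple stand-in for the authors' entropy estimator, and the asymmetry series itself is synthetic.

```python
# Sketch of the two markers used above: the magnitude of a prefrontal EEG
# asymmetry time series and the entropy (disorder) of that series over time.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(2)
asym = rng.normal(loc=0.1, scale=0.5, size=300)   # windowed ln(right) - ln(left) alpha power

magnitude = np.abs(asym).mean()                   # index proposed to track affective responses
counts, _ = np.histogram(asym, bins=16)
disorder = entropy(counts + 1e-12, base=2)        # index proposed to track autonomic responses

print(f"asymmetry magnitude: {magnitude:.3f}, asymmetry entropy: {disorder:.2f} bits")
```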
Affiliation(s)
- Ian Daly: Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, CO4 3SQ, UK
- Duncan Williams: Digital Creativity Labs, Department of Computer Science, University of York, Heslington, YO10 5RG, UK
- Faustina Hwang: Brain Embodiment Laboratory, Biomedical Sciences and Biomedical Engineering Division, School of Biological Sciences, University of Reading, Reading, RG6 6AY, UK
- Alexis Kirke: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, PL4 8AA, UK
- Eduardo R Miranda: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, PL4 8AA, UK
- Slawomir J Nasuto: Brain Embodiment Laboratory, Biomedical Sciences and Biomedical Engineering Division, School of Biological Sciences, University of Reading, Reading, RG6 6AY, UK
107
Daly I, Bourgaize J, Vernitski A. Mathematical mindsets increase student motivation: Evidence from the EEG. Trends Neurosci Educ 2019;15:18-28. PMID: 31176468; DOI: 10.1016/j.tine.2019.02.005.
Abstract
Mathematical mindset theory suggests learner motivation in mathematics may be increased by opening problems using a set of recommended ideas. However, very little evidence supports this theory. We explore motivation through self-reports while learners attempt problems formulated according to mindset theory and standard problems. We also explore neural correlates of motivation and felt-affect while participants attempt the problems. Notably, we do not tell participants what mindset theory is and instead simply investigate whether mindset problems affect reported motivation levels and neural correlates of motivation in learners. We find significant increases in motivation for mindset problems compared to standard problems. We also find significant differences in brain activity in prefrontal EEG asymmetry between problems. This provides some of the first evidence that mathematical mindset theory increases motivation (even when participants are not aware of mindset theory), and that this change is reflected in brain activity of learners attempting mathematical problems.
Affiliation(s)
- Ian Daly: Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
- Jake Bourgaize: Department of Psychology, University of Essex, Colchester CO4 3SQ, UK
- Alexei Vernitski: Department of Mathematical Sciences, University of Essex, Colchester CO4 3SQ, UK
108
Bălan O, Moise G, Moldoveanu A, Leordeanu M, Moldoveanu F. Fear Level Classification Based on Emotional Dimensions and Machine Learning Techniques. Sensors 2019;19(7):1738. PMID: 30978980; PMCID: PMC6479627; DOI: 10.3390/s19071738.
Abstract
There has been steady progress in the field of affective computing over the last two decades that has integrated artificial intelligence techniques in the construction of computational models of emotion. Having, as a purpose, the development of a system for treating phobias that would automatically determine fear levels and adapt exposure intensity based on the user’s current affective state, we propose a comparative study between various machine and deep learning techniques (four deep neural network models, a stochastic configuration network, Support Vector Machine, Linear Discriminant Analysis, Random Forest and k-Nearest Neighbors), with and without feature selection, for recognizing and classifying fear levels based on the electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database. Fear was considered an emotion eliciting low valence, high arousal and low dominance. By dividing the ratings of valence/arousal/dominance emotion dimensions, we propose two paradigms for fear level estimation—the two-level (0—no fear and 1—fear) and the four-level (0—no fear, 1—low fear, 2—medium fear, 3—high fear) paradigms. Although all the methods provide good classification accuracies, the highest F scores have been obtained using the Random Forest Classifier—89.96% and 85.33% for the two-level and four-level fear evaluation modality.
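The labelling-and-classification idea can be sketched roughly as below: fear labels are derived from valence/arousal/dominance ratings (fear = low valence, high arousal, low dominance) and classified with a Random Forest. The midpoint split, the arousal-tertile grading of fear intensity, and the synthetic data are assumptions, not the paper's exact thresholds.

```python
# Sketch of deriving two-level and four-level fear labels from V/A/D ratings
# and classifying stand-in EEG/peripheral features with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
ratings = rng.uniform(1, 9, size=(n, 3))          # columns: valence, arousal, dominance
features = rng.standard_normal((n, 32))           # stand-in EEG + peripheral features

v, a, d = ratings.T
fear_binary = ((v < 5) & (a > 5) & (d < 5)).astype(int)   # 0 = no fear, 1 = fear

# Four-level variant: grade fear trials into low/medium/high by arousal tertiles
fear_four = np.zeros(n, dtype=int)
fear_idx = fear_binary == 1
tert = np.quantile(a[fear_idx], [1 / 3, 2 / 3])
fear_four[fear_idx] = 1 + np.digitize(a[fear_idx], tert)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("2-level CV accuracy:", cross_val_score(clf, features, fear_binary, cv=5).mean())
print("4-level CV accuracy:", cross_val_score(clf, features, fear_four, cv=5).mean())
```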
Affiliation(s)
- Oana Bălan: Department of Computer Science and Engineering, Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
- Gabriela Moise: Department of Computer Science, Information Technology, Mathematics and Physics (ITIMF), Petroleum-Gas University of Ploiesti, 100680 Ploiesti, Romania
- Alin Moldoveanu: Department of Computer Science and Engineering, Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
- Marius Leordeanu: Department of Computer Science and Engineering, Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
- Florica Moldoveanu: Department of Computer Science and Engineering, Faculty of Automatic Control and Computers, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
109
Park KS, Hass CJ, Fawver B, Lee H, Janelle CM. Emotional states influence forward gait during music listening based on familiarity with music selections. Hum Mov Sci 2019;66:53-62. PMID: 30913416; DOI: 10.1016/j.humov.2019.03.004.
Abstract
Music elicits a wide range of human emotions, which influence human movement. We sought to determine how emotional states impact forward gait during music listening, and whether the emotional effects of music on gait differ as a function of familiarity with music. Twenty-four healthy young adults completed walking trials while listening to four types of music selections: experimenter-selected music (unfamiliar-pleasant), its dissonant counterpart (unfamiliar-unpleasant), each participant's self-selected favorite music (familiar-pleasant), and its dissonant counterpart (familiar-unpleasant). Faster gait velocity and cadence, shorter stride time, and longer stride length were identified during pleasant versus unpleasant music conditions. Increased gait velocity, stride length, and cadence as well as reduced stride time were positively correlated with subjective ratings of emotional arousal and pleasure as well as musical emotions such as happiness-elation, nostalgia-longing, interest-expectancy, pride-confidence, and chills, and they were negatively related to anger-irritation and disgust-contempt. Moreover, familiarity with music interacted with emotional responses to influence gait kinematics. Gait velocity was faster in the familiar-pleasant music condition relative to the familiar-unpleasant condition, primarily due to longer stride length. In contrast, no differences in any gait parameters were found between unfamiliar-pleasant and unfamiliar-unpleasant music conditions. These results suggest that emotional states influence gait behavior during music listening and that such effects are altered by familiarity with music. Our findings provide fundamental evidence of the impact of musical emotion on human gait, with implications for using music to enhance motor performance in clinical and performance settings.
Affiliation(s)
- K Shin Park: Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, USA
- Chris J Hass: Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, USA
- Bradley Fawver: Department of Health, Kinesiology, and Recreation, University of Utah, Salt Lake City, UT, USA
- Hyokeun Lee: Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, USA
- Christopher M Janelle: Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, USA
110
Chen J, Jiang D, Zhang Y. A Common Spatial Pattern and Wavelet Packet Decomposition Combined Method for EEG-Based Emotion Recognition. Journal of Advanced Computational Intelligence and Intelligent Informatics 2019. DOI: 10.20965/jaciii.2019.p0274.
Abstract
To effectively reduce the day-to-day fluctuations and differences in subjects' electroencephalogram (EEG) signals and improve the accuracy and stability of EEG emotion classification, a new EEG feature extraction method based on common spatial patterns (CSP) and wavelet packet decomposition (WPD) is proposed. For five-day emotion-related EEG data from 12 subjects, the CSP algorithm is first used to project the raw EEG data into an optimal subspace and extract discriminative features by maximizing the Kullback-Leibler (KL) divergence between the two categories of EEG data. The WPD algorithm is then used to decompose the EEG signals into related features in the time-frequency domain. Finally, four state-of-the-art classifiers, Bagging tree, SVM, linear discriminant analysis and Bayesian linear discriminant analysis, are used for binary emotion classification. The experimental results show that with CSP spatial filtering, classification on the WPD features extracted with the bior3.3 wavelet basis achieves the best accuracy of 0.862, which is 29.3% higher than that of the power spectral density (PSD) feature without CSP preprocessing, 23% higher than that of the PSD feature with CSP preprocessing, 1.9% higher than that of the WPD feature extracted with the bior3.3 wavelet basis without CSP preprocessing, and 3.2% higher than that of the WPD feature extracted with the rbio6.8 wavelet basis without CSP preprocessing. The proposed method can effectively reduce the variance and non-stationarity of cross-day EEG signals, extract emotion-related features, and improve the accuracy and stability of cross-day EEG emotion classification. It is valuable for the development of robust emotional brain-computer interface applications.
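A minimal sketch of a CSP-plus-WPD pipeline of this kind, using MNE's CSP and PyWavelets with an SVM; the epoch dimensions and synthetic data are illustrative, while the bior3.3 wavelet follows the abstract.

```python
# Sketch of CSP spatial filtering followed by wavelet packet energies and an
# SVM, in the spirit of the method described above (not the authors' code).
import numpy as np
import pywt
from mne.decoding import CSP
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.standard_normal((120, 32, 512))        # trials x channels x samples (synthetic)
y = rng.integers(0, 2, 120)                    # binary emotion labels

# 1) CSP spatial filtering (keep the filtered time courses, not just log-variance)
csp = CSP(n_components=6, transform_into="csp_space")
X_csp = csp.fit_transform(X, y)                # trials x 6 components x samples

# 2) Wavelet packet energies (bior3.3, 3 decomposition levels) per CSP component
def wpd_energies(sig, wavelet="bior3.3", level=3):
    wp = pywt.WaveletPacket(sig, wavelet=wavelet, maxlevel=level)
    return [np.sum(node.data ** 2) for node in wp.get_level(level, order="natural")]

feats = np.array([[e for comp in trial for e in wpd_energies(comp)] for trial in X_csp])

# 3) SVM on the combined CSP + WPD features
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, feats, y, cv=5).mean())
```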
111
Ribeiro FS, Santos FH, Albuquerque PB, Oliveira-Silva P. Emotional Induction Through Music: Measuring Cardiac and Electrodermal Responses of Emotional States and Their Persistence. Front Psychol 2019;10:451. PMID: 30894829; PMCID: PMC6414444; DOI: 10.3389/fpsyg.2019.00451.
Abstract
Emotional inductions through music (EIM) procedures have proved to evoke genuine emotions according to neuroimaging studies. However, the persistence of the emotional states after being exposed to musical excerpts remains mostly unexplored. This study aimed to investigate the curve of emotional state generated by an EIM paradigm over a 6-min recovery phase, monitored with valence and arousal self-report measures, and physiological parameters. Stimuli consisted of a neutral and two valenced musical excerpts previously reported to generate such states. The neutral excerpt was composed in a minimalist form characterized by simple sonorities, rhythms, and patterns; the positive excerpt had fast tempo and major tones, and the negative one was slower in tempo and had minor tone. Results of 24 participants revealed that positive and negative EIM effectively induced self-reported happy and sad emotions and elicited higher skin conductance levels (SCL). Although self-reported adjectives describing evoked-emotions states changed to neutral after 2 min in the recovery phase, the SCL data suggest longer lasting arousal for both positive and negative emotional states. The implications of these outcomes for musical research are discussed.
Affiliation(s)
- Fabiana Silva Ribeiro: School of Psychology (CIPsi), University of Minho, Braga, Portugal; Faculty of Education and Psychology (CEDH/HNL), Universidade Católica, Porto, Portugal
112
Li P, Liu H, Si Y, Li C, Li F, Zhu X, Huang X, Zeng Y, Yao D, Zhang Y, Xu P. EEG Based Emotion Recognition by Combining Functional Connectivity Network and Local Activations. IEEE Trans Biomed Eng 2019;66:2869-2881. PMID: 30735981; DOI: 10.1109/tbme.2019.2897651.
Abstract
Objective: Spectral power analysis plays a predominant role in electroencephalogram-based emotion recognition, as it can reflect activity differences among multiple brain regions. In addition to activation differences, different emotions also involve different large-scale networks during related information processing. In this paper, both information propagation patterns and activation differences in the brain were fused to improve the performance of emotion recognition. Methods: We constructed emotion-related brain networks with the phase locking value and adopted a multiple-feature fusion approach to combine the complementary activation and connection information for emotion recognition. Results: Recognition results on three public emotional databases demonstrated that the combined features are superior to either single feature based on power distribution or network character. Furthermore, the feature fusion analysis revealed the common characters between activation and connection patterns involved in positive, neutral, and negative emotions during information processing. Significance: The proposed combination of information propagation patterns and activation differences in the brain is meaningful for developing effective human-computer interaction systems that adapt to human emotions in real-world applications.
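The phase-locking-value (PLV) connectivity and band-power (activation) features being fused can be sketched as follows; the Hilbert-based PLV and synthetic signals are a generic illustration, not the authors' implementation.

```python
# Sketch of PLV connectivity features combined with band-power (activation)
# features, echoing the feature-level fusion described above.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, welch

fs = 256
rng = np.random.default_rng(5)
eeg = rng.standard_normal((32, fs * 10))              # 32 channels, 10 s (synthetic)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(x, y):
    """Phase locking value between two narrow-band signals."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Connectivity features: alpha-band PLV for every channel pair
alpha = np.array([bandpass(ch, 8, 13, fs) for ch in eeg])
n = len(alpha)
plv_feats = [plv(alpha[i], alpha[j]) for i in range(n) for j in range(i + 1, n)]

# Activation features: alpha band power per channel
f, psd = welch(eeg, fs=fs, nperseg=fs)
power_feats = psd[:, (f >= 8) & (f <= 13)].mean(axis=1)

fused = np.concatenate([plv_feats, power_feats])      # feature-level fusion
print("fused feature vector length:", fused.size)
```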
113
Sanyal S, Nag S, Banerjee A, Sengupta R, Ghosh D. Music of brain and music on brain: a novel EEG sonification approach. Cogn Neurodyn 2019;13:13-31. PMID: 30728868; PMCID: PMC6339862; DOI: 10.1007/s11571-018-9502-4.
Abstract
Can we hear the sound of our brain? Is there any technique that can enable us to hear the neuro-electrical impulses originating from the different lobes of the brain? The answer to these questions is yes. In this paper we present a novel method with which we can sonify electroencephalogram (EEG) data recorded in a "control" state as well as under the influence of a simple acoustic stimulus: a tanpura drone. The tanpura has a very simple construction, yet its drone exhibits very complex acoustic features and is generally used to create an ambience during a musical performance. Hence, for this pilot project we chose to study the nonlinear correlations between musical stimuli (the tanpura drone as well as music clips) and sonified EEG data. To date, there has been no study that deals with the direct correlation between a bio-signal and its acoustic counterpart and also examines how that correlation varies under the influence of different types of stimuli. This study tries to bridge that gap and looks for a direct correlation between the music signal and EEG data using a robust mathematical microscope called Multifractal Detrended Cross-Correlation Analysis (MFDXA). We took EEG data from 10 participants during a 2-min "control" condition (i.e., with white noise) and a 2-min "tanpura drone" (musical stimulus) listening condition. The same experimental paradigm was repeated for two emotional pieces of music, "Chayanat" and "Darbari Kanada". These are well-known Hindustani classical ragas which conventionally portray contrasting emotional attributes, as also verified from human response data. Next, the EEG signals from different electrodes were sonified, and the MFDXA technique was used to assess the degree of correlation (the cross-correlation coefficient γx) between the EEG signals and the music clips. The variation of γx across different lobes of the brain during the course of the experiment provides interesting new information regarding the extraordinary ability of music stimuli to significantly engage several areas of the brain, unlike other stimuli (which engage specific domains only).
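A compact sketch of detrended cross-correlation analysis, the q = 2 special case of the MFDXA machinery referred to above, is given below; this simplified version on synthetic series is not the authors' full multifractal code.

```python
# Detrended cross-correlation coefficient (DCCA, the q = 2 case of MFDXA)
# between two partially coupled synthetic series, as a simplified illustration.
import numpy as np

def _detrended_cov(px, py, scale):
    n_seg = len(px) // scale
    covs, t = [], np.arange(scale)
    for k in range(n_seg):
        xs = px[k * scale:(k + 1) * scale]
        ys = py[k * scale:(k + 1) * scale]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # remove local linear trend
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))
    return np.mean(covs)

def dcca_coefficient(x, y, scale=64):
    px = np.cumsum(x - np.mean(x))                      # integrated profiles
    py = np.cumsum(y - np.mean(y))
    f2_xy = _detrended_cov(px, py, scale)
    f2_xx = _detrended_cov(px, px, scale)
    f2_yy = _detrended_cov(py, py, scale)
    return f2_xy / np.sqrt(f2_xx * f2_yy)

rng = np.random.default_rng(6)
music = rng.standard_normal(4096)
sonified_eeg = 0.4 * music + 0.6 * rng.standard_normal(4096)   # partially coupled series
print("DCCA cross-correlation coefficient:", round(dcca_coefficient(music, sonified_eeg), 3))
```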
Affiliation(s)
- Shankha Sanyal: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India; Department of Physics, Jadavpur University, Kolkata, India
- Sayan Nag: Department of Electrical Engineering, Jadavpur University, Kolkata, India
- Archi Banerjee: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India; Department of Physics, Jadavpur University, Kolkata, India
- Ranjan Sengupta: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India
- Dipak Ghosh: Sir C.V. Raman Centre for Physics and Music, Jadavpur University, Kolkata, India
114
Nemati S, Akrami H, Salehi S, Esteky H, Moghimi S. Lost in music: Neural signature of pleasure and its role in modulating attentional resources. Brain Res 2019;1711:7-15. PMID: 30629944; DOI: 10.1016/j.brainres.2019.01.011.
Abstract
We investigated the neural correlates of pleasure induced by listening to highly pleasant and neutral musical excerpts using electroencephalography (EEG). Power spectrum analysis of EEG data showed a distinct gradual change in the power of low-frequency oscillations in response to highly pleasant, but not neutral, musical excerpts. Specifically, listening to highly pleasant music was associated with (i) relatively higher oscillatory activity in the theta band over the frontocentral (FC) area and in the alpha band over the parieto-occipital area, and (ii) a gradual increase in the oscillatory power over time. Correlation analysis between behavioral and electrophysiological data revealed that theta power over the FC electrodes was correlated with subjective assessment of pleasantness while listening to music. To study the link between attention and positive valence in our experiments, volunteers performed a delayed match-to-sample memory task while listening to the musical excerpts. The subjects' performances were significantly lower under highly pleasant conditions compared to neutral conditions. Listening to pleasant music requires higher degrees of attention, leading to the observed decline in memory performance. Gradual development of low-frequency oscillations in the frontal and posterior areas may be at least partly due to gradual recruitment of higher levels of attention over time in response to pleasurable music.
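The core analysis, band power over frontocentral channels correlated with pleasantness ratings, can be sketched as follows; the channel set, ratings, and signals are synthetic placeholders rather than the study's data.

```python
# Sketch of theta-band power over frontocentral channels correlated with
# subjective pleasantness ratings, echoing the analysis described above.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 250
rng = np.random.default_rng(7)
n_trials, n_fc_channels = 40, 4
eeg = rng.standard_normal((n_trials, n_fc_channels, fs * 30))  # 30-s excerpts (synthetic)
pleasantness = rng.uniform(1, 9, n_trials)                     # self-report ratings

theta_power = np.empty(n_trials)
for i, trial in enumerate(eeg):
    f, psd = welch(trial, fs=fs, nperseg=2 * fs)               # per-channel PSD
    band = (f >= 4) & (f <= 8)
    theta_power[i] = psd[:, band].mean()                       # mean over FC channels

r, p = pearsonr(theta_power, pleasantness)
print(f"theta power vs. pleasantness: r={r:.2f}, p={p:.3f}")
```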
Affiliation(s)
- Samaneh Nemati: Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Haleh Akrami: Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Sina Salehi: Shiraz Neuroscience Research Center, Shiraz University of Medical Sciences, Shiraz 7194815644, Iran
- Hossein Esteky: Research Center for Brain and Cognitive Sciences, Shahid Beheshti University of Medical Sciences, Tehran 1983969411, Iran; Physiology Department, Shahid Beheshti University of Medical Sciences, Tehran 1983969411, Iran
- Sahar Moghimi: Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran; Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
115
Goshvarpour A, Goshvarpour A. EEG spectral powers and source localization in depressing, sad, and fun music videos focusing on gender differences. Cogn Neurodyn 2018;13:161-173. PMID: 30956720; DOI: 10.1007/s11571-018-9516-y.
Abstract
Previously, gender-specific affective responses have been shown using neurophysiological signals. The present study compared differences in electroencephalographic (EEG) power spectra and EEG brain sources between men and women during exposure to affective music video stimuli. The multi-channel EEG signals of 15 males and 15 females available in the Database for Emotion Analysis using Physiological Signals (DEAP) were studied while subjects were watching sad, depressing, and fun music videos. Seven EEG frequency bands were computed using average Fourier cross-spectral matrices. Standardized low-resolution electromagnetic tomography (sLORETA) was then used to localize the regions specifically involved in these emotional responses. To evaluate gender differences, an independent-samples t test was calculated on the sLORETA source powers. Our results showed that (1) the mean EEG power in all frequency bands was significantly higher in the women's group than in the men's group; (2) spatial distribution differences between men and women were detected in all EEG frequency bands; and (3) these differences were related to the emotional stimuli and were more evident for negative emotions. Taken together, our results show that men and women recruit dissimilar brain networks for processing sad, depressing, and fun audio-visual stimuli.
Affiliation(s)
- Atefeh Goshvarpour: Department of Biomedical Engineering, Faculty of Electrical Engineering, Sahand University of Technology, Tabriz, Iran
- Ateke Goshvarpour: Department of Biomedical Engineering, Imam Reza International University, Mashhad, Razavi Khorasan, Iran
116
Amezcua-Gutiérrez C, Marisela HG, Fernández Guasti A, Aguilar MAC, Guevara MA. Observing Erotic Videos With Heterosexual Content Induces Different Cerebral Responses in Homosexual and Heterosexual Men. J Homosex 2018;67:639-657. PMID: 30526443; DOI: 10.1080/00918369.2018.1550331.
Abstract
To determine the cerebral functionality associated with the perception and processing of erotic stimuli in men with different sexual orientation, this work evaluated the electroencephalographic activity (EEG) from several cortical areas, as well as subjective arousal in homosexual and heterosexual men during observation of an erotic film with heterosexual content. The heterosexual men rated the erotic video with higher general and sexual arousal than the homosexual participants. During observation of the neutral and erotic videos, both groups showed a decreased amplitude of the alpha band in prefrontal and parietal cortices, indicating increased attention. When watching the erotic video, the homosexual men showed an increased amplitude of the theta and fast bands only in the prefrontal cortex, which could be related to the cognitive processing of the erotic stimulus. These EEG results should broaden our knowledge of the cortical mechanisms related to the different perception and processing of erotic stimuli in men with different sexual orientations.
Affiliation(s)
- Manuel Alejandro Cruz Aguilar: Dirección de Investigaciones en Neurociencias, Laboratorio de Cronobiología y Sueño, Instituto Nacional de Psiquiatría "Ramón de la Fuente Muñiz", México City, México
- Miguel Angel Guevara: Institute of Neuroscience, CUCBA, University of Guadalajara, Guadalajara, Jalisco, México
117
Tandle AL, Joshi MS, Dharmadhikari AS, Jaiswal SV. Mental state and emotion detection from musically stimulated EEG. Brain Inform 2018;5:14. PMID: 30499008; PMCID: PMC6429168; DOI: 10.1186/s40708-018-0092-z.
Abstract
This literature survey attempts to clarify the different approaches used to study the impact of musical stimuli on the human brain with the EEG modality. It surveys the field through various aspects of such studies, specifically the experimental protocol, the EEG machine, the number of channels investigated, the features extracted, the categories of emotions, the brain areas, the brainwaves, the statistical tests, and the machine learning algorithms used for classification and validation of the developed models. The article comments on the particular weaknesses and strengths of these different approaches. Ultimately, the review concludes with a suitable method for studying the impact of musical stimuli on the brain and the implications of such studies.
Affiliation(s)
- Suyog V Jaiswal: H.B.T. Medical College and Dr. R.N. Cooper Mun. Gen. Hospital, Mumbai, India
118
Son YJ, Chun C. Research on electroencephalogram to measure thermal pleasure in thermal alliesthesia in temperature step-change environment. Indoor Air 2018;28:916-923. PMID: 29989216; DOI: 10.1111/ina.12491.
Abstract
Thermal pleasure is currently measured using psychological and physiological variables. However, in transient environments where temperatures change, it is hard to correlate psychological and physiological measures because there is a delay in physiological changes. This study tests a method for correlating both measures using the electroencephalogram (EEG), which can capture physiological feedback with a rapid response rate. In this experimental study, thermal pleasure was induced in a temperature step-change environment, a type of non-uniform and transient environment. During the experiment, EEG was monitored and psychological responses (thermal sensation and thermal comfort votes) were collected via a survey questionnaire. A total of 50 males in their twenties participated in a climate chamber experiment. An experimental group of 25 men was exposed to a temperature step-change between two different room conditions (32°C, 65% relative humidity and 25°C, 50% relative humidity), experiencing thermal pleasure. The control group of the remaining 25 men was exposed to an unchanging condition, experiencing thermal comfort close to thermal neutrality. The EEG spectral analysis demonstrated that, with thermal pleasure, the frequency band associated with pleasant emotion (theta) increased while the frequency band related to pleasantness, satisfaction or relaxation (beta) decreased.
Affiliation(s)
- Young J Son: Department of Interior Architecture and Built Environment, Yonsei University, Seoul, Korea
- Chungyoon Chun: Department of Interior Architecture and Built Environment, Yonsei University, Seoul, Korea
119
Zhao G, Zhang Y, Ge Y. Frontal EEG Asymmetry and Middle Line Power Difference in Discrete Emotions. Front Behav Neurosci 2018;12:225. PMID: 30443208; PMCID: PMC6221898; DOI: 10.3389/fnbeh.2018.00225.
Abstract
A traditional model of emotion cannot explain the differences in brain activities between two discrete emotions that are similar in the valence-arousal coordinate space. The current study elicited two positive emotions (amusement and tenderness) and two negative emotions (anger and fear) that are similar in both valence and arousal dimensions to examine the differences in brain activities in these emotional states. Frontal electroencephalographic (EEG) asymmetry and midline power in three bands (theta, alpha and beta) were measured when participants watched affective film excerpts. Significant differences were detected between tenderness and amusement on FP1/FP2 theta asymmetry, F3/F4 theta and alpha asymmetry. Significant differences between anger and fear on FP1/FP2 theta asymmetry and F3/F4 alpha asymmetry were also observed. For midline power, midline theta power could distinguish two negative emotions, while midline alpha and beta power could effectively differentiate two positive emotions. Liking and dominance were also related to EEG features. Stepwise multiple linear regression results revealed that frontal alpha and theta asymmetry could predict the subjective feelings of two positive and two negative emotions in different patterns. The binary classification accuracy, which used EEG frontal asymmetry and midline power as features and support vector machine (SVM) as classifiers, was as high as 64.52% for tenderness and amusement and 78.79% for anger and fear. The classification accuracy was improved after adding these features to other features extracted across the scalp. These findings indicate that frontal EEG asymmetry and midline power might have the potential to recognize discrete emotions that are similar in the valence-arousal coordinate space.
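The stepwise regression mentioned above can be sketched as a forward selection driven by ordinary-least-squares p-values; the feature names, the p < 0.05 entry criterion, and the synthetic data are assumptions, not the study's exact procedure.

```python
# Sketch of forward stepwise linear regression predicting a subjective rating
# from frontal asymmetry and midline power features (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 120
X = pd.DataFrame({
    "FP1_FP2_theta_asym": rng.standard_normal(n),
    "F3_F4_alpha_asym": rng.standard_normal(n),
    "midline_theta": rng.standard_normal(n),
    "midline_beta": rng.standard_normal(n),
})
y = 0.8 * X["F3_F4_alpha_asym"] + rng.standard_normal(n) * 0.5   # synthetic rating

selected, candidates = [], list(X.columns)
while candidates:
    # p-value of each candidate when added to the current model
    pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
             for c in candidates}
    best, p = min(pvals.items(), key=lambda kv: kv[1])
    if p >= 0.05:
        break
    selected.append(best)
    candidates.remove(best)

print("selected predictors:", selected)
```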
Affiliation(s)
- Guozhen Zhao: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yulin Zhang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yan Ge: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
120
Schirmer A, McGlone F. A touching Sight: EEG/ERP correlates for the vicarious processing of affectionate touch. Cortex 2018;111:1-15. PMID: 30419352; DOI: 10.1016/j.cortex.2018.10.005.
Abstract
Observers can simulate aspects of other people's tactile experiences. We asked whether they do so when faced with full-body social interactions, whether emerging representations go beyond basic sensorimotor mirroring, and whether they depend on processing goals and inclinations. In an EEG/ERP study, we presented line-drawn, dyadic interactions with and without affectionate touch. In an explicit and an implicit task, participants categorized images into touch versus no-touch and same versus opposite sex interactions, respectively. Modulations of central Rolandic rhythms implied that affectionate touch displays engaged sensorimotor mechanisms. Additionally, the late positive potential (LPP) being larger for images with as compared to without touch pointed to an involvement of higher order socio-affective mechanisms. Task and sex modulated touch perception. Sensorimotor responding, indexed by Rolandic rhythms, was fairly independent of the task but appeared less effortful in women than in men. Touch induced socio-affective responding, indexed by the LPP, declined from explicit to implicit processing in women and disappeared in men. In sum, this study provides first evidence that vicarious touch from full-body social interactions entails shared sensorimotor as well as socio-affective experiences. Yet, mental representations of touch at a socio-affective level are more likely when touch is goal relevant and observers are female. Together, these results outline the conditions under which touch in visual media may be usefully employed to socially engage observers.
Affiliation(s)
- Annett Schirmer: Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong; Center for Cognition and Brain Studies, The Chinese University of Hong Kong, Hong Kong
- Francis McGlone: School of Natural Sciences & Psychology, Liverpool John Moores University, UK; Institute of Psychology, Health & Society, University of Liverpool, UK
121
Abstract
In this article we summarize the main scientific studies of the changes in bioelectrical brainwave activity that occur while listening to music. Brainwave spectral analysis, derived from electroencephalographic recordings, is a powerful tool for obtaining deep and objective insights into the effects of music on the brain. This capacity is being investigated in various contexts. Starting with healthy populations, studies also seek to determine the impact of music in conditions such as disorders of consciousness, psychiatric diseases, and chronic conditions, as well as to further explore the role of music for rehabilitation purposes. Additional investigations in this field are needed not only to deepen knowledge of the general neurophysiology of listening to music, but also to open new perspectives for its broader use in clinical practice.
Affiliation(s)
- Rūta Praninskienė: Department of Children's Neurology, Children's Hospital, Affiliate of Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania; Clinic of Children's Diseases, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
122
Linnemann A, Kreutz G, Gollwitzer M, Nater UM. Validation of the German Version of the Music-Empathizing-Music-Systemizing (MEMS) Inventory (Short Version). Front Behav Neurosci 2018;12:153. PMID: 30135649; PMCID: PMC6092492; DOI: 10.3389/fnbeh.2018.00153.
Abstract
Background: Kreutz et al. (2008) developed the Music-Empathizing-Music-Systemizing (ME-MS) Inventory to extend Baron-Cohen's cognitive style theory to the domain of music. We sought to confirm the ME-MS construct in a German sample and to explore these individual differences in relation to music preferences. Methods: The German adaptation of the MEMS Inventory was achieved by forward and backward translation. A total of 1014 participants (532 male, age: 33.79 ± 11.89 years) completed the 18-item short version of the MEMS Inventory online. Confirmatory factor analysis (CFA) was performed and cut-off values were established to identify individuals who could be classified as ME, Balanced, or MS. Statistical analyses were used to examine differences in music preference based on music-related cognitive styles. Results: Confirmatory factor analysis (CFA) confirmed two factors, ME and MS, with sufficiently good fit (CFI = 0.87; GFI = 0.93) and adequate internal consistency (Cronbach's Alpha ME: 0.753, MS: 0.783). Analyses of difference scores allowed for a classification as either ME, Balanced, or MS. ME and MS differed in sociodemographic variables, preferred music genres, preferred reasons for music listening, musical expertise, situations in which music is listened to in daily life, and frequency of music-induced chills. Discussion: The German short version of the MEMS Inventory shows good psychometric properties. Based on the cut-off values, differences in music preference were found. Consequently, ME and MS use music in different ways, and the cognitive style of music listening thus appears to be an important moderator in research on the psychology of music. Future research should identify behavioral and neurophysiological correlates and investigate mechanisms underlying music processing based on these different cognitive styles of music listening.
Affiliation(s)
- Alexandra Linnemann: Department of Psychiatry and Psychotherapy, University Medical Center Mainz, Mainz, Germany
- Gunter Kreutz: Department of Music, School for Linguistics and Cultural Studies, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Mario Gollwitzer: Chair of Social Psychology, Department Psychology, Ludwig-Maximilians-Universität Munich, Munich, Germany
- Urs M Nater: Clinical Psychology, Department of Psychology, University of Vienna, Vienna, Austria
123
|
Hamada M, Zaidan BB, Zaidan AA. A Systematic Review for Human EEG Brain Signals Based Emotion Classification, Feature Extraction, Brain Condition, Group Comparison. J Med Syst 2018; 42:162. [PMID: 30043178 DOI: 10.1007/s10916-018-1020-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Accepted: 07/18/2018] [Indexed: 11/24/2022]
Abstract
The study of electroencephalography (EEG) signals is not a new topic; however, the analysis of human emotions upon exposure to music is considered an important research direction. Although distributed across various academic databases, research on this concept is limited. To extend work in this area, we explored and analysed the academic articles published within this scope. In this paper, a systematic review is therefore carried out to map the research landscape of EEG-based human emotion into a taxonomy. We systematically searched for all articles on EEG-based human emotion in the context of music in three main databases, ScienceDirect, Web of Science and IEEE Xplore, from 1999 to 2016. These databases feature academic studies that used EEG to measure brain signals, with a focus on the effects of music on human emotions. The screening and filtering of articles were performed in three iterations. In the first iteration, duplicate articles were excluded. In the second, articles were filtered according to their titles and abstracts, and articles outside the scope of our domain were excluded. In the third, articles were filtered by reading the full text, excluding those outside the scope of our domain or not meeting our criteria. Based on the inclusion and exclusion criteria, 100 articles were selected and separated into five classes. The first class (39 articles, 39%) concerns emotion, wherein various emotions are classified using artificial intelligence (AI). The second class (21 articles, 21%) comprises studies that use EEG techniques and is named 'brain condition'. The third class (8 articles, 8%) relates to feature extraction, a step preceding emotion classification; although this process makes use of classifiers, these articles are not listed under the first class because they focus on feature extraction rather than classifier accuracy. The fourth class (26 articles, 26%) comprises studies that compare two or more groups to identify and characterise human emotion from EEG. The final class (6 articles, 6%) represents articles that study music as a stimulus and its impact on brain signals. We then discuss five main categories: action types, age of the participants, sample size, duration of recording and of listening to music, and the countries or nationalities of the authors who published these previous studies. Finally, we identify the main characteristics of this promising area of science in terms of the motivation for using EEG to measure human brain signals, the open challenges obstructing its employment, and recommendations to improve the utilisation of EEG.
Collapse
Affiliation(s)
- Mohamed Hamada
- Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
| | - B B Zaidan
- Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia
| | - A A Zaidan
- Department of Computing, Universiti Pendidikan Sultan Idris, Tanjong Malim, Perak, Malaysia.
| |
Collapse
|
124
|
Kaji H, Iizuka H, Sugiyama M. ECG-Based Concentration Recognition With Multi-Task Regression. IEEE Trans Biomed Eng 2018; 66:101-110. [PMID: 29993442 DOI: 10.1109/tbme.2018.2830366] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Recognition of human activities and mental states using wearable sensors and smartphones has attracted considerable attention recently. In particular, prediction of the stress level of a subject using an electrocardiogram sensor has been studied extensively. In this paper, we attempt to predict the degree of concentration by using heart-rate features. However, due to strong diversity across individuals and high sampling costs, building an accurate prediction model is still highly challenging. METHOD To overcome these difficulties, we propose to use a multitask learning (MTL) technique for effectively sharing information among similar individuals. RESULT Through experiments with 18 healthy subjects performing daily office tasks, such as writing reports, we demonstrate that the proposed method significantly improves the accuracy of concentration prediction in small-sample situations. CONCLUSION The performance of the MTL method is shown to be stable across different subjects, which is an important advantage over conventional models. SIGNIFICANCE This improvement has significant impact on real-world concentration recognition because the data collection burden on each user can be drastically mitigated.
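A minimal sketch of one common multitask-learning formulation, mean-regularized ridge regression, in which each subject's weight vector is shrunk toward a shared mean so that data-poor subjects borrow strength from the group. This is a generic illustration of the idea, not the authors' exact model; the feature names and synthetic data are assumptions.

```python
import numpy as np

def mean_regularized_mtl(tasks, lam=1.0, n_iter=20):
    """Mean-regularized multi-task ridge regression.

    tasks: list of (X, y) pairs, one per subject. Each subject's weights are
    penalized toward a shared mean vector, which is re-estimated every pass.
    """
    d = tasks[0][0].shape[1]
    w_shared = np.zeros(d)
    weights = [np.zeros(d) for _ in tasks]
    for _ in range(n_iter):
        for s, (X, y) in enumerate(tasks):
            A = X.T @ X + lam * np.eye(d)
            b = X.T @ y + lam * w_shared
            weights[s] = np.linalg.solve(A, b)
        w_shared = np.mean(weights, axis=0)        # pool information across subjects
    return weights, w_shared

# Hypothetical data: 5 subjects, 30 samples each, 4 heart-rate features
# (e.g., mean RR, SDNN, LF/HF ratio, RMSSD) predicting a concentration score.
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3, 0.1])
tasks = []
for _ in range(5):
    X = rng.normal(size=(30, 4))
    y = X @ (true_w + 0.1 * rng.normal(size=4)) + 0.2 * rng.normal(size=30)
    tasks.append((X, y))
per_subject_w, shared_w = mean_regularized_mtl(tasks, lam=5.0)
print(np.round(shared_w, 2))
```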
Collapse
|
125
|
Balasubramanian G, Kanagasabai A, Mohan J, Seshadri NG. Music induced emotion using wavelet packet decomposition—An EEG study. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.01.015] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
126
|
Strong JV, Mast BT. The cognitive functioning of older adult instrumental musicians and non-musicians. AGING NEUROPSYCHOLOGY AND COGNITION 2018. [DOI: 10.1080/13825585.2018.1448356] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Jessica V Strong
- New England Geriatric Research Education and Clinical Center (NE GRECC), Boston VA Healthcare System, Boston, MA, USA
| | - Benjamin T Mast
- Department of Psychological and Brain Sciences, 317 Life Sciences, University of Louisville, Louisville, KY, USA
| |
Collapse
|
127
|
Cao D, Li Y, Niznikiewicz MA, Tang Y, Wang J. The theta burst transcranial magnetic stimulation over the right PFC affects electroencephalogram oscillation during emotional processing. Prog Neuropsychopharmacol Biol Psychiatry 2018; 82:21-30. [PMID: 29241839 DOI: 10.1016/j.pnpbp.2017.12.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/25/2017] [Revised: 12/08/2017] [Accepted: 12/08/2017] [Indexed: 10/18/2022]
Abstract
Prefrontal cortex (PFC) plays an important role in emotional processing and therefore is one of the most frequently targeted regions for non-invasive brain stimulation such as repetitive transcranial magnetic stimulation (rTMS) in clinical trials, especially in the treatment of emotional disorders. As an approach to enhance the effectiveness of rTMS, continuous theta burst stimulation (cTBS) has been demonstrated to be efficient and safe. However, it is unclear how cTBS affects brain processes related to emotion; in particular, psychophysiological studies on the underlying neural mechanisms are sparse. In the current study, we investigated how cTBS influences emotional processing when applied over the right PFC. Participants performed an emotion recognition Go/NoGo task, which required them to make a Go response to either happy or fearful faces after cTBS or after sham stimulation, while 64-channel electroencephalogram (EEG) was recorded. EEG oscillations were examined using event-related spectral perturbation (ERSP) in a time interval between 170 and 310 ms after face stimulus onset. In the sham group, we found a significant difference in the alpha band between responses to happy and fearful stimuli, but this effect was absent in the cTBS group. Alpha-band activity at the scalp was reduced, suggesting an excitatory effect at the brain level. Beta- and gamma-band activity was not sensitive to the cTBS intervention. The results of the current study demonstrate that cTBS does affect emotion processing and that this effect is reflected in changes in EEG oscillations specifically in the alpha band. The results confirm the role of the prefrontal cortex in emotion processing. We also suggest that this pattern of cTBS results elucidates mechanisms by which mood improvement in depressive disorders is achieved using cTBS intervention.
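The event-related spectral perturbation analysis described above can be approximated, for a single band, by baseline-normalized band-power envelopes. The sketch below computes an alpha-band (8-13 Hz) ERSP-like curve from epoched data via a band-pass filter and Hilbert envelope; it is a simplified stand-in for a full wavelet-based ERSP, and the epoch layout, onset time, and synthetic data are assumptions rather than the authors' parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_ersp(trials, fs, baseline=(-0.3, 0.0), t0=0.5):
    """Alpha-band (8-13 Hz) event-related spectral perturbation, in dB.

    trials: (n_trials, n_samples) epochs; t0 is stimulus onset (s) within each
    epoch; the baseline window is given in seconds relative to onset.
    """
    b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
    power = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1)) ** 2
    onset = int(t0 * fs)
    b0, b1 = (onset + int(s * fs) for s in baseline)
    base = power[:, b0:b1].mean(axis=1, keepdims=True)      # per-trial baseline
    ersp_db = 10 * np.log10(power / base)
    return ersp_db.mean(axis=0)                             # average over trials

# Synthetic demo: 40 epochs of 1 s at 500 Hz with alpha suppression after onset.
fs, n_trials = 500, 40
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 0.5, 1.0, 0.5)   # onset at 0.5 s
epochs = alpha + 0.5 * rng.normal(size=(n_trials, t.size))
curve = alpha_ersp(epochs, fs)
# Mean dB change roughly 170-310 ms after stimulus onset.
print(curve[int(0.67 * fs):int(0.81 * fs)].mean())
```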
Collapse
Affiliation(s)
- Dan Cao
- School of Communication and Information Engineering, Qianweichang College, Shanghai University, Shanghai 200444, China
| | - Yingjie Li
- School of Communication and Information Engineering, Qianweichang College, Shanghai University, Shanghai 200444, China.
| | - Margaret A Niznikiewicz
- Laboratory of Cognitive Neuroscience, Boston VA Healthcare System, Brockton Division and Department of Psychiatry, Harvard Medical School, Boston, MA 02301, United States.
| | - Yingying Tang
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China.
| | - Jijun Wang
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China; CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences, China; Brain Science and Technology Research Center, Shanghai Jiao Tong University, Shanghai 200030, China; Bio-X Institutes, Key Laboratory for the Genetics of Developmental and Neuropsychiatric Disorders (Ministry of Education), Shanghai Jiaotong University, Shanghai 200030, China.
| |
Collapse
|
128
|
Kalaganis F, Adamos D, Laskaris N. Musical NeuroPicks: A consumer-grade BCI for on-demand music streaming services. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.08.073] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
129
|
Joucla C, Nicolier M, Giustiniani J, Brunotte G, Noiret N, Monnin J, Magnin E, Pazart L, Moulin T, Haffen E, Vandel P, Gabriel D. Evidence for a neural signature of musical preference during silence. Int J Psychophysiol 2018; 125:50-56. [PMID: 29474854 DOI: 10.1016/j.ijpsycho.2018.02.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 02/16/2018] [Accepted: 02/18/2018] [Indexed: 11/18/2022]
Abstract
One of the most basic and person-specific affective responses to music is liking. The present investigation sought to determine whether liking was preserved during spontaneous auditory imagery. To this purpose, we inserted two-second silent intervals into liked and disliked songs, a method known to automatically recreate a mental image of these songs. Neural correlates of musical preference were measured by high-density electroencephalography in twenty subjects who listened to a set of five pre-selected unknown songs an equal number of times over two weeks. Time-frequency analysis of the two most liked and the two most disliked songs confirmed the presence of neural responses related to liking. At the beginning of silent intervals (400-900 ms and 1000-1300 ms), significant differences in theta activity were observed, originating from the inferior frontal and superior temporal gyrus. These two brain structures are known to work together to process various aspects of music and are also activated when measuring liking while listening to music. At the end of silent intervals (1400-1900 ms), significant alpha activity differences originating from the insula were observed; the exact role of this structure here remains to be explored. Although exposure was controlled for liked and disliked songs, liked songs were rated as more familiar, underlining the strong relationship that exists between liking, exposure, and familiarity.
Collapse
Affiliation(s)
- Coralie Joucla
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France
| | - Magali Nicolier
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Julie Giustiniani
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Gaelle Brunotte
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France
| | - Nicolas Noiret
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Laboratoire de psychologie EA 3188, Université de Franche-Comté, F-25000 Besançon, France
| | - Julie Monnin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Eloi Magnin
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
| | - Lionel Pazart
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France
| | - Thierry Moulin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
| | - Emmanuel Haffen
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Pierre Vandel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France; Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France
| | - Damien Gabriel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France.
| |
Collapse
|
130
|
Koelsch S, Skouras S, Lohmann G. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy. PLoS One 2018; 13:e0190057. [PMID: 29385142 PMCID: PMC5791961 DOI: 10.1371/journal.pone.0190057] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Accepted: 12/07/2017] [Indexed: 01/12/2023] Open
Abstract
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
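Eigenvector centrality, the network measure highlighted above, can be computed directly from a non-negative functional-connectivity matrix by power iteration. The sketch below is a generic illustration on a toy correlation matrix, not the authors' fMRI pipeline; the region count and synthetic time series are assumptions.

```python
import numpy as np

def eigenvector_centrality(conn, n_iter=200, tol=1e-9):
    """Eigenvector centrality of a weighted, non-negative connectivity matrix."""
    c = np.ones(conn.shape[0]) / conn.shape[0]
    for _ in range(n_iter):
        new = conn @ c
        new /= np.linalg.norm(new)          # keep the vector at unit length
        if np.linalg.norm(new - c) < tol:
            break
        c = new
    return c

# Hypothetical region-by-time signal matrix: 6 regions, 200 samples.
rng = np.random.default_rng(2)
ts = rng.normal(size=(6, 200))
ts[1] += 0.7 * ts[0]                        # make regions 0 and 1 strongly coupled
conn = np.corrcoef(ts)
conn = np.clip(conn, 0, None)               # keep only positive couplings
np.fill_diagonal(conn, 0)
print(np.round(eigenvector_centrality(conn), 3))
```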
Collapse
Affiliation(s)
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- * E-mail:
| | - Stavros Skouras
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Gabriele Lohmann
- Department of Biomedical Magnetic Resonance, University Clinic Tübingen, Tübingen, Germany
- Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
131
|
Bazanova OM, Auer T, Sapina EA. On the Efficiency of Individualized Theta/Beta Ratio Neurofeedback Combined with Forehead EMG Training in ADHD Children. Front Hum Neurosci 2018; 12:3. [PMID: 29403368 PMCID: PMC5785729 DOI: 10.3389/fnhum.2018.00003] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Accepted: 01/03/2018] [Indexed: 01/28/2023] Open
Abstract
Background: Neurofeedback training (NFT) to decrease the theta/beta ratio (TBR) has been used for treating hyperactivity and impulsivity in attention deficit hyperactivity disorder (ADHD), however, often with low efficiency. Individual variance in the EEG profile can confound NFT, because, if ignored, it may lead to training non-relevant activity; more importantly, it may influence ADHD-related activity adversely, which may even result in worsening ADHD symptoms. The electromyographic (EMG) signal arising from forehead muscles can also explain the low efficiency of NFT in ADHD, from both a practical and a psychological point of view. The first aim of this study was to determine the EEG and EMG biomarkers most related to the main ADHD characteristics, such as impulsivity and hyperactivity. The second aim was to confirm our hypothesis that the efficiency of TBR NFT can be increased by individual adjustment of the frequency bands and simultaneous training on forehead muscle tension. Methods: We recruited 94 children diagnosed with ADHD (ADHD group) and 23 healthy controls (HC). All participants were male and aged between six and nine years. Impulsivity and attention were assessed with a Go/No-Go task and a delayed gratification task, respectively, and 19-channel EEG and forehead EMG were recorded. The ADHD group was then randomly subdivided into (1) standard, (2) individualized, (3) individualized+EMG, and (4) sham NFT (control) groups. The groups were compared based on TBR and EEG alpha activity, as well as hyperactivity and impulsivity, at three time points: pre-NFT, post-NFT and 6 months after the NFT (follow-up). Results: ADHD children were characterized by decreased individual alpha peak frequency, alpha bandwidth and alpha amplitude suppression magnitude, as well as by increased alpha1/alpha2 (a1/a2) ratio and scalp muscle tension when compared with HC (η2 ≥ 0.212). All contingent TBR NFT groups exhibited a significant NFT-related decrease in TBR that was not evident in the control group. Moreover, we detected higher overall alpha activity in the individualized but not in the standard NFT group. A mixed MANOVA considering the between-subject factor GROUP and the within-subject factor TIME showed that the individualized+EMG group exhibited the highest level of clinical improvement, which was associated with an increase in individual alpha activity at the 6-month follow-up compared with the other approaches (post hoc t = 3.456, p = 0.011). Conclusions: This study identified various (adjusted) alpha activity metrics as biomarkers closely related to ADHD symptoms, and demonstrated that TBR NFT individually adjusted for variance in alpha activity is more successful and clinically more efficient than standard, non-individualized NFT. Moreover, these training effects of the individualized TBR NFT lasted longer when combined with EMG training.
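The two quantities central to this protocol, the theta/beta ratio and the individual alpha peak frequency, can be read off a Welch power spectrum, as in the minimal sketch below. In an individualized protocol the theta and beta edges would themselves be re-anchored to the individual alpha peak; the fixed band edges and synthetic signal here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch, find_peaks

def tbr_and_iapf(eeg, fs):
    """Theta/beta ratio and individual alpha peak frequency from one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    theta_mask = (freqs >= 4) & (freqs < 8)
    beta_mask = (freqs >= 13) & (freqs < 21)
    alpha_mask = (freqs >= 7) & (freqs <= 13)
    theta = trapezoid(psd[theta_mask], freqs[theta_mask])
    beta = trapezoid(psd[beta_mask], freqs[beta_mask])
    peaks, _ = find_peaks(psd[alpha_mask])                 # local maxima in 7-13 Hz
    if peaks.size:
        iapf = freqs[alpha_mask][peaks[np.argmax(psd[alpha_mask][peaks])]]
    else:
        iapf = np.nan                                      # no clear alpha peak
    return theta / beta, iapf

# Synthetic two-minute trace with an alpha peak near 9.5 Hz plus theta and beta.
fs = 256
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(3)
eeg = (1.5 * np.sin(2 * np.pi * 9.5 * t)
       + 1.0 * np.sin(2 * np.pi * 6.0 * t)
       + 0.4 * np.sin(2 * np.pi * 18.0 * t)
       + rng.normal(size=t.size))
tbr, iapf = tbr_and_iapf(eeg, fs)
print(f"TBR = {tbr:.2f}, individual alpha peak = {iapf:.2f} Hz")
```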
Collapse
Affiliation(s)
- Olga M Bazanova
- Laboratory of Affective, Cognitive and Translational Neuroscience, Department of Experimental, Clinical Neuroscience, Federal State Research Institute of Physiology and Basic Medicine, Novosibirsk, Russia
- Department of Neuroscience, Novosibirsk State University, Novosibirsk, Russia
| | - Tibor Auer
- Department of Psychology, Royal Holloway University of London, Egham, United Kingdom
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| | - Elena A Sapina
- Laboratory of Biofeedback Computer System, Research Institute of Molecular Biology and Biophysics, Novosibirsk, Russia
- Department of Psychology, Novosibirsk State University of Economics and Management, Novosibirsk, Russia
| |
Collapse
|
132
|
Methods of Neuromarketing and Implication of the Frontal Theta Asymmetry induced due to musical stimulus as choice modeling. Procedia Comput Sci 2018. [DOI: 10.1016/j.procs.2018.05.059] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
133
|
Kim SK, Kang HB. An analysis of smartphone overuse recognition in terms of emotions using brainwaves and deep learning. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.09.081] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
134
|
Abstract
"Do you know that our soul is composed of harmony?" (Leonardo da Vinci). Despite evidence for music-specific mechanisms at the level of pitch-pattern representations, the most fascinating aspect of music is its transmodality. Recent psychological and neuroscientific evidence suggests that music is unique in its coupling of perception, cognition, action and emotion. This potentially explains why music has, since time immemorial, been almost inextricably linked to healing processes, and why it should continue to be.
Collapse
Affiliation(s)
- Paulo E Andrade
- Department of Psychology, Goldsmiths, University of London, London, UK
| | | |
Collapse
|
135
|
The therapeutic contribution of music in music-assisted systematic desensitization for substance addiction treatment: A pilot study. ARTS IN PSYCHOTHERAPY 2017. [DOI: 10.1016/j.aip.2017.07.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
136
|
Invitto S, Calcagnì A, Mignozzi A, Scardino R, Piraino G, Turchi D, De Feudis I, Brunetti A, Bevilacqua V, de Tommaso M. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias. Front Behav Neurosci 2017; 11:144. [PMID: 28824392 PMCID: PMC5539234 DOI: 10.3389/fnbeh.2017.00144] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2017] [Accepted: 07/19/2017] [Indexed: 01/28/2023] Open
Abstract
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how the potential modulation of this processing induced by music could depend on the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 are more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias is also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.
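The N170 measures analysed above are typically obtained by averaging epochs and locating the most negative deflection in a post-stimulus window. The sketch below shows that step in generic form; the window limits, onset time, and synthetic data are assumptions, not the authors' parameters.

```python
import numpy as np

def n170_amplitude_latency(epochs, fs, t0, window=(0.13, 0.20)):
    """Peak (most negative) amplitude and latency of the N170 from the
    trial-averaged ERP within a 130-200 ms post-stimulus window.

    epochs: (n_trials, n_samples); t0: stimulus onset (s) within each epoch.
    """
    erp = epochs.mean(axis=0)                       # average across trials
    i0 = int((t0 + window[0]) * fs)
    i1 = int((t0 + window[1]) * fs)
    idx = i0 + np.argmin(erp[i0:i1])                # N170 is a negative deflection
    return erp[idx], idx / fs - t0                  # amplitude (uV), latency (s)

# Synthetic epochs with a negative deflection ~170 ms after onset at 0.2 s.
fs, t0 = 500, 0.2
t = np.arange(0, 0.8, 1 / fs)
rng = np.random.default_rng(7)
component = -4 * np.exp(-((t - (t0 + 0.17)) ** 2) / (2 * 0.02 ** 2))
epochs = component + rng.normal(scale=1.0, size=(60, t.size))
amp, lat = n170_amplitude_latency(epochs, fs, t0)
print(f"N170: {amp:.1f} uV at {lat * 1000:.0f} ms")
```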
Collapse
Affiliation(s)
- Sara Invitto
- Human Anatomy and Neuroscience Lab, Department of Environmental Science and Technology, University of Salento, Lecce, Italy
| | - Antonio Calcagnì
- Department of Psychology and Cognitive Sciences, University of Trento, Trento, Italy
| | - Arianna Mignozzi
- Human Anatomy and Neuroscience Lab, Department of Environmental Science and Technology, University of Salento, Lecce, Italy
| | - Rosanna Scardino
- Human Anatomy and Neuroscience Lab, Department of Environmental Science and Technology, University of Salento, Lecce, Italy
| | | | - Daniele Turchi
- Human Anatomy and Neuroscience Lab, Department of Environmental Science and Technology, University of Salento, Lecce, Italy
| | - Irio De Feudis
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Bari, Italy
| | - Antonio Brunetti
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Bari, Italy
| | - Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Bari, Italy
| | - Marina de Tommaso
- Department of Medical Science, Neuroscience, and Sense Organs, University Aldo Moro, Bari, Italy
| |
Collapse
|
137
|
Nolden S, Rigoulot S, Jolicoeur P, Armony JL. Effects of musical expertise on oscillatory brain activity in response to emotional sounds. Neuropsychologia 2017; 103:96-105. [DOI: 10.1016/j.neuropsychologia.2017.07.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Revised: 07/05/2017] [Accepted: 07/14/2017] [Indexed: 10/19/2022]
|
138
|
Maglione AG, Brizi A, Vecchiato G, Rossi D, Trettel A, Modica E, Babiloni F. A Neuroelectrical Brain Imaging Study on the Perception of Figurative Paintings against Only their Color or Shape Contents. Front Hum Neurosci 2017; 11:378. [PMID: 28790907 PMCID: PMC5524918 DOI: 10.3389/fnhum.2017.00378] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Accepted: 07/06/2017] [Indexed: 11/17/2022] Open
Abstract
In this study, the cortical activity correlated with the perception and appreciation of different sets of pictures was estimated by using neuroelectric brain activity and graph theory methodologies in a group of artistically educated persons. The pictures shown to the subjects consisted of original pictures of Titian's and a contemporary artist's paintings (Orig dataset) plus two additional sets of pictures. These additional datasets were obtained from the original paintings by removing all but the colors or the shapes employed (Color and Style dataset, respectively). Results suggest that the verbal appreciation of the Orig dataset, when compared to the Color and Style ones, was mainly correlated with the neuroelectric indexes estimated during the first 10 s of observation of the pictures. Also within the first 10 s of observation: (1) the Orig dataset induced more emotion and was perceived with more appreciation than the Color and Style datasets; (2) the Style dataset was perceived with more attentional effort than the other investigated datasets. Over the whole 30 s observation period: (1) the emotion induced by the Color and Style datasets increased across time, while that induced by the Orig dataset remained stable; (2) the Color and Style datasets were perceived with more attentional effort than the Orig dataset. During the entire experience, there is evidence of a cortical flow of activity from the parietal and central areas toward the prefrontal and frontal areas during the observation of the images of all the datasets. This is consistent with the notion that active perception of the images, with sustained cognitive attention in parietal and central areas, led to the generation of judgments about their aesthetic appreciation in frontal areas.
Collapse
Affiliation(s)
- Anton G Maglione
- Department of Molecular Medicine, Sapienza Università di Roma, Rome, Italy
| | - Ambra Brizi
- Department of Molecular Medicine, Sapienza Università di Roma, Rome, Italy
| | | | - Dario Rossi
- Department of Anatomy, Histology, Forensic Medicine and Orthopedics, Sapienza Università di Roma, Rome, Italy
| | | | - Enrica Modica
- Department of Anatomy, Histology, Forensic Medicine and Orthopedics, Sapienza Università di Roma, Rome, Italy
| | - Fabio Babiloni
- Department of Molecular Medicine, Sapienza Università di Roma, Rome, Italy.,BrainSigns, Sapienza Università di Roma, Rome, Italy
| |
Collapse
|
139
|
Brown DR, Cavanagh JF. The sound and the fury: Late positive potential is sensitive to sound affect. Psychophysiology 2017; 54:1812-1825. [PMID: 28726287 DOI: 10.1111/psyp.12959] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Revised: 04/19/2017] [Accepted: 04/24/2017] [Indexed: 01/10/2023]
Abstract
Emotion is an emergent construct of multiple distinct neural processes. EEG is uniquely sensitive to real-time neural computations, and thus is a promising tool to study the construction of emotion. This series of studies aimed to probe the mechanistic contribution of the late positive potential (LPP) to multimodal emotion perception. Experiment 1 revealed that LPP amplitudes for visual images, sounds, and visual images paired with sounds were larger for negatively rated stimuli than for neutrally rated stimuli. Experiment 2 manipulated this audiovisual enhancement by altering the valence pairings with congruent (e.g., positive audio + positive visual) or conflicting emotional pairs (e.g., positive audio + negative visual). Negative visual stimuli evoked larger early LPP amplitudes than positive visual stimuli, regardless of sound pairing. However, time frequency analyses revealed significant midfrontal theta-band power differences for conflicting over congruent stimuli pairs, suggesting very early (∼500 ms) realization of thematic fidelity violations. Interestingly, late LPP modulations were reflective of the opposite pattern of congruency, whereby congruent over conflicting pairs had larger LPP amplitudes. Together, these findings suggest that enhanced parietal activity for affective valence is modality independent and sensitive to complex affective processes. Furthermore, these findings suggest that altered neural activities for affective visual stimuli are enhanced by concurrent affective sounds, paving the way toward an understanding of the construction of multimodal affective experience.
Collapse
Affiliation(s)
- Darin R Brown
- Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
| | - James F Cavanagh
- Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
| |
Collapse
|
140
|
Kim SG, Lepsien J, Fritz TH, Mildner T, Mueller K. Dissonance encoding in human inferior colliculus covaries with individual differences in dislike of dissonant music. Sci Rep 2017; 7:5726. [PMID: 28720776 PMCID: PMC5516034 DOI: 10.1038/s41598-017-06105-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 06/09/2017] [Indexed: 12/20/2022] Open
Abstract
Harmony is one of the most fundamental elements of music that evokes emotional response. The inferior colliculus (IC) has been known to detect poor agreement of the harmonics of a sound, that is, dissonance. Electrophysiological evidence has implicated a relationship between a sustained auditory response, mainly from the brainstem, and unpleasant emotion induced by dissonant harmony. Interestingly, an individual's dislike of dissonant harmony correlated with a reduced sustained auditory response. In the current paper, we report novel evidence based on functional magnetic resonance imaging (fMRI) for such a relationship between individual variability in dislike of dissonance and IC activation. Furthermore, for the first time, we show how dissonant harmony modulates functional connectivity of the IC and its association with behaviourally reported unpleasantness. The current findings support important contributions of low-level auditory processing and corticofugal interaction to musical harmony preference.
Collapse
Affiliation(s)
- Seung-Goo Kim
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
| | - Jöran Lepsien
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Thomas Hans Fritz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.,Institute for Psychoacoustics and Electronic Music, University of Ghent, Ghent, Belgium
| | - Toralf Mildner
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Karsten Mueller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
141
|
Markovic A, Kühnis J, Jäncke L. Task Context Influences Brain Activation during Music Listening. Front Hum Neurosci 2017; 11:342. [PMID: 28706480 PMCID: PMC5489556 DOI: 10.3389/fnhum.2017.00342] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2017] [Accepted: 06/13/2017] [Indexed: 11/14/2022] Open
Abstract
In this paper, we examined brain activation in subjects during two music listening conditions: listening while simultaneously rating the musical piece being played [Listening and Rating (LR)] and listening to the musical pieces unconstrained [Listening (L)]. Using these two conditions, we tested whether the sequence in which the two conditions were fulfilled influenced the brain activation observable during the L condition (LR → L or L → LR). We recorded high-density EEG during the playing of four well-known positively experienced soundtracks in two subject groups. One group started with the L condition and continued with the LR condition (L → LR); the second group performed this experiment in reversed order (LR → L). We computed from the recorded EEG the power for different frequency bands (theta, lower alpha, upper alpha, lower beta, and upper beta). Statistical analysis revealed that the power in all examined frequency bands increased during the L condition but only when the subjects had not had previous experience with the LR condition (i.e., L → LR). For the subjects who began with the LR condition, there were no power increases during the L condition. Thus, the previous experience with the LR condition prevented subjects from developing the particular mental state associated with the typical power increase in all frequency bands. The subjects without previous experience of the LR condition listened to the musical pieces in an unconstrained and undisturbed manner and showed a general power increase in all frequency bands. We interpret the fact that unconstrained music listening was associated with increased power in all examined frequency bands as a neural indicator of a mental state that can best be described as a mind-wandering state during which the subjects are “drawn into” the music.
Collapse
Affiliation(s)
- Andjela Markovic
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
| | - Jürg Kühnis
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
| | - Lutz Jäncke
- Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland.,International Normal Aging and Plasticity Imaging Center, University of Zurich, Zurich, Switzerland.,University Research Priority Program, Dynamic of Healthy Aging, University of Zurich, Zurich, Switzerland
| |
Collapse
|
142
|
Is laughter a better vocal change detector than a growl? Cortex 2017; 92:233-248. [DOI: 10.1016/j.cortex.2017.03.018] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2016] [Revised: 01/26/2017] [Accepted: 03/27/2017] [Indexed: 11/23/2022]
|
143
|
Pérez-Hernández M, Hernández-González M, Hidalgo-Aguirre R, Amezcua-Gutiérrez C, Guevara M. Listening to a baby crying induces higher electroencephalographic synchronization among prefrontal, temporal and parietal cortices in adoptive mothers. Infant Behav Dev 2017; 47:1-12. [DOI: 10.1016/j.infbeh.2017.02.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2016] [Revised: 01/16/2017] [Accepted: 02/21/2017] [Indexed: 11/26/2022]
|
144
|
Bigliassi M, Karageorghis CI, Wright MJ, Orgs G, Nowicky AV. Effects of auditory stimuli on electrical activity in the brain during cycle ergometry. Physiol Behav 2017; 177:135-147. [PMID: 28442333 DOI: 10.1016/j.physbeh.2017.04.023] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2017] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 11/25/2022]
Abstract
The present study sought to further understanding of the brain mechanisms that underlie the effects of music on perceptual, affective, and visceral responses during whole-body modes of exercise. Eighteen participants were administered light-to-moderate intensity bouts of cycle ergometer exercise. Each exercise bout was of 12-min duration (warm-up [3 min], exercise [6 min], and warm-down [3 min]). Portable techniques were used to monitor the electrical activity in the brain, heart, and muscle during the administration of three conditions: music, audiobook, and control. Conditions were randomized and counterbalanced to prevent any influence of systematic order on the dependent variables. Oscillatory potentials at the Cz electrode site were used to further understanding of time-frequency changes influenced by voluntary control of movements. Spectral coherence between Cz and frontal, frontal-central, central, central-parietal, and parietal electrode sites was also calculated. Perceptual and affective measures were taken at five timepoints during the exercise bout. Results indicated that music reallocated participants' attentional focus toward auditory pathways and reduced perceived exertion. The music also inhibited alpha resynchronization at the Cz electrode site and reduced the spectral coherence values at Cz-C4 and Cz-Fz. The reduced focal awareness induced by music led to more autonomous control of cycle movements performed at light-to-moderate intensities. Processing of interoceptive sensory cues appears to upmodulate fatigue-related sensations and increase connectivity in the frontal and central regions of the brain, and is associated with neural resynchronization that sustains the imposed exercise intensity.
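Spectral coherence between electrode pairs, as used above for Cz-Fz and Cz-C4, can be estimated with Welch-based magnitude-squared coherence and averaged within a band. The sketch below is a generic two-channel example with synthetic signals standing in for real recordings; the sampling rate and band choice are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(8, 13)):
    """Mean magnitude-squared coherence between two channels within a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Synthetic two-channel example standing in for Cz and Fz recordings.
fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
shared = np.sin(2 * np.pi * 10 * t)                   # common 10 Hz drive
cz = shared + 0.8 * rng.normal(size=t.size)
fz = 0.7 * shared + 0.8 * rng.normal(size=t.size)
print(f"Cz-Fz alpha coherence: {band_coherence(cz, fz, fs):.2f}")
```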
Collapse
Affiliation(s)
| | | | | | - Guido Orgs
- Department of Psychology, Goldsmiths, University of London, UK
| | | |
Collapse
|
145
|
Lee IE, Latchoumane CFV, Jeong J. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music. Front Psychol 2017; 8:440. [PMID: 28421007 PMCID: PMC5379063 DOI: 10.3389/fpsyg.2017.00440] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2016] [Accepted: 03/09/2017] [Indexed: 01/04/2023] Open
Abstract
Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.
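The frontal alpha power asymmetry used above as a proposed index of valence is conventionally computed as the difference of log alpha power between right and left frontal sites (e.g., F4 and F3). A minimal sketch with synthetic traces as stand-ins for real channels; the site pairing and band edges are generic assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs, band=(8, 13)):
    """ln(right alpha power) - ln(left alpha power).

    Because alpha is inversely related to cortical activation, more positive
    values are conventionally read as relatively greater left-frontal activation.
    """
    def alpha_power(sig):
        f, psd = welch(sig, fs=fs, nperseg=2 * fs)
        m = (f >= band[0]) & (f <= band[1])
        return trapezoid(psd[m], f[m])
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

# Hypothetical F3/F4 traces: weaker alpha on the left channel.
fs = 256
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(5)
f3 = 0.6 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)   # left frontal
f4 = 1.2 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)   # right frontal
print(f"FAA index: {frontal_alpha_asymmetry(f3, f4, fs):.2f}")
```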
Collapse
Affiliation(s)
- Irene Eunyoung Lee
- Communicative Interaction Lab, Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, South Korea.,Beat Connectome Lab, Sonic Arts & Culture, Yongin, South Korea
| | | | - Jaeseung Jeong
- Communicative Interaction Lab, Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, South Korea.,Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| |
Collapse
|
146
|
Watanabe K, Ooishi Y, Kashino M. Heart rate responses induced by acoustic tempo and its interaction with basal heart rate. Sci Rep 2017; 7:43856. [PMID: 28266647 PMCID: PMC5339732 DOI: 10.1038/srep43856] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2016] [Accepted: 02/01/2017] [Indexed: 11/13/2022] Open
Abstract
Many studies have revealed the influences of music on the autonomic nervous system (ANS). Since previous studies focused on the effects of acoustic tempo on the ANS, and humans have their own physiological oscillations such as the heart rate (HR), the effects of acoustic tempo might depend on the HR. Here we show the relationship between HR elevation induced by acoustic tempo and individual basal HR. Since high tempo-induced HR elevation requires fast respiration, which is based on sympatho-respiratory coupling, we controlled the participants’ respiration at a faster rate (20 CPM) than usual (15 CPM). We found that sound stimuli with a faster tempo than the individual basal HR increased the HR. However, the HR increased following a gradual increase in the acoustic tempo only when the extent of the gradual increase in tempo was within a specific range (around + 2%/min). The HR did not follow the increase in acoustic tempo when the rate of the increase in the acoustic tempo exceeded 3% per minute. These results suggest that the effect of the sympatho-respiratory coupling underlying the HR elevation caused by a high acoustic tempo depends on the basal HR, and the strength and the temporal dynamics of the tempo.
Collapse
Affiliation(s)
- Ken Watanabe
- Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8503, Japan
| | - Yuuki Ooishi
- NTT Communication Science Laboratories, NTT Corporation, 3-1, Morinosato Wakamiya Atsugi, Kanagawa 243-0198, Japan
| | - Makio Kashino
- Department of Information Processing, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8503, Japan.,NTT Communication Science Laboratories, NTT Corporation, 3-1, Morinosato Wakamiya Atsugi, Kanagawa 243-0198, Japan.,Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST, JST), Atsugi, Kanagawa 243-0198, Japan
| |
Collapse
|
147
|
Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017; 352:30-39. [PMID: 28088500 DOI: 10.1016/j.heares.2017.01.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 12/14/2016] [Accepted: 01/06/2017] [Indexed: 10/20/2022]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal-hearing counterparts. Cochlear implant-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of voice emotion auditory processing offers opportunities to improve upon CI biomedical design and to develop training programs that benefit CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users.
Collapse
Affiliation(s)
- N T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
| | - M Caldwell
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
| | - M L Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
| | - M Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
| | - C J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA.
| |
Collapse
|
148
|
Tadić B, Andjelković M, Boshkoska BM, Levnajić Z. Algebraic Topology of Multi-Brain Connectivity Networks Reveals Dissimilarity in Functional Patterns during Spoken Communications. PLoS One 2016; 11:e0166787. [PMID: 27880802 PMCID: PMC5120797 DOI: 10.1371/journal.pone.0166787] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2016] [Accepted: 11/03/2016] [Indexed: 12/03/2022] Open
Abstract
Human behaviour in various circumstances mirrors the corresponding brain connectivity patterns, which are suitably represented by functional brain networks. While the objective analysis of these networks with graph theory tools has deepened our understanding of brain functions, the multi-brain structures and connections underlying human social behaviour remain largely unexplored. In this study, we analyse the aggregate graph that maps coordination of EEG signals previously recorded during spoken communications in two groups of six listeners and two speakers. Applying an innovative approach based on the algebraic topology of graphs, we analyse higher-order topological complexes consisting of mutually interwoven cliques of high order into which the identified functional connections organise. Our results reveal that the topological quantifiers provide new suitable measures for differences in the brain activity patterns and inter-brain synchronisation between speakers and listeners. Moreover, higher topological complexity correlates with the listener's concentration on the story, confirmed by self-rating, and with closeness to the speaker's brain activity pattern, as measured by network-to-network distance. The connectivity structures of the frontal and parietal lobes consistently constitute distinct clusters, which extend across the listener group. Formally, the topology quantifiers of the multi-brain communities exceed the sum of those of the participating individuals and also reflect the listener's rated attributes of the speaker and the narrated subject. In the broader context, the presented study exposes the relevance of higher topological structures (besides standard graph measures) for characterising functional brain networks under different stimuli.
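The clique-based description above builds on the clique complex of a thresholded connectivity graph, in which every k-clique is treated as a (k-1)-simplex. The sketch below enumerates maximal cliques with networkx and tallies them by order; it is a simplified illustration of the idea (the study's topological quantifiers go further), and the threshold and toy matrix are assumptions.

```python
import numpy as np
import networkx as nx

def clique_complex_summary(conn, threshold=0.3):
    """Threshold a connectivity matrix, then summarize the resulting clique
    complex as counts of maximal cliques by order (size-1 cliques are
    isolated nodes)."""
    g = nx.Graph()
    n = conn.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if conn[i, j] >= threshold:
                g.add_edge(i, j, weight=conn[i, j])
    sizes = [len(c) for c in nx.find_cliques(g)]      # maximal cliques only
    return {k: sizes.count(k) for k in sorted(set(sizes))}

# Hypothetical 8-node "multi-brain" connectivity matrix.
rng = np.random.default_rng(6)
a = rng.uniform(0, 1, size=(8, 8))
conn = (a + a.T) / 2                                  # symmetrize
np.fill_diagonal(conn, 0)
print(clique_complex_summary(conn, threshold=0.6))
```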
Collapse
Affiliation(s)
- Bosiljka Tadić
- Department of Theoretical Physics, Jožef Stefan Institute, 1001 Ljubljana, Slovenia
| | - Miroslav Andjelković
- Department of Theoretical Physics, Jožef Stefan Institute, 1001 Ljubljana, Slovenia
- Institute for Nuclear Sciences Vinča, University of Belgrade, Belgrade, Serbia
| | - Biljana Mileva Boshkoska
- Faculty of Information Studies, Ulica Talcev 3, 8000 Novo Mesto, Slovenia
- Department of Knowledge Technologies, Jožef Stefan Institute, 1001 Ljubljana, Slovenia
| | - Zoran Levnajić
- Faculty of Information Studies, Ulica Talcev 3, 8000 Novo Mesto, Slovenia
| |
Collapse
|
149
|
Common modulation of limbic network activation underlies musical emotions as they unfold. Neuroimage 2016; 141:517-529. [DOI: 10.1016/j.neuroimage.2016.07.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2015] [Revised: 07/01/2016] [Accepted: 07/02/2016] [Indexed: 11/21/2022] Open
|
150
|
Frontal Theta Activity as an EEG Correlate of Mood-Related Emotional Processing in Dysphoria. JOURNAL OF PSYCHOPATHOLOGY AND BEHAVIORAL ASSESSMENT 2016. [DOI: 10.1007/s10862-016-9572-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|