1. Luo G, Sun S, Qian K, Hu B, Schuller BW, Yamamoto Y. How does Music Affect Your Brain? A Pilot Study on EEG and Music Features for Automatic Analysis. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083758] [DOI: 10.1109/embc40787.2023.10339971]
Abstract
Music can effectively induce specific emotions and is commonly used in clinical treatment or intervention, and the electroencephalogram (EEG) can help reflect its impact. Previous studies showed that existing methods achieve relatively good performance in predicting the emotional response to music. However, these methods tend to be time-consuming and expensive due to their complexity. To this end, this study proposes a grey wolf optimiser-based method to predict the induced emotion by fusing electroencephalogram features and music features. Experimental results show that the proposed method reaches a promising performance for predicting the emotional response to music and outperforms the alternative method. In addition, we analyse the relationship between the music features and the electroencephalogram features, and the results demonstrate that musical timbre features are significantly related to the electroencephalogram features.
Clinical relevance: This study targets the automatic prediction of the human response to music. It further explores the correlation between EEG features and music features, aiming to provide a basis for extending the applications of music. The grey wolf optimiser-based method proposed in this study could supply a promising avenue for the prediction of emotion as induced by music.
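The abstract does not give implementation details for the optimiser, so the following is a generic sketch of the standard grey wolf optimiser (GWO) update rules, not the authors' pipeline. It simply minimises an arbitrary fitness function over a bounded vector; in an EEG-music fusion setting the fitness would typically score a candidate feature weighting by cross-validated prediction error.

```python
import numpy as np

def grey_wolf_optimise(fitness, dim, n_wolves=20, n_iter=100,
                       bounds=(-1.0, 1.0), seed=0):
    """Minimise `fitness` over `dim` dimensions with a basic GWO.

    The three best wolves (alpha, beta, delta) lead the pack; every wolf
    moves toward the average of three positions encircling the leaders,
    with an exploration coefficient `a` that decays from 2 to 0.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        scores = np.apply_along_axis(fitness, 1, wolves)
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2.0 * (1.0 - t / n_iter)  # exploration -> exploitation schedule
        new = np.empty_like(wolves)
        for i, x in enumerate(wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a          # step scaling (can be negative)
                C = 2.0 * r2                  # leader-position perturbation
                cand.append(leader - A * np.abs(C * leader - x))
            new[i] = np.clip(np.mean(cand, axis=0), lo, hi)
        wolves = new
    scores = np.apply_along_axis(fitness, 1, wolves)
    best = wolves[int(np.argmin(scores))]
    return best, float(fitness(best))
```

On a simple sphere function the pack collapses onto the optimum as `a` decays, which is the behaviour the selection/weighting application relies on.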
2. Roman IR, Roman AS, Kim JC, Large EW. Hebbian learning with elasticity explains how the spontaneous motor tempo affects music performance synchronization. PLoS Comput Biol 2023; 19:e1011154. [PMID: 37285380] [DOI: 10.1371/journal.pcbi.1011154]
Abstract
A musician's spontaneous rate of movement, called spontaneous motor tempo (SMT), can be measured while the musician spontaneously plays a simple melody. Data show that the SMT influences the musician's tempo and synchronization. In this study we present a model that captures these phenomena. We review the results from three previously published studies: solo musical performance with a pacing metronome tempo that is different from the SMT, solo musical performance without a metronome at a tempo that is faster or slower than the SMT, and duet musical performance between musicians with matching or mismatching SMTs. These studies showed, respectively, that the asynchrony between the pacing metronome and the musician's tempo grew as a function of the difference between the metronome tempo and the musician's SMT, that musicians drifted away from the initial tempo toward the SMT, and that absolute asynchronies were smaller if musicians had matching SMTs. We hypothesize that the SMT constantly acts as a pulling force on musical actions performed at a tempo different from the musician's SMT. To test our hypothesis, we developed a model consisting of a non-linear oscillator with Hebbian tempo learning and a pulling force toward the model's spontaneous frequency. While the model's spontaneous frequency emulates the SMT, elastic Hebbian learning allows the oscillator's frequency to match a stimulus frequency. We first fit the model parameters to match the data in the first of the three studies and asked whether the same model would explain the data in the remaining two studies without further tuning. Results showed that the model's dynamics allowed it to explain all three experiments with the same set of parameters. Our theory offers a dynamical-systems explanation of how an individual's SMT affects synchronization in realistic music performance settings, and the model also enables predictions about performance settings not yet tested.
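The published model equations are not reproduced in the abstract, so the sketch below is only an illustrative Euler simulation of the general idea: a phase oscillator whose frequency is adapted Hebbian-style toward a stimulus (the metronome) while an elastic term pulls it back to its spontaneous frequency (the SMT). All parameter values are assumptions for illustration.

```python
import numpy as np

def adaptive_oscillator(f_stim=2.0, f_spont=1.5, k=3.0, lam=0.8, eps=0.3,
                        dt=0.001, t_end=60.0):
    """Phase oscillator with Hebbian frequency learning plus elasticity.

    f_stim  : stimulus (metronome) frequency in Hz
    f_spont : spontaneous frequency, the model's stand-in for the SMT
    k       : phase-coupling strength; lam: learning rate; eps: elasticity
    Returns the learned frequency (Hz) after t_end seconds.
    """
    w0 = 2 * np.pi * f_spont   # spontaneous frequency (rad/s)
    ws = 2 * np.pi * f_stim    # stimulus frequency (rad/s)
    w = w0                     # current learned frequency
    phi = psi = 0.0            # oscillator and stimulus phases
    for _ in range(int(t_end / dt)):
        err = np.sin(psi - phi)                   # phase error drives both...
        phi += dt * (w + k * err)                 # ...synchronization
        w += dt * (lam * err - eps * (w - w0))    # ...and elastic learning
        psi += dt * ws
    return w / (2 * np.pi)
```

With the elastic term active, the learned tempo settles between the stimulus tempo and the spontaneous frequency, mirroring the reported pull toward the SMT; with elasticity removed the oscillator simply adopts the stimulus frequency.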
Affiliation(s)
- Iran R Roman: Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, California, United States of America
- Adrian S Roman: Department of Mathematics, University of California Davis, Davis, California, United States of America
- Ji Chul Kim: Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, United States of America
- Edward W Large: Department of Psychological Sciences and Department of Physics, University of Connecticut, Storrs, Connecticut, United States of America
3.
Abstract
Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bi-directional long short-term memory deep learning network to first extract neural information from the EEG related to music listening and then to decode and reconstruct the individual pieces of music an individual was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% ([Formula: see text], [Formula: see text]). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% ([Formula: see text], [Formula: see text]). This demonstrates that our decoding model may use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and it makes a step towards building EEG-based neural decoders for other complex information domains such as other acoustic, visual, or semantic information.
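Mean rank accuracy, the evaluation metric quoted above, can be sketched as follows. For each reconstruction the decoder's output is compared against every candidate stimulus, and the score reflects how highly the true stimulus ranks (chance level is 50%). The similarity measure used here (Pearson correlation) is an assumption; the abstract does not specify the paper's exact measure.

```python
import numpy as np

def mean_rank_accuracy(reconstructions, candidates):
    """Generic rank-accuracy evaluation for a neural decoder.

    reconstructions[i] is the decoder output for trial i; candidates[i]
    is the true stimulus for that trial. Returns a value in [0, 1],
    where 1.0 means the true stimulus always ranks first and 0.5 is chance.
    """
    n = len(candidates)
    accs = []
    for i, rec in enumerate(reconstructions):
        # similarity of this reconstruction to every candidate stimulus
        sims = np.array([np.corrcoef(rec, cand)[0, 1] for cand in candidates])
        # fraction of wrong candidates that the true stimulus beats
        rank = np.sum(sims < sims[i])
        accs.append(rank / (n - 1))
    return float(np.mean(accs))
```

A perfect decoder (reconstructions identical to the stimuli) scores 1.0; random reconstructions hover around 0.5.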
4. Effect of Indian Music as an Auditory Stimulus on Physiological Measures of Stress, Anxiety, Cardiovascular and Autonomic Responses in Humans: A Randomized Controlled Trial. Eur J Investig Health Psychol Educ 2022; 12:1535-1558. [PMID: 36286092] [PMCID: PMC9601678] [DOI: 10.3390/ejihpe12100108]
Abstract
Among the different anthropogenic stimuli humans are exposed to, the psychological and cardiovascular effects of auditory stimuli are less understood. This study aims to explore the possible range of change after a single session of auditory stimulation with three different 'modes' of musical stimuli (MS) on anxiety, biomarkers of stress, and cardiovascular parameters among healthy young individuals. In this randomized controlled trial, 140 healthy young adults, aged 18-30 years, were randomly assigned to three MS groups (Mode/Raga Miyan ki Todi, Malkauns, and Puriya) and one control group (natural sounds). The outcome measurements of the State-Trait Anxiety Inventory, salivary alpha-amylase (sAA), salivary cortisol (sCort), blood pressure, and heart rate variability (HRV) were collected at three time points: before (M1), during (M2), and after the intervention (M3). State anxiety was reduced significantly with raga Puriya (p = 0.018), followed by raga Malkauns and raga Miyan ki Todi. All the groups showed a significant reduction in sAA. Raga Miyan ki Todi and Puriya caused an arousal effect (as evidenced by HRV) during the intervention and significant relaxation after the intervention (both p < 0.005). Raga Malkauns and the control group had a sustained rise in parasympathetic activity over 30 min. Future studies should try to use other modes and features to develop a better scientific foundation for the use of Indian music in medicine.
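As a rough illustration of the HRV outcome measure mentioned above, the standard time-domain indices can be computed from a series of RR intervals as follows; this is a generic sketch, not the trial's analysis pipeline.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Basic time-domain HRV indices from RR intervals in milliseconds.

    RMSSD reflects beat-to-beat variability and is the usual proxy for
    parasympathetic (vagal) activity; SDNN reflects overall variability.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),        # heart rate from mean RR
        "sdnn_ms": rr.std(ddof=1),                 # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat variability
    }
```

A perfectly regular heart (constant RR) yields zero SDNN and RMSSD; a rise in RMSSD over a session is the kind of change read as increased parasympathetic activity.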
5. Liu Y, Lian W, Zhao X, Tang Q, Liu G. Spatial Connectivity and Temporal Dynamic Functional Network Connectivity of Musical Emotions Evoked by Dynamically Changing Tempo. Front Neurosci 2021; 15:700154. [PMID: 34421523] [PMCID: PMC8375772] [DOI: 10.3389/fnins.2021.700154]
Abstract
Music tempo is closely connected to listeners' musical emotion and to multifunctional neural activities: music with increasing tempo evokes stronger emotional responses, while music with decreasing tempo enhances relaxation. However, the neural substrate of emotion evoked by dynamically changing tempo is still unclear. To investigate the spatial connectivity and temporal dynamic functional network connectivity (dFNC) of musical emotion evoked by dynamically changing tempo, we collected dynamic emotional ratings and conducted group independent component analysis (ICA), sliding-time-window correlations, and k-means clustering to assess the FNC of emotion evoked by music with decreasing tempo (180-65 bpm) and increasing tempo (60-180 bpm). Music with decreasing tempo (which had more stable dynamic valence) evoked higher valence than music with increasing tempo, together with stronger independent components (ICs) in the default mode network (DMN) and sensorimotor network (SMN). The dFNC analysis showed that, with time-decreasing FNC across the whole brain, emotion evoked by music with decreasing tempo was associated with strong spatial connectivity within the DMN and SMN, as well as with strong FNC between the DMN and the frontoparietal network (FPN) and between the DMN and the cingulate-opercular network (CON). A paired t-test showed that music with a decreasing tempo evokes stronger activation of ICs within the DMN and SMN than music with an increasing tempo, indicating that music that begins fast is more likely to enhance listeners' emotions through multifunctional brain activities even as the tempo slows down. With increasing FNC across the whole brain, music with an increasing tempo was associated with strong connectivity within the FPN; time-decreasing connectivity was found within the CON, SMN, and visual network (VIS), and between the CON and SMN, which explained its unstable valence during the dynamic valence rating. Overall, FNC can help uncover the spatial and temporal neural substrates of musical emotions evoked by dynamically changing tempi.
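The dFNC pipeline described above (sliding-window correlation of component time courses, followed by k-means clustering of the windowed connectivity patterns into recurring states) can be sketched generically; window length, step size, the number of states k, and the tiny k-means below are all illustrative assumptions, not the study's settings.

```python
import numpy as np

def sliding_window_fnc(ts, win=30, step=5):
    """Windowed functional connectivity from component time courses.

    ts has shape (time, components). Each window yields the upper
    triangle of the correlation matrix as that window's "state" vector.
    """
    n_t, n_c = ts.shape
    mats = []
    for start in range(0, n_t - win + 1, step):
        c = np.corrcoef(ts[start:start + win], rowvar=False)
        mats.append(c[np.triu_indices(n_c, k=1)])
    return np.array(mats)

def kmeans_states(x, k=2, n_iter=50):
    """Tiny k-means (deterministic spread init) over windowed FNC patterns."""
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(0)
    return labels, centers
```

Applied to a signal whose inter-component correlation flips halfway through, the early and late windows fall into different connectivity states.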
Affiliation(s)
- Ying Liu: School of Mathematics and Statistics, and School of Music, Southwest University, Chongqing, China
- Weili Lian: College of Preschool Education, Chongqing Youth Vocational and Technical College, Chongqing, China
- Xingcong Zhao: School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Qingting Tang: Faculty of Psychology, Southwest University, Chongqing, China
- Guangyuan Liu: School of Electronic and Information Engineering, Southwest University, Chongqing, China
6. Neural and physiological data from participants listening to affective music. Sci Data 2020; 7:177. [PMID: 32541806] [PMCID: PMC7295758] [DOI: 10.1038/s41597-020-0507-6]
Abstract
Music provides a means of communicating affective meaning. However, the neurological mechanisms by which music induces affect are not fully understood. Our project sought to investigate this through a series of experiments into how humans react to affective musical stimuli and how physiological and neurological signals recorded from those participants change in accordance with self-reported changes in affect. In this paper, the datasets recorded over the course of this project are presented, including details of the musical stimuli, participant reports of their felt changes in affective states as they listened to the music, and concomitant recordings of physiological and neurological activity. We also include non-identifying metadata on our participant populations for purposes of further exploratory analysis. These data provide a large and valuable novel resource for researchers investigating emotion, music, and how they affect our neural and physiological activity.
7. Gao C, Fillmore P, Scullin MK. Classical music, educational learning, and slow wave sleep: A targeted memory reactivation experiment. Neurobiol Learn Mem 2020; 171:107206. [PMID: 32145407] [DOI: 10.1016/j.nlm.2020.107206]
Abstract
Poor sleep in college students compromises the memory consolidation processes necessary to retain course materials. A solution may lie in targeted memory reactivation (TMR), in which learning-associated cues are re-presented during sleep. Fifty undergraduate students completed a college-level microeconomics lecture (mathematics-based) while listening to distinctive classical music (Chopin, Beethoven, and Vivaldi). After they fell asleep, we re-played the classical music pieces (TMR) or a control noise during slow wave sleep. Relative to the control condition, the TMR condition showed an 18% improvement on knowledge-transfer items that measured concept integration (d = 0.63), increasing the probability of "passing" the test with a grade of 70 or above (OR = 4.68, 95% CI: 1.21, 18.04). The benefits of TMR did not extend to a 9-month follow-up test, when performance dropped to floor levels, demonstrating that long-term forgetting curves are largely resistant even to experimentally consolidated memories. Spectral analyses revealed greater frontal theta activity during slow wave sleep in the TMR condition than in the control condition (d = 0.87), and greater frontal theta activity across conditions was associated with protection against long-term forgetting at the next-day and 9-month follow-up tests (rs = 0.42), at least in female students. Thus, students can leverage instrumental music, which they already commonly pair with studying, to help prepare for academic tests, an approach that may promote course success and persistence.
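The effect sizes reported above (Cohen's d and an odds ratio for passing) follow standard formulas, sketched below; the numbers in the usage assertions are hypothetical, not the study's data.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation (two independent groups)."""
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def odds_ratio(pass_a, fail_a, pass_b, fail_b):
    """Odds ratio for passing in condition A relative to condition B."""
    return (pass_a / fail_a) / (pass_b / fail_b)
```

For example, two groups of 20 with means 10 and 8 and SD 2 give d = 1.0, and hypothetical pass/fail counts of 15/10 versus 6/19 give an odds ratio of 4.75.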
Affiliation(s)
- Chenlu Gao: Baylor University, Department of Psychology and Neuroscience, Waco, TX, United States
- Paul Fillmore: Baylor University, Department of Communication Sciences and Disorders, Waco, TX, United States
- Michael K Scullin: Baylor University, Department of Psychology and Neuroscience, Waco, TX, United States
8. Yurgil KA, Velasquez MA, Winston JL, Reichman NB, Colombo PJ. Music Training, Working Memory, and Neural Oscillations: A Review. Front Psychol 2020; 11:266. [PMID: 32153474] [PMCID: PMC7047970] [DOI: 10.3389/fpsyg.2020.00266]
Abstract
This review focuses on reports that link music training to working memory and neural oscillations. Music training is increasingly associated with improvement in working memory, which is strongly related to both localized and distributed patterns of neural oscillations. Importantly, there is a small but growing number of reports of relationships between music training, working memory, and neural oscillations in adults. Taken together, these studies make important contributions to our understanding of the neural mechanisms that support effects of music training on behavioral measures of executive functions. In addition, they reveal gaps in our knowledge that hold promise for further investigation. The current review is divided into the main sections that follow: (1) discussion of behavioral measures of working memory, and effects of music training on working memory in adults; (2) relationships between music training and neural oscillations during temporal stages of working memory; (3) relationships between music training and working memory in children; (4) relationships between music training and working memory in older adults; and (5) effects of entrainment of neural oscillations on cognitive processing. We conclude that the study of neural oscillations is proving useful in elucidating the neural mechanisms of relationships between music training and the temporal stages of working memory. Moreover, a lifespan approach to these studies will likely reveal strategies to improve and maintain executive function during development and aging.
Affiliation(s)
- Kate A. Yurgil: Department of Psychological Sciences, Loyola University, New Orleans, LA, United States
- Jenna L. Winston: Department of Psychology, Tulane University, New Orleans, LA, United States
- Noah B. Reichman: Brain Institute, Tulane University, New Orleans, LA, United States
- Paul J. Colombo: Department of Psychology and Brain Institute, Tulane University, New Orleans, LA, United States
9. Daly I, Williams D, Hwang F, Kirke A, Miranda ER, Nasuto SJ. Electroencephalography reflects the activity of sub-cortical brain regions during approach-withdrawal behaviour while listening to music. Sci Rep 2019; 9:9415. [PMID: 31263113] [PMCID: PMC6603018] [DOI: 10.1038/s41598-019-45105-2]
Abstract
The ability of music to evoke activity changes in the core brain structures that underlie the experience of emotion suggests that it has the potential to be used in therapies for emotion disorders. A large volume of research has identified a network of sub-cortical brain regions underlying music-induced emotions. Additionally, separate evidence from electroencephalography (EEG) studies suggests that prefrontal asymmetry in the EEG reflects the approach-withdrawal response to music-induced emotion. However, fMRI and EEG measure quite different brain processes, and we do not have a detailed understanding of the functional relationships between them in relation to music-induced emotion. We employ a joint EEG-fMRI paradigm to explore how EEG-based neural correlates of the approach-withdrawal response to music reflect activity changes in the sub-cortical emotional response network. The neural correlates examined are asymmetry in the prefrontal EEG and the degree of disorder in that asymmetry over time, as measured by entropy. Participants' EEG and fMRI were recorded simultaneously while the participants listened to music that had been specifically generated to target the elicitation of a wide range of affective states. While listening to this music, participants also continuously reported their felt affective states. Here we report on co-variations in the dynamics of these self-reports, the EEG, and the sub-cortical brain activity. We find that a set of sub-cortical brain regions in the emotional response network exhibits activity that significantly relates to prefrontal EEG asymmetry. Specifically, EEG in the prefrontal cortex reflects not only cortical activity, but also changes in activity in the amygdala, posterior temporal cortex, and cerebellum. We also find that, while the magnitude of the asymmetry reflects activity in parts of the limbic and paralimbic systems, the entropy of that asymmetry reflects activity in parts of the autonomic response network such as the auditory cortex. This suggests that asymmetry magnitude reflects affective responses to music, while asymmetry entropy reflects autonomic responses to music. Thus, we demonstrate that it is possible to infer activity in the limbic and paralimbic systems from prefrontal EEG asymmetry. These results show how EEG can be used to measure and monitor changes in the limbic and paralimbic systems. Specifically, they suggest that EEG asymmetry acts as an indicator of sub-cortical changes in activity induced by music. This shows that EEG may be used as a measure of the effectiveness of music therapy to evoke changes in activity in the sub-cortical emotion response network. This is also the first time that the activity of sub-cortical regions, normally considered "invisible" to EEG, has been shown to be characterisable directly from EEG dynamics measured during music listening.
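The two EEG-derived correlates named above, the magnitude of prefrontal asymmetry and its entropy over time, can be sketched generically. Band limits, window length, the periodogram power estimator, and the histogram-based entropy are all assumptions; the paper's exact computation may differ.

```python
import numpy as np

def bandpower(x, fs, f_lo=8.0, f_hi=13.0):
    """Alpha-band power via a simple periodogram estimate."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

def asymmetry_series(left, right, fs, win_s=2.0):
    """Windowed prefrontal asymmetry: ln(right alpha) - ln(left alpha)."""
    w = int(win_s * fs)
    n = len(left) // w
    return np.array([np.log(bandpower(right[i * w:(i + 1) * w], fs))
                     - np.log(bandpower(left[i * w:(i + 1) * w], fs))
                     for i in range(n)])

def shannon_entropy(series, bins=8):
    """Entropy (in bits) of the asymmetry distribution: its disorder over time."""
    p, _ = np.histogram(series, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())
```

A right channel carrying twice the alpha amplitude of the left gives roughly four times the power, so the asymmetry sits near ln 4 in every window, and the entropy stays bounded by log2 of the bin count.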
Affiliation(s)
- Ian Daly: Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, CO4 3SQ, UK
- Duncan Williams: Digital Creativity Labs, Department of Computer Science, University of York, Heslington, YO10 5RG, UK
- Faustina Hwang: Brain Embodiment Laboratory, Biomedical Sciences and Biomedical Engineering Division, School of Biological Sciences, University of Reading, Reading, RG6 6AY, UK
- Alexis Kirke: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, PL4 8AA, UK
- Eduardo R Miranda: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, PL4 8AA, UK
- Slawomir J Nasuto: Brain Embodiment Laboratory, Biomedical Sciences and Biomedical Engineering Division, School of Biological Sciences, University of Reading, Reading, RG6 6AY, UK
10. Proverbio AM, De Benedetto F, Ferrari MV, Ferrarini G. When listening to rain sounds boosts arithmetic ability. PLoS One 2018; 13:e0192296. [PMID: 29466472] [PMCID: PMC5821317] [DOI: 10.1371/journal.pone.0192296]
Abstract
Studies in the literature have provided conflicting evidence about the effects of background noise or music on concurrent cognitive tasks. Some studies have shown a detrimental effect, while others have shown a beneficial effect of background auditory stimuli. The aim of this study was to investigate the influence of agitating, happy, or touching music, as opposed to environmental sounds or silence, on the ability of non-musician subjects to perform arithmetic operations. Fifty university students (25 women and 25 men; 25 introverts and 25 extroverts) volunteered for the study. The participants were administered 180 easy or difficult arithmetic operations (division, multiplication, subtraction, and addition) while listening to heavy rain sounds, silence, or classical music. Silence was detrimental when participants were faced with difficult arithmetic operations, as it was associated with significantly worse accuracy and slower response times (RTs) than the music or rain-sound conditions. This finding suggests that the benefit of background stimulation was not music-specific but possibly due to an enhanced cerebral alertness level induced by the auditory stimulation. Introverts were always faster than extroverts in solving mathematical problems, except when the latter performed calculations accompanied by the sound of heavy rain, a condition that made them as fast as introverts. While the background auditory stimuli had no effect on the arithmetic ability of either group in the easy condition, they strongly affected extroverts in the difficult condition, with RTs being faster during agitating or joyful music as well as rain sounds, compared to the silent condition. For introverts, agitating music was associated with faster response times than the silent condition. This group difference may be explained on the basis of the notion that introverts have a generally higher arousal level compared to extroverts and would therefore benefit less from the background auditory stimuli.
Affiliation(s)
- Alice Mado Proverbio: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Milan, Italy
- Francesco De Benedetto: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Milan, Italy
- Maria Vittoria Ferrari: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Milan, Italy
- Giorgia Ferrarini: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Milan, Italy
11. Nicolaou N, Malik A, Daly I, Weaver J, Hwang F, Kirke A, Roesch EB, Williams D, Miranda ER, Nasuto SJ. Directed Motor-Auditory EEG Connectivity Is Modulated by Music Tempo. Front Hum Neurosci 2017; 11:502. [PMID: 29093672] [PMCID: PMC5651276] [DOI: 10.3389/fnhum.2017.00502]
Abstract
Beat perception is fundamental to how we experience music, and yet the mechanism behind this spontaneous building of the internal beat representation is largely unknown. Existing findings support links between the tempo (speed) of the beat and enhancement of electroencephalogram (EEG) activity at tempo-related frequencies, but there are no studies looking at how tempo may affect the underlying long-range interactions between EEG activity at different electrodes. The present study investigates these long-range interactions using EEG activity recorded from 21 volunteers listening to music stimuli played at 4 different tempi (50, 100, 150 and 200 beats per minute). The music stimuli consisted of piano excerpts designed to convey the emotion of “peacefulness”. Noise stimuli with an identical acoustic content to the music excerpts were also presented for comparison purposes. The brain activity interactions were characterized with the imaginary part of coherence (iCOH) in the frequency range 1.5–18 Hz (δ, θ, α and lower β) between all pairs of EEG electrodes for the four tempi and the music/noise conditions, as well as a baseline resting state (RS) condition obtained at the start of the experimental task. Our findings can be summarized as follows: (a) there was an ongoing long-range interaction in the RS engaging fronto-posterior areas; (b) this interaction was maintained in both music and noise, but its strength and directionality were modulated as a result of acoustic stimulation; (c) the topological patterns of iCOH were similar for music, noise and RS, however statistically significant differences in strength and direction of iCOH were identified; and (d) tempo had an effect on the direction and strength of motor-auditory interactions. Our findings are in line with existing literature and illustrate a part of the mechanism by which musical stimuli with different tempi can entrain changes in cortical activity.
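The imaginary part of coherency (iCOH) used in this study is attractive for EEG because zero-lag (volume-conduction) mixing produces purely real coherency, so only genuinely lagged interactions survive. A minimal segment-averaged estimator is sketched below; it is not necessarily the authors' exact implementation.

```python
import numpy as np

def imag_coherence(x, y, fs, nseg=8):
    """Imaginary part of coherency between two signals.

    Cross- and auto-spectra are averaged over non-overlapping Hann-windowed
    segments; iCOH(f) = Im( Sxy / sqrt(Sxx * Syy) ).
    """
    seg = len(x) // nseg
    sxx = syy = sxy = 0
    win = np.hanning(seg)
    for i in range(nseg):
        fx = np.fft.rfft(win * x[i * seg:(i + 1) * seg])
        fy = np.fft.rfft(win * y[i * seg:(i + 1) * seg])
        sxx = sxx + np.abs(fx) ** 2
        syy = syy + np.abs(fy) ** 2
        sxy = sxy + fx * np.conj(fy)
    freqs = np.fft.rfftfreq(seg, 1.0 / fs)
    return freqs, np.imag(sxy / np.sqrt(sxx * syy))
```

Identical signals give an iCOH of exactly zero at every frequency, while a quarter-cycle lag at 10 Hz yields a large iCOH at that frequency, which is the property that makes the measure robust to instantaneous mixing.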
Affiliation(s)
- Nicoletta Nicolaou: Brain Embodiment Laboratory, Biomedical Engineering Section, School of Biological Sciences, University of Reading, Reading, United Kingdom; Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Asad Malik: Brain Embodiment Laboratory, Biomedical Engineering Section, School of Biological Sciences, University of Reading; School of Psychology, University of Reading; Centre for Integrative Neuroscience and Neurodynamics, University of Reading, Reading, United Kingdom
- Ian Daly: Brain-Computer Interfacing and Neural Engineering Laboratory, Department of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- James Weaver: Brain Embodiment Laboratory, Biomedical Engineering Section, School of Biological Sciences, University of Reading, Reading, United Kingdom
- Faustina Hwang: Brain Embodiment Laboratory, Biomedical Engineering Section, School of Biological Sciences, University of Reading, Reading, United Kingdom
- Alexis Kirke: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, United Kingdom
- Etienne B Roesch: School of Psychology, University of Reading; Centre for Integrative Neuroscience and Neurodynamics, University of Reading, Reading, United Kingdom
- Duncan Williams: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, United Kingdom
- Eduardo R Miranda: Interdisciplinary Centre for Computer Music Research, University of Plymouth, Plymouth, United Kingdom
- Slawomir J Nasuto: Brain Embodiment Laboratory, Biomedical Engineering Section, School of Biological Sciences, University of Reading, Reading, United Kingdom
12. Daly I, Williams D, Hallowell J, Hwang F, Kirke A, Malik A, Weaver J, Miranda E, Nasuto SJ. Music-induced emotions can be predicted from a combination of brain activity and acoustic features. Brain Cogn 2015; 101:1-11. [PMID: 26544602] [DOI: 10.1016/j.bandc.2015.08.003]
Abstract
It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components, and music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions a piece of music will induce in a given individual. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01).
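The feature-fusion regression described above can be sketched with a closed-form ridge model on synthetic data. The feature sets and the generating model below are purely hypothetical stand-ins for the paper's EEG and acoustic descriptors; the point is only to show the fusion step of concatenating the two feature types before fitting.

```python
import numpy as np

def ridge_fit(x, y, lam=1.0):
    """Closed-form ridge regression weights (with a bias column)."""
    xb = np.hstack([x, np.ones((len(x), 1))])
    a = xb.T @ xb + lam * np.eye(xb.shape[1])
    return np.linalg.solve(a, xb.T @ y)

def ridge_predict(w, x):
    return np.hstack([x, np.ones((len(x), 1))]) @ w

# Synthetic illustration: valence predicted from concatenated "EEG" and
# "acoustic" features (both invented here, e.g. band powers and tempo/energy).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 6))       # hypothetical EEG band powers
acoustic = rng.standard_normal((200, 4))  # hypothetical acoustic descriptors
features = np.hstack([eeg, acoustic])     # the fusion step
valence = eeg[:, 0] - 0.5 * acoustic[:, 1] + 0.3 * rng.standard_normal(200)

w = ridge_fit(features[:150], valence[:150])
pred = ridge_predict(w, features[150:])
r = np.corrcoef(pred, valence[150:])[0, 1]  # held-out prediction correlation
```

Because the synthetic target depends on one EEG and one acoustic feature, the fused model recovers both, illustrating why combining feature types can beat either alone.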
Affiliation(s)
- Ian Daly: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- Duncan Williams: Interdisciplinary Centre for Music Research, University of Plymouth, Plymouth, UK
- James Hallowell: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- Faustina Hwang: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- Alexis Kirke: Interdisciplinary Centre for Music Research, University of Plymouth, Plymouth, UK
- Asad Malik: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- James Weaver: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK
- Eduardo Miranda: Interdisciplinary Centre for Music Research, University of Plymouth, Plymouth, UK
- Slawomir J Nasuto: Brain Embodiment Lab, School of Systems Engineering, University of Reading, Reading, UK