1
Cui AX, Kraeutner SN, Motamed Yeganeh N, Hermiston N, Werker JF, Boyd LA. Resting-state brain connectivity correlates of musical sophistication. Front Hum Neurosci 2023; 17:1195996. PMID: 37841073; PMCID: PMC10570446; DOI: 10.3389/fnhum.2023.1195996.
Abstract
Introduction: A growing body of research has investigated how performing arts training, and more specifically music training, shapes the brain. Recent meta-analytic work has identified multiple brain areas where activity varies as a function of the level of musical expertise gained through music training. However, research has also shown that musical sophistication may be high even without music training. We therefore aimed to extend previous work by investigating whether the functional connectivity of these areas relates to interindividual differences in musical sophistication, and to characterize differences in connectivity attributable to performing arts training.
Methods: We analyzed resting-state functional magnetic resonance imaging data from n = 74 participants, of whom 37 had received university-level performing arts training (i.e., in a musical instrument, singing, and/or acting). We used a validated, continuous measure of musical sophistication to further characterize our sample. Following standard pre-processing, fifteen brain areas were identified a priori based on meta-analytic work and used as seeds in separate seed-to-voxel analyses to examine the effect of musical sophistication across the sample, and in between-group analyses to examine the effects of performing arts training.
Results: Connectivity of the bilateral superior temporal gyrus, bilateral precentral gyrus and cerebellum, and bilateral putamen, left insula, and left thalamus varied with different aspects of musical sophistication. Including measures of these aspects as covariates in post hoc analyses showed that connectivity of the right superior temporal gyrus and left precentral gyrus relates to performing arts training beyond effects of individual musical sophistication.
Discussion: Our results highlight the potential roles of sensory areas in active engagement with music, of motor areas in emotion processing, and of connectivity between the putamen and lingual gyrus in general musical sophistication.
Affiliation(s)
- Anja-Xiaoxing Cui
- Department of Musicology, University of Vienna, Vienna, Austria
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Sarah N. Kraeutner
- Department of Psychology, University of British Columbia, Kelowna, BC, Canada
- Nancy Hermiston
- School of Music, University of British Columbia, Vancouver, BC, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Lara A. Boyd
- Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, BC, Canada
2
Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. PMID: 36372030; DOI: 10.1016/j.plrev.2022.10.004.
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has produced descriptive accounts inspired by arithmetic, musicological, psychoacoustical, or neurobiological frameworks, without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). In reviewing the findings related to each hypothesis, we highlight their major conceptual, methodological, and terminological shortcomings. To provide a unitary framework for understanding C/D, we draw together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress or danger and therefore elicits defensive behavioral reactions and neural responses indicating aversion. We thus stress the primacy of vocality and roughness as key factors in explaining the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research on C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
3
Speech-related auditory salience detection in the posterior superior temporal region. Neuroimage 2021; 248:118840. PMID: 34958951; DOI: 10.1016/j.neuroimage.2021.118840.
Abstract
Processing auditory human speech requires both detection (early and transient) and analysis (sustained). We analyzed high-gamma (70-110 Hz) activity of intracranial electroencephalography waveforms acquired during an auditory task that paired forward speech, reverse speech, and signal-correlated noise. We identified widespread superior temporal sites with sustained activity responding only to forward and reverse speech, regardless of paired order. More localized superior temporal auditory onset sites responded to all stimulus types when presented first in a pair and, in select conditions, responded in recurrent fashion to the second paired stimulus even in the absence of interstimulus silence, a novel finding. Auditory onset activity to a second paired sound recurred according to relative salience, with evidence of partial suppression during linguistic processing. We propose that temporal lobe auditory onset sites serve a salience detector function with a hysteresis of 200 ms and are influenced by cortico-cortical feedback loops involving linguistic processing and articulation.
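The band-limited power analysis this abstract describes can be sketched in a few lines. The sketch below extracts a generic high-gamma (70-110 Hz) amplitude envelope from a synthetic voltage trace; the filter order, zero-phase filtering, and Hilbert-envelope choice are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(x, fs, band=(70.0, 110.0), order=4):
    """Band-pass a voltage trace into the high-gamma range and return
    its analytic amplitude envelope (a common band-power estimate)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x)       # zero-phase band-pass filter
    return np.abs(hilbert(filtered))   # instantaneous amplitude

# Toy demo: a 90 Hz burst embedded in noise should raise the envelope
# during the burst relative to the surrounding baseline.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = 0.1 * rng.standard_normal(t.size)
burst = (t >= 0.5) & (t < 1.0)
x[burst] += np.sin(2 * np.pi * 90.0 * t[burst])
env = high_gamma_envelope(x, fs)
print(env[burst].mean() > env[~burst].mean())  # True
```

In real iEEG work the same envelope would be computed per electrode and time-locked to stimulus onsets before averaging.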
4
The Rapid Emergence of Musical Pitch Structure in Human Cortex. J Neurosci 2020; 40:2108-2118. PMID: 32001611; DOI: 10.1523/jneurosci.1399-19.2020.
Abstract
In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored the tones' fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low- to high-level properties. The current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.
SIGNIFICANCE STATEMENT: Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones. Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
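The per-timepoint decoding scheme described above (train and cross-validate a classifier at each peristimulus sample, yielding a time course of decoding accuracy) can be sketched as follows. The data shapes, the logistic-regression classifier, and the injected late-window effect are illustrative assumptions for the demo, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: trials x sensors x timepoints, with a class-dependent signal
# appearing only in the late window (from sample 30 onward).
n_trials, n_sensors, n_times = 120, 20, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, size=n_trials)
X[:, 0, 30:] += 3.0 * y[:, None]

# One cross-validated classifier per timepoint gives a dynamic measure
# of how well the neural pattern discriminates the two conditions.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(accuracy[:30].mean() < accuracy[30:].mean())  # chance early, high late
```

With real MEG data the same loop runs over sensors-by-trials matrices at each sample, and the resulting accuracy curve is what reveals when a representation emerges.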
5
Sachs ME, Habibi A, Damasio A, Kaplan JT. Dynamic intersubject neural synchronization reflects affective responses to sad music. Neuroimage 2019; 218:116512. PMID: 31901418; DOI: 10.1016/j.neuroimage.2019.116512.
Abstract
Psychological theories of emotion often highlight the dynamic quality of the affective experience, yet neuroimaging studies of affect have traditionally relied on static stimuli that lack ecological validity. Consequently, the brain regions that represent emotions and feelings as they unfold remain unclear. Recently, dynamic, model-free analytical techniques have been employed with naturalistic stimuli to better capture time-varying patterns of activity in the brain; yet few studies have focused on relating these patterns to changes in subjective feelings. Here, we address this gap, using intersubject correlation and phase synchronization to assess how stimulus-driven changes in brain activity and connectivity relate to two aspects of emotional experience: emotional intensity and enjoyment. During fMRI scanning, healthy volunteers listened to a full-length piece of music selected to induce sadness. After scanning, participants listened to the piece twice while simultaneously rating the intensity of felt sadness or felt enjoyment. Activity in the auditory cortex, insula, and inferior frontal gyrus was significantly synchronized across participants. Synchronization in auditory, visual, and prefrontal regions was significantly greater in participants scoring higher on a subscale of trait empathy related to feeling emotions in response to music. When assessed dynamically, continuous enjoyment ratings positively predicted a moment-to-moment measure of intersubject synchronization in auditory, default mode, and striatal networks, as well as the orbitofrontal cortex, whereas sadness predicted intersubject synchronization in limbic and striatal networks. The results suggest that stimulus-driven patterns of neural communication in emotion-processing and high-level cortical regions carry meaningful information about our feelings in response to a naturalistic stimulus.
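The intersubject correlation measure at the heart of this study can be illustrated with a minimal leave-one-out sketch on synthetic regional time courses; the shared-signal/idiosyncratic-noise split and all sizes below are assumptions for the demo, not the study's data.

```python
import numpy as np

def isc_loo(data):
    """Leave-one-out intersubject correlation.

    data: (n_subjects, n_timepoints) time courses from one region.
    Returns, per subject, the Pearson r between that subject's time
    course and the average time course of all remaining subjects."""
    n_subjects = data.shape[0]
    rs = []
    for i in range(n_subjects):
        others = np.delete(data, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(rs)

# Toy demo: a shared stimulus-driven component plus subject-specific
# noise yields clearly positive leave-one-out ISC values.
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)
subjects = shared + 0.8 * rng.standard_normal((10, 300))
print(isc_loo(subjects).mean() > 0.5)  # True
```

The dynamic variant used in the paper applies the same idea within a sliding window, producing a moment-to-moment synchronization time course that can be regressed against continuous feeling ratings.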
Affiliation(s)
- Matthew E Sachs
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA; Center for Science and Society, Columbia University in the City of New York, 1180 Amsterdam Avenue, New York, NY, 10027, USA
- Assal Habibi
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
- Antonio Damasio
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
- Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA, 90089-2921, USA
6
7
The pleasantness of sensory dissonance is mediated by musical style and expertise. Sci Rep 2019; 9:1070. PMID: 30705379; PMCID: PMC6355932; DOI: 10.1038/s41598-018-35873-8.
Abstract
Western musical styles use a large variety of chords and vertical sonorities. Based on objective acoustical properties, chords can be situated on a dissonant-consonant continuum. While this continuum may to some extent converge with the unpleasant-pleasant continuum, subjective liking of various chord forms may diverge across musical styles. Our study aimed to investigate how well appraisals of the roughness and pleasantness dimensions of isolated chords taken from real-world music are predicted by Parncutt's established model of sensory dissonance. Furthermore, we related these subjective ratings to the style of origin and acoustical features of the chords, as well as to the musical sophistication of the raters. Ratings were obtained for chords deemed representative of the harmonic language of three different musical styles (classical, jazz, and avant-garde music), plus randomly generated chords. Results indicate that pleasantness and roughness ratings were, on average, mirror opposites; however, their relative distribution differed greatly across styles, reflecting different underlying aesthetic ideals. Parncutt's model only weakly predicted ratings for all but the classical chords, suggesting that listeners' appraisal of the dissonance and pleasantness of chords depends not only on stimulus-side but also on listener-side factors. Indeed, we found that the level of musical sophistication negatively predicted listeners' tendency to rate the consonance and pleasantness of any one chord as coupled measures, suggesting that musical education and expertise may individuate how these musical dimensions are apprehended.
8
Johnson EL, King-Stephens D, Weber PB, Laxer KD, Lin JJ, Knight RT. Spectral Imprints of Working Memory for Everyday Associations in the Frontoparietal Network. Front Syst Neurosci 2019; 12:65. PMID: 30670953; PMCID: PMC6333050; DOI: 10.3389/fnsys.2018.00065.
Abstract
How does the human brain rapidly process incoming information in working memory? In growing divergence from a single-region focus on the prefrontal cortex (PFC), recent work argues for emphasis on how distributed neural networks are rapidly coordinated in support of this central neurocognitive function. Previously, we showed that working memory for everyday “what,” “where,” and “when” associations depends on multiplexed oscillatory systems, in which signals of different frequencies simultaneously link the PFC to parieto-occipital and medial temporal regions, pointing to a complex web of sub-second, bidirectional interactions. Here, we used direct brain recordings to delineate the frontoparietal oscillatory correlates of working memory with high spatiotemporal precision. Seven intracranial patients with electrodes simultaneously localized to prefrontal and parietal cortices performed a visuospatial working memory task that operationalizes the types of identity and spatiotemporal information we encounter every day. First, task-induced oscillations in the same delta-theta (2–7 Hz) and alpha-beta (9–24 Hz) frequency ranges previously identified using scalp electroencephalography (EEG) carried information about the contents of working memory. Second, maintenance was linked to directional connectivity from the parietal cortex to the PFC. However, presentation of the test prompt to cue identity, spatial, or temporal information changed delta-theta coordination from a unidirectional, parietal-led system to a bidirectional, frontoparietal system. Third, the processing of spatiotemporal information was more bidirectional in the delta-theta range than was the processing of identity information, where alpha-beta connectivity did not exhibit sensitivity to the contents of working memory. These findings implicate a bidirectional delta-theta mechanism for frontoparietal control over the contents of working memory.
Affiliation(s)
- Elizabeth L Johnson
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States; Institute of Gerontology, Wayne State University, Detroit, MI, United States
- David King-Stephens
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, United States
- Peter B Weber
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, United States
- Kenneth D Laxer
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, United States
- Jack J Lin
- Comprehensive Epilepsy Program, Department of Neurology, University of California, Irvine, Irvine, CA, United States; Department of Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
9
Early neural responses underlie advantages for consonance over dissonance. Neuropsychologia 2018; 117:188-198. PMID: 29885961; PMCID: PMC6092559; DOI: 10.1016/j.neuropsychologia.2018.06.005.
Abstract
Consonant musical intervals tend to be more readily processed than dissonant intervals. In the present study, we explore the neural basis for this difference by registering how the brain responds after changes in consonance and dissonance, and how formal musical training modulates these responses. Event-related brain potentials (ERPs) were registered while participants were presented with sequences of consonant intervals interrupted by a dissonant interval, or sequences of dissonant intervals interrupted by a consonant interval. Participants were musicians and non-musicians. Our results show that brain responses triggered by changes in a consonant context differ from those triggered in a dissonant context. Changes in a sequence of consonant intervals are rapidly processed independently of musical expertise, as revealed by a change-related mismatch negativity (MMN, a component of the ERPs triggered by an odd stimulus in a sequence of stimuli) elicited in both musicians and non-musicians. In contrast, changes in a sequence of dissonant intervals elicited a late MMN only in participants with prolonged musical training. These different neural responses might form the basis for the processing advantages observed for consonance over dissonance and provide information about how formal musical training modulates them.
10
Trulla LL, Di Stefano N, Giuliani A. Computational Approach to Musical Consonance and Dissonance. Front Psychol 2018; 9:381. PMID: 29670552; PMCID: PMC5893895; DOI: 10.3389/fpsyg.2018.00381.
Abstract
In the sixth century BC, Pythagoras discovered the mathematical foundation of musical consonance and dissonance. When auditory frequencies in small-integer ratios are combined, the result is a harmonious perception. In contrast, most frequency combinations result in audible, off-centered by-products labeled "beating" or "roughness"; these are reported by most listeners to sound dissonant. In this paper, we consider second-order beats, a kind of beating recognized as a product of neural processing, and demonstrate that the data-driven approach of Recurrence Quantification Analysis (RQA) allows for the reconstruction of the order in which interval ratios are ranked in music theory and harmony. We take advantage of computer-generated sounds containing all intervals over the span of an octave. To visualize second-order beats, we use a glissando from the unison to the octave. This procedure produces a profile of recurrence values corresponding to subsequent epochs along the original signal. We find that the higher recurrence peaks exactly match the epochs corresponding to just-intonation frequency ratios. This result indicates a link between consonance and the dynamical features of the signal. Our findings integrate a new element into existing theoretical models of consonance, providing a computational account of consonance in terms of dynamical systems theory. Finally, because it considers general features of acoustic signals, the present approach demonstrates a universal aspect of consonance and dissonance perception and provides a simple mathematical tool that could serve as a common framework for further neuro-psychological and music-theoretical research.
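As a toy companion to the small-integer-ratio idea above (not the paper's RQA method), intervals can be ranked by the simplicity of their just-intonation ratios. The interval list and the numerator-plus-denominator complexity measure are illustrative assumptions, and the resulting order only roughly tracks standard consonance rankings (it places the major sixth before the major third, for example).

```python
from fractions import Fraction

# Just-intonation frequency ratios for some intervals within one octave.
INTERVALS = {
    "unison": Fraction(1, 1),
    "octave": Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "major sixth": Fraction(5, 3),
    "major third": Fraction(5, 4),
    "minor third": Fraction(6, 5),
    "tritone": Fraction(45, 32),
}

def ratio_complexity(r: Fraction) -> int:
    """Numerator + denominator of the reduced ratio: smaller values
    mean simpler ratios, the classic proxy for greater consonance."""
    return r.numerator + r.denominator

ranked = sorted(INTERVALS, key=lambda name: ratio_complexity(INTERVALS[name]))
print(ranked[0], ranked[-1])  # unison tritone
```

`Fraction` keeps the ratios exact, so reduction to lowest terms (and hence the complexity measure) never suffers floating-point drift.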
Affiliation(s)
- Nicola Di Stefano
- Institute of Philosophy of Scientific and Technological Practice and Laboratory of Developmental Neuroscience, Università Campus Bio-Medico di Roma, Rome, Italy
- Alessandro Giuliani
- Environment and Health Department, National Institute of Health, Rome, Italy
11
Decoding the dynamic representation of musical pitch from human brain activity. Sci Rep 2018; 8:839. PMID: 29339790; PMCID: PMC5770452; DOI: 10.1038/s41598-018-19222-3.
Abstract
In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate pattern analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well-established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to differences in their perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
12
Proverbio AM, De Benedetto F. Auditory enhancement of visual memory encoding is driven by emotional content of the auditory material and mediated by superior frontal cortex. Biol Psychol 2017; 132:164-175. PMID: 29292233; DOI: 10.1016/j.biopsycho.2017.12.003.
Abstract
BACKGROUND: The aim of the present study was to investigate how auditory background interacts with learning and memory. Both facilitatory (e.g., the "Mozart effect") and interfering effects of background have been reported, depending on the type of auditory stimulation and on the concurrent cognitive tasks.
METHOD: Here we recorded event-related potentials (ERPs) during face encoding, followed by an old/new memory test, to investigate the effect of listening to dramatic classical music (Tchaikovsky), environmental sounds (rain), or silence on learning. Participants were 15 healthy non-musician university students. Almost 400 previously unknown faces of women and men of various ages were presented.
RESULTS: Listening to music during study led to better encoding of faces, as indexed by an increased anterior negativity. The FN400 response recorded during the memory test showed an amplitude gradient reflecting face familiarity: FN400 was larger to new than to old faces, and to faces studied during rain-sound listening and silence than during music listening.
CONCLUSION: The results indicate that listening to music enhances memory recollection of faces by merging with visual information. A swLORETA analysis showed the main involvement of the superior temporal gyrus (STG) and medial frontal gyrus in the integration of audio-visual information.
Affiliation(s)
- A M Proverbio
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy
- F De Benedetto
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy
13
Neurosurgery and Music; Effect of Wolfgang Amadeus Mozart. World Neurosurg 2017; 102:313-319. DOI: 10.1016/j.wneu.2017.02.081.