1. Li CW, Tsai CG. Motivated cognitive control during cued anticipation and receipt of unfamiliar musical themes: An fMRI study. Neuropsychologia 2024;194:108778. PMID: 38147907. DOI: 10.1016/j.neuropsychologia.2023.108778.
Abstract
Principal themes, particularly choruses in pop songs, hold a central place in human music. Singing along with a familiar chorus tends to elicit pleasure and a sense of belonging, especially in group settings. These principal themes, which frequently serve as musical rewards, are commonly preceded by distinctive musical cues. Such cues guide listeners' attention and amplify their motivation to receive the impending themes. Despite the significance of cue-theme sequences in music, the neural mechanisms underlying the processing of these sequences in unfamiliar songs remain underexplored. To fill this research gap, we employed fMRI to examine neural activity during the cued anticipation of unfamiliar musical themes and the subsequent receipt of their opening phrase. Twenty-three Taiwanese participants underwent fMRI scans while listening to excerpts of Korean slow pop songs unfamiliar to them, with lyrics they could not understand. Our findings revealed distinct temporal dynamics in lateral frontal activity, with posterior regions being more active during theme anticipation and anterior regions during theme receipt. During anticipation, participants reported substantial increases in arousal levels, aligning with the observed enhanced activity in the midbrain, ventral striatum, inferior frontal junction, and premotor regions. We posit that when motivational musical cues are detected, the ventral striatum and inferior frontal junction play a role in attention allocation, while premotor regions may be engaged in monitoring the theme's entry. Notably, both the anticipation and receipt of themes were associated with pronounced activity in the frontal eye field, dorsolateral prefrontal cortex, posterior parietal cortex, dorsal caudate, and salience network. Overall, our results highlight that within a naturalistic music-listening context, the dynamic interplay between the frontoparietal, dopaminergic midbrain-striatal, and salience networks could allow for precise adjustments of control demands based on the cue-theme structure in unfamiliar songs.
Affiliations
- Chia-Wei Li: Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Chen-Gia Tsai: Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan
2. Shin JH, Jeong E. Virtual reality-based music attention training for acquired brain injury: A protocol for randomized cross-over trial. Front Neurol 2023;14:1192181. PMID: 37638184. PMCID: PMC10450247. DOI: 10.3389/fneur.2023.1192181.
Abstract
Attention training is the primary step in rehabilitation for patients with acquired brain injury (ABI). While active music performance has been reported to aid neural and functional recovery, its efficacy for patients with ABI remains uncertain due to methodological concerns. The purpose of this study is to develop a virtual reality-based music attention training (VR-MAT) program, which uses visually guided bilateral drumming in an immersive environment to train attention and executive functions. We also aim to examine the feasibility and effectiveness of the VR-MAT in a small sample of participants (3-60 months after ABI; approximately N = 20). Participants will be randomly assigned to either a waitlist control group or a music group, in which the VR-MAT will take place five times weekly over 4 weeks (randomized crossover design). The evaluation of VR-MAT performance will include accuracy and response times of musical responses. Neurocognitive outcome measures will be administered to quantify pre-post changes in attention, working memory, and executive functions. Additionally, functional near-infrared spectroscopy will be employed to explore the relationships between musical behavior, neurocognitive function, and neurophysiological responses.
Affiliations
- Joon-Ho Shin: Department of Rehabilitation, National Rehabilitation Center, Ewha Womans University, Seoul, Republic of Korea
- Eunju Jeong: Department of Music Therapy, Graduate School, Ewha Womans University, Seoul, Republic of Korea
3. Tsai CG, Fu YF, Li CW. Prediction errors arising from switches between major and minor modes in music: An fMRI study. Brain Cogn 2023;169:105987. PMID: 37126951. DOI: 10.1016/j.bandc.2023.105987.
Abstract
The major and minor modes in Western music have positive and negative connotations, respectively. The present fMRI study examined listeners' neural responses to switches between major and minor modes. We manipulated the final chords of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. If the final major and minor chords have positive and negative reward values, respectively, the Major-Minor and Minor-Major stimuli would cause negative and positive reward prediction errors (RPEs), respectively, in a listener's brain. We found that activity in a frontoparietal network was significantly higher for Major-Minor than for Major-Major. Based on previous research, these results support the idea that a major-to-minor switch causes a negative RPE. The contrast of Minor-Major minus Minor-Minor yielded activation in the ventral insula and visual cortex, speaking against the idea that a minor-to-major switch causes a positive RPE. We discuss our results in relation to executive functions and the emotional connotations of major versus minor modes.
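The logic above follows the standard reinforcement-learning definition of a reward prediction error. The formulation below is the textbook one, added here for orientation; it is not an equation reported in the paper.

```latex
% Textbook reward prediction error (not from the cited paper):
% \delta is the RPE, r the reward received, V the reward predicted.
\[
  \delta = r - V
\]
% Major-mode passage ending on a minor tonic chord: r < V, so \delta < 0 (negative RPE).
% Minor-mode passage ending on a major tonic chord: r > V, so \delta > 0 (positive RPE).
```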
Affiliations
- Chen-Gia Tsai: Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
- Yi-Fan Fu: Department of Bio-Industry Communication and Development, National Taiwan University, Taipei, Taiwan
- Chia-Wei Li: Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
4. Christensen J, Slavik L, Nicol JJ, Loehr JD. Alpha oscillations related to self-other integration and distinction during live orchestral performance: A naturalistic case study. Psychol Music 2023;51:295-315. PMID: 36532616. PMCID: PMC9751440. DOI: 10.1177/03057356221091313.
Abstract
Ensemble music performance requires musicians to achieve precise interpersonal coordination while maintaining autonomous control over their own actions. To do so, musicians dynamically shift between integrating other performers' actions into their own action plans and maintaining a distinction between their own and others' actions. Research in laboratory settings has shown that this dynamic process of self-other integration and distinction is indexed by sensorimotor alpha oscillations. The purpose of the current descriptive case study was to examine oscillations related to self-other integration and distinction in a naturalistic performance context. We measured alpha activity from four violinists during a concert hall performance of a 60-musician orchestra. We selected a musical piece from the orchestra's repertoire and, before analyzing alpha activity, performed a score analysis to divide the piece into sections that were expected to strongly promote self-other integration and distinction. In line with previous laboratory findings, performers showed suppressed and enhanced alpha activity during musical sections that promoted self-other integration and distinction, respectively. The current study thus provides preliminary evidence that findings from carefully controlled laboratory experiments generalize to complex real-world performance. Its findings also suggest directions for future research and potential applications of interest to musicians, music educators, and music therapists.
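For orientation, alpha suppression and enhancement of the kind reported above are typically read off the alpha-band power envelope of the EEG. The sketch below uses the common filter-Hilbert approach on simulated data; it is a generic illustration, not the study's pipeline, and the sampling rate and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Alpha-band power envelope via the filter-Hilbert method.

    eeg : 1-D array, single-channel EEG
    fs  : sampling rate in Hz
    """
    # Band-pass filter to the alpha range (zero-phase).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)
    # Squared magnitude of the analytic signal = instantaneous power.
    return np.abs(hilbert(filtered)) ** 2

# Toy usage: compare mean alpha power between two score-defined sections,
# as a proxy for self-other integration (suppression) vs. distinction
# (enhancement).
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
power = alpha_power(eeg, fs)
integration, distinction = power[: t.size // 2], power[t.size // 2 :]
print(integration.mean(), distinction.mean())
```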
Affiliations
- Lauren Slavik: Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Jennifer J Nicol: Department of Educational Psychology and Special Education, University of Saskatchewan, Saskatoon, Canada
- Janeen D Loehr: Department of Psychology, University of Saskatchewan, Saskatoon, Canada
5. Jeong E, Ireland SJ. Criterion-Related Validation of a Music-Based Attention Assessment for Individuals with Traumatic Brain Injury. Int J Environ Res Public Health 2022;19:16285. PMID: 36498353. PMCID: PMC9738551. DOI: 10.3390/ijerph192316285.
Abstract
The music-based attention assessment (MAA) is a melody contour identification task that evaluates different types of attention. Previous studies have examined the psychometric and physiological validity of the MAA across various age groups in clinical and typical populations. The purpose of this study was to confirm the MAA's criterion validity in individuals with traumatic brain injury (TBI) by correlating it with standardized neuropsychological measurements. Various neurocognitive tests (i.e., the Wechsler adult intelligence scale DST, the Delis-Kaplan executive functioning scale color-word interference test, and the Conners' continuous performance test) were administered to 38 patients within two weeks before or after the MAA administration. Significant correlations between the MAA and the neurocognitive batteries were found, indicating the potential of the MAA as a valid measure of different types of attention deficits. An additional multiple regression analysis revealed that the MAA was a significant predictor of attention ability.
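Criterion-related validity of this kind is conventionally quantified by correlating the new measure with established tests and regressing the criterion on the measure. A minimal sketch with simulated scores (all values and the effect size are made up for illustration; only the sample size N = 38 comes from the abstract):

```python
import numpy as np

# Made-up scores: MAA vs. a standardized attention test criterion.
rng = np.random.default_rng(5)
maa = rng.normal(50, 10, size=38)                  # N = 38, as in the study
criterion = 0.8 * maa + rng.normal(0, 5, size=38)  # correlated criterion

# Criterion validity as a Pearson correlation.
r = np.corrcoef(maa, criterion)[0, 1]

# Simple regression: does the MAA predict the criterion score?
X = np.column_stack([np.ones_like(maa), maa])
beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
print(f"r = {r:.2f}, intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")
```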
Affiliations
- Eunju Jeong: Department of Music Therapy, Graduate School, Ewha Womans University, Seoul 03760, Republic of Korea
6. Chabin T, Pazart L, Gabriel D. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Ann N Y Acad Sci 2022;1512:126-140. PMID: 35229293. DOI: 10.1111/nyas.14755.
Abstract
Musical pleasure is related to the capacity to predict and anticipate the music. By recording the early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with vocal melody, to background instrumental material only, or to both attributes grouped together. When participants were listening to chords and lyrics of known songs, musical violations produced event-related potential responses around 150-200 ms that were of larger amplitude than those for chords or lyrics alone. Microstate analysis also revealed that for chords and lyrics, the global field power had increased stability and a longer duration. Source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely because of the increased complexity of the stimuli. We conclude that when several musical attributes are grouped together, their broader, simultaneous integration and retrieval recruits larger neuronal networks, leading to more accurate predictions.
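As background for the microstate analysis, global field power (GFP) is conventionally defined as the spatial standard deviation of the scalp potential across all electrodes at each time point. The sketch below implements that standard definition on simulated data; it is not code from the study.

```python
import numpy as np

def global_field_power(eeg):
    """Global field power: spatial standard deviation across electrodes.

    eeg : array of shape (n_channels, n_samples), average-referenced EEG
    """
    # Subtract the instantaneous mean across channels, then take the RMS.
    centered = eeg - eeg.mean(axis=0, keepdims=True)
    return np.sqrt((centered ** 2).mean(axis=0))

# Toy usage: GFP peaks index moments of strong, stable scalp topography,
# which is what microstate analyses segment and compare across conditions.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((64, 1000))  # 64 channels, 1000 samples
gfp = global_field_power(eeg)
print(gfp.shape, gfp.max())
```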
Affiliations
- Thibault Chabin: Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France
- Lionel Pazart: Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Bourgogne Franche-Comté, France
- Damien Gabriel: Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
7. Hausfeld L, Disbergen NR, Valente G, Zatorre RJ, Formisano E. Modulating Cortical Instrument Representations During Auditory Stream Segregation and Integration With Polyphonic Music. Front Neurosci 2021;15:635937. PMID: 34630007. PMCID: PMC8498193. DOI: 10.3389/fnins.2021.635937.
Abstract
Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced compared to that of the other, irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). Similar to results obtained with selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction for the relevant instrument during a middle-latency window for both the bassoon and cello, and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that, subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument's representation during an instrument segregation task, whereas no such enhancement is observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories of polyphonic music perception.
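Envelope tracking of this kind is commonly assessed with a backward (decoding) model: a linear, time-lagged reconstruction of the sound envelope from multichannel EEG, often fit with ridge regression. The sketch below illustrates that generic approach on simulated data; the study's actual model, lags, and regularization are not specified in the abstract, so all parameters here are assumptions.

```python
import numpy as np

def lagged(eeg, max_lag):
    """Design matrix of EEG samples at delays 0..max_lag after each time point.

    Backward (stimulus-reconstruction) models read the EEG that follows
    each stimulus sample, since brain responses lag the stimulus.
    """
    n_ch, n_t = eeg.shape
    X = np.zeros((n_t, n_ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[: n_t - lag, lag * n_ch:(lag + 1) * n_ch] = eeg[:, lag:].T
    return X

def fit_decoder(eeg, envelope, max_lag=32, ridge=1e2):
    """Ridge-regression decoder mapping time-lagged EEG to the envelope."""
    X = lagged(eeg, max_lag)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

# Toy usage: each "channel" carries the envelope delayed by 5 samples plus
# noise; reconstruction accuracy is the correlation between the true and
# reconstructed envelope (higher for an attended instrument in the study).
rng = np.random.default_rng(2)
envelope = rng.standard_normal(2000)
eeg = np.vstack([np.roll(envelope, 5) + rng.standard_normal(2000)
                 for _ in range(8)])
w = fit_decoder(eeg, envelope)
recon = lagged(eeg, 32) @ w
print(np.corrcoef(recon, envelope)[0, 1])
```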
Affiliations
- Lars Hausfeld: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Niels R Disbergen: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Giancarlo Valente: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Robert J Zatorre: Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Elia Formisano: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands; Brightlands Institute for Smart Society (BISS), Maastricht University, Maastricht, Netherlands
8. Barrett KC, Ashley R, Strait DL, Skoe E, Limb CJ, Kraus N. Multi-Voiced Music Bypasses Attentional Limitations in the Brain. Front Neurosci 2021;15:588914. PMID: 33584187. PMCID: PMC7877539. DOI: 10.3389/fnins.2021.588914.
Abstract
Attentional limits make it difficult to comprehend concurrent speech streams. However, multiple musical streams are processed comparatively easily. Coherence may be a key difference between music and stimuli like speech, which do not rely on the integration of multiple streams for comprehension. The musical organization between melodies in a composition may provide a cognitive scaffold for overcoming attentional limitations when perceiving multiple lines of music concurrently. We investigated how listeners attend to multi-voiced music, examining biological indices associated with processing structured versus unstructured music. We predicted that musical structure provides coherence across distinct musical lines, allowing listeners to attend to simultaneous melodies, and that a lack of organization causes simultaneous melodies to be heard as separate streams. Musician participants attended to melodies in a Coherent music condition featuring flute duets and in a Jumbled condition in which those duets were manipulated to eliminate coherence between the parts. Auditory-evoked cortical potentials were collected in response to a tone probe. Analysis focused on the N100 response, which is primarily generated within the auditory cortex and is larger for attended versus ignored stimuli. Results suggest that participants did not attend to one line over the other when listening to Coherent music, instead perceptually integrating the streams. Yet, for the Jumbled music, effects indicate that participants attended to one line while ignoring the other, abandoning integration. Our findings lend support to the theory that musical organization aids attention when perceiving multi-voiced music.
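For orientation, the N100 analysis described above rests on epoch averaging: the ERP is the mean over stimulus-locked epochs, and the attention effect is read off the amplitude of the negative deflection around 100 ms. The sketch below simulates that logic; the waveform, window, and effect size are illustrative assumptions, not the study's data.

```python
import numpy as np

def erp(epochs):
    """Average stimulus-locked epochs (n_epochs, n_samples) into an ERP."""
    return epochs.mean(axis=0)

def n100_amplitude(erp_wave, fs, window=(0.08, 0.12)):
    """Most negative deflection in the N100 window (seconds post-stimulus)."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return erp_wave[start:stop].min()

# Toy usage: attended probes should show a larger (more negative) N100.
fs = 500
rng = np.random.default_rng(3)
t = np.arange(0, 0.4, 1 / fs)
template = -np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))  # N100-like dip
attended = np.array([2 * template + 0.3 * rng.standard_normal(t.size)
                     for _ in range(100)])
ignored = np.array([template + 0.3 * rng.standard_normal(t.size)
                    for _ in range(100)])
print(n100_amplitude(erp(attended), fs), n100_amplitude(erp(ignored), fs))
```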
Affiliations
- Karen Chan Barrett: UCSF Sound and Music Perception Lab, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Richard Ashley: Program in Music Theory and Cognition, Bienen School of Music, Northwestern University, Evanston, IL, United States
- Dana L Strait: Division of Strategy and Finance, Saint Mary's College, Notre Dame, IN, United States
- Erika Skoe: Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, United States
- Charles J Limb: UCSF Sound and Music Perception Lab, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Nina Kraus: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
9. Greco A, Spada D, Rossi S, Perani D, Valenza G, Scilingo EP. EEG Hyperconnectivity Study on Saxophone Quartet Playing in Ensemble. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:1015-1018. PMID: 30440563. DOI: 10.1109/embc.2018.8512409.
Abstract
A professional quartet of saxophonists playing in ensemble provides an ideal scenario for studying the possible occurrence of synchronous oscillatory brain activity across subjects. Here, we applied hyperscanning methodologies to simultaneously record electroencephalographic (EEG) signals from four professional saxophonists while they observed an audio-video recording of their own previous musical performance. An ad hoc musical composition was written for the study. At debriefing, the subjects were asked to complete two questionnaires assessing trait empathy and musical leadership. To estimate the hyperconnectivity of each musician, we proposed a measure that combines a phase synchronization index of brain oscillations with a graph-theoretical framework. The inter-connectivity levels of the musicians were statistically compared. Statistical results revealed significantly lower hyperconnectivity in the left Brodmann area 44 for the soprano saxophonist than for the other three members. Recent theories implicate this brain region (Broca's area) in music generation, empathy processes, and communication. We hypothesize a relationship between brain-to-brain connectivity level and musical role within the quartet.
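The abstract describes the hyperconnectivity measure only at a high level. As a rough illustration of its likely ingredients, the sketch below computes a standard phase-locking value (PLV) between narrow-band signals and a graph-style node strength (mean PLV of each node to all others); the paper's exact combination of phase synchronization and graph metrics is not specified, so this is an assumption-laden sketch rather than a reimplementation.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two narrow-band signals (0..1)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * phase_diff).mean())

def node_strength(signals):
    """Graph-style summary: each signal's mean PLV to all the others."""
    n = len(signals)
    plv_matrix = np.array([[plv(signals[i], signals[j]) for j in range(n)]
                           for i in range(n)])
    np.fill_diagonal(plv_matrix, 0.0)
    return plv_matrix.sum(axis=1) / (n - 1)

# Toy usage: four "musicians", one (index 0) less coupled to the rest.
rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * 10 * np.arange(0, 4, 1 / 250))
signals = [common + g * rng.standard_normal(common.size)
           for g in (3.0, 0.5, 0.5, 0.5)]
print(node_strength(signals))  # index 0 shows the lowest inter-connectivity
```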
10. Disbergen NR, Valente G, Formisano E, Zatorre RJ. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music. Front Neurosci 2018;12:121. PMID: 29563861. PMCID: PMC5845899. DOI: 10.3389/fnins.2018.00121.
Abstract
Polyphonic music listening well exemplifies the processes typically involved in daily auditory scene analysis, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is the timbre difference between instruments. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together) via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced a bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional magnetic resonance imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, although within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlating overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed that scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments.
Affiliations
- Niels R. Disbergen: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Giancarlo Valente: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Elia Formisano: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands
- Robert J. Zatorre: Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain Music and Sound Research (BRAMS), Montreal, QC, Canada
11. Kawase S, Obata S. Audience gaze while appreciating a multipart musical performance. Conscious Cogn 2016;46:15-26. PMID: 27677050. DOI: 10.1016/j.concog.2016.09.015.
Abstract
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence for the dominance of musical part in auditory attention when listening to multipart music containing different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece; (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions playing a spotlight role mediated performer-audience visual interaction; and (3) musical part (melody or accompaniment) strongly influenced the total duration of audience gazes, while the spotlight effect of gaze was limited to just after the singers' gaze shifts.
12. Keller PE, Novembre G, Hove MJ. Rhythm in joint action: psychological and neurophysiological mechanisms for real-time interpersonal coordination. Philos Trans R Soc Lond B Biol Sci 2014;369:20130394. PMID: 25385772. PMCID: PMC4240961. DOI: 10.1098/rstb.2013.0394.
Abstract
Human interaction often requires simultaneous precision and flexibility in the coordination of rhythmic behaviour between individuals engaged in joint activity, for example, playing a musical duet or dancing with a partner. This review article addresses the psychological processes and brain mechanisms that enable such rhythmic interpersonal coordination. First, an overview is given of research on the cognitive-motor processes that enable individuals to represent joint action goals and to anticipate, attend, and adapt to others' actions in real time. Second, the neurophysiological mechanisms that underpin rhythmic interpersonal coordination are sought in studies of sensorimotor and cognitive processes that play a role in the representation and integration of self- and other-related actions within and between individuals' brains. Finally, relationships between social-psychological factors and rhythmic interpersonal coordination are considered from two perspectives, one concerning how social-cognitive tendencies (e.g. empathy) affect coordination, and the other concerning how coordination affects interpersonal affiliation, trust and prosocial behaviour. Our review highlights musical ensemble performance as an ecologically valid yet readily controlled domain for investigating rhythm in joint action.
Affiliations
- Peter E Keller: The MARCS Institute, University of Western Sydney, Locked Bag 1797, Penrith, New South Wales 2751, Australia
- Giacomo Novembre: The MARCS Institute, University of Western Sydney, Locked Bag 1797, Penrith, New South Wales 2751, Australia
13. Wisniewski MG, Mercado E, Church BA, Gramann K, Makeig S. Brain dynamics that correlate with effects of learning on auditory distance perception. Front Neurosci 2014;8:396. PMID: 25538550. PMCID: PMC4260497. DOI: 10.3389/fnins.2014.00396.
Abstract
Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
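The theta ERS and alpha ERD effects above follow the classic convention of expressing band power as a change relative to a pre-stimulus baseline: positive changes are event-related synchronization (ERS), negative changes desynchronization (ERD). A minimal sketch of that standard formula (the numbers are made up; this is not the study's pipeline):

```python
import numpy as np

def erd_ers_percent(power, baseline_power):
    """Classic ERD/ERS: percent change of band power from baseline.

    Positive = event-related synchronization (ERS),
    negative = event-related desynchronization (ERD).
    """
    return 100.0 * (power - baseline_power) / baseline_power

# Toy usage with made-up band-power values (arbitrary units).
theta_baseline, theta_post = 4.0, 6.0  # transient theta increase -> ERS
alpha_baseline, alpha_post = 5.0, 3.5  # alpha decrease -> ERD
print(erd_ers_percent(theta_post, theta_baseline))  # +50.0 (ERS)
print(erd_ers_percent(alpha_post, alpha_baseline))  # -30.0 (ERD)
```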
Affiliations
- Matthew G Wisniewski: 711th Human Performance Wing, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, OH, USA; Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Eduardo Mercado: Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Barbara A Church: Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Klaus Gramann: Biological Psychology and Neuroergonomics, Berlin Institute of Technology, Berlin, Germany
- Scott Makeig: Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, San Diego, CA, USA
14. Spada D, Verga L, Iadanza A, Tettamanti M, Perani D. The auditory scene: An fMRI study on melody and accompaniment in professional pianists. Neuroimage 2014;102(Pt 2):764-775. DOI: 10.1016/j.neuroimage.2014.08.036.
15. Ragert M, Fairhurst MT, Keller PE. Segregation and integration of auditory streams when listening to multi-part music. PLoS One 2014;9:e84085. PMID: 24475030. PMCID: PMC3901649. DOI: 10.1371/journal.pone.0084085.
Abstract
In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative roles and neural underpinnings of these listening strategies with multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulated the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to assess the leader-follower relationship. We show that the perceived relationship between parts is biased towards the conventional structural hierarchy in Western music, in which the melody generally dominates (leads) the accompaniment. Moreover, this assessment varies as a function of cognitive load, as shown through difficulty ratings, and of the interaction between the temporal and structural relationship factors. Neurally, the temporal relationship between parts, as one important cue for stream segregation, was associated with distinct neural activity in the planum temporale. By contrast, the integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus (IPS). These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively.
Affiliations
- Marie Ragert: Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Music Cognition and Action, Leipzig, Germany
- Merle T. Fairhurst: Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Music Cognition and Action, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Early Social Development, Leipzig, Germany
- Peter E. Keller: Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Music Cognition and Action, Leipzig, Germany; The MARCS Institute, Music Cognition and Action Group, University of Western Sydney, Sydney, Australia