1. Di Stefano N, Spence C. Should absolute pitch be considered as a unique kind of absolute sensory judgment in humans? A systematic and theoretical review of the literature. Cognition 2024;249:105805. PMID: 38761646. DOI: 10.1016/j.cognition.2024.105805.
Abstract
Absolute pitch is the name given to the rare ability to identify a musical note automatically and effortlessly, without the need for a reference tone. Individuals with absolute pitch can, for example, name the note they hear, identify all of the tones of a given chord, and/or name the pitches of everyday sounds, such as car horns or sirens. Hence, absolute pitch can be seen as providing a rare example of absolute sensory judgment in audition. Surprisingly, however, the intriguing question of whether this ability is unique in the domain of sensory perception, or whether similar perceptual skills also exist in other sensory domains, has not previously been addressed explicitly. In this paper, we address this question by systematically reviewing research on absolute pitch using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. Thereafter, we compare absolute pitch with two rare types of sensory experience, namely synaesthesia and eidetic memory, to understand whether and how these phenomena exhibit features similar to absolute pitch. A common absolute perceptual ability that has often been compared to absolute pitch, namely colour perception, is also discussed. We provide arguments supporting the notion that none of the examined abilities can be considered equivalent to absolute pitch. We therefore conclude that absolute pitch does indeed appear to constitute a unique kind of absolute sensory judgment in humans, and we discuss open issues and novel directions for future research on absolute pitch.
Affiliation(s)
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Via Gian Domenico Romagnosi, 18, 00196 Rome, Italy
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
2. Dhakal K, Rosenthal ES, Kulpanowski AM, Dodelson JA, Wang Z, Cudemus-Deseda G, Villien M, Edlow BL, Presciutti AM, Januzzi JL, Ning M, Kimberly WT, Amorim E, Westover MB, Copen WA, Schaefer PW, Giacino JT, Greer DM, Wu O. Increased task-relevant fMRI responsiveness in comatose cardiac arrest patients is associated with improved neurologic outcomes. J Cereb Blood Flow Metab 2024;44:50-65. PMID: 37728641. PMCID: PMC10905635. DOI: 10.1177/0271678x231197392.
Abstract
Early prediction of the recovery of consciousness in comatose cardiac arrest patients remains challenging. We prospectively studied task-relevant fMRI responses in 19 comatose cardiac arrest patients and five healthy controls to assess fMRI's utility for neuroprognostication. Tasks involved instrumental music listening, forward and backward language listening, and motor imagery. Task-specific reference images were created from group-level fMRI responses in the healthy controls. Dice scores measured the overlap of individual subject-level fMRI responses with the reference images. A task-relevant responsiveness index (Rindex) was calculated as the maximum Dice score across the four tasks. Correlation analyses showed that increased Dice scores were significantly associated with arousal recovery (P < 0.05) and emergence from the minimally conscious state (EMCS) by one year (P < 0.001) for all tasks except motor imagery. Greater Rindex was significantly correlated with improved arousal recovery (P = 0.002) and consciousness (P = 0.001). For patients who survived to discharge (n = 6), the Rindex's sensitivity for predicting EMCS was 75% (n = 4). Task-based fMRI holds promise for detecting covert consciousness in comatose cardiac arrest patients, but further studies are needed to confirm these findings. Caution is necessary when interpreting the absence of task-relevant fMRI responses as a surrogate for inevitably poor neurological prognosis.
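The two metrics defined in this abstract can be sketched in a few lines: the Dice score quantifies the overlap between a subject's binarized activation map and a task-specific reference map, and the Rindex is simply the maximum Dice score across tasks. The snippet below is an illustrative reconstruction, not the study's code; the toy binary maps (and treating maps as already-thresholded arrays) are assumptions.

```python
import numpy as np

def dice_score(subject_map, reference_map):
    """Dice coefficient between two binary activation maps: 2|A∩B| / (|A|+|B|)."""
    subject_map = subject_map.astype(bool)
    reference_map = reference_map.astype(bool)
    intersection = np.logical_and(subject_map, reference_map).sum()
    total = subject_map.sum() + reference_map.sum()
    if total == 0:
        return 0.0
    return 2.0 * intersection / total

def rindex(task_maps, reference_maps):
    """Task-relevant responsiveness index: the maximum Dice score across tasks."""
    return max(dice_score(task_maps[t], reference_maps[t]) for t in task_maps)

# Toy 1-D "maps" standing in for thresholded voxel maps (invented for illustration).
ref = {"music": np.array([1, 1, 0, 0]), "language": np.array([0, 1, 1, 0])}
subj = {"music": np.array([1, 0, 0, 0]), "language": np.array([0, 1, 1, 1])}
print(rindex(subj, ref))  # → 0.8 (the language task's Dice score)
```

The max over tasks means a patient counts as responsive if any single task elicits a reference-like response, which matches the abstract's definition.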
Affiliation(s)
- Kiran Dhakal
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Eric S Rosenthal
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Annelise M Kulpanowski
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Jacob A Dodelson
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Zihao Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Gaston Cudemus-Deseda
- Department of Cardiac Anesthesiology and Critical Care Medicine, Massachusetts General Hospital, Boston, MA, USA
- Marjorie Villien
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Alexander M Presciutti
- Department of Psychiatry, Center for Health Outcomes and Interdisciplinary Research, Massachusetts General Hospital, Boston, MA, USA
- James L Januzzi
- Department of Medicine, Cardiology Division, Massachusetts General Hospital and Baim Institute for Clinical Research, Boston, MA, USA
- MingMing Ning
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- W Taylor Kimberly
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Edilberto Amorim
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- William A Copen
- Department of Radiology, Neuroradiology Division, Massachusetts General Hospital, Boston, MA, USA
- Pamela W Schaefer
- Department of Radiology, Neuroradiology Division, Massachusetts General Hospital, Boston, MA, USA
- Joseph T Giacino
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Harvard Medical School, Charlestown, MA, USA
- David M Greer
- Department of Neurology, Boston University School of Medicine, Boston Medical Center, Boston, MA, USA
- Ona Wu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
3. Pousson JE, Shen YW, Lin YP, Voicikas A, Pipinis E, Bernhofs V, Burmistrova L, Griskova-Bulanova I. Exploring Spatio-Spectral Electroencephalogram Modulations of Imbuing Emotional Intent During Active Piano Playing. IEEE Trans Neural Syst Rehabil Eng 2023;31:4347-4356. PMID: 37883285. DOI: 10.1109/tnsre.2023.3327740.
Abstract
Imbuing emotional intent is a crucial modulator of music improvisation during active instrument playing. However, most neural investigations of improvisation have been conducted without considering the emotional context. This study attempts to identify reproducible spatio-spectral electroencephalogram (EEG) oscillations of emotional intent using a data-driven independent component analysis framework in an ecological, multiday piano-playing experiment. Using a four-day, 32-channel EEG dataset from 10 professional players, we showed that EEG patterns were substantially affected by both intra- and inter-individual variability underlying the emotional intent of the dichotomized valence (positive vs. negative) and arousal (high vs. low) categories. Fewer than half (3-4) of the 10 participants exhibited day-reproducible (≥3 days) spectral modulations at the right frontal beta in response to the valence contrast, and at the frontal central gamma and the superior parietal alpha for the arousal contrast. In particular, the frontal engagement facilitates a better understanding of the frontal cortex (e.g., dorsolateral prefrontal cortex and anterior cingulate cortex) and its role in mediating emotional processes and expressing spectral signatures that are relatively resistant to natural EEG variability. Such ecologically valid EEG findings may inform the development of a brain-computer music interface capable of guiding training, performance, and appreciation of emotional improvisatory states, or of actuating music interaction via emotional context.
4. Bianco R, Hall ET, Pearce MT, Chait M. Implicit auditory memory in older listeners: From encoding to 6-month retention. Curr Res Neurobiol 2023;5:100115. PMID: 38020808. PMCID: PMC10663129. DOI: 10.1016/j.crneur.2023.100115.
Abstract
Any listening task, from sound recognition to sound-based communication, rests on auditory memory, which is known to decline in healthy ageing. However, how this decline maps onto the multiple components and stages of auditory memory remains poorly characterised. In an online unsupervised longitudinal study, we tested ageing effects on implicit auditory memory for rapid tone patterns. The test required participants (younger adults aged 20-30 and older adults aged 60-70) to respond quickly to rapid, regularly repeating patterns emerging from random sequences. Patterns were novel in most trials (REGn), but, unbeknownst to the participants, a few distinct patterns reoccurred identically throughout the sessions (REGr). After correcting for processing speed, the response times (RT) to REGn should reflect the information held in echoic and short-term memory before the pattern is detected; long-term memory formation and retention should be reflected in the RT advantage (RTA) for REGr over REGn, which is expected to grow with exposure. Older participants were slower than younger adults in detecting REGn and exhibited a smaller RTA to REGr. Computational simulations using a model of auditory sequence memory indicated that these effects reflect age-related limitations in both early and long-term memory stages. In contrast to the ageing-related accelerated forgetting of verbal material, older adults here maintained stable memory traces for REGr patterns up to 6 months after the first exposure. The results demonstrate that ageing is associated with reduced short-term memory and long-term memory formation for tone patterns, but not with forgetting, even over surprisingly long timescales.
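The RT advantage described in this abstract can be illustrated with a minimal sketch: RTA is the difference between mean detection times for novel (REGn) and reoccurring (REGr) patterns. The `baseline_rt` argument is a hypothetical stand-in for the paper's processing-speed correction, and the toy response times are invented for illustration.

```python
import numpy as np

def rt_advantage(rt_regn, rt_regr, baseline_rt=0.0):
    """RT advantage (RTA): how much faster reoccurring patterns (REGr)
    are detected than novel ones (REGn). The baseline subtraction is a
    hypothetical processing-speed correction; with a shared baseline it
    cancels out, leaving mean(REGn) - mean(REGr)."""
    corrected_regn = np.mean(rt_regn) - baseline_rt
    corrected_regr = np.mean(rt_regr) - baseline_rt
    return corrected_regn - corrected_regr

# Toy detection times in seconds: reoccurring patterns found ~170 ms faster.
regn = np.array([1.80, 1.75, 1.90])
regr = np.array([1.70, 1.60, 1.65])
print(rt_advantage(regn, regr))  # ≈ 0.167 s
```

A growing RTA over sessions would index long-term memory formation, while the absolute RT to REGn indexes the earlier memory stages, as the abstract lays out.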
Affiliation(s)
- Roberta Bianco
- Ear Institute, University College London, WC1X 8EE, London, United Kingdom
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, 00161, Rome, Italy
- Edward T.R. Hall
- School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, United Kingdom
- Marcus T. Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, 8000, Aarhus C, Denmark
- Maria Chait
- Ear Institute, University College London, WC1X 8EE, London, United Kingdom
5. Youssofzadeh V, Conant L, Stout J, Ustine C, Humphries C, Gross WL, Shah-Basak P, Mathis J, Awe E, Allen L, DeYoe EA, Carlson C, Anderson CT, Maganti R, Hermann B, Nair VA, Prabhakaran V, Meyerand B, Binder JR, Raghavan M. Late dominance of the right hemisphere during narrative comprehension. Neuroimage 2022;264:119749. PMID: 36379420. PMCID: PMC9772156. DOI: 10.1016/j.neuroimage.2022.119749.
Abstract
PET and fMRI studies suggest that auditory narrative comprehension is supported by a bilateral multilobar cortical network. The superior temporal resolution of magnetoencephalography (MEG) makes it an attractive tool to investigate the dynamics of how different neuroanatomic substrates engage during narrative comprehension. Using beta-band power changes as a marker of cortical engagement, we studied MEG responses during an auditory story comprehension task in 31 healthy adults. The protocol consisted of two runs, each interleaving 7 blocks of the story comprehension task with 15 blocks of an auditorily presented math task as a control for phonological processing, working memory, and attention processes. Sources at the cortical surface were estimated with a frequency-resolved beamformer. Beta-band power was estimated in the frequency range of 16-24 Hz over 1-sec epochs starting from 400 msec after stimulus onset until the end of a story or math problem presentation. These power estimates were compared to 1-second epochs of data before the stimulus block onset. The task-related cortical engagement was inferred from beta-band power decrements. Group-level source activations were statistically compared using non-parametric permutation testing. A story-math contrast of beta-band power changes showed greater bilateral cortical engagement within the fusiform gyrus, inferior and middle temporal gyri, parahippocampal gyrus, and left inferior frontal gyrus (IFG) during story comprehension. A math-story contrast of beta power decrements showed greater bilateral but left-lateralized engagement of the middle frontal gyrus and superior parietal lobule. 
The evolution of cortical engagement during five temporal windows across the presentation of stories showed significant involvement during the first interval of the narrative of bilateral opercular and insular regions as well as the ventral and lateral temporal cortex, extending more posteriorly on the left and medially on the right. Over time, there continued to be sustained right anterior ventral temporal engagement, with increasing involvement of the right anterior parahippocampal gyrus, STG, MTG, posterior superior temporal sulcus, inferior parietal lobule, frontal operculum, and insula, while left hemisphere engagement decreased. Our findings are consistent with prior imaging studies of narrative comprehension, but in addition, they demonstrate increasing right-lateralized engagement over the course of narratives, suggesting an important role for these right-hemispheric regions in semantic integration as well as social and pragmatic inference processing.
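As a rough illustration of the beta-band power decrement used above as a marker of cortical engagement, the sketch below estimates 16-24 Hz power in 1-second epochs with a plain periodogram and expresses the task epoch relative to baseline. This is an assumption-laden simplification: the study used a frequency-resolved beamformer on source-space MEG, whereas the synthetic signals and the simple FFT estimator here are invented for illustration.

```python
import numpy as np

def band_power(signal, fs, fmin=16.0, fmax=24.0):
    """Mean power in a frequency band, via a plain periodogram (np.fft)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

def beta_decrement(task_epoch, baseline_epoch, fs):
    """Relative beta-band power change; negative values mark the
    task-related power decrease interpreted as cortical engagement."""
    p_task = band_power(task_epoch, fs)
    p_base = band_power(baseline_epoch, fs)
    return (p_task - p_base) / p_base

# Toy 1-s epochs at 256 Hz: a 20 Hz rhythm that weakens during the task.
fs = 256
t = np.arange(fs) / fs
baseline = np.sin(2 * np.pi * 20 * t)
task = 0.5 * np.sin(2 * np.pi * 20 * t)
print(beta_decrement(task, baseline, fs))  # ≈ -0.75 (beta power drops by 75%)
```

Halving the amplitude quarters the power, so the decrement is -0.75; in the study, such per-epoch decrements were aggregated per source and contrasted across conditions with permutation testing.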
Affiliation(s)
- Vahab Youssofzadeh
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA (corresponding author)
- Lisa Conant
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Jeffrey Stout
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Candida Ustine
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- William L. Gross
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Anesthesiology, Medical College of Wisconsin, Milwaukee, WI, USA
- Jed Mathis
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Radiology, Medical College of Wisconsin, Milwaukee, WI, USA
- Elizabeth Awe
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Linda Allen
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Edgar A. DeYoe
- Radiology, Medical College of Wisconsin, Milwaukee, WI, USA
- Chad Carlson
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Rama Maganti
- Neurology, University of Wisconsin-Madison, Madison, WI, USA
- Bruce Hermann
- Neurology, University of Wisconsin-Madison, Madison, WI, USA
- Veena A. Nair
- Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Vivek Prabhakaran
- Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Psychiatry, University of Wisconsin-Madison, Madison, WI, USA
- Beth Meyerand
- Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Manoj Raghavan
- Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
6. Billig AJ, Lad M, Sedley W, Griffiths TD. The hearing hippocampus. Prog Neurobiol 2022;218:102326. PMID: 35870677. PMCID: PMC10510040. DOI: 10.1016/j.pneurobio.2022.102326.
Abstract
The hippocampus has a well-established role in spatial and episodic memory, but a broader function has been proposed, including aspects of perception and relational processing. Neural bases of sound analysis have been described in the pathway to auditory cortex, but the wider networks supporting auditory cognition are still being established. We review what is known about the role of the hippocampus in processing auditory information, and how the hippocampus itself is shaped by sound. In examining imaging, recording, and lesion studies in species from rodents to humans, we uncover a hierarchy of hippocampal responses to sound, including during passive exposure, active listening, and the learning of associations between sounds and other stimuli. We describe how the hippocampus's connectivity and computational architecture allow it to track and manipulate auditory information - whether in the form of speech, music, or environmental, emotional, or phantom sounds. Functional and structural correlates of auditory experience are also identified. The extent of auditory-hippocampal interactions is consistent with the view that the hippocampus makes broad contributions to perception and cognition, beyond spatial and episodic memory. A deeper understanding of these interactions may unlock applications, including entraining hippocampal rhythms to support cognition and intervening in the links between hearing loss and dementia.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, USA
7. Sihvonen AJ, Soinila S, Särkämö T. Post-stroke enriched auditory environment induces structural connectome plasticity: secondary analysis from a randomized controlled trial. Brain Imaging Behav 2022;16:1813-1822. PMID: 35352235. PMCID: PMC9279272. DOI: 10.1007/s11682-022-00661-6.
Abstract
Post-stroke neuroplasticity and cognitive recovery can be enhanced by multimodal stimulation via environmental enrichment. In this vein, recent studies have shown that an enriched sound environment (i.e., listening to music) during the subacute post-stroke stage improves cognitive outcomes compared to standard care. The beneficial effects of post-stroke music listening are further pronounced when the music contains singing, which enhances language recovery coupled with structural and functional connectivity changes within the language network. Outside the language network, however, virtually nothing is known about the effects of an enriched sound environment on the structural connectome of the recovering post-stroke brain. Here, we report secondary outcomes from a single-blind randomized controlled trial (NCT01749709) in patients with ischaemic or haemorrhagic stroke (N = 38) who were randomly assigned to listen to vocal music, instrumental music, or audiobooks during the first 3 post-stroke months. Utilizing the longitudinal diffusion-weighted MRI data from the trial, the present study aimed to determine whether the music listening interventions induce changes in the structural white matter connectome compared to the control audiobook intervention. Both the vocal and instrumental music groups showed longitudinal increases in quantitative anisotropy in multiple left dorsal and ventral tracts, in the corpus callosum, and in the right hemisphere compared to the audiobook group. The audiobook group did not show increased structural connectivity compared to either music group. This study shows that listening to music, either vocal or instrumental, promotes widespread structural connectivity changes in the post-stroke brain, providing fertile ground for functional restoration.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- School of Health and Rehabilitation Sciences, Queensland Aphasia Research Centre and UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
8. Sihvonen AJ, Pitkäniemi A, Leo V, Soinila S, Särkämö T. Resting-state language network neuroplasticity in post-stroke music listening: A randomized controlled trial. Eur J Neurosci 2021;54:7886-7898. PMID: 34763370. DOI: 10.1111/ejn.15524.
Abstract
Recent evidence suggests that post-stroke vocal music listening can aid language recovery, but the network-level functional neuroplasticity mechanisms of this effect are unknown. Here, we sought to determine whether the improved language recovery observed after post-stroke listening to vocal music is driven by changes in longitudinal resting-state functional connectivity within the language network. Using data from a single-blind randomized controlled trial on stroke patients (N = 38), we compared the effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on changes in resting-state functional connectivity within the language network and their correlation with improved language skills and verbal memory during the first 3 months post-stroke. From the acute to the 3-month stage, the vocal music and instrumental music groups increased functional connectivity between a cluster comprising the left inferior parietal areas and the language network more than the audiobook group. However, the functional connectivity increase in this cluster correlated with improved verbal memory only in the vocal music group. This study shows that listening to vocal music post-stroke promotes recovery of verbal memory by inducing longitudinal functional connectivity changes in the language network. Our results conform to the variable neurodisplacement theory underpinning aphasia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Centre for Clinical Research, The University of Queensland, Brisbane, Queensland, Australia
- Anni Pitkäniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
9. Musical components important for the Mozart K448 effect in epilepsy. Sci Rep 2021;11:16490. PMID: 34531410. PMCID: PMC8446029. DOI: 10.1038/s41598-021-95922-7.
Abstract
There is growing evidence for the efficacy of music, specifically Mozart’s Sonata for Two Pianos in D Major (K448), at reducing ictal and interictal epileptiform activity. Nonetheless, little is known about the mechanism underlying this beneficial “Mozart K448 effect” for persons with epilepsy. Here, we measured the influence of K448 on intracranial interictal epileptiform discharges (IEDs) in sixteen subjects undergoing intracranial monitoring for refractory focal epilepsy. We found reduced IEDs during the original version of K448 after at least 30 s of exposure. Nonsignificant IED rate reductions were observed in all brain regions apart from the bilateral frontal cortices, where we observed increased frontal theta power during transitions from prolonged musical segments. All other presented musical stimuli were associated with nonsignificant IED alterations. These results suggest that the “Mozart K448 effect” depends on the duration of exposure and may preferentially modulate activity in frontal emotional networks, providing insight into the mechanism underlying this response. Our findings encourage the continued evaluation of Mozart’s K448 as a noninvasive, non-pharmacological intervention for refractory epilepsy.
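The IED-rate comparison underlying these results can be sketched as a simple event count per time window. The spike times and window boundaries below are invented for illustration (the study's statistics were computed on intracranial recordings); the "music" window starts well after the ≥30 s exposure threshold the abstract describes.

```python
def ied_rate(event_times, t_start, t_stop):
    """IED rate in events/minute within [t_start, t_stop), given discharge
    times in seconds."""
    n_events = sum(1 for t in event_times if t_start <= t < t_stop)
    return n_events / ((t_stop - t_start) / 60.0)

# Toy discharge times (s): a 60 s baseline, then K448 playing from 60 s on.
spikes = [5, 12, 20, 33, 41, 55, 100, 130]
baseline_rate = ied_rate(spikes, 0, 60)    # pre-music window
music_rate = ied_rate(spikes, 90, 150)     # music window, after >=30 s exposure
print(baseline_rate, music_rate)  # → 6.0 2.0
```

A within-subject contrast of such windowed rates (baseline vs. during-music, per electrode region) is the kind of quantity on which the duration-of-exposure effect was assessed.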
10. Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021;216:104847. PMID: 34311153. DOI: 10.1016/j.cognition.2021.104847.
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
11. Sihvonen AJ, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S, Särkämö T. Vocal music listening enhances post-stroke language network reorganization. eNeuro 2021;8:ENEURO.0158-21.2021. PMID: 34140351. PMCID: PMC8266215. DOI: 10.1523/eneuro.0158-21.2021.
Abstract
Listening to vocal music has recently been shown to improve language recovery in stroke survivors. The neuroplasticity mechanisms supporting this effect are, however, still unknown. Using data from a three-arm single-blind randomized controlled trial including acute stroke patients (N = 38) and a 3-month follow-up, we set out to compare the neuroplasticity effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on both brain activity and structural connectivity of the language network. Using deterministic tractography, we show that the 3-month intervention enhanced the microstructural properties of the left frontal aslant tract (FAT) in the vocal music group compared to the audiobook group. Importantly, this increase in the strength of the structural connectivity of the left FAT correlated with improved language skills. Analyses of stimulus-specific activation changes showed that, from the acute to the 3-month post-stroke stage, the vocal music group exhibited increased activations in the frontal termination points of the left FAT during vocal music listening compared to the audiobook group. This increased activity correlated with the structural neuroplasticity changes in the left FAT. These results suggest that the beneficial effects of vocal music listening on post-stroke language recovery are underpinned by structural neuroplasticity changes within the language network, and they extend our understanding of music-based interventions in stroke rehabilitation.
Significance statement
Post-stroke language deficits have a devastating effect on patients and their families. Current treatments yield highly variable outcomes, and the evidence for their long-term effects is limited. Patients often receive insufficient treatment, delivered predominantly outside the optimal time window for brain plasticity. Post-stroke vocal music listening improves language outcomes, an effect underpinned by neuroplasticity changes within the language network. Vocal music listening provides a complementary rehabilitation strategy that could be safely implemented in the early stages of stroke rehabilitation and seems to specifically target language symptoms and the recovering language network.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Centre for Clinical Research, The University of Queensland, Australia
- Pablo Ripollés
- Department of Psychology, New York University, USA
- Music and Audio Research Laboratory, New York University, USA
- Center for Language, Music and Emotion, New York University, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Riitta Parkkola
- Department of Radiology, Turku University Hospital and University of Turku, Finland
- Antoni Rodríguez-Fornells
- Department of Cognition, Development and Education Psychology, University of Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
- Seppo Soinila
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
12
Recognition of musical emotions in the behavioral variant of frontotemporal dementia. Revista Colombiana de Psiquiatría (English ed.) 2021; 50:74-81. [PMID: 34099256] [DOI: 10.1016/j.rcpeng.2020.01.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/14/2019] [Accepted: 01/09/2020] [Indexed: 11/20/2022]
Abstract
INTRODUCTION: Multiple investigations have revealed that patients with the behavioral variant of frontotemporal dementia (bvFTD) have difficulty recognizing emotional signals in multiple processing modalities (e.g., faces, prosody). Few studies have evaluated the recognition of musical emotions in these patients. This research aims to evaluate the ability of subjects with bvFTD to recognize musical stimuli with positive and negative emotions, in comparison with healthy subjects. METHODS: Patients with bvFTD (n=12) and healthy control participants (n=24) underwent a test of musical emotion recognition: 56 fragments of piano music were played in random order, 14 for each of the emotions (happiness, sadness, fear, and peacefulness). RESULTS: The subjects with bvFTD gave a mean of 23.6 correct answers (42.26%), in contrast to the control subjects, whose mean was 36.3 (64.8%). Statistically significant differences were found for each of the evaluated musical emotions and for the total test score (P<.01). The within-group analysis showed that both groups had greater difficulty recognizing negative musical emotions (sadness, fear), with the subjects with bvFTD performing worse. CONCLUSIONS: Our results indicate that the recognition of musical stimuli with positive (happiness, peacefulness) and negative (sadness, fear) emotions is compromised in patients with bvFTD, and that negative musical emotions are the most difficult for these individuals to process.
13
Recognition of musical emotions in the behavioral variant of frontotemporal dementia. Revista Colombiana de Psiquiatría 2021; 50:74-81. [PMID: 33735039] [DOI: 10.1016/j.rcp.2020.01.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 09/14/2019] [Accepted: 01/09/2020] [Indexed: 11/22/2022]
Abstract
INTRODUCTION: Multiple investigations have revealed that patients with the behavioral variant of frontotemporal dementia (bvFTD) have difficulty recognizing emotional signals in multiple processing modalities (e.g., faces, prosody). Few studies have evaluated the recognition of musical emotions in these patients. This research aims to evaluate the ability of subjects with bvFTD to recognize musical stimuli with positive and negative emotions, in comparison with healthy subjects. METHODS: Patients with bvFTD (n=12) and healthy control participants (n=24) underwent a test of musical emotion recognition: 56 fragments of piano music were played in random order, 14 for each of the emotions (happiness, sadness, fear, and peacefulness). RESULTS: The subjects with bvFTD gave a mean of 23.6 correct answers (42.26%), in contrast to the control subjects, whose mean was 36.3 (64.8%). Statistically significant differences were found for each of the evaluated musical emotions and for the total test score (P<.01). The within-group analysis showed that both groups had greater difficulty recognizing negative musical emotions (sadness, fear), with the subjects with bvFTD performing worse. CONCLUSIONS: Our results indicate that the recognition of musical stimuli with positive (happiness, peacefulness) and negative (sadness, fear) emotions is compromised in patients with bvFTD, and that negative musical emotions are the most difficult for these individuals to process.
14
Frontotemporal dementia, music perception and social cognition share neurobiological circuits: A meta-analysis. Brain Cogn 2021; 148:105660. [PMID: 33421942] [DOI: 10.1016/j.bandc.2020.105660] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 08/20/2020] [Revised: 10/27/2020] [Accepted: 11/26/2020] [Indexed: 01/18/2023]
Abstract
Frontotemporal dementia (FTD) is a neurodegenerative disease that presents with profound changes in social cognition. Music might be a sensitive probe for social cognition abilities, but the underlying neurobiological substrates are unclear. We performed a meta-analysis of voxel-based morphometry studies in FTD patients and of functional MRI studies of music perception and social cognition tasks in cognitively normal controls to identify robust patterns of atrophy (FTD) or activation (music perception or social cognition). Conjunction analyses were performed to identify overlapping brain regions. In total, 303 articles were included: 53 for FTD (n = 1153 patients, 42.5% female; 1337 controls, 53.8% female), 28 for music perception (n = 540, 51.8% female) and 222 for social cognition in controls (n = 5664, 50.2% female). We observed considerable overlap between the atrophy patterns associated with FTD and the functional activations associated with music perception and social cognition, mostly encompassing the ventral language network. We further observed overlap across all three modalities in mesolimbic, basal forebrain and striatal regions. The results of our meta-analysis suggest that music perception and social cognition share neurobiological circuits that are affected in FTD. This supports the idea that music might be a sensitive probe for social cognition abilities, with implications for diagnosis and monitoring.
15
Srinivasan N, Bishop J, Yekovich R, Rosenfield DB, Helekar SA. Differential Activation and Functional Plasticity of Multimodal Areas Associated with Acquired Musical Skill. Neuroscience 2020; 446:294-303. [PMID: 32818600] [DOI: 10.1016/j.neuroscience.2020.08.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/02/2020] [Revised: 07/27/2020] [Accepted: 08/10/2020] [Indexed: 10/23/2022]
Abstract
Training of a musical skill is known to produce a distributed neural representation of the ability to perceive music and perform musical tasks. In the present study, we tested the hypothesis that the audiovisual perception of music involves the wider activation of multimodal sensory and sensorimotor structures in the brain, including those containing mirror neurons. We mapped the activation of brain areas during passive listening and viewing of the first 40 s of "Ode to Joy" being played on the piano by an expert pianist. To do this, we performed brain functional magnetic resonance imaging during the presentation of six different stimulus contrasts pertaining to that melody, in a pseudo-randomized order. Group data analysis in musically trained and untrained adults showed robust activation in broadly distributed occipitotemporal, parietal and frontal areas in trained subjects, and much more restricted activation in untrained subjects. A visual stimulus contrast focusing on the visual motion percept of moving fingers on piano keys revealed selective bilateral activation of a locus corresponding to area V5/MT, which was significantly more pronounced in trained subjects and showed partial linear dependence on the duration of training on the left side. Quantitative analysis of individual brain volumes confirmed a significantly greater and wider spread of activation in trained compared to untrained subjects. These findings support the view that audiovisual perception of music and musical gestures in trained musicians involves an expanded and widely distributed neural representation formed through experience-dependent plasticity.
Affiliation(s)
- N Srinivasan
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- J Bishop
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- R Yekovich
- Shepherd School of Music, Rice University, Houston, TX, United States
- D B Rosenfield
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States; Shepherd School of Music, Rice University, Houston, TX, United States
- S A Helekar
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
16
Shen YW, Lin YP. Challenge for Affective Brain-Computer Interfaces: Non-stationary Spatio-spectral EEG Oscillations of Emotional Responses. Front Hum Neurosci 2019; 13:366. [PMID: 31736727] [PMCID: PMC6831623] [DOI: 10.3389/fnhum.2019.00366] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Received: 07/30/2019] [Accepted: 09/27/2019] [Indexed: 11/13/2022]
Abstract
Electroencephalogram (EEG)-based affective brain-computer interfaces (aBCIs) have been attracting ever-growing interest and research resources. Whereas most previous neuroscience studies have focused on single-day/-session recordings and sensor-level analysis, less effort has been invested in assessing the fundamental nature of the non-stationary EEG oscillations underlying emotional responses across days and individuals. This work thus aimed to use a data-driven blind source separation method, namely independent component analysis (ICA), to derive emotion-relevant spatio-spectral EEG source oscillations and assess the extent of their non-stationarity. To this end, we conducted an 8-day music-listening experiment (with sessions roughly interspersed over 2 months) and recorded whole-scalp 30-channel EEG data from 10 subjects. Across this large dataset (80 sessions), EEG non-stationarity was clearly revealed in the numbers and locations of the brain sources of interest, as well as in their spectral modulation by emotional responses. Fewer than half of the subjects (two to four) showed the same relatively day-stationary (source reproducibility >6 days) spatio-spectral tendency towards one of the binary valence and arousal states. This work substantially advances previous work by exploiting intra- and inter-individual EEG variability in an ecological multiday scenario. Such EEG non-stationarity may inevitably present a great challenge for the development of accurate, robust, and generalized emotion-classification models.
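The blind source separation step this abstract relies on can be illustrated with a minimal, hypothetical sketch: FastICA applied to synthetic multichannel mixtures standing in for scalp EEG. The signals, mixing matrix, and channel count below are invented for illustration only (the study itself used 30-channel recordings and per-session ICA), and scikit-learn's FastICA is assumed to be available.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Three synthetic "source" oscillations standing in for brain sources.
s1 = np.sin(2 * np.pi * 1.0 * t)            # slow rhythm
s2 = np.sign(np.sin(2 * np.pi * 3.0 * t))   # square wave
s3 = 2 * (t * 5 - np.floor(t * 5 + 0.5))    # sawtooth
S = np.c_[s1, s2, s3] + 0.05 * rng.standard_normal((t.size, 3))

# Mix the sources into "channels" with a random mixing matrix,
# mimicking volume conduction from sources to scalp electrodes.
A = rng.standard_normal((3, 3))
X = S @ A.T

# Blind source separation: recover the sources without knowing A.
ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)

# Match each recovered component to its best-correlated true source
# (ICA recovers sources only up to sign, scale, and permutation).
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:3, 3:])
print(corr.max(axis=1))
```

In a real aBCI pipeline, the non-stationarity question in the abstract then becomes whether components with matching scalp maps and spectra reappear across sessions, which this toy example does not attempt to model.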
Affiliation(s)
- Yi-Wei Shen
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan
- Yuan-Pin Lin
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung, Taiwan
17
Angulo-Perkins A, Concha L. Discerning the functional networks behind processing of music and speech through human vocalizations. PLoS One 2019; 14:e0222796. [PMID: 31600231] [PMCID: PMC6786620] [DOI: 10.1371/journal.pone.0222796] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Received: 03/20/2019] [Accepted: 09/06/2019] [Indexed: 01/28/2023]
Abstract
A fundamental question regarding music processing is its degree of independence from speech processing, in terms of their underlying neuroanatomy and the influence of cognitive traits and abilities. Although a straight answer to that question is still lacking, a large number of studies have described where in the brain and in which contexts (tasks, stimuli, populations) this independence is, or is not, observed. We examined the independence between music and speech processing using functional magnetic resonance imaging and a stimulation paradigm with different human vocal sounds produced by the same voice. The stimuli were grouped as Speech (spoken sentences), Hum (hummed melodies), and Song (sung sentences); the sentences used in the Speech and Song categories were the same, as were the melodies used in the two musical categories. Each category had a scrambled counterpart, which allowed us to render speech and melody unintelligible while preserving global amplitude and frequency characteristics. Finally, we included a group of musicians to evaluate the influence of musical expertise. Similar global patterns of cortical activity were related to all sound categories compared to baseline, but important differences were evident. Regions more sensitive to musical sounds were located bilaterally in the anterior and posterior superior temporal gyrus (planum polare and planum temporale), the right supplementary and premotor areas, and the inferior frontal gyrus. However, only the temporal areas and supplementary motor cortex remained music-selective after subtracting brain activity related to the scrambled stimuli. Speech-selective regions mainly affected by the intelligibility of the stimuli were observed in the left pars opercularis and the anterior portion of the middle temporal gyrus. We did not find differences between musicians and non-musicians. Our results confirm music-selective cortical regions in associative cortices, independent of previous musical training.
Affiliation(s)
- Arafat Angulo-Perkins
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
- Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada
18
Sihvonen AJ, Särkämö T, Rodríguez-Fornells A, Ripollés P, Münte TF, Soinila S. Neural architectures of music - Insights from acquired amusia. Neurosci Biobehav Rev 2019; 107:104-114. [PMID: 31479663] [DOI: 10.1016/j.neubiorev.2019.08.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Received: 03/13/2019] [Revised: 08/27/2019] [Accepted: 08/29/2019] [Indexed: 12/27/2022]
Abstract
The ability to perceive and produce music is a quintessential element of human life, present in all known cultures. Modern functional neuroimaging has revealed that music listening activates a large-scale bilateral network of cortical and subcortical regions in the healthy brain. Even the most accurate structural studies do not reveal which brain areas are critical and causally linked to music processing. Such questions may be answered by analysing the effects of focal brain lesions on patients' ability to perceive music. In this sense, acquired amusia after stroke provides a unique opportunity to investigate the neural architectures crucial for normal music processing. Based on the first large-scale longitudinal studies of stroke-induced amusia using modern multimodal magnetic resonance imaging (MRI) techniques, such as advanced lesion-symptom mapping, grey and white matter morphometry, tractography and functional connectivity, we discuss the neural structures critical for music processing, consider music processing in light of the dual-stream model in the right hemisphere, and propose a neural model for acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Department of Neurosciences, University of Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Antoni Rodríguez-Fornells
- Department of Cognition, University of Barcelona, Cognition & Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Pablo Ripollés
- Department of Psychology, New York University and Music and Audio Research Laboratory, New York University, USA
- Thomas F Münte
- Department of Neurology and Institute of Psychology II, University of Lübeck, Germany
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
19
Zarei SA, Sheibani V, Mansouri FA. Interaction of music and emotional stimuli in modulating working memory in macaque monkeys. Am J Primatol 2019; 81:e22999. [DOI: 10.1002/ajp.22999] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Received: 02/25/2019] [Revised: 04/25/2019] [Accepted: 05/12/2019] [Indexed: 11/07/2022]
Affiliation(s)
- Shahab A. Zarei
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Farshad A. Mansouri
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
20
Wired for musical rhythm? A diffusion MRI-based study of individual differences in music perception. Brain Struct Funct 2019; 224:1711-1722. [DOI: 10.1007/s00429-019-01868-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Received: 07/15/2018] [Accepted: 03/25/2019] [Indexed: 02/07/2023]
21
Recruitment of the motor system during music listening: An ALE meta-analysis of fMRI data. PLoS One 2018; 13:e0207213. [PMID: 30452442] [PMCID: PMC6242316] [DOI: 10.1371/journal.pone.0207213] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Received: 04/27/2018] [Accepted: 10/26/2018] [Indexed: 12/04/2022]
Abstract
Several neuroimaging studies have shown that listening to music activates brain regions that reside in the motor system, even when there is no overt movement. However, many of these studies report the activation of varying motor system areas, including the primary motor cortex, supplementary motor area, dorsal and ventral premotor areas and parietal regions. In order to examine the specific roles played by various motor regions during music perception, we used activation likelihood estimation (ALE) to conduct a meta-analysis of the neuroimaging literature on passive music listening. After an extensive search of the literature, 42 studies were analyzed, comprising 386 unique subjects who contributed a total of 694 activation foci. As expected, auditory activations were found in the bilateral superior temporal gyrus, transverse temporal gyrus, insula, pyramis, bilateral precentral gyrus, and bilateral medial frontal gyrus. We also saw widespread activation of motor networks, including the left and right lateral premotor cortex, right primary motor cortex, and the left cerebellum. These results suggest a central role of the motor system in music and rhythm perception. We discuss these findings in the context of the Action Simulation for Auditory Prediction (ASAP) model and other predictive coding accounts of brain function.
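The core of the ALE procedure named in this abstract can be sketched in a deliberately simplified, hypothetical form: each reported focus is blurred with a Gaussian kernel to give a modeled-activation (MA) map per experiment, and the ALE map is the voxelwise probabilistic union of those maps. The grid size, coordinates, and kernel width below are invented toy values; real ALE works in MNI space with a sample-size-dependent FWHM and permutation-based thresholding, none of which is modeled here.

```python
import numpy as np

def gaussian_ma(shape, foci, sigma):
    """Modeled-activation map for one experiment: at each voxel, the
    maximum Gaussian 'activation probability' over that experiment's
    reported foci (so multiple foci cannot exceed 1 per experiment)."""
    grid = np.indices(shape).reshape(3, -1).T  # all voxel coordinates
    ma = np.zeros(len(grid))
    for f in foci:
        d2 = ((grid - np.asarray(f)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma.reshape(shape)

shape = (20, 20, 20)          # toy brain grid (hypothetical)
experiments = [               # one focus list per "study" (hypothetical)
    [(10, 10, 10)],
    [(10, 11, 10), (3, 3, 3)],
    [(9, 10, 11)],
]

# ALE value = probability that at least one experiment activates the
# voxel, treating MA maps as independent: 1 - prod(1 - MA_i).
ma_maps = [gaussian_ma(shape, foci, sigma=2.0) for foci in experiments]
ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)

print(ale[10, 10, 10], ale[0, 0, 19])  # convergent region vs. far corner
```

The design choice that makes ALE a convergence measure is visible in the union step: a voxel scores highly only when several independent experiments place probability mass there, not when a single study reports many nearby foci.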
22
Mansouri FA, Acevedo N, Illipparampil R, Fehring DJ, Fitzgerald PB, Jaberzadeh S. Interactive effects of music and prefrontal cortex stimulation in modulating response inhibition. Sci Rep 2017; 7:18096. [PMID: 29273796] [PMCID: PMC5741740] [DOI: 10.1038/s41598-017-18119-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Received: 01/31/2017] [Accepted: 12/06/2017] [Indexed: 12/30/2022]
Abstract
Influential hypotheses propose that alterations in emotional state influence decision processes and the executive control of behavior. Both music and transcranial direct current stimulation (tDCS) of the prefrontal cortex affect emotional state; however, the interactive effects of music and tDCS on executive functions remain unknown. Learning to inhibit inappropriate responses is an important aspect of executive control, guided by assessing decision outcomes such as errors. We found that high-tempo music, but not low-tempo music or low-level noise, significantly influenced the learning and implementation of inhibitory control. In addition, a brief period of tDCS over the prefrontal cortex specifically interacted with high-tempo music and altered its effects on executive functions. Measuring participants' event-related autonomic and arousal responses indicated that exposure to task demands and practice led to a decline in the arousal response to the decision outcome, and high-tempo music enhanced such practice-related processes. However, tDCS specifically moderated the effect of high-tempo music on the arousal response to errors and concomitantly restored learning and improvement in executive functions. Here, we show that tDCS and music interactively influence the learning and implementation of inhibitory control. Our findings indicate that alterations in the arousal-emotional response to decision outcomes might underlie these interactive effects.
Affiliation(s)
- Farshad Alizadeh Mansouri
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia; ARC Centre of Excellence in Integrative Brain Function, Monash University, Victoria, Australia
- Nicola Acevedo
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia
- Rosin Illipparampil
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia
- Daniel J Fehring
- Department of Physiology, Cognitive Neuroscience Laboratory, Monash Biomedicine Discovery Institute, Monash University, Victoria, 3800, Australia; ARC Centre of Excellence in Integrative Brain Function, Monash University, Victoria, Australia
- Paul B Fitzgerald
- Monash Alfred Psychiatry Research Centre, Central Clinical School, Monash University and the Alfred Hospital, Victoria, Australia
- Shapour Jaberzadeh
- Department of Physiotherapy, Non-invasive Brain Stimulation & Neuroplasticity Laboratory, Monash University, Victoria, 3199, Australia
23
Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia. Cortex 2017; 97:255-273. [DOI: 10.1016/j.cortex.2017.09.028] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Received: 03/23/2017] [Revised: 06/08/2017] [Accepted: 09/29/2017] [Indexed: 11/17/2022]
24
Sihvonen AJ, Särkämö T, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S. Functional neural changes associated with acquired amusia across different stages of recovery after stroke. Sci Rep 2017; 7:11390. [PMID: 28900231] [PMCID: PMC5595783] [DOI: 10.1038/s41598-017-11841-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Received: 05/16/2017] [Accepted: 08/30/2017] [Indexed: 11/09/2022]
Abstract
Brain damage causing acquired amusia disrupts the functional music processing system, creating a unique opportunity to investigate the critical neural architectures of musical processing in the brain. In this longitudinal fMRI study of stroke patients (N = 41) with a 6-month follow-up, we used natural vocal music (sung with lyrics) and instrumental music stimuli to uncover the brain activation and functional network connectivity changes associated with acquired amusia and its recovery. In the acute stage, amusic patients exhibited decreased activation in right superior temporal areas compared to non-amusic patients during instrumental music listening. During the follow-up, the activation deficits expanded to comprise a widespread bilateral frontal, temporal, and parietal network. The amusics showed smaller activation deficits to vocal music, suggesting preserved processing of singing in the amusic brain. Compared to non-recovered amusics, recovered amusics showed increased activation to instrumental music in bilateral frontoparietal areas at 3 months and in right middle and inferior frontal areas at 6 months. Amusia recovery was also associated with increased functional connectivity to instrumental music in right and left frontoparietal attention networks. Overall, our findings reveal the dynamic nature of the deficient activation and connectivity patterns in acquired amusia and highlight the role of dorsal networks in amusia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, 20520, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, 10003, NY, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014, Helsinki, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, 20521, Turku, Finland
- Riitta Parkkola
- Department of Radiology, Turku University and Turku University Hospital, 20521, Turku, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, 20521, Turku, Finland
25
Shi Y, Zeng W, Tang X, Kong W, Yin J. An improved multi-objective optimization-based CICA method with data-driver temporal reference for group fMRI data analysis. Med Biol Eng Comput 2017; 56:683-694. [PMID: 28864838] [DOI: 10.1007/s11517-017-1716-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Received: 03/11/2017] [Accepted: 08/17/2017] [Indexed: 11/26/2022]
Abstract
Group independent component analysis (GICA) has been successfully applied to study multi-subject functional magnetic resonance imaging (fMRI) data, where the group independent component (GIC) represents the commonality of all subjects in the group. However, some studies show that the performance of GICA can be improved by incorporating a priori information, which existing GICA methods do not always consider when looking for GICs. In this paper, we propose an improved multi-objective optimization-based constrained independent component analysis (CICA) method that takes advantage of temporal a priori information extracted from all subjects in the group by incorporating it into the computational process of GICA for group fMRI data analysis. The experimental results on simulated and real data show that the activated regions and the time courses detected by the improved CICA method are more accurate in some respects. Moreover, the GIC computed by the improved CICA method has a higher correlation with the corresponding independent component of each subject in the group, which means that the improved CICA method, using the temporal a priori information extracted from the group, better reflects the commonality of the subjects. These results demonstrate that the improved CICA method has its own advantages in fMRI data analysis.
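The core idea in this abstract, a data-driven temporal reference guiding the group decomposition, can be sketched in a minimal, hypothetical form: derive a reference time course from the group (here, simply the dominant principal component of the stacked subject signals) and then keep the independent component that best matches it. All signals, dimensions, and the post-hoc selection step below are invented simplifications; the actual method embeds the reference as a constraint inside a multi-objective ICA optimization rather than selecting components afterwards.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1500)

# A shared "task" time course plus subject-specific nuisance sources.
task = 3.0 * np.sin(2 * np.pi * 0.5 * t)
subjects = []
for _ in range(5):
    nuis1 = np.sign(np.sin(2 * np.pi * 1.7 * t + rng.uniform(0, np.pi)))
    nuis2 = 0.5 * rng.standard_normal(t.size)
    S = np.c_[task, nuis1, nuis2]
    A = rng.standard_normal((3, 3))          # random per-subject mixing
    subjects.append(S @ A.T)                 # mixed "fMRI-like" channels

# Data-driven temporal reference: first principal component of the
# group-concatenated data (the shared task signal dominates it).
X_group = np.hstack(subjects)
X_c = X_group - X_group.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
reference = X_c @ Vt[0]

# Constrained step, approximated here by post-hoc selection: run ICA
# on one subject and keep the component closest to the reference.
S_hat = FastICA(n_components=3, random_state=0).fit_transform(subjects[0])
corrs = np.abs([np.corrcoef(reference, c)[0, 1] for c in S_hat.T])
best = S_hat[:, corrs.argmax()]
print(abs(np.corrcoef(best, task)[0, 1]))
```

The sketch shows why a group-derived reference helps: it resolves the permutation ambiguity of per-subject ICA by singling out the component that all subjects share.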
Affiliation(s)
- Yuhu Shi
- Laboratory of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Weiming Zeng
- Laboratory of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Information Engineering College, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Xiaoyan Tang
- Laboratory of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Wei Kong
- Laboratory of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Jun Yin
- Laboratory of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
|
26
|
Shi Y, Zeng W, Wang N. SCGICAR: Spatial concatenation based group ICA with reference for fMRI data analysis. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 148:137-151. [PMID: 28774436 DOI: 10.1016/j.cmpb.2017.07.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 05/21/2017] [Accepted: 07/03/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE With the rapid development of big data, multi-subject functional magnetic resonance imaging (fMRI) data analysis is becoming increasingly important. As a blind source separation technique, group independent component analysis (GICA) has been widely applied to multi-subject fMRI data analysis. However, spatially concatenated GICA is rarely used compared with temporally concatenated GICA because of its disadvantages. METHODS In this paper, to overcome these issues, and considering that the performance of GICA for fMRI data analysis can be improved by adding a priori information, we propose a novel spatial concatenation based GICA with reference (SCGICAR) method that takes advantage of a priori information extracted from the group subjects; a multi-objective optimization strategy is used to implement this method. Finally, principal component analysis and anti-reconstruction are used as post-processing steps to obtain the group spatial component and the individual temporal components of the group, respectively. RESULTS The experimental results show that the proposed SCGICAR method outperforms classical methods in both single-subject and multi-subject fMRI data analysis. It not only detects more accurate spatial and temporal components for each subject in the group, but also obtains a better group component in both the temporal and spatial domains. CONCLUSIONS These results demonstrate the advantages of the proposed SCGICAR method over classical methods and show that it better reflects the commonality of subjects in the group.
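The distinction the abstract draws between temporal and spatial concatenation comes down to which axis the subjects are stacked along; the subject count and matrix dimensions below are arbitrary toy values:

```python
import numpy as np

# Toy dimensions: 3 subjects, each with 100 time points x 500 voxels.
n_sub, n_t, n_vox = 3, 100, 500
data = [np.random.default_rng(i).standard_normal((n_t, n_vox))
        for i in range(n_sub)]

# Temporal concatenation: stack along time -> (n_sub*n_t, n_vox).
# ICA on this matrix yields spatial maps shared across the group.
temp_cat = np.concatenate(data, axis=0)

# Spatial concatenation: stack along voxels -> (n_t, n_sub*n_vox).
# ICA on this matrix yields time courses shared across the group.
spat_cat = np.concatenate(data, axis=1)

print(temp_cat.shape, spat_cat.shape)
```

Which stacking is chosen thus determines whether the group-level component lives in the spatial or the temporal domain, which is why SCGICAR needs dedicated post-processing to recover per-subject components.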
Affiliation(s)
- Yuhu Shi
- Lab of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Weiming Zeng
- Lab of Digital Image and Intelligent Computation, Shanghai Maritime University, 1550 Harbor Avenue, Pudong, Shanghai, 201306, China
- Nizhuan Wang
- Neuroimaging Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Biomedical Information Detection and Ultrasound Imaging, Shenzhen 518060, China
|
27
|
Sihvonen AJ, Ripollés P, Rodríguez-Fornells A, Soinila S, Särkämö T. Revisiting the Neural Basis of Acquired Amusia: Lesion Patterns and Structural Changes Underlying Amusia Recovery. Front Neurosci 2017; 11:426. [PMID: 28790885 PMCID: PMC5524924 DOI: 10.3389/fnins.2017.00426] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 07/11/2017] [Indexed: 01/25/2023] Open
Abstract
Although acquired amusia is a common deficit following stroke, relatively little is known about its precise neural basis, let alone its recovery. Recently, we performed a voxel-based lesion-symptom mapping (VLSM) and voxel-based morphometry (VBM) study which revealed a right-lateralized lesion pattern, together with longitudinal gray matter volume (GMV) and white matter volume (WMV) changes, specifically associated with acquired amusia after stroke. In the present study, using a larger sample of stroke patients (N = 90), we aimed to replicate and extend the previous structural findings and to determine the lesion patterns and volumetric changes associated with amusia recovery. Structural MRIs were acquired at the acute and 6-month post-stroke stages. Music perception was assessed behaviorally at the acute and 3-month post-stroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA). Using these scores, the patients were classified as non-amusic, recovered amusic, or non-recovered amusic. The results of the acute-stage VLSM analyses and the longitudinal VBM analyses converged to show that more severe and persistent (non-recovered) amusia was associated with an extensive pattern of lesions and GMV/WMV decreases in right temporal, frontal, parietal, striatal, and limbic areas. In contrast, less severe and transient (recovered) amusia was linked to lesions specifically in the left inferior frontal gyrus as well as to a GMV decrease in right parietal areas. Separate continuous analyses of the MBEA Scale and Rhythm scores showed extensively overlapping lesion patterns in right temporal, frontal, and subcortical structures as well as in the right insula. Interestingly, recovered pitch amusia was related to smaller GMV decreases in the temporoparietal junction, whereas recovered rhythm amusia was associated with smaller GMV decreases in the inferior temporal pole. Overall, the results provide a more comprehensive picture of the lesions and longitudinal structural changes associated with different recovery trajectories of acquired amusia.
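Voxel-based lesion-symptom mapping boils down to a mass-univariate comparison of behavioural scores between patients with and without a lesion at each voxel. A minimal sketch on simulated data (the patient count, lesion probability, effect size, and "critical" voxel index are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pat, n_vox = 90, 200

# Binary lesion maps (True = voxel lesioned) and an MBEA-like score.
lesion = rng.random((n_pat, n_vox)) < 0.2
score = rng.normal(25.0, 3.0, n_pat)
critical = 7                         # hypothetical amusia-critical voxel
score[lesion[:, critical]] -= 8.0    # lesions here lower the score

def t_map(lesion, score):
    """Welch t statistic per voxel: lesioned vs spared patients."""
    ts = np.zeros(lesion.shape[1])
    for v in range(lesion.shape[1]):
        a, b = score[lesion[:, v]], score[~lesion[:, v]]
        if len(a) < 2 or len(b) < 2:
            continue                 # too few patients to test this voxel
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        ts[v] = (a.mean() - b.mean()) / se
    return ts

ts = t_map(lesion, score)
worst = int(np.argmin(ts))           # voxel where lesions hurt scores most
```

Real VLSM additionally corrects the resulting statistical map for multiple comparisons across voxels, which this sketch omits.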
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, New York, NY, United States
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
|
28
|
Familiarity affects electrocortical power spectra during dance imagery, listening to different music genres: independent component analysis of Alpha and Beta rhythms. SPORT SCIENCES FOR HEALTH 2017. [DOI: 10.1007/s11332-017-0379-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
29
|
Rogenmoser L, Zollinger N, Elmer S, Jäncke L. Independent component processes underlying emotions during natural music listening. Soc Cogn Affect Neurosci 2016; 11:1428-39. [PMID: 27217116 DOI: 10.1093/scan/nsw048] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2016] [Accepted: 03/31/2016] [Indexed: 12/12/2022] Open
Abstract
The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis, calculated on the basis of a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectral) characteristics, revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital, and occipital areas. This analysis revealed that arousal was associated with a suppression of power in the alpha frequency range, whereas valence was associated with an increase in theta power in response to excerpts inducing happiness compared with sadness. These findings are partly compatible with the model proposed by Heller, which argues that the frontal lobe is involved in modulating valenced experiences (the left frontal hemisphere for positive emotions), whereas the right parieto-temporal region contributes to emotional arousal.
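Clustering independent components by combined spatial and spectral features, as described above, can be sketched with a plain Lloyd's-algorithm k-means; the four-dimensional "IC features" below are synthetic stand-ins for dipole coordinates plus a spectral measure, not real EEG-derived values:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means (Lloyd's algorithm) with deterministic init."""
    centers = X[[0, -1]].copy() if k == 2 else X[:k].copy()
    for _ in range(n_iter):
        # Squared distance of every point to every center.
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic IC features: two well-separated groups of 20 components each.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.3, size=(20, 4))        # e.g. frontal-like ICs
b = rng.normal(0.0, 0.3, size=(20, 4)) + 5.0  # e.g. occipital-like ICs
X = np.vstack([a, b])

labels, _ = kmeans(X, k=2)
```

In practice the spatial and spectral features would be scaled before clustering so that neither dominates the distance metric.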
Affiliation(s)
- Lars Rogenmoser
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050, Zurich, Switzerland; Neuroimaging and Stroke Recovery Laboratory, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, 02215, Boston, MA, USA; Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8050, Zurich, Switzerland
- Nina Zollinger
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050, Zurich, Switzerland
- Stefan Elmer
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050, Zurich, Switzerland
- Lutz Jäncke
- Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, 8050, Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, 8050, Zurich, Switzerland; University Research Priority Program (URPP) "Dynamic of Healthy Aging," University of Zurich, 8050, Zurich, Switzerland; Department of Special Education, King Abdulaziz University, 21589, Jeddah, Saudi Arabia
|
30
|
Guerrero Arenas C, Hidalgo Tobón SS, Dies Suarez P, Barragán Pérez E, Castro Sierra E, García J, de Celis Alonso B. Strategies for tonal and atonal musical interpretation in blind and normally sighted children: an fMRI study. Brain Behav 2016; 6:e00450. [PMID: 27066309 PMCID: PMC4802423 DOI: 10.1002/brb3.450] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/26/2015] [Accepted: 01/19/2016] [Indexed: 11/17/2022] Open
Abstract
INTRODUCTION Early childhood is known to be a period when cortical plasticity phenomena are at a maximum. Music is a stimulus known to modulate these mechanisms. On the other hand, neurological impairments such as blindness are also known to affect cortical plasticity. Here, we address how tonal and atonal musical stimuli are processed in blind and normally sighted young children, aiming to understand the differences between the two groups when processing this physiological information. RESULTS Atonal stimuli produced larger activations than tonal stimuli in the cerebellum, fusiform gyrus, and temporal lobe structures, whereas tonal stimuli induced larger frontal lobe representations than atonal stimuli. Control participants presented large activations in the cerebellum, fusiform gyrus, and temporal lobe. A correlation/connectivity study showed that the blind group incorporated larger amounts of perceptual information (somatosensory and motor) into tonal processing through the function of the anterior prefrontal cortex (APC); they also used the visual cortex in conjunction with Wernicke's area to process this information. In contrast, controls processed sound with perceptual stimuli from auditory cortex structures (including Wernicke's area); in this case, information was processed through the dorsal posterior cingulate cortex rather than the APC. The orbitofrontal cortex also played a key role in atonal interpretation in this group. DISCUSSION Wernicke's area, known to be involved in speech, was heavily engaged in both groups and for all stimuli. The two groups presented clearly different strategies for music processing, with very different recruitment of brain regions.
Affiliation(s)
- Silvia S Hidalgo Tobón
- Departamento de Imagenología, Hospital Infantil de México Federico Gómez, México DF, Mexico; Departamento de Física, Universidad Autónoma Metropolitana, Campus Iztapalapa, México DF, Mexico
- Pilar Dies Suarez
- Departamento de Imagenología, Hospital Infantil de México Federico Gómez, México DF, Mexico
- Eduardo Barragán Pérez
- Departamento de Imagenología, Hospital Infantil de México Federico Gómez, México DF, Mexico
- Eduardo Castro Sierra
- Departamento de Imagenología, Hospital Infantil de México Federico Gómez, México DF, Mexico
- Julio García
- Department of Radiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Benito de Celis Alonso
- Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico
|
31
|
Kotchoubey B, Pavlov YG, Kleber B. Music in Research and Rehabilitation of Disorders of Consciousness: Psychological and Neurophysiological Foundations. Front Psychol 2015; 6:1763. [PMID: 26640445 PMCID: PMC4661237 DOI: 10.3389/fpsyg.2015.01763] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 11/03/2015] [Indexed: 01/18/2023] Open
Abstract
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to the prevalence of impairments of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows auditory stimulation to be used at various levels of complexity. The current paper reviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may persist even when all higher-level cognition is lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward system and activity in the motor system, respectively, thus serving as a starting point for rehabilitation.
Affiliation(s)
- Boris Kotchoubey
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
- Yuri G. Pavlov
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
- Department of Psychology, Ural Federal University, Yekaterinburg, Russia
- Boris Kleber
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
|
32
|
Mueller K, Fritz T, Mildner T, Richter M, Schulze K, Lepsien J, Schroeter ML, Möller HE. Investigating the dynamics of the brain response to music: A central role of the ventral striatum/nucleus accumbens. Neuroimage 2015; 116:68-79. [DOI: 10.1016/j.neuroimage.2015.05.006] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2014] [Revised: 03/09/2015] [Accepted: 05/04/2015] [Indexed: 10/23/2022] Open
|
33
|
Spada D, Verga L, Iadanza A, Tettamanti M, Perani D. The auditory scene: An fMRI study on melody and accompaniment in professional pianists. Neuroimage 2014; 102 Pt 2:764-75. [DOI: 10.1016/j.neuroimage.2014.08.036] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2014] [Revised: 06/13/2014] [Accepted: 08/20/2014] [Indexed: 11/17/2022] Open
|
34
|
Mayhew S, Mullinger K, Bagshaw A, Bowtell R, Francis S. Investigating intrinsic connectivity networks using simultaneous BOLD and CBF measurements. Neuroimage 2014; 99:111-21. [DOI: 10.1016/j.neuroimage.2014.05.042] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Revised: 04/18/2014] [Accepted: 05/14/2014] [Indexed: 11/29/2022] Open
|
35
|
Angulo-Perkins A, Aubé W, Peretz I, Barrios FA, Armony JL, Concha L. Music listening engages specific cortical regions within the temporal lobes: differences between musicians and non-musicians. Cortex 2014; 59:126-37. [PMID: 25173956 DOI: 10.1016/j.cortex.2014.07.013] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2013] [Revised: 02/22/2014] [Accepted: 07/18/2014] [Indexed: 11/26/2022]
Abstract
Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (aSTG) (planum polare) that showed preferential activity in response to musical stimuli; it was present in all our subjects, regardless of musical training, and was invariant across different musical instruments (violin, piano, or synthetic piano). Our data show that this cortical region is preferentially involved in processing music as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than in non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
Affiliation(s)
- Arafat Angulo-Perkins
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- William Aubé
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Fernando A Barrios
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- Jorge L Armony
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada; Douglas Institute and Department of Psychiatry, McGill University, Montreal, Québec, Canada
- Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México; International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada
|
36
|
Lin YP, Yang YH, Jung TP. Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening. Front Neurosci 2014; 8:94. [PMID: 24822035 PMCID: PMC4013455 DOI: 10.3389/fnins.2014.00094] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2014] [Accepted: 04/12/2014] [Indexed: 11/23/2022] Open
Abstract
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise for potential applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys emotion to listeners through compositions of musical elements, and using EEG signals alone to distinguish emotions remains challenging. This study aimed to assess the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of the musical content for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical content did not improve classification performance: the accuracy of 74-76% obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, if EEG dynamics were available from only a small set of electrodes (likely the case in real-life applications), the music modality played a complementary role, augmenting the EEG results from around 61-67% in valence classification and from around 58-67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features, leading to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical content in emotion modeling.
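The fusion logic (concatenating feature vectors from the two modalities before classification) can be illustrated with a toy nearest-centroid classifier. The one-dimensional "EEG" and "music" features and the noise levels are invented stand-ins; nothing here reproduces the paper's actual feature sets or classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
y = rng.integers(0, 2, size=n)            # binary valence label per trial

# Weakly informative features from each modality (signal 1, noise sd 1.5).
eeg = y + rng.normal(0.0, 1.5, size=n)    # stand-in for sparse-electrode EEG
music = y + rng.normal(0.0, 1.5, size=n)  # stand-in for an acoustic feature
fused = np.column_stack([eeg, music])     # multimodal feature vector

def centroid_acc(X, y):
    """Training-set accuracy of a nearest-class-centroid rule."""
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return float((pred == y).mean())

acc_eeg = centroid_acc(eeg[:, None], y)
acc_fused = centroid_acc(fused, y)
```

Because the two modalities carry independent noise, the fused feature vector tends to separate the classes better than either feature alone, which is the complementary-role effect the abstract reports for sparse-electrode setups.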
Affiliation(s)
- Yuan-Pin Lin
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, La Jolla, CA, USA
- Center for Advanced Neurological Engineering, Institute of Engineering in Medicine, University of California, San Diego, La Jolla, CA, USA
- Yi-Hsuan Yang
- Music and Audio Computing Lab, Research Center for IT Innovation, Academia Sinica, Taipei, Taiwan
- Tzyy-Ping Jung
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, La Jolla, CA, USA
- Center for Advanced Neurological Engineering, Institute of Engineering in Medicine, University of California, San Diego, La Jolla, CA, USA
|
37
|
Hegde S. Music-based cognitive remediation therapy for patients with traumatic brain injury. Front Neurol 2014; 5:34. [PMID: 24715887 PMCID: PMC3970008 DOI: 10.3389/fneur.2014.00034] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2014] [Accepted: 03/10/2014] [Indexed: 01/28/2023] Open
Abstract
Traumatic brain injury (TBI) is one of the common causes of disability in the physical, psychological, and social domains of functioning, leading to poor quality of life. TBI leads to impairments in sensory, motor, language, and emotional processing, and also in cognitive functions such as attention, information processing, executive functions, and memory. Cognitive impairment plays a central role in functional recovery after TBI. Innovative methods, such as music therapy, for alleviating cognitive impairment have been investigated recently. The role of music in cognitive rehabilitation is evolving, based on newer findings emerging from the fields of neuromusicology and music cognition; research findings from these fields have contributed significantly to our understanding of music perception and cognition and their neural underpinnings. From a neuroscientific perspective, engaging with music is considered one of the best cognitive exercises: given the brain's inherent plasticity, producing music engages an array of cognitive functions, and the product, the music, in turn permits restoration of and alters brain functions. On this scientific basis, “neurologic music therapy” (NMT) has been developed as a systematic treatment method to improve sensorimotor, language, and cognitive domains of functioning via music. A preliminary study examining the effect of NMT in cognitive rehabilitation has reported promising results, with improved executive functions, better emotional adjustment, and decreased depression and anxiety following TBI. The potential of music-based cognitive rehabilitation therapy in various clinical conditions, including TBI, is yet to be fully explored. Systematic research is needed to bridge the gap between our increasing theoretical understanding of the use of music in cognitive rehabilitation and its application in a heterogeneous condition such as TBI.
Affiliation(s)
- Shantala Hegde
- Cognitive Psychology and Cognitive Neurosciences Laboratory, Department of Clinical Psychology, Neurobiology Research Center, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bangalore, India
|
38
|
Lin YP, Duann JR, Feng W, Chen JH, Jung TP. Revealing spatio-spectral electroencephalographic dynamics of musical mode and tempo perception by independent component analysis. J Neuroeng Rehabil 2014; 11:18. [PMID: 24581119 PMCID: PMC3941612 DOI: 10.1186/1743-0003-11-18] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2012] [Accepted: 02/20/2014] [Indexed: 11/21/2022] Open
Abstract
Background Music conveys emotion by manipulating musical structures, particularly musical mode and tempo. The neural correlates of musical mode and tempo perception revealed by electroencephalography (EEG) have not been adequately addressed in the literature. Method This study used independent component analysis (ICA) to systematically assess the spatio-spectral EEG dynamics associated with changes in musical mode and tempo. Results Empirical results showed that, compared to minor-mode music, major-mode music augmented delta-band activity over the right sensorimotor cortex, suppressed theta activity over the superior parietal cortex, and moderately suppressed beta activity over the medial frontal cortex, whereas fast-tempo music induced significant alpha suppression over the right sensorimotor cortex. Conclusion The resultant EEG brain sources were comparable with those reported in previous studies using other neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In conjunction with advanced dry and mobile EEG technology, these results might facilitate the translation of laboratory-oriented research into real-life applications for music therapy, training, and entertainment in naturalistic environments.
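Effects such as alpha suppression are quantified as power in a frequency band of the EEG spectrum. A minimal sketch on a synthetic single channel; the sampling rate, amplitudes, and band edges are illustrative values, not the study's parameters:

```python
import numpy as np

fs = 250.0                         # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)        # one 4 s epoch

# Synthetic EEG: a 10 Hz alpha oscillation plus broadband noise.
rng = np.random.default_rng(3)
x = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean FFT power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return float(power[band].mean())

alpha = band_power(x, fs, 8, 13)   # band containing the 10 Hz component
beta = band_power(x, fs, 13, 30)   # noise only in this simulation
```

Alpha suppression would then show up as a drop in the 8-13 Hz band power of a component's activity during fast-tempo excerpts relative to a baseline.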
Affiliation(s)
- Tzyy-Ping Jung
- Institute for Neural Computation and Institute of Engineering in Medicine, University of California, San Diego, La Jolla, CA, USA
|
39
|
Bryant GA. Animal signals and emotion in music: coordinating affect across groups. Front Psychol 2013; 4:990. [PMID: 24427146 PMCID: PMC3872313 DOI: 10.3389/fpsyg.2013.00990] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Accepted: 12/11/2013] [Indexed: 12/02/2022] Open
Abstract
Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on non-human animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Recent work reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in non-human animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to non-human animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary by-products of phylogenetically older, but still intact communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be driven by an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases - many shared across species - and proliferate through cultural evolutionary processes.
Affiliation(s)
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, Los Angeles, CA, USA
|
40
|
Featherstone CR, Morrison CM, Waterman MG, MacGregor LJ. Semantics, syntax or neither? A case for resolution in the interpretation of N500 and P600 responses to harmonic incongruities. PLoS One 2013; 8:e76600. [PMID: 24223704 PMCID: PMC3818369 DOI: 10.1371/journal.pone.0076600] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2011] [Accepted: 09/02/2013] [Indexed: 11/24/2022] Open
Abstract
The processing of notes and chords which are harmonically incongruous with their context has been shown to elicit two distinct late ERP effects. These effects strongly resemble two effects associated with the processing of linguistic incongruities: a P600, resembling a typical response to syntactic incongruities in language, and an N500, evocative of the N400, which is typically elicited in response to semantic incongruities in language. Despite the robustness of these two patterns in the musical incongruity literature, no consensus has yet been reached as to the reasons for the existence of two distinct responses to harmonic incongruities. This study was the first to use behavioural and ERP data to test two possible explanations for the existence of these two patterns: the musicianship of listeners, and the resolved or unresolved nature of the harmonic incongruities. Results showed that harmonically incongruous notes and chords elicited a late positivity similar to the P600 when they were embedded within sequences which started and ended in the same key (harmonically resolved). The notes and chords which indicated that there would be no return to the original key (leaving the piece harmonically unresolved) were associated with a further P600 in musicians, but with a negativity resembling the N500 in non-musicians. We suggest that the late positivity reflects the conscious perception of a specific element as being incongruous with its context and the efforts of musicians to integrate the harmonic incongruity into its local context as a result of their analytic listening style, while the late negativity reflects the detection of the absence of resolution in non-musicians as a result of their holistic listening style.
Affiliation(s)
- Cara R Featherstone
- Institute of Psychological Sciences, University of Leeds, Leeds, United Kingdom
41
Beldzik E, Domagalik A, Daselaar S, Fafrowicz M, Froncisz W, Oginska H, Marek T. Contributive sources analysis: A measure of neural networks' contribution to brain activations. Neuroimage 2013; 76:304-12. [PMID: 23523811 DOI: 10.1016/j.neuroimage.2013.03.014] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2012] [Revised: 02/21/2013] [Accepted: 03/06/2013] [Indexed: 10/27/2022] Open
42
Wu J, Zhang J, Ding X, Li R, Zhou C. The effects of music on brain functional networks: a network analysis. Neuroscience 2013; 250:49-59. [PMID: 23806719 DOI: 10.1016/j.neuroscience.2013.06.021] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2013] [Revised: 06/12/2013] [Accepted: 06/13/2013] [Indexed: 10/26/2022]
Abstract
The human brain can dynamically adapt to the changing surroundings. To explore this issue, we adopted graph theoretical tools to examine changes in electroencephalography (EEG) functional networks while listening to music. Three different excerpts of Chinese Guqin music were played to 16 non-musician subjects. For the main frequency intervals, synchronizations between all pair-wise combinations of EEG electrodes were evaluated with phase lag index (PLI). Then, weighted connectivity networks were created and their organizations were characterized in terms of an average clustering coefficient and characteristic path length. We found an enhanced synchronization level in the alpha2 band during music listening. Music perception showed a decrease of both normalized clustering coefficient and path length in the alpha2 band. Moreover, differences in network measures were not observed between musical excerpts. These experimental results demonstrate an increase of functional connectivity as well as a more random network structure in the alpha2 band during music perception. The present study offers support for the effects of music on human brain functional networks with a trend toward a more efficient but less economical architecture.
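The phase lag index used in this study has a compact definition: the absolute mean sign of the instantaneous phase differences between two signals, so that couplings symmetric around zero lag (as with pure volume conduction) score 0. A minimal sketch of the measure, not the authors' code; the signal parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """PLI = |mean(sign(sin(phase_x - phase_y)))| over band-passed signals.

    0 when the phase-difference distribution is symmetric around 0 or pi
    (e.g. volume conduction); 1 for a consistently lagged coupling.
    """
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(dphi))))

# toy alpha-band-like oscillations with a fixed phase lag (made-up values)
t = np.arange(0, 2, 1 / 250.0)        # 2 s at 250 Hz
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - 0.8)  # consistent lag -> PLI near 1
print(phase_lag_index(x, y) > 0.9)    # True; PLI of x with itself is 0
```

In the study the resulting pairwise PLI values serve as edge weights of the connectivity network from which the clustering coefficient and path length are computed.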
Affiliation(s)
- J Wu
- Cognitive Science Department, Xiamen University, Xiamen, China; Fujian Key Laboratory of the Brain-like Intelligent Systems, Xiamen University, Xiamen, China
43
Ma X, Zhang H, Zhao X, Yao L, Long Z. Semi-Blind Independent Component Analysis of fMRI Based on Real-Time fMRI System. IEEE Trans Neural Syst Rehabil Eng 2013; 21:416-26. [DOI: 10.1109/tnsre.2012.2184303] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
44
García-García I, Jurado M, Garolera M, Segura B, Marqués-Iturria I, Pueyo R, Vernet-Vernet M, Sender-Palacios M, Sala-Llonch R, Ariza M, Narberhaus A, Junqué C. Functional connectivity in obesity during reward processing. Neuroimage 2013; 66:232-9. [DOI: 10.1016/j.neuroimage.2012.10.035] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2012] [Revised: 09/18/2012] [Accepted: 10/12/2012] [Indexed: 12/27/2022] Open
45
Long Z, Li R, Hui M, Jin Z, Yao L. An improvement of independent component analysis with projection method applied to multi-task fMRI data. Comput Biol Med 2013; 43:200-10. [PMID: 23347509 DOI: 10.1016/j.compbiomed.2012.11.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2011] [Revised: 10/29/2012] [Accepted: 11/22/2012] [Indexed: 11/17/2022]
Abstract
The Independent Component Analysis with projection (ICAp) method, proposed by Long et al. (Hum. Brain Mapp. 30 (2009) 417-431), can resolve the interaction among task-related components of multi-task functional magnetic resonance imaging (fMRI) data. However, departure of the assumed hemodynamic response function (HRF) used for projection from the true HRF may worsen the ICAp results. In order to improve the performance of ICAp, the deconvolved ICAp (DICAp) method is proposed. Both simulated and real fMRI experiments demonstrate that DICAp separates more accurate time courses for each task-related component and is more powerful than ICAp at detecting regions activated by each task only.
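The departure of the assumed HRF from the true HRF matters because the projection regressor is built by convolving the task timing with a canonical HRF. A hedged sketch of that step only (double-gamma parameters are common SPM-style defaults, not values taken from this paper):

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.0, duration=30.0):
    """Double-gamma canonical HRF sampled every `tr` seconds
    (peak ~5 s, undershoot ~15 s; illustrative SPM-like defaults)."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.max()

# an assumed block design: 10 s on / 10 s off, five cycles, TR = 1 s
stim = np.tile(np.r_[np.ones(10), np.zeros(10)], 5)
predicted_bold = np.convolve(stim, canonical_hrf())[: stim.size]
print(predicted_bold.shape)  # (100,)
```

If the subject's true HRF peaks earlier or later than this model, the projection regressor is misaligned, which is the mismatch DICAp's deconvolution step is designed to reduce.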
Affiliation(s)
- Zhiying Long
- State Key Lab of Cognitive Neuroscience and Learning, School of Information Science, Beijing Normal University, and Laboratory of Magnetic Resonance Imaging, Beijing 306 Hospital, Beijing 100875, China
46
Kay BP, DiFrancesco MW, Privitera MD, Gotman J, Holland SK, Szaflarski JP. Reduced default mode network connectivity in treatment-resistant idiopathic generalized epilepsy. Epilepsia 2013; 54:461-70. [PMID: 23293853 DOI: 10.1111/epi.12057] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/24/2012] [Indexed: 11/28/2022]
Abstract
PURPOSE Idiopathic generalized epilepsy (IGE) resistant to treatment is common, but its neuronal correlates are not entirely understood. Therefore, the aim of this study was to examine resting-state default mode network (DMN) functional connectivity in patients with treatment-resistant IGE. METHODS Treatment resistance was defined as continuing seizures despite an adequate dose of valproic acid (valproate, VPA). Data from 60 epilepsy patients and 38 healthy controls who underwent simultaneous electroencephalography (EEG) and resting-state functional magnetic resonance imaging (fMRI) were included (EEG/fMRI). Independent component analysis (ICA) and dual regression were used to quantify DMN connectivity. Confirmatory analysis using seed-based voxel correlation was performed. KEY FINDINGS There was a significant reduction of DMN connectivity in patients with treatment-resistant epilepsy when compared to patients who were treatment responsive and healthy controls. Connectivity was negatively correlated with duration of epilepsy. SIGNIFICANCE Our findings in this large sample of patients with IGE indicate the presence of reduced DMN connectivity in IGE and show that connectivity is further reduced in treatment-resistant epilepsy. DMN connectivity may be useful as a biomarker for treatment resistance.
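The dual-regression step used to quantify per-subject DMN connectivity follows a standard two-stage recipe: regress the group ICA maps against each subject's data to get subject time courses, then regress those time courses back against the data to get subject-specific maps. A generic sketch with synthetic data (not the paper's exact pipeline, thresholds, or software):

```python
import numpy as np

def dual_regression(group_maps, subj_data):
    """Two-stage dual regression.

    group_maps : (n_comp, n_vox)  group-level ICA spatial maps
    subj_data  : (n_time, n_vox)  one subject's preprocessed data
    """
    # stage 1 (spatial regression): group maps -> subject time courses
    time_courses = subj_data @ np.linalg.pinv(group_maps)   # (n_time, n_comp)
    # stage 2 (temporal regression): time courses -> subject maps
    subj_maps = np.linalg.pinv(time_courses) @ subj_data    # (n_comp, n_vox)
    return time_courses, subj_maps

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 2000))                          # toy group maps
X = rng.standard_normal((120, 4)) @ G + 0.1 * rng.standard_normal((120, 2000))
tc, maps = dual_regression(G, X)
print(tc.shape, maps.shape)  # (120, 4) (4, 2000)
```

Group comparisons (here, treatment-resistant vs. responsive vs. control) are then run voxelwise on the stage-2 subject maps.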
Affiliation(s)
- Benjamin P Kay
- Neuroscience Graduate Program, University of Cincinnati, Cincinnati, Ohio, USA.
47
Allendorfer JB, Lindsell CJ, Siegel M, Banks CL, Vannest J, Holland SK, Szaflarski JP. Females and males are highly similar in language performance and cortical activation patterns during verb generation. Cortex 2012; 48:1218-33. [PMID: 21676387 PMCID: PMC3179789 DOI: 10.1016/j.cortex.2011.05.014] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2010] [Revised: 02/18/2011] [Accepted: 05/16/2011] [Indexed: 10/18/2022]
Abstract
OBJECTIVE To test the existence of sex differences in cortical activation during verb generation when performance is controlled for. METHODS Twenty male and 20 female healthy adults underwent functional magnetic resonance imaging (fMRI) using a covert block-design verb generation task (BD-VGT) and its event-related version (ER-VGT) that allowed for intra-scanner recordings of overt responses. Task-specific activations were determined using the following contrasts: BD-VGT covert generation>finger-tapping; ER-VGT overt generation>repetition; ER-VGT overt>covert generation. Lateral cortical regions activated during each contrast were used for calculating language lateralization index scores. Voxelwise regressions were used to determine sex differences in activation, with and without controlling for performance. Each brain region showing male/female activation differences for ER-VGT overt generation>repetition (isolating noun-verb association) was defined as a region of interest (ROI). For each subject, the signal change in each ROI was extracted, and the association between ER-VGT activation related to noun-verb association and performance was assessed separately for each sex. RESULTS Males and females performed similarly on language assessments, had similar patterns of language lateralization, and exhibited similar activation patterns for each fMRI task contrast. Regression analysis controlling for overt intra-scanner performance either abolished (BD-VGT) or reduced (ER-VGT) the observed differences in activation between sexes. The main difference between sexes occurred during ER-VGT processing of noun-verb associations, where males showed greater activation than females in the right middle/superior frontal gyrus (MFG/SFG) and the right caudate/anterior cingulate gyrus (aCG) after controlling for performance. Better verb generation performance was associated with increased right caudate/aCG activation in males and with increased right MFG/SFG activation in females. 
CONCLUSIONS Males and females exhibit similar activation patterns during verb generation fMRI, and controlling for intra-scanner performance reduces or even abolishes sex differences in language-related activation. These results suggest that previous findings of sex differences in neuroimaging studies that did not control for task performance may reflect false positives.
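The language lateralization index computed from such contrasts is conventionally the normalized left-right difference over homologous lateral regions. A minimal sketch with made-up voxel counts (the paper's exact ROI masks and thresholds are not reproduced here):

```python
def lateralization_index(left, right):
    """LI = (L - R) / (L + R): +1 fully left-lateralized, -1 fully
    right-lateralized, values near 0 bilateral. Inputs are typically
    suprathreshold voxel counts (or summed activation) in homologous
    left/right cortical regions."""
    return (left - right) / (left + right)

# hypothetical counts from a verb-generation contrast
print(lateralization_index(800, 200))  # 0.6 -> left-lateralized
```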
Affiliation(s)
- Jane B Allendorfer
- Department of Neurology, University of Cincinnati Academic Health Center, Cincinnati, OH 45267-0525, USA.
48
Sammler D, Koelsch S, Ball T, Brandt A, Grigutsch M, Huppertz HJ, Knösche TR, Wellmer J, Widman G, Elger CE, Friederici AD, Schulze-Bonhage A. Co-localizing linguistic and musical syntax with intracranial EEG. Neuroimage 2012; 64:134-46. [PMID: 23000255 DOI: 10.1016/j.neuroimage.2012.09.035] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2012] [Revised: 09/05/2012] [Accepted: 09/13/2012] [Indexed: 10/27/2022] Open
Abstract
Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has been so far neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While alluding to the dissimilarity in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support for a co-localization of early musical and linguistic syntax processing in the temporal lobe.
Affiliation(s)
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany.
49
Long Z, Li R, Wen X, Jin Z, Chen K, Yao L. Separating 4D multi-task fMRI data of multiple subjects by independent component analysis with projection. Magn Reson Imaging 2012; 31:60-74. [PMID: 22898701 DOI: 10.1016/j.mri.2012.06.034] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2011] [Revised: 06/28/2012] [Accepted: 06/28/2012] [Indexed: 11/30/2022]
Abstract
Independent component analysis (ICA) is a widely accepted method for extracting the brain networks underlying cognitive processes from functional magnetic resonance imaging (fMRI) data. However, the application of ICA to multi-task fMRI data is limited by the potential non-independence between task-related components. The ICA with projection (ICAp) method proposed by our group (Hum Brain Mapp 2009;30:417-31) has been shown to resolve the interactions among task-related components for single-subject fMRI data. However, it remains to be determined whether ICAp can process multi-task fMRI data over a group of subjects, and whether it can be reliably applied to event-related (ER) fMRI data. In this study, we combined the projection method with the temporal concatenation method reported by Calhoun (Hum Brain Mapp 2008;29:828-38), referred to as group ICAp, to perform group analysis of multi-task fMRI data. Both a simulation based on human resting-state fMRI data and real fMRI experiments, of block and ER design, verified the feasibility and reliability of group ICAp, and demonstrated that it can separate 4D multi-task fMRI data into the multiple brain networks engaged in each cognitive task and adequately identify the commonalities and differences among tasks.
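The temporal-concatenation step that group ICAp borrows from Calhoun's group ICA can be sketched in a few lines: stack every subject's (time x voxel) matrix along the time axis, then run a single spatial ICA on the aggregate. A toy illustration with synthetic data and scikit-learn's FastICA (not the paper's pipeline, which additionally applies the projection step):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_sub, n_time, n_vox = 3, 100, 500

# synthetic subjects sharing two spatial sources, each with private,
# subject-specific time courses plus noise
shared_maps = rng.standard_normal((2, n_vox))
data = [rng.standard_normal((n_time, 2)) @ shared_maps
        + 0.1 * rng.standard_normal((n_time, n_vox))
        for _ in range(n_sub)]

# temporal concatenation: stack along time, then one spatial ICA
stacked = np.vstack(data)                    # (n_sub * n_time, n_vox)
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(stacked.T)  # voxels as samples -> spatial ICA
time_courses = ica.mixing_                   # concatenated group time courses
print(spatial_maps.shape, time_courses.shape)  # (500, 2) (300, 2)
```

Each subject's segment of the concatenated time courses can then be back-projected to recover subject-level contributions to the shared spatial components.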
Affiliation(s)
- Zhiying Long
- State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
50
Norming the odd: creation, norming, and validation of a stimulus set for the study of incongruities across music and language. Behav Res Methods 2012; 44:81-94. [PMID: 21805062 DOI: 10.3758/s13428-011-0137-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.