1. Reybrouck M, Schiavio A. Music performance as knowledge acquisition: a review and preliminary conceptual framework. Front Psychol 2024; 15:1331806. PMID: 38390412; PMCID: PMC10883160; DOI: 10.3389/fpsyg.2024.1331806.
Abstract
To what extent does playing a musical instrument contribute to an individual's construction of knowledge? This paper aims to address this question by examining music performance from an embodied perspective and offering a narrative-style review of the main literature on the topic. Drawing from both older theoretical frameworks on motor learning and more recent theories on sensorimotor coupling and integration, this paper seeks to challenge and juxtapose established ideas with contemporary views inspired by recent work on embodied cognitive science. By doing so we advocate a centripetal approach to music performance, contrasting the prevalent centrifugal perspective: the sounds produced during performance not only originate from bodily action (centrifugal), but also cyclically return to it (centripetal). This perspective suggests that playing music involves a dynamic integration of both external and internal factors, transcending mere output-oriented actions and revealing music performance as a form of knowledge acquisition based on real-time sensorimotor experience.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Unit, KU Leuven, Leuven, Belgium
- Department of Musicology, IPEM, Ghent University, Ghent, Belgium
- Andrea Schiavio
- School of Arts and Creative Technologies, University of York, York, United Kingdom
2. Papadaki E, Koustakas T, Werner A, Lindenberger U, Kühn S, Wenger E. Resting-state functional connectivity in an auditory network differs between aspiring professional and amateur musicians and correlates with performance. Brain Struct Funct 2023; 228:2147-2163. PMID: 37792073; PMCID: PMC10587189; DOI: 10.1007/s00429-023-02711-1.
Abstract
Auditory experience-dependent plasticity is often studied in the domain of musical expertise. Available evidence suggests that years of musical practice are associated with structural and functional changes in auditory cortex and related brain regions. Resting-state functional magnetic resonance imaging (MRI) can be used to investigate neural correlates of musical training and expertise beyond specific task influences. Here, we compared two groups of musicians with varying expertise: 24 aspiring professional musicians preparing for their entrance exam at Universities of Arts versus 17 amateur musicians without any such aspirations but who also performed music on a regular basis. We used an interval recognition task to define task-relevant brain regions and computed functional connectivity and graph-theoretical measures in this network on separately acquired resting-state data. Aspiring professionals performed significantly better on all behavioral indicators including interval recognition and also showed significantly greater network strength and global efficiency than amateur musicians. Critically, both average network strength and global efficiency were correlated with interval recognition task performance assessed in the scanner, and with an additional measure of interval identification ability. These findings demonstrate that task-informed resting-state fMRI can capture connectivity differences that correspond to expertise-related differences in behavior.
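The two graph-theoretical measures reported above, average network strength and global efficiency, can be computed directly from a weighted connectivity matrix. A minimal NumPy sketch, not the authors' pipeline; the function names and the Floyd-Warshall shortest-path approach (with edge distance defined as 1/weight) are illustrative choices:

```python
import numpy as np

def network_strength(conn):
    """Mean node strength: average sum of connection weights per node."""
    A = np.array(conn, dtype=float)
    np.fill_diagonal(A, 0.0)
    return A.sum(axis=1).mean()

def global_efficiency(conn):
    """Weighted global efficiency: mean inverse shortest-path length,
    with edge distance 1/weight, via Floyd-Warshall."""
    A = np.array(conn, dtype=float)
    n = len(A)
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    mask = (A > 0) & ~np.eye(n, dtype=bool)
    D[mask] = 1.0 / A[mask]
    for k in range(n):                          # relax paths through node k
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    inv = 1.0 / D[~np.eye(n, dtype=bool)]       # inverse distances, off-diagonal
    return inv.mean()
```

For a fully connected three-node network with unit weights, both measures take their textbook values (strength 2, efficiency 1), which is a quick sanity check for any implementation.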
Affiliation(s)
- Eleftheria Papadaki
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- International Max Planck Research School on the Life Course (LIFE), Berlin, Germany
- Theodoros Koustakas
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- André Werner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK
- Simone Kühn
- Lise Meitner Group for Environmental Neuroscience, Max Planck Institute for Human Development, Berlin, Germany
- Neuronal Plasticity Working Group, Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Elisabeth Wenger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
3. Vuong V, Hewan P, Perron M, Thaut MH, Alain C. The neural bases of familiar music listening in healthy individuals: An activation likelihood estimation meta-analysis. Neurosci Biobehav Rev 2023; 154:105423. PMID: 37839672; DOI: 10.1016/j.neubiorev.2023.105423.
Abstract
Accumulating evidence suggests that neural activation during music listening differs as a function of familiarity with the excerpts. However, the implicated brain areas are unclear. After an extensive literature search, we conducted an Activation Likelihood Estimation analysis on 23 neuroimaging studies (232 foci, 364 participants) to identify consistently activated brain regions when healthy adults listen to familiar music, compared to unfamiliar music or an equivalent condition. The results revealed a left cortical-subcortical co-activation pattern comprising three significant clusters localized to the supplementary motor areas (BA 6), inferior frontal gyrus (IFG, BA 44), and the claustrum/insula. Our results are discussed in a predictive coding framework, whereby temporal expectancies and familiarity may drive motor activations even in the absence of overt movement. Though conventionally associated with syntactic violation, our observed activation in the IFG may support a recent proposal of its involvement in a network that subserves both violation and prediction. Finally, the claustrum/insula plays an integral role in auditory processing, functioning as a hub that integrates sensory and limbic information and relays it to (sub)cortical structures.
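Activation likelihood estimation models each reported focus as a 3D Gaussian probability and combines maps as probabilistic unions, first within each study and then across studies. The toy NumPy sketch below illustrates only that core principle; real ALE software additionally uses sample-size-dependent kernels and permutation-based thresholding, none of which is shown here:

```python
import numpy as np

def gaussian_map(shape, focus, sigma):
    """Peak-normalized isotropic 3D Gaussian centered on one focus."""
    grid = np.indices(shape)
    d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def ale_map(studies, shape=(20, 20, 20), sigma=2.0):
    """Toy ALE: probabilistic union of per-study modeled activation maps.
    studies: list of studies, each a list of (x, y, z) foci."""
    ale = np.zeros(shape)
    for foci in studies:
        ma = np.zeros(shape)
        for f in foci:                            # union of Gaussians within a study
            g = gaussian_map(shape, f, sigma)
            ma = 1.0 - (1.0 - ma) * (1.0 - g)
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)      # union across studies
    return ale
```

Voxels where foci from many studies overlap approach an ALE value of 1, while voxels far from any focus stay near 0.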
Affiliation(s)
- Veronica Vuong
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada
- Patrick Hewan
- Department of Psychology, York University, Toronto, ON M3J 1P3, Canada
- Maxime Perron
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Michael H Thaut
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada; Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada
- Claude Alain
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A8, Canada; Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada; Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON M5S 2C5, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
4. Schmuckler MA, Moranis R. Rhythm contour drives musical memory. Atten Percept Psychophys 2023; 85:2502-2514. PMID: 36991289; DOI: 10.3758/s13414-023-02700-w.
Abstract
Listeners' use of contour information as a basis for memory of rhythmic patterns was explored in two experiments. Both studies employed a short-term memory paradigm in which listeners heard a standard rhythm, followed by a comparison rhythm, and judged whether the comparison was the same as the standard. Comparison rhythms included exact repetitions of the standard, same contour rhythms in which the relative interval durations of successive notes (but not the absolute durations of the notes themselves) were the same as the standard, and different contour rhythms in which the relative interval durations of successive notes differed from the standard. Experiment 1 employed metric rhythms, whereas Experiment 2 employed ametric rhythms. D-prime analyses revealed that, in both experiments, listeners showed better discrimination for different contour rhythms relative to same contour rhythms. Paralleling classic work on melodic contour, these findings indicate that the concept of contour is both relevant to one's characterization of the rhythm of musical patterns and influences short-term memory for such patterns.
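The d-prime statistic used in analyses like the one above is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal standard-library sketch; the log-linear (Hautus-style) correction shown is one common convention for avoiding rates of exactly 0 or 1, not necessarily the authors' choice:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so neither rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)
```

A listener at chance (equal hit and false-alarm rates) gets d' = 0; better discrimination of "different" comparisons yields a larger positive d'.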
Affiliation(s)
- Mark A Schmuckler
- Department of Psychology, University of Toronto Scarborough, 1265 Military Trail, Scarborough, ON M1C 1A4, Canada
5. Lantis KD, Schne P, Bland CR, Wilder J, Hock K, Glover NA, Hackney ME, Lustberg MB, Worthen-Chaudhari L. Biomechanical effect of neurologic dance training (NDT) for breast cancer survivors with chemotherapy-induced neuropathy: study protocol for a randomized controlled trial and preliminary baseline data. Research Square 2023:rs.3.rs-2988661. PMID: 37461666; PMCID: PMC10350217; DOI: 10.21203/rs.3.rs-2988661/v1.
Abstract
Background Breast cancer (BC) is among the most common forms of cancer experienced by women. Up to 80% of BC survivors treated with chemotherapy experience chemotherapy-induced neuropathy (CIN), which degrades motor control, sensory function, and quality of life. CIN symptoms include numbness, tingling, and/or burning sensations in the extremities; deficits in neuromotor control; and increased fall risk. Physical activity (PA) and music-based medicine (MBM) are promising avenues to address sensorimotor symptoms. Therefore, we propose to combine the effects of music- and PA-based medicine through Neurologic Dance Training (NDT) with partnered Adapted Tango (NDT-Tango). We will assess the effect of the NDT-Tango intervention versus a home exercise (HEX) intervention on biomechanically measured variables. We hypothesize that 8 weeks of NDT-Tango practice will improve the dynamics of posture and gait more than 8 weeks of HEX. Methods In a single-center, prospective, two-arm randomized controlled clinical trial, participants are randomly assigned (1:1 ratio) to the NDT-Tango experimental group or the HEX active control intervention group. Primary endpoints are change from baseline to after intervention in posture and gait. Outcomes are collected at baseline, midpoint, post-intervention, and 1-month and 6-month follow-up. Secondary and tertiary outcomes include clinical and biomechanical tests of function and questionnaires used to complement the primary outcome measures. Linear mixed models will be used to model changes in postural, biomechanical, and patient-reported outcomes (PROs). The primary estimand will be the contrast representing the difference in mean change in outcome measure from baseline to week 8 between treatment groups. Discussion The scientific premise of this study is that NDT-Tango stands to achieve more gains than PA practice alone by combining PA with MBM and social engagement. Our findings may lead to a safe non-pharmacologic intervention that improves CIN-related deficits.
Trial Registration This trial was first posted on 11/09/21 at ClinicalTrials.gov under the identifier NCT05114005.
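The primary estimand in the protocol above, the between-group difference in mean change from baseline to week 8, reduces in its simplest form to a difference-in-differences of group means. The protocol's actual analysis uses linear mixed models; this NumPy sketch with hypothetical variable names is only a conceptual stand-in for the stated contrast:

```python
import numpy as np

def change_contrast(baseline, week8, group):
    """Difference in mean change (week 8 minus baseline) between groups.
    group: array of 0 (active control, e.g. HEX) and 1 (experimental)."""
    change = np.asarray(week8, float) - np.asarray(baseline, float)
    g = np.asarray(group)
    return change[g == 1].mean() - change[g == 0].mean()
```

A positive contrast would indicate greater mean improvement in the experimental arm than in the control arm over the 8-week window.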
Affiliation(s)
- Kristen D Lantis
- College of Medicine, Department of Physical Medicine and Rehabilitation, The Ohio State University, Columbus, OH
- Patrick Schne
- College of Public Health, Division of Biostatistics, The Ohio State University, Columbus, OH
- Courtney R Bland
- College of Public Health, Division of Biostatistics, The Ohio State University, Columbus, OH
- Jacqueline Wilder
- College of Medicine, Department of Physical Medicine and Rehabilitation, The Ohio State University, Columbus, OH
- Karen Hock
- Comprehensive Cancer Center, The Ohio State University, Columbus, OH
- Madeleine E Hackney
- Department of Medicine, Division of Geriatrics and Gerontology, Emory University School of Medicine, Atlanta, GA
- Atlanta VA Center for Visual and Neurocognitive Rehabilitation, Atlanta, GA
- Lise Worthen-Chaudhari
- College of Medicine, Department of Physical Medicine and Rehabilitation, The Ohio State University, Columbus, OH
6. Li Q, Gong D, Shen J, Rao C, Ni L, Zhang H. SF-MVPA: A from raw data to statistical results and surface space-based MVPA toolbox. Front Neurosci 2022; 16:1046752. DOI: 10.3389/fnins.2022.1046752.
Abstract
Compared with traditional volume-space-based multivariate pattern analysis (MVPA), surface-space-based MVPA has many advantages and has received increasing attention. However, surface-space-based MVPA requires considerable programming and is therefore difficult for people without a programming background. To address this, we developed a MATLAB toolbox with a graphical user interface (GUI) called surface space-based multivariate pattern analysis (SF-MVPA). Unlike traditional MVPA toolboxes, which often include only the MVPA calculation steps that follow data preprocessing, SF-MVPA covers the complete pipeline of surface-space-based MVPA: raw data format conversion, surface reconstruction, functional magnetic resonance imaging (fMRI) data preprocessing, comparative analysis, surface-space-based MVPA, leave-one-run-out cross-validation, and family-wise error correction. With SF-MVPA, users can carry out this entire pipeline without programming. In addition, SF-MVPA is designed for parallel computing and hence has high computational efficiency. After introducing SF-MVPA, we analyzed a sample dataset of tonal working memory load. By comparison with another surface-space-based MVPA toolbox named CoSMoMVPA, we found that the two toolboxes obtained consistent results. We hope that this toolbox will let users implement surface-space-based MVPA more easily.
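Leave-one-run-out cross-validation, one step of the pipeline above, holds out each scanning run in turn, trains on the remaining runs, and averages test accuracy across folds. A self-contained NumPy sketch of the scheme; the nearest-centroid classifier and function names are illustrative stand-ins, not SF-MVPA's actual API or classifier:

```python
import numpy as np

def leave_one_run_out(X, y, runs):
    """Leave-one-run-out cross-validation; returns mean test accuracy.
    X: (n_samples, n_features); y: labels; runs: run index per sample."""
    X, y, runs = np.asarray(X, float), np.asarray(y), np.asarray(runs)
    accs = []
    for r in np.unique(runs):
        train, test = runs != r, runs == r
        # Train a nearest-centroid classifier on all other runs.
        centroids = {c: X[train & (y == c)].mean(axis=0) for c in np.unique(y)}
        classes = sorted(centroids)
        preds = [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X[test]]
        accs.append(np.mean(np.array(preds) == y[test]))
    return float(np.mean(accs))
```

Because each fold's test run is never seen during training, the averaged accuracy estimates out-of-sample decoding performance rather than fit to the training data.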
7. Li Q, Gong D, Tang H, Tian J. The neural coding of tonal working memory load: A functional magnetic resonance imaging study. Front Neurosci 2022; 16:979787. PMID: 36330345; PMCID: PMC9623178; DOI: 10.3389/fnins.2022.979787.
Abstract
Tonal working memory load refers to the number of pitches held in working memory. It has been found that different verbal working memory loads have different neural coding (local neural activity patterns). However, whether a comparable phenomenon exists for tonal working memory load remains unclear. In this study, we used a delayed match-to-sample paradigm to evoke tonal working memory. Neural coding of different tonal working memory loads was studied with a surface-space- and convolutional neural network (CNN)-based multivariate pattern analysis (SC-MVPA) method. We found that, first, neural coding of tonal working memory was significantly different from that of the control condition in the bilateral superior temporal gyrus (STG), supplementary motor area (SMA), and precentral gyrus (PCG). Second, neural coding of nonadjacent tonal working memory loads was distinguishable in the bilateral STG and PCG. Third, neural coding was gradually enhanced as the memory load increased. Finally, tonal working memory was encoded in the bilateral STG in the encoding phase and stored in the bilateral PCG and SMA in the maintenance phase.
Affiliation(s)
- Qiang Li
- College of Education Science, Guizhou Education University, Guiyang, China
- Huiyi Tang
- College of Education Science, Guizhou Education University, Guiyang, China
- Jing Tian
- College of Education Science, Guizhou Education University, Guiyang, China
8. Wheeler HJ, Hatch DR, Moody-Antonio SA, Nie Y. Music and Speech Perception in Prelingually Deafened Young Listeners With Cochlear Implants: A Preliminary Study Using Sung Speech. J Speech Lang Hear Res 2022; 65:3951-3965. PMID: 36179251; DOI: 10.1044/2022_jslhr-21-00271.
Abstract
PURPOSE In the context of music and speech perception, this study aimed to assess the effect of variation in one of two auditory attributes, pitch contour and timbre, on the perception of the other in prelingually deafened young cochlear implant (CI) users, and the relationship between pitch contour perception and two cognitive functions of interest. METHOD Nine prelingually deafened CI users, aged 8.75-22.17 years, completed four tasks: a melodic contour identification (MCI) task using stimuli of piano notes or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note); a speech perception task identifying matrix-styled sentences naturally intonated or sung with a fixed pitch (same pitch for each word) or a mixed pitch (different pitches for each word); a forward digit span test indexing auditory short-term memory (STM); and the matrices section of the Kaufman Brief Intelligence Test-Second Edition indexing nonverbal IQ. RESULTS MCI was significantly poorer for the mixed timbre condition. Speech perception was significantly poorer for the fixed and mixed pitch conditions than for the naturally intonated condition. Auditory STM positively correlated with MCI at 2- and 3-semitone note spacings. Relative to their normal-hearing peers from a related study using the same stimuli and tasks, the CI participants showed comparable MCI at 2- or 3-semitone note spacing, and a comparable level of significant decrement in speech perception across the three pitch contour conditions. CONCLUSION Findings suggest that prelingually deafened CI users show trends similar to those of their normal-hearing peers for the effect of variation in pitch contour or timbre on the perception of the other, and that cognitive functions may underlie these outcomes to some extent, at least for the perception of pitch contour. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21217937.
Affiliation(s)
- Harley J Wheeler
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA
- Debora R Hatch
- Department of Otolaryngology, Eastern Virginia Medical School, Norfolk
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA
9. Li Q, Gong D, Zhang Y, Zhang H, Liu G. The bottom-up information transfer process and top-down attention control underlying tonal working memory. Front Neurosci 2022; 16:935120. PMID: 35979330; PMCID: PMC9376259; DOI: 10.3389/fnins.2022.935120.
Abstract
Tonal working memory has received little attention in neuropsychological and neuroimaging studies, and tonal working memory load even less. In this study, we analyzed the dynamic cortical processing of tonal working memory with an original surface-space-based multivariate pattern analysis (sf-MVPA) method and found that this processing constitutes a bottom-up information transfer process. Then, the local cortical activity pattern, local cortical response strength, and cortical functional connectivity under different tonal working memory loads were investigated. No brain area's local activity pattern or response strength differed significantly between memory loads. Meanwhile, the interactions between the auditory cortex (AC) and an attention control network were linearly correlated with the memory load. This finding shows that the neural mechanism underlying tonal working memory load arises not from changes in local activity patterns or local response strength, but from top-down attention control. Our results indicate that tonal working memory is implemented through the cooperation of a bottom-up information transfer process and top-down attention control.
Affiliation(s)
- Qiang Li
- College of Education Science, Guizhou Education University, Guiyang, China
- Dinghong Gong
- Office of Academic Affairs, Guizhou Education University, Guiyang, China
- Yuan Zhang
- College of Education Science, Guizhou Education University, Guiyang, China
- Hongyi Zhang
- College of Education Science, Guizhou Education University, Guiyang, China
- Guangyuan Liu
- College of Electronic and Information Engineering, Southwest University, Chongqing, China
10. Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. PMID: 35352057; DOI: 10.1038/s41583-022-00578-5.
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature on music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
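The predictive account reviewed above rests on precision-weighted prediction errors: the internal model is updated in proportion to both the error and the confidence (precision) assigned to the input. A deliberately schematic one-step update for a single scalar belief, not the full predictive coding of music model:

```python
def predictive_coding_step(prediction, observation, precision, lr=0.1):
    """Move the internal model toward the observed input, scaled by the
    prediction error and its precision (confidence in the input)."""
    error = observation - prediction          # prediction error
    return prediction + lr * precision * error
```

With high precision a surprising note shifts the model's expectation substantially; with precision near zero (e.g., a noisy, unreliable input) the same error barely moves it.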
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Department of Psychiatry, University of Oxford, Oxford, UK
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
11. Ashton K, Zinszer BD, Cichy RM, Nelson CA, Aslin RN, Bayet L. Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial. Dev Cogn Neurosci 2022; 54:101094. PMID: 35248819; PMCID: PMC8897621; DOI: 10.1016/j.dcn.2022.101094.
Abstract
Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electro-encephalography (M/EEG) neuroimaging data, quantifies the extent and time-course by which neural representations support the discrimination of relevant stimuli dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that in both infants and adults this method reliably produced above-chance accuracy for classifying stimuli images. Extensions of the classification analysis are presented including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common choices of implementation are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on resulting MVPA findings in these datasets.
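Time-resolved MVPA fits a separate classifier at every sample of the epoch, yielding a decoding-accuracy-over-time curve. The compact NumPy sketch below illustrates the idea with a half-split nearest-centroid classifier standing in for the tutorial's linear SVM; the tutorial's own Matlab/Python code differs, so treat this only as a conceptual outline:

```python
import numpy as np

def time_resolved_decoding(epochs, labels):
    """Decode the stimulus label separately at each time point.
    epochs: (n_trials, n_channels, n_times). The first half of the trials
    trains a nearest-centroid classifier; the second half is tested."""
    epochs, labels = np.asarray(epochs, float), np.asarray(labels)
    n_trials, _, n_times = epochs.shape
    train = np.arange(n_trials) < n_trials // 2
    test = ~train
    acc = np.zeros(n_times)
    for t in range(n_times):
        Xtr, Xte = epochs[train, :, t], epochs[test, :, t]
        cents = {c: Xtr[labels[train] == c].mean(axis=0)
                 for c in np.unique(labels)}
        preds = [min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
                 for x in Xte]
        acc[t] = np.mean(np.array(preds) == labels[test])
    return acc
```

Time points where the classes are separable rise above chance, tracing out when the neural representation supports discrimination of the stimulus dimension.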
Affiliation(s)
- Kira Ashton
- Department of Neuroscience, American University, Washington, DC 20016, USA; Center for Neuroscience and Behavior, American University, Washington, DC 20016, USA
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany
- Charles A Nelson
- Boston Children's Hospital, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA; Graduate School of Education, Harvard, Cambridge, MA 02138, USA
- Richard N Aslin
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA; Psychological Sciences Department, University of Connecticut, Storrs, CT 06269, USA; Department of Psychology, Yale University, New Haven, CT 06511, USA; Yale Child Study Center, School of Medicine, New Haven, CT 06519, USA
- Laurie Bayet
- Department of Neuroscience, American University, Washington, DC 20016, USA; Center for Neuroscience and Behavior, American University, Washington, DC 20016, USA
12. Sihvonen AJ, Pitkäniemi A, Leo V, Soinila S, Särkämö T. Resting-state language network neuroplasticity in post-stroke music listening: A randomized controlled trial. Eur J Neurosci 2021; 54:7886-7898. PMID: 34763370; DOI: 10.1111/ejn.15524.
Abstract
Recent evidence suggests that post-stroke vocal music listening can aid language recovery, but the network-level functional neuroplasticity mechanisms of this effect are unknown. Here, we sought to determine if improved language recovery observed after post-stroke listening to vocal music is driven by changes in longitudinal resting-state functional connectivity within the language network. Using data from a single-blind randomized controlled trial on stroke patients (N = 38), we compared the effects of daily listening to self-selected vocal music, instrumental music and audio books on changes of the resting-state functional connectivity within the language network and their correlation to improved language skills and verbal memory during the first 3 months post-stroke. From the acute to the 3-month stage, the vocal music and instrumental music groups increased functional connectivity between a cluster comprising the left inferior parietal areas and the language network more than the audio book group. However, the functional connectivity increase correlated with improved verbal memory only in the vocal music group. This study shows that listening to vocal music post-stroke promotes recovery of verbal memory by inducing changes in longitudinal functional connectivity in the language network. Our results are consistent with the variable neurodisplacement theory underpinning aphasia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Centre for Clinical Research, The University of Queensland, Brisbane, Queensland, Australia
- Anni Pitkäniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
13. Murai S, Yang AN, Hiryu S, Kobayasi KI. Music in Noise: Neural Correlates Underlying Noise Tolerance in Music-Induced Emotion. Cereb Cortex Commun 2021; 2:tgab061. PMID: 34746792; PMCID: PMC8564766; DOI: 10.1093/texcom/tgab061.
Abstract
Music can be experienced in various acoustic qualities. In this study, we investigated how the acoustic quality of music can influence strong emotional experiences, such as musical chills, and the associated neural activity. The music's acoustic quality was controlled by adding noise to musical pieces. Participants listened to clear and noisy musical pieces and pressed a button when they experienced chills. We estimated neural activity in response to chills under both clear and noisy conditions using functional magnetic resonance imaging (fMRI). The behavioral data revealed that, compared with the clear condition, the noisy condition dramatically decreased both the number and the duration of chills. The fMRI results showed that under both noisy and clear conditions the supplementary motor area, insula, and superior temporal gyrus were similarly activated when participants experienced chills. The involvement of these brain regions may be crucial for music-induced emotional processes under the noisy as well as the clear condition. In addition, we found a decrease in the activation of the right superior temporal sulcus when experiencing chills under the noisy condition, which suggests that music-induced emotional processing is sensitive to acoustic quality.
Affiliation(s)
- Shota Murai
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Ae Na Yang
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Shizuko Hiryu
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Kohta I Kobayasi
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan

14
Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. PMID: 33596723; PMCID: PMC8285655; DOI: 10.1152/jn.00588.2020.
Abstract
Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that music-selective neural populations are a fundamental and widespread property of the human brain.
NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts

15
Pitch direction on the perception of major and minor modes. Atten Percept Psychophys 2020; 83:399-414. PMID: 33230730; DOI: 10.3758/s13414-020-02198-6.
Abstract
One factor affecting the qualia of music perception is the major/minor mode distinction. Major modes are perceived as more arousing, happier, positive, brighter, and less awkward than minor modes. This difference in emotionality of modes is also affected by pitch direction, with ascending pitch associated with positive affect and decreasing pitch with negative affect. The present study examined whether pitch direction influenced the identification of major versus minor musical modes. In six experiments, participants were familiarized with ascending and descending major and minor modes. We then played ascending and descending scales or simple eight-note melodies and asked listeners to identify the mode (major or minor). Identification of mode was moderated by pitch direction: major modes were identified more accurately when played with ascending pitch, and minor modes were identified better when played with descending pitch. Additionally, we replicated the difference in emotional affect between major and minor modes. The crossover pattern in mode identification may result from dual activation of positive and negative constructs, under specific combinations of mode and pitch direction.
16
Jasmin K, Dick F, Stewart L, Tierney AT. Altered functional connectivity during speech perception in congenital amusia. eLife 2020; 9:e53539. PMID: 32762842; PMCID: PMC7449693; DOI: 10.7554/elife.53539.
Abstract
Individuals with congenital amusia have a lifelong history of unreliable pitch processing. Accordingly, they downweight pitch cues during speech perception and instead rely on other dimensions such as duration. We investigated the neural basis for this strategy. During fMRI, individuals with amusia (N = 15) and controls (N = 15) read sentences where a comma indicated a grammatical phrase boundary. They then heard two sentences spoken that differed only in pitch and/or duration cues and selected the best match for the written sentence. Prominent reductions in functional connectivity were detected in the amusia group between left prefrontal language-related regions and right hemisphere pitch-related regions, which reflected the between-group differences in cue weights in the same groups of listeners. Connectivity differences between these regions were not present during a control task. Our results indicate that the reliability of perceptual dimensions is linked with functional connectivity between frontal and perceptual regions and suggest a compensatory mechanism.
Affiliation(s)
- Kyle Jasmin
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- UCL Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Frederic Dick
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Department of Experimental Psychology, University College London, London, United Kingdom
- Lauren Stewart
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Adam Taylor Tierney
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom

17
Lacey S, Nguyen J, Schneider P, Sathian K. Crossmodal Visuospatial Effects on Auditory Perception of Musical Contour. Multisens Res 2020; 34:113-127. PMID: 33706275; DOI: 10.1163/22134808-bja10034.
Abstract
The crossmodal correspondence between auditory pitch and visuospatial elevation (in which high- and low-pitched tones are associated with high and low spatial elevation respectively) has been proposed as the basis for Western musical notation. One implication of this is that music perception engages visuospatial processes and may not be exclusively auditory. Here, we investigated how music perception is influenced by concurrent visual stimuli. Participants listened to unfamiliar five-note musical phrases with four kinds of pitch contour (rising, falling, rising-falling, or falling-rising), accompanied by incidental visual contours that were either congruent (e.g., auditory rising/visual rising) or incongruent (e.g., auditory rising/visual falling) and judged whether the final note of the musical phrase was higher or lower in pitch than the first. Response times for the auditory judgment were significantly slower for incongruent compared to congruent trials, i.e., there was a congruency effect, even though the visual contours were incidental to the auditory task. These results suggest that music perception, although generally regarded as an auditory experience, may actually be multisensory in nature.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- James Nguyen
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Peter Schneider
- Department of Neuroradiology, Heidelberg Medical School, Heidelberg, Germany
- Department of Neurology, Heidelberg Medical School, Heidelberg, Germany
- K Sathian
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Psychology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA

18
Recursive music elucidates neural mechanisms supporting the generation and detection of melodic hierarchies. Brain Struct Funct 2020; 225:1997-2015. PMID: 32591927; PMCID: PMC7473971; DOI: 10.1007/s00429-020-02105-7.
Abstract
The ability to generate complex hierarchical structures is a crucial component of human cognition, which can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not specifically target the representation of the rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: the Recursive rule, which generates new hierarchical levels at each step; the Iterative rule, which adds tones within a fixed hierarchical level without generating new levels; and a control (Repetition) rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules while participants imagined the fourth step after listening to the third (generation phase), and while they listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step under the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of the hippocampus and inferior frontal gyrus may reflect processing of unexpected melodic sequences, rather than hierarchy generation per se.
19
Knorr FG, Neukam PT, Fröhner JH, Mohr H, Smolka MN, Marxen M. A comparison of fMRI and behavioral models for predicting inter-temporal choices. Neuroimage 2020; 211:116634. DOI: 10.1016/j.neuroimage.2020.116634.
20
The Rapid Emergence of Musical Pitch Structure in Human Cortex. J Neurosci 2020; 40:2108-2118. PMID: 32001611; DOI: 10.1523/jneurosci.1399-19.2020.
Abstract
In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low-to-high level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.
SIGNIFICANCE STATEMENT: Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones. Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
21
Koshimori Y, Thaut MH. New Perspectives on Music in Rehabilitation of Executive and Attention Functions. Front Neurosci 2019; 13:1245. PMID: 31803013; PMCID: PMC6877665; DOI: 10.3389/fnins.2019.01245.
Abstract
Modern music therapy, starting around the middle of the twentieth century, was primarily conceived to promote emotional well-being and to facilitate social group association and integration. It was therefore rooted mostly in social science concepts. More recently, music as therapy has moved decidedly toward the perspectives of neuroscience, facilitated by the advent of neuroimaging techniques that help uncover, within the brain processes underlying music perception, cognition, and production, the therapeutic mechanisms serving non-musical goals. In this paper, we focus on executive function (EF) and attentional processes (AP), which are central to cognitive rehabilitation efforts. To this end, we summarize existing behavioral, neuroimaging, and neurophysiological studies in musicians, non-musicians, and clinical populations. Musical improvisation and instrumental playing may have some potential for EF/AP stimulation and neurorehabilitation. However, more neuroimaging studies are needed to investigate the neural mechanisms of active musical performance. Furthermore, more randomized clinical trials combined with neuroimaging techniques are warranted to demonstrate the specific efficacy of, and neuroplasticity induced by, music-based interventions.
Affiliation(s)
- Yuko Koshimori
- Music and Health Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
22
Arco JE, González-García C, Díaz-Gutiérrez P, Ramírez J, Ruz M. Influence of activation pattern estimates and statistical significance tests in fMRI decoding analysis. J Neurosci Methods 2018; 308:248-260. PMID: 30352691; DOI: 10.1016/j.jneumeth.2018.06.017.
Abstract
The use of Multi-Voxel Pattern Analysis (MVPA) has increased considerably in recent functional magnetic resonance imaging (fMRI) studies. A crucial step is the choice of a method for estimating responses. However, a systematic comparison of the different estimation alternatives and their adequacy for the predominant experimental designs has been missing. In the current study we compared three pattern estimation methods: Least-Squares Unitary (LSU), based on run-wise estimation, and Least-Squares All (LSA) and Least-Squares Separate (LSS), which rely on trial-wise estimation. We compared the efficiency of these methods in an experiment where sustained activity needed to be isolated from zero-duration events, as well as in a block design and in an event-related design. We also evaluated the sensitivity of the t-test in comparison with two non-parametric methods based on permutation testing: one proposed by Stelzer et al. (2013), equivalent to performing a permutation in each voxel separately, and Threshold-Free Cluster Enhancement. LSS proved the most accurate approach for addressing the large overlap of signal among close events in the event-related designs. We found a larger sensitivity of Stelzer's method in all settings, especially in the event-related designs, where voxels close to surpassing the statistical threshold with the other approaches were now marked as informative regions. Our results provide evidence that LSS is the most accurate approach for unmixing events with different durations and large overlap of signal, consistent with previous studies showing that LSS handles large collinearity better than other methods. Moreover, Stelzer's method potentiates this better estimation with its larger sensitivity.
Affiliation(s)
- Juan E Arco
- Mind, Brain and Behavior Research Centre (CIMCYC), Spain
- Carlos González-García
- Mind, Brain and Behavior Research Centre (CIMCYC), Spain
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, 9000 Ghent, Belgium
- Javier Ramírez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada 18071, Spain
- María Ruz
- Mind, Brain and Behavior Research Centre (CIMCYC), Spain

23
Karpati FJ, Giacosa C, Foster NEV, Penhune VB, Hyde KL. Structural Covariance Analysis Reveals Differences Between Dancers and Untrained Controls. Front Hum Neurosci 2018; 12:373. PMID: 30319377; PMCID: PMC6167617; DOI: 10.3389/fnhum.2018.00373.
Abstract
Dancers and musicians differ in brain structure from untrained individuals. Structural covariance (SC) analysis can provide further insight into training-associated brain plasticity by evaluating interregional relationships in gray matter (GM) structure. The objectives of the present study were to compare SC of cortical thickness (CT) between expert dancers, expert musicians and untrained controls, as well as to examine the relationship between SC and performance on dance- and music-related tasks. A reduced correlation between CT in the left dorsolateral prefrontal cortex (DLPFC) and mean CT across the whole brain was found in the dancers compared to the controls, and a reduced correlation between these two CT measures was associated with higher performance on a dance video game task. This suggests that the left DLPFC is structurally decoupled in dancers and may be more strongly affected by local training-related factors than global factors in this group. This work provides a better understanding of structural brain connectivity and training-induced brain plasticity, as well as their interaction with behavior in dance and music.
Affiliation(s)
- Falisha J Karpati
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Faculty of Medicine, McGill University, Montreal, QC, Canada
- Chiara Giacosa
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Nicholas E V Foster
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Department of Psychology, Université de Montréal, Montreal, QC, Canada
- Virginia B Penhune
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Krista L Hyde
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Faculty of Medicine, McGill University, Montreal, QC, Canada
- Department of Psychology, Université de Montréal, Montreal, QC, Canada

24
Cognitive Load Changes during Music Listening and its Implication in Earcon Design in Public Environments: An fNIRS Study. Int J Environ Res Public Health 2018; 15:ijerph15102075. PMID: 30248908; PMCID: PMC6210363; DOI: 10.3390/ijerph15102075.
Abstract
A key for earcon design in public environments is to incorporate an individual's perceived level of cognitive load for better communication. This study aimed to examine the cognitive load changes required to perform a melodic contour identification task (CIT). While healthy college students (N = 16) were presented with five CITs, behavioral (reaction time and accuracy) and cerebral hemodynamic responses were measured using functional near-infrared spectroscopy. Our behavioral findings showed a gradual increase in cognitive load from CIT1 to CIT3, followed by an abrupt increase between CIT4 (i.e., listening to two concurrent melodic contours in an alternating manner and identifying the direction of the target contour, p < 0.001) and CIT5 (i.e., listening to two concurrent melodic contours in a divided manner and identifying the directions of both contours, p < 0.001). Cerebral hemodynamic responses showed a trend congruent with the behavioral findings. Specifically, in the frontopolar area (Brodmann's area 10), oxygenated hemoglobin increased significantly between CIT4 and CIT5 (p < 0.05) while the level of deoxygenated hemoglobin decreased. Altogether, the findings indicate that identifying the cognitive threshold for young adults (CIT5) and appropriately tuning the relationship between timbre and pitch contour can lower the perceived cognitive load and can thus be an effective design strategy for earcons in public environments.
25
Stevenage SV, Neil GJ, Parsons B, Humphreys A. A sound effect: Exploration of the distinctiveness advantage in voice recognition. Appl Cogn Psychol 2018; 32:526-536. PMID: 30333682; PMCID: PMC6175009; DOI: 10.1002/acp.3424.
Abstract
Two experiments are presented, which explore the presence of a distinctiveness advantage when recognising unfamiliar voices. In Experiment 1, distinctive voices were recognised significantly better, and with greater confidence, in a sequential same/different matching task compared with typical voices. These effects were replicated and extended in Experiment 2, as distinctive voices were recognised better even under challenging listening conditions imposed by nonsense sentences and temporal reversal. Taken together, the results aligned well with similar results when processing faces, and provided a useful point of comparison between voice and face processing.
Affiliation(s)
- Greg J. Neil
- School of Sport, Health and Social Sciences, Southampton Solent University, Southampton, UK
- Beth Parsons
- Department of Psychology, University of Winchester, Winchester, UK
- Abi Humphreys
- Department of Psychology, University of Southampton, Southampton, UK

26
Bhandari A, Gagne C, Badre D. Just above Chance: Is It Harder to Decode Information from Prefrontal Cortex Hemodynamic Activity Patterns? J Cogn Neurosci 2018; 30:1473-1498. PMID: 29877764; DOI: 10.1162/jocn_a_01291.
Abstract
The prefrontal cortex (PFC) is central to flexible, goal-directed cognition, and understanding its representational code is an important problem in cognitive neuroscience. In humans, multivariate pattern analysis (MVPA) of fMRI blood oxygenation level-dependent (BOLD) measurements has emerged as an important approach for studying neural representations. Many previous studies have implicitly assumed that MVPA of fMRI BOLD is just as effective in decoding information encoded in PFC neural activity as it is in visual cortex. However, MVPA studies of PFC have had mixed success. Here we estimate the base rate of decoding information from PFC BOLD activity patterns from a meta-analysis of published MVPA studies. We show that PFC has a significantly lower base rate (55.4%) than visual areas in occipital (66.6%) and temporal (71.0%) cortices and one that is close to chance levels. Our results have implications for the design and interpretation of MVPA studies of PFC and raise important questions about its functional organization.
Affiliation(s)
- David Badre
- Brown University
- Carney Institute for Brain Science, Providence, RI

27
Nie Y, Galvin JJ, Morikawa M, André V, Wheeler H, Fu QJ. Music and Speech Perception in Children Using Sung Speech. Trends Hear 2018; 22:2331216518766810. PMID: 29609496; PMCID: PMC5888806; DOI: 10.1177/2331216518766810.
Abstract
This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise, but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.
Affiliation(s)
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Michael Morikawa
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Victoria André
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Harley Wheeler
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California-Los Angeles, CA, USA

28
Stimulus-invariant auditory cortex threat encoding during fear conditioning with simple and complex sounds. Neuroimage 2017; 166:276-284. PMID: 29122722; PMCID: PMC5770332; DOI: 10.1016/j.neuroimage.2017.11.009.
Abstract
Learning to predict threat depends on amygdala plasticity and does not require auditory cortex (ACX) when threat predictors (conditioned stimuli, CS) are simple sine tones. However, ACX is required in rodents to learn from some naturally occurring CS. Yet, the precise function of ACX, and whether it differs for different CS types, is unknown. Here, we address how ACX encodes threat predictions during human fear conditioning using functional magnetic resonance imaging (fMRI) with multivariate pattern analysis. As in previous rodent work, CS+ and CS- were defined either by direction of frequency modulation (complex) or by frequency of pure tones (simple). In an instructed non-reinforcement context, different sets of simple and complex sounds were always presented without reinforcement (neutral sounds, NS). Threat encoding was measured by separation of fMRI response patterns induced by CS+/CS-, or similar NS1/NS2 pairs. We found that fMRI patterns in Heschl's gyrus encoded threat prediction over and above encoding the physical stimulus features also present in NS, i.e. CS+/CS- could be separated better than NS1/NS2. This was the case both for simple and complex CS. Furthermore, cross-prediction demonstrated that threat representations were similar for simple and complex CS, and thus unlikely to emerge from stimulus-specific top-down, or learning-induced, receptive field plasticity. Searchlight analysis across the entire ACX demonstrated further threat representations in a region including BA22 and BA42. However, in this region, patterns were distinct for simple and complex sounds, and could thus potentially arise from receptive field plasticity. Strikingly, across participants, individual size of Heschl's gyrus predicted strength of fear learning for complex sounds. Overall, our findings suggest that ACX represents threat predictions, and that Heschl's gyrus contains a threat representation that is invariant across physical stimulus categories.
Collapse
|
29
|
Casey MA. Music of the 7Ts: Predicting and Decoding Multivoxel fMRI Responses with Acoustic, Schematic, and Categorical Music Features. Front Psychol 2017; 8:1179. [PMID: 28769835 PMCID: PMC5509941 DOI: 10.3389/fpsyg.2017.01179] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2016] [Accepted: 06/28/2017] [Indexed: 11/26/2022] Open
Abstract
Underlying the experience of listening to music are parallel streams of auditory, categorical, and schematic qualia, whose representations and cortical organization remain largely unresolved. We collected high-field (7T) fMRI data in a music listening task, and analyzed the data using multivariate decoding and stimulus-encoding models. Twenty subjects participated in the experiment, which measured BOLD responses evoked by naturalistic listening to twenty-five music clips from five genres. Our first analysis applied machine classification to the multivoxel patterns that were evoked in temporal cortex. Results yielded above-chance levels for both stimulus identification and genre classification, cross-validated by holding out data from multiple stimuli during model training and then testing decoding performance on the held-out data. Genre model misclassifications were significantly correlated with those in a corresponding behavioral music categorization task, supporting the hypothesis that geometric properties of multivoxel pattern spaces underlie observed musical behavior. A second analysis employed a spherical searchlight regression analysis that predicted multivoxel pattern responses to music features representing melody and harmony across a large area of cortex. The resulting prediction-accuracy maps yielded significant clusters in the temporal, frontal, parietal, and occipital lobes, as well as in the parahippocampal gyrus and the cerebellum. These maps provide evidence in support of our hypothesis that geometric properties of music cognition are neurally encoded as multivoxel representational spaces. The maps also reveal a cortical topography that differentially encodes categorical and absolute-pitch information in distributed and overlapping networks, with smaller specialized regions that encode tonal music information in relative-pitch representations.
Collapse
Affiliation(s)
- Michael A Casey
- Bregman Music and Audio Lab, Computer Science and Music Departments, Dartmouth College, Hanover, NH, United States
| |
Collapse
|
30
|
Functional neuroanatomy of speech signal decoding in primary progressive aphasias. Neurobiol Aging 2017; 56:190-201. [PMID: 28571652 PMCID: PMC5476347 DOI: 10.1016/j.neurobiolaging.2017.04.026] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2017] [Revised: 04/26/2017] [Accepted: 04/28/2017] [Indexed: 01/01/2023]
Abstract
The pathophysiology of primary progressive aphasias remains poorly understood. Here, we addressed this issue using activation fMRI in a cohort of 27 patients with primary progressive aphasia (nonfluent, semantic, and logopenic variants) versus 15 healthy controls. Participants listened passively to sequences of spoken syllables in which we manipulated three key auditory speech signal characteristics: temporal regularity, phonemic spectral structure, and pitch sequence entropy. Relative to healthy controls, nonfluent variant patients showed reduced activation of medial Heschl's gyrus in response to any auditory stimulation and reduced activation of anterior cingulate to temporal irregularity. Semantic variant patients had relatively reduced activation of caudate and anterior cingulate in response to increased entropy. Logopenic variant patients showed reduced activation of posterior superior temporal cortex to phonemic spectral structure. Taken together, our findings suggest that impaired processing of core speech signal attributes may drive particular progressive aphasia syndromes and could index a generic physiological mechanism of reduced computational efficiency relevant to all these syndromes, with implications for development of new biomarkers and therapeutic interventions.
Collapse
|
31
|
Hou J, Song B, Chen ACN, Sun C, Zhou J, Zhu H, Beauchaine TP. Review on Neural Correlates of Emotion Regulation and Music: Implications for Emotion Dysregulation. Front Psychol 2017; 8:501. [PMID: 28421017 PMCID: PMC5376620 DOI: 10.3389/fpsyg.2017.00501] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2016] [Accepted: 03/16/2017] [Indexed: 12/15/2022] Open
Abstract
Previous studies have examined the neural correlates of emotion regulation and the neural changes that are evoked by music exposure. However, the link between music and emotion regulation is poorly understood. The objectives of this review are to (1) synthesize what is known about the neural correlates of emotion regulation and music-evoked emotions, and (2) consider the possibility of therapeutic effects of music on emotion dysregulation. Music-evoked emotions can modulate activities in both cortical and subcortical systems, and across cortical-subcortical networks. Functions within these networks are integral to generation and regulation of emotions. Since dysfunction in these networks is observed in numerous psychiatric disorders, a better understanding of neural correlates of music exposure may lead to more systematic and effective use of music therapy in emotion dysregulation.
Collapse
Affiliation(s)
- Jiancheng Hou
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Department of Radiology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, USA
| | - Bei Song
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Music Conservatory of Harbin, Harbin, China
| | - Andrew C N Chen
- Center for Higher Brain Functions and Institute for Brain Disorders, Capital Medical University, Beijing, China
| | - Changan Sun
- School of Education and Public Administration, Suzhou University of Science and Technology, Suzhou, China
| | - Jiaxian Zhou
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Haidong Zhu
- Department of Psychology, Shihezi University, Shihezi, China
| | | |
Collapse
|
32
|
Karpati FJ, Giacosa C, Foster NEV, Penhune VB, Hyde KL. Dance and music share gray matter structural correlates. Brain Res 2017; 1657:62-73. [PMID: 27923638 DOI: 10.1016/j.brainres.2016.11.029] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2016] [Revised: 11/23/2016] [Accepted: 11/25/2016] [Indexed: 01/31/2023]
Affiliation(s)
- Falisha J Karpati
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 Mont Royal, FAS, Département de psychologie, CP 6128, Succ. Centre Ville, Montréal, QC H3C 3J7, Canada; Faculty of Medicine, McGill University, 3605 Rue de la Montagne, Montreal, QC H3G 2M1, Canada.
| | - Chiara Giacosa
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 Mont Royal, FAS, Département de psychologie, CP 6128, Succ. Centre Ville, Montréal, QC H3C 3J7, Canada; Dept. of Psychology, Concordia University, 7141 Sherbrooke West, PY-146, Montreal, QC H4B 1R6, Canada.
| | - Nicholas E V Foster
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 Mont Royal, FAS, Département de psychologie, CP 6128, Succ. Centre Ville, Montréal, QC H3C 3J7, Canada; Dept. of Psychology, University of Montreal, Pavillon Marie-Victorin, 90 Avenue Vincent d'Indy, Montreal, QC H2V 2S9, Canada.
| | - Virginia B Penhune
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 Mont Royal, FAS, Département de psychologie, CP 6128, Succ. Centre Ville, Montréal, QC H3C 3J7, Canada; Dept. of Psychology, Concordia University, 7141 Sherbrooke West, PY-146, Montreal, QC H4B 1R6, Canada.
| | - Krista L Hyde
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 Mont Royal, FAS, Département de psychologie, CP 6128, Succ. Centre Ville, Montréal, QC H3C 3J7, Canada; Faculty of Medicine, McGill University, 3605 Rue de la Montagne, Montreal, QC H3G 2M1, Canada; Dept. of Psychology, University of Montreal, Pavillon Marie-Victorin, 90 Avenue Vincent d'Indy, Montreal, QC H2V 2S9, Canada.
| |
Collapse
|
33
|
Golden HL, Clark CN, Nicholas JM, Cohen MH, Slattery CF, Paterson RW, Foulkes AJM, Schott JM, Mummery CJ, Crutch SJ, Warren JD. Music Perception in Dementia. J Alzheimers Dis 2017; 55:933-949. [PMID: 27802226 PMCID: PMC5260961 DOI: 10.3233/jad-160359] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Despite much recent interest in music and dementia, music perception has not been widely studied across dementia syndromes using an information processing approach. Here we addressed this issue in a cohort of 30 patients representing major dementia syndromes of typical Alzheimer's disease (AD, n = 16), logopenic aphasia (LPA, an Alzheimer variant syndrome; n = 5), and progressive nonfluent aphasia (PNFA; n = 9) in relation to 19 healthy age-matched individuals. We designed a novel neuropsychological battery to assess perception of musical patterns in the dimensions of pitch and temporal information (requiring detection of notes that deviated from the established pattern based on local or global sequence features) and musical scene analysis (requiring detection of a familiar tune within polyphonic harmony). Performance on these tests was referenced to generic auditory (timbral) deviance detection and recognition of familiar tunes and adjusted for general auditory working memory performance. Relative to healthy controls, patients with AD and LPA had group-level deficits of global pitch (melody contour) processing, while patients with PNFA as a group had deficits of local (interval) as well as global pitch processing. There was substantial individual variation within syndromic groups. Taking working memory performance into account, no specific deficits of musical temporal processing, timbre processing, musical scene analysis, or tune recognition were identified. The findings suggest that particular aspects of music perception such as pitch pattern analysis may open a window on the processing of information streams in major dementia syndromes. The potential selectivity of musical deficits for particular dementia syndromes and particular dimensions of processing warrants further systematic investigation.
Collapse
Affiliation(s)
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- London School of Hygiene and Tropical Medicine, University of London, London, United Kingdom
| | - Miriam H Cohen
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Catherine F Slattery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Ross W Paterson
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Alexander J M Foulkes
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jonathan M Schott
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
| |
Collapse
|
34
|
Salmi J, Koistinen OP, Glerean E, Jylänki P, Vehtari A, Jääskeläinen IP, Mäkelä S, Nummenmaa L, Nummi-Kuisma K, Nummi I, Sams M. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex. Neuroimage 2016; 157:108-117. [PMID: 27932074 DOI: 10.1016/j.neuroimage.2016.12.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2016] [Revised: 11/02/2016] [Accepted: 12/03/2016] [Indexed: 11/25/2022] Open
Abstract
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, while other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively.
Collapse
Affiliation(s)
- Juha Salmi
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland; Advanced Magnetic Imaging (AMI) Centre, School of Science, Aalto University, Finland; Institute of Behavioural Sciences, Division of Cognitive and Neuropsychology, University of Helsinki, Finland
| | - Olli-Pekka Koistinen
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Enrico Glerean
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Pasi Jylänki
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Aki Vehtari
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Iiro P Jääskeläinen
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Sasu Mäkelä
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Lauri Nummenmaa
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland; Turku PET Centre, University of Turku, Finland
| | | | - Ilari Nummi
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland
| | - Mikko Sams
- Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, Finland.
| |
Collapse
|
35
|
Lee YS, Zreik JT, Hamilton RH. Patterns of neural activity predict picture-naming performance of a patient with chronic aphasia. Neuropsychologia 2016; 94:52-60. [PMID: 27864027 DOI: 10.1016/j.neuropsychologia.2016.11.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2016] [Revised: 10/06/2016] [Accepted: 11/13/2016] [Indexed: 10/20/2022]
Abstract
Naming objects represents a substantial challenge for patients with chronic aphasia. This could be in part because the reorganized compensatory language networks of persons with aphasia may be less stable than the intact language systems of healthy individuals. Here, we hypothesized that the degree of stability would be instantiated by spatially differential neural patterns rather than either increased or diminished amplitudes of neural activity within a putative compensatory language system. We recruited a chronic aphasic patient (KL; 66-year-old male) who exhibited a semantic deficit (e.g., often said "milk" for "cow" and "pillow" for "blanket"). Over the course of four behavioral sessions involving a naming task performed in a mock scanner, we identified visual objects that yielded an approximately 50% success rate. We then conducted two fMRI sessions in which the patient performed a naming task for multiple exemplars of those objects. Multivoxel pattern analysis (MVPA) searchlight revealed differential activity patterns associated with correct and incorrect trials throughout intact brain regions. The most robust and largest cluster was found in the right occipito-temporal cortex encompassing fusiform cortex, lateral occipital cortex (LOC), and middle occipital cortex, which may account for the patient's propensity for semantic naming errors. None of these areas were found by a conventional univariate analysis. By using an alternative approach, we extend current evidence for compensatory naming processes that operate through spatially differential patterns within the reorganized language system.
Collapse
Affiliation(s)
- Yune Sang Lee
- Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, USA.
| | - Jihad T Zreik
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
| | - Roy H Hamilton
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
36
|
Rosemann S, Brunner F, Kastrup A, Fahle M. Musical, visual and cognitive deficits after middle cerebral artery infarction. eNeurologicalSci 2016; 6:25-32. [PMID: 29260010 PMCID: PMC5721573 DOI: 10.1016/j.ensci.2016.11.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Revised: 07/28/2016] [Accepted: 11/03/2016] [Indexed: 11/24/2022] Open
Abstract
The perception of music can be impaired after a stroke. This dysfunction is called amusia, and amusia patients often also show deficits in visual abilities, language, memory, learning, and attention. The current study investigated whether deficits in music perception are selective for musical input or generalize to other perceptual abilities. Additionally, we tested the hypothesis that deficits in working memory or attention account for impairments in music perception. Twenty stroke patients with small infarctions in the supply area of the middle cerebral artery were investigated with tests for music and visual perception, categorization, neglect, working memory and attention. Two amusia patients with selective deficits in music perception and pronounced lesions were identified. Working memory and attention deficits were highly correlated across the patient group, but no correlation with musical abilities was obtained. Lesion analysis revealed that lesions in small areas of the putamen and globus pallidus were associated with a rhythm perception deficit. We conclude that neither a general perceptual deficit nor a minor domain-general deficit can account for impairments in the music perception task. However, we found support for the modular organization of the music perception network, with brain areas specialized for musical functions, as musical deficits were not correlated with any other impairment.
Collapse
Affiliation(s)
| | | | | | - Manfred Fahle
- Department of Human-Neurobiology, University of Bremen, Germany
| |
Collapse
|
37
|
Woolgar A, Jackson J, Duncan J. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis. J Cogn Neurosci 2016; 28:1433-54. [PMID: 27315269 DOI: 10.1162/jocn_a_00981] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task.
Collapse
Affiliation(s)
- Alexandra Woolgar
- Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
| | - Jade Jackson
- Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
| | - John Duncan
- MRC Cognition and Brain Sciences Unit, Cambridge, UK; University of Oxford
| |
Collapse
|
38
|
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907 PMCID: PMC4904015 DOI: 10.3389/fnagi.2016.00134] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 05/27/2016] [Indexed: 01/16/2023] Open
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger adult group (p < 0.05 for Δpre–on task; p < 0.01 for Δon–post task) but not in the older adult group (n.s. for Δpre–on task; n.s. for Δon–post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
Collapse
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
| | - Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
| |
Collapse
|
39
|
The sound of emotions-Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782 DOI: 10.1016/j.neubiorev.2016.05.002] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Revised: 05/01/2016] [Accepted: 05/04/2016] [Indexed: 12/15/2022]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
Collapse
|
40
|
Sikka R, Cuddy LL, Johnsrude IS, Vanstone AD. An fMRI comparison of neural activity associated with recognition of familiar melodies in younger and older adults. Front Neurosci 2015; 9:356. [PMID: 26500480 PMCID: PMC4594019 DOI: 10.3389/fnins.2015.00356] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Accepted: 09/17/2015] [Indexed: 01/16/2023] Open
Abstract
Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks, owing to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults.
Collapse
Affiliation(s)
- Ritu Sikka
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
| | - Lola L. Cuddy
- Music Cognition Lab, Department of Psychology, Queen's University, Kingston, ON, Canada
| | - Ingrid S. Johnsrude
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Cognitive Neuroscience of Communication and Hearing, Department of Psychology, Queen's University, Kingston, ON, Canada
| | - Ashley D. Vanstone
- Music Cognition Lab, Department of Psychology, Queen's UniversityKingston, ON, Canada
| |
Collapse
|
41
|
The Mismatch Negativity: An Indicator of Perception of Regularities in Music. Behav Neurol 2015; 2015:469508. [PMID: 26504352 PMCID: PMC4609411 DOI: 10.1155/2015/469508] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2015] [Revised: 07/23/2015] [Accepted: 07/26/2015] [Indexed: 11/17/2022] Open
Abstract
This paper reviews music research using Mismatch Negativity (MMN). MMN is a deviation-specific component of the auditory event-related potential (ERP), which detects a deviation between a sound and an internal representation (e.g., memory trace). Recent studies have expanded the notion and the paradigms of MMN to higher-order music processing such as that involving short melodies, harmonic chords, and musical syntax. In this vein, we first reviewed the evolution of MMN from sound to music and then mainly compared the differences in MMN features between musicians and nonmusicians, followed by the discussion of the potential roles of the training effect and the natural exposure in MMN. Since MMN can serve as an index of neural plasticity, it can be widely used in clinical and other applied areas, such as detecting music preference in newborns or assessing the integrity of the central auditory system in hearing disorders. Finally, we pointed out some open questions and further directions. Current music perception research using MMN has mainly focused on relatively low hierarchical structure of music perception. To fully understand the neural substrates underlying processing of regularities in music, it is important and beneficial to combine MMN with other experimental paradigms such as early right-anterior negativity (ERAN).
Collapse
|
42
|
Lee YS, Peelle JE, Kraemer D, Lloyd S, Granger R. Multivariate sensitivity to voice during auditory categorization. J Neurophysiol 2015; 114:1819-26. [PMID: 26245316 DOI: 10.1152/jn.00407.2014] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Accepted: 07/31/2015] [Indexed: 11/22/2022] Open
Abstract
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.
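The multivariate pattern classification described here can be illustrated with a minimal Haxby-style correlation classifier, a common MVPA baseline. This is a generic sketch under assumed inputs, not the authors' signal-detection pipeline; all names are hypothetical. Each left-out activity pattern is assigned the category whose mean training pattern it correlates with most strongly:

```python
import numpy as np

def correlation_mvpa(patterns, labels):
    """Leave-one-out correlation classifier over multivoxel activity patterns.

    patterns: (n_samples, n_voxels) array of activity patterns
    labels:   (n_samples,) integer category labels (e.g., voice vs. nonvoice)
    Returns leave-one-out classification accuracy: each held-out pattern is
    assigned the category whose mean training pattern it correlates with best.
    """
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    correct = 0
    for i in range(n):
        train = np.ones(n, dtype=bool)
        train[i] = False                       # hold out sample i
        cats = np.unique(labels[train])
        # mean pattern ("template") per category, from training samples only
        templates = [patterns[train & (labels == c)].mean(axis=0) for c in cats]
        r = [np.corrcoef(patterns[i], t)[0, 1] for t in templates]
        if cats[int(np.argmax(r))] == labels[i]:
            correct += 1
    return correct / n
```

Above-chance accuracy in a region is then interpreted as that region carrying a distinct neural representation of the category distinction.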
Collapse
Affiliation(s)
- Yune Sang Lee
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri
| | - David Kraemer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri; Department of Education, Dartmouth College, Hanover, New Hampshire
| | - Samuel Lloyd
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
| | - Richard Granger
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire
| |
Collapse
|
43
|
Brown RM, Zatorre RJ, Penhune VB. Expert music performance: cognitive, neural, and developmental bases. Prog Brain Res 2015; 217:57-86. [DOI: 10.1016/bs.pbr.2014.11.021] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
44
|
|
45
|
Axelrod V. Minimizing bugs in cognitive neuroscience programming. Front Psychol 2014; 5:1435. [PMID: 25566120 PMCID: PMC4269119 DOI: 10.3389/fpsyg.2014.01435] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Accepted: 11/24/2014] [Indexed: 11/30/2022] Open
Affiliation(s)
- Vadim Axelrod
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; Institute of Cognitive Neuroscience, University College London, London, UK
| |
Collapse
|
46
|
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers. Brain Cogn 2014; 91:35-44. [PMID: 25222292 DOI: 10.1016/j.bandc.2014.08.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Revised: 06/20/2014] [Accepted: 08/10/2014] [Indexed: 11/21/2022]
Abstract
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory, as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e., musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10 s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences comprised six elements, white squares or tones, which were low, middle, or high with respect to vertical screen position or pitch, respectively (presentation duration: 1.5 s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle, or high "notes" indicating low, middle, or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Although identical visuospatial stimuli were applied, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas, exceeding the auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain thus showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms.
Collapse
|
47
|
Thaut MH, Trimarchi PD, Parsons LM. Human brain basis of musical rhythm perception: common and distinct neural substrates for meter, tempo, and pattern. Brain Sci 2014; 4:428-52. [PMID: 24961770 PMCID: PMC4101486 DOI: 10.3390/brainsci4020428] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2014] [Revised: 05/26/2014] [Accepted: 05/30/2014] [Indexed: 11/24/2022] Open
Abstract
Rhythm as the time structure of music is composed of distinct temporal components such as pattern, meter, and tempo. Each feature requires different computational processes: meter involves representing repeating cycles of strong and weak beats; pattern involves representing intervals at each local time point which vary in length across segments and are linked hierarchically; and tempo requires representing frequency rates of underlying pulse structures. We explored whether distinct rhythmic elements engage different neural mechanisms by recording brain activity of adult musicians and non-musicians with positron emission tomography (PET) as they made covert same-different discriminations of (a) pairs of rhythmic, monotonic tone sequences representing changes in pattern, tempo, and meter, and (b) pairs of isochronous melodies. Common to pattern, meter, and tempo tasks were focal activities in right, or bilateral, areas of frontal, cingulate, parietal, prefrontal, temporal, and cerebellar cortices. Meter processing alone activated areas in right prefrontal and inferior frontal cortex associated with more cognitive and abstract representations. Pattern processing alone recruited right cortical areas involved in different kinds of auditory processing. Tempo processing alone engaged mechanisms subserving somatosensory and premotor information (e.g., posterior insula, postcentral gyrus). Melody produced activity different from the rhythm conditions (e.g., right anterior insula and various cerebellar areas). These exploratory findings suggest the outlines of some distinct neural components underlying the components of rhythmic structure.
Collapse
Affiliation(s)
- Michael H Thaut
- Center for Biomedical Research in Music, Colorado State University, Ft. Collins, CO 80523, USA.
| | | | | |
Collapse
|
48
|
Särkämö T, Ripollés P, Vepsäläinen H, Autti T, Silvennoinen HM, Salli E, Laitinen S, Forsblom A, Soinila S, Rodríguez-Fornells A. Structural changes induced by daily music listening in the recovering brain after middle cerebral artery stroke: a voxel-based morphometry study. Front Hum Neurosci 2014; 8:245. [PMID: 24860466 PMCID: PMC4029020 DOI: 10.3389/fnhum.2014.00245] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Accepted: 04/03/2014] [Indexed: 12/28/2022] Open
Abstract
Music is a highly complex and versatile stimulus for the brain that engages many temporal, frontal, parietal, cerebellar, and subcortical areas involved in auditory, cognitive, emotional, and motor processing. Regular musical activities have been shown to effectively enhance the structure and function of many brain areas, making music a potential tool also in neurological rehabilitation. In our previous randomized controlled study, we found that listening to music on a daily basis can improve cognitive recovery and improve mood after an acute middle cerebral artery stroke. Extending this study, a voxel-based morphometry (VBM) analysis utilizing cost function masking was performed on the acute and 6-month post-stroke stage structural magnetic resonance imaging data of the patients (n = 49) who either listened to their favorite music [music group (MG), n = 16] or verbal material [audio book group (ABG), n = 18] or did not receive any listening material [control group (CG), n = 15] during the 6-month recovery period. Although all groups showed significant gray matter volume (GMV) increases from the acute to the 6-month stage, there was a specific network of frontal areas [left and right superior frontal gyrus (SFG), right medial SFG] and limbic areas [left ventral/subgenual anterior cingulate cortex (SACC) and right ventral striatum (VS)] in patients with left hemisphere damage in which the GMV increases were larger in the MG than in the ABG and in the CG. Moreover, the GM reorganization in the frontal areas correlated with enhanced recovery of verbal memory, focused attention, and language skills, whereas the GM reorganization in the SACC correlated with reduced negative mood. This study adds on previous results, showing that music listening after stroke not only enhances behavioral recovery, but also induces fine-grained neuroanatomical changes in the recovering brain.
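At its core, the group comparison in a VBM analysis is a mass-univariate test across voxels. Below is a minimal numpy sketch of a two-sample t-test on per-subject gray-matter-volume change maps; the names are hypothetical, and real VBM pipelines add segmentation, smoothing, cost-function masking (as in this study), and multiple-comparison correction:

```python
import numpy as np

def voxelwise_group_ttest(change_a, change_b):
    """Mass-univariate two-sample t-test on per-voxel GMV change maps
    (e.g., 6-month minus acute stage), one map per subject per group.

    change_a, change_b: (n_subjects, n_voxels) arrays for the two groups.
    Returns the per-voxel Student's t statistic (group A minus group B);
    large positive values mark voxels where group A gained more volume.
    """
    a = np.asarray(change_a, dtype=float)
    b = np.asarray(change_b, dtype=float)
    na, nb = len(a), len(b)
    mean_diff = a.mean(axis=0) - b.mean(axis=0)
    # pooled within-group variance with na + nb - 2 degrees of freedom
    sp2 = ((na - 1) * a.var(axis=0, ddof=1) + (nb - 1) * b.var(axis=0, ddof=1)) / (na + nb - 2)
    return mean_diff / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
```

Thresholding the resulting t map (after appropriate correction) yields the clusters of group-specific structural change reported in studies like this one.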
Collapse
Affiliation(s)
- Teppo Särkämö
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre of Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
| | - Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, University of Barcelona, Barcelona, Spain
| | - Henna Vepsäläinen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
| | - Taina Autti
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | - Heli M Silvennoinen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | - Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | | | - Anita Forsblom
- Department of Music, University of Jyväskylä, Jyväskylä, Finland
| | - Seppo Soinila
- Department of Neurology, Turku University Hospital, Turku, Finland
| | - Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| |
Collapse
|
49
|
Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex. J Neurosci 2014; 33:19451-69. [PMID: 24336712 DOI: 10.1523/jneurosci.2880-13.2013] [Citation(s) in RCA: 101] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.
Collapse
|
50
|
Neural substrates of interactive musical improvisation: an FMRI study of 'trading fours' in jazz. PLoS One 2014; 9:e88665. [PMID: 24586366 PMCID: PMC3929604 DOI: 10.1371/journal.pone.0088665] [Citation(s) in RCA: 59] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2013] [Accepted: 01/14/2014] [Indexed: 11/19/2022] Open
Abstract
Interactive generative musical performance provides a suitable model for communication because, like natural linguistic discourse, it involves an exchange of ideas that is unpredictable, collaborative, and emergent. Here we show that interactive improvisation between two musicians is characterized by activation of perisylvian language areas linked to processing of syntactic elements in music, including inferior frontal gyrus and posterior superior temporal gyrus, and deactivation of angular gyrus and supramarginal gyrus, brain structures directly implicated in semantic processing of language. These findings support the hypothesis that musical discourse engages language areas of the brain specialized for processing of syntax but in a manner that is not contingent upon semantic processing. Therefore, we argue that neural regions for syntactic processing are not domain-specific for language but instead may be domain-general for communication.
Collapse
|