1. Fedorenko E, Ivanova AA, Regev TI. The language network as a natural kind within the broader landscape of the human brain. Nat Rev Neurosci 2024;25:289-312. PMID: 38609551. DOI: 10.1038/s41583-024-00802-4.
Abstract
Language behaviour is complex, but neuroscientific evidence disentangles it into distinct components supported by dedicated brain areas or networks. In this Review, we describe the 'core' language network, which includes left-hemisphere frontal and temporal areas, and show that it is strongly interconnected, independent of input and output modalities, causally important for language and language-selective. We discuss evidence that this language network plausibly stores language knowledge and supports core linguistic computations related to accessing words and constructions from memory and combining them to interpret (decode) or generate (encode) linguistic messages. We emphasize that the language network works closely with, but is distinct from, both lower-level - perceptual and motor - mechanisms and higher-level systems of knowledge and reasoning. The perceptual and motor mechanisms process linguistic signals, but, in contrast to the language network, are sensitive only to these signals' surface properties, not their meanings; the systems of knowledge and reasoning (such as the system that supports social reasoning) are sometimes engaged during language use but are not language-selective. This Review lays a foundation both for in-depth investigations of these different components of the language processing pipeline and for probing inter-component interactions.
Affiliation(s)
- Evelina Fedorenko
  - Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
  - The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Anna A Ivanova
  - School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Tamar I Regev
  - Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
2. Shan T, Cappelloni MS, Maddox RK. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024;14:789. PMID: 38191488. PMCID: PMC10774448. DOI: 10.1038/s41598-023-50438-0.
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar but with remaining differences. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not subcortical level.
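Both derivation methods described above boil down to regressing continuous EEG against a stimulus-derived regressor. The sketch below illustrates the general idea with a frequency-domain least-squares deconvolution; it is not the authors' actual pipeline, and the event train, kernel shape, noise level, and regularization constant are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # samples of simulated "EEG"

# Hypothetical stimulus regressor: a sparse train of acoustic events.
# (The study's two methods differ in how this regressor is built, e.g.
# rectified audio vs. the output of an auditory-periphery model.)
regressor = (rng.random(n) < 0.01).astype(float)

# Toy "brainstem response" kernel; simulated EEG = regressor * kernel + noise.
t = np.arange(30)
kernel = np.exp(-t / 5.0) * np.sin(2 * np.pi * t / 10.0)
eeg = np.convolve(regressor, kernel)[:n] + 0.05 * rng.standard_normal(n)

# Least-squares deconvolution in the frequency domain:
# h = IFFT( E * conj(R) / (|R|^2 + eps) ), a regularized Wiener-style estimate.
R = np.fft.rfft(regressor, n)
E = np.fft.rfft(eeg, n)
h = np.fft.irfft(E * np.conj(R) / (np.abs(R) ** 2 + 1e-6), n)[: len(kernel)]
```

The recovered `h` closely tracks the true kernel; deriving such responses separately from music and speech regressors and comparing them loosely mirrors the subcortical comparison described above.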
Affiliation(s)
- Tong Shan
  - Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
  - Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
  - Center for Visual Science, University of Rochester, Rochester, NY, USA
- Madeline S Cappelloni
  - Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
  - Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
  - Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ross K Maddox
  - Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
  - Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
  - Center for Visual Science, University of Rochester, Rochester, NY, USA
  - Department of Neuroscience, University of Rochester, Rochester, NY, USA
3. Peterson M, Braga RM, Floris DL, Nielsen JA. Evidence for a compensatory relationship between left- and right-lateralized brain networks. bioRxiv [Preprint] 2023:2023.12.08.570817. PMID: 38106130. PMCID: PMC10723397. DOI: 10.1101/2023.12.08.570817.
Abstract
The two hemispheres of the human brain are functionally asymmetric. At the network level, the language network exhibits left-hemisphere lateralization. While this asymmetry is widely replicated, the extent to which other functional networks demonstrate lateralization remains a subject of investigation. Additionally, it is unknown how the lateralization of one functional network may affect the lateralization of other networks within individuals. We quantified lateralization for each of 17 networks by computing the relative surface area on the left and right cerebral hemispheres. After examining the ecological, convergent, and external validity and test-retest reliability of this surface area-based measure of lateralization, we addressed two hypotheses across multiple datasets (Human Connectome Project, n = 553; Human Connectome Project-Development, n = 343; Natural Scenes Dataset, n = 8). First, we hypothesized that networks associated with language, visuospatial attention, and executive control would show the greatest lateralization. Second, we hypothesized that relationships between lateralized networks would follow a dependent relationship, such that greater left-lateralization of a network would be associated with greater right-lateralization of a different network within individuals, and that this pattern would be systematic across individuals. A language network was among the three networks identified as significantly left-lateralized, and attention and executive control networks were among the five networks identified as significantly right-lateralized. Next, correlation matrices, an exploratory factor analysis, and confirmatory factor analyses were used to test the second hypothesis and examine the organization of lateralized networks.
We found general support for a dependent relationship between highly left- and right-lateralized networks, meaning that across subjects, greater left lateralization of a given network (such as a language network) was linked to greater right lateralization of another network (such as a ventral attention/salience network) and vice versa. These results further our understanding of brain organization at the macro-scale network level in individuals, carrying specific relevance for neurodevelopmental conditions characterized by disruptions in lateralization such as autism and schizophrenia.
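The surface-area-based lateralization measure and the hypothesized left-right coupling can be illustrated with a toy computation. The index (L - R)/(L + R) is a conventional lateralization index, not necessarily the paper's exact definition of relative surface area, and every number below is fabricated for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 50

def lateralization_index(left_area, right_area):
    """(L - R) / (L + R): positive = left-lateralized, negative = right."""
    return (left_area - right_area) / (left_area + right_area)

# Fabricated surface areas (mm^2) for two networks in each hemisphere.
# A shared latent factor pushes "language" leftward while pushing
# "attention" rightward, mimicking the hypothesized dependent relationship.
factor = rng.standard_normal(n_subj)
lang_left = 6000 + 300 * factor + rng.normal(0, 50, n_subj)
lang_right = 6000 - 300 * factor + rng.normal(0, 50, n_subj)
attn_left = 5000 - 250 * factor + rng.normal(0, 50, n_subj)
attn_right = 5000 + 250 * factor + rng.normal(0, 50, n_subj)

li_language = lateralization_index(lang_left, lang_right)
li_attention = lateralization_index(attn_left, attn_right)

# Across subjects, stronger left-lateralization of language goes with
# stronger right-lateralization of attention: a negative correlation.
r = np.corrcoef(li_language, li_attention)[0, 1]
```

In this toy setup `r` comes out strongly negative, the signature of the dependent (compensatory) relationship the study reports.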
Affiliation(s)
- Madeline Peterson
  - Department of Psychology, Brigham Young University, Provo, UT 84602, USA
- Rodrigo M. Braga
  - Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA
- Dorothea L. Floris
  - Methods of Plasticity Research, Department of Psychology, University of Zurich, Zurich, Switzerland
  - Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Medical Center, Nijmegen, The Netherlands
- Jared A. Nielsen
  - Department of Psychology, Brigham Young University, Provo, UT 84602, USA
  - Neuroscience Center, Brigham Young University, Provo, UT 84604, USA
4. Turker S, Kuhnke P, Jiang Z, Hartwigsen G. Disrupted network interactions serve as a neural marker of dyslexia. Commun Biol 2023;6:1114. PMID: 37923809. PMCID: PMC10624919. DOI: 10.1038/s42003-023-05499-2.
Abstract
Dyslexia, a frequent learning disorder, is characterized by severe impairments in reading and writing and by hypoactivation in reading regions of the left hemisphere. Despite decades of research, it remains unclear whether the observed behavioural deficits are caused by aberrant network interactions during reading and whether differences in functional activation and connectivity are directly related to reading performance. Here we provide a comprehensive characterization of reading-related brain connectivity in adults with and without dyslexia. We find disrupted functional coupling between hypoactive reading regions, especially between the left temporo-parietal and occipito-temporal cortices, and an extensive functional disruption of the right cerebellum in adults with dyslexia. Network analyses suggest that individuals with dyslexia process written stimuli via a dorsal decoding route and show stronger reading-related interaction with the right cerebellum. Moreover, increased connectivity within networks is linked to worse reading performance in dyslexia. Collectively, our results provide strong evidence for aberrant task-related connectivity as a neural marker of dyslexia that directly impacts behavioural performance. The observed differences in activation and connectivity suggest that one effective way to alleviate reading problems in dyslexia may be to modulate interactions within the reading network with neurostimulation methods.
Affiliation(s)
- Sabrina Turker
  - Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
  - Wilhelm Wundt Institute for Psychology, Leipzig University, 04103 Leipzig, Germany
- Philipp Kuhnke
  - Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
  - Wilhelm Wundt Institute for Psychology, Leipzig University, 04103 Leipzig, Germany
- Zhizhao Jiang
  - Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Gesa Hartwigsen
  - Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
  - Wilhelm Wundt Institute for Psychology, Leipzig University, 04103 Leipzig, Germany
5. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023;24:711-722. PMID: 37783820. DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature on the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
  - Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
  - Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
  - Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
  - Institute of Cognitive Neuroscience, University College London, London, UK
6. Kosakowski HL, Norman-Haignere S, Mynick A, Takahashi A, Saxe R, Kanwisher N. Preliminary evidence for selective cortical responses to music in one-month-old infants. Dev Sci 2023;26:e13387. PMID: 36951215. DOI: 10.1111/desc.13387.
Abstract
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-weeks-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized "model-matched" stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC), but not in Heschl's gyrus, that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.
Research highlights:
- Responses to music, speech, and control sounds matched for the spectrotemporal modulation statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
- Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
- Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's gyrus.
- Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.
Affiliation(s)
- Heather L Kosakowski
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - Center for Brains, Minds and Machines, Cambridge, Massachusetts, USA
- Anna Mynick
  - Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Atsushi Takahashi
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Rebecca Saxe
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - Center for Brains, Minds and Machines, Cambridge, Massachusetts, USA
- Nancy Kanwisher
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  - Center for Brains, Minds and Machines, Cambridge, Massachusetts, USA
7. Tsai CG, Fu YF, Li CW. Prediction errors arising from switches between major and minor modes in music: An fMRI study. Brain Cogn 2023;169:105987. PMID: 37126951. DOI: 10.1016/j.bandc.2023.105987.
Abstract
The major and minor modes in Western music have positive and negative connotations, respectively. The present fMRI study examined listeners' neural responses to switches between major and minor modes. We manipulated the final chords of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. If the final major and minor chords have positive and negative reward values respectively, the Major-Minor and Minor-Major stimuli would cause negative and positive reward prediction errors (RPEs) respectively in a listener's brain. We found that activity in a frontoparietal network was significantly higher for Major-Minor than for Major-Major. Based on previous research, these results support the idea that a major-to-minor switch causes negative RPE. The contrast of Minor-Major minus Minor-Minor yielded activation in the ventral insula and visual cortex, speaking against the idea that a minor-to-major switch causes positive RPE. We discuss our results in relation to executive functions and the emotional connotations of major versus minor modes.
Affiliation(s)
- Chen-Gia Tsai
  - Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan
  - Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
- Yi-Fan Fu
  - Department of Bio-Industry Communication and Development, National Taiwan University, Taipei, Taiwan
- Chia-Wei Li
  - Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
8. Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023;33:7904-7929. PMID: 37005063. PMCID: PMC10505454. DOI: 10.1093/cercor/bhad087.
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
  - Department of Cognitive Sciences, Rice University, TX 77005, United States
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
  - Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
  - Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
  - Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
  - Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
  - Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
  - Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
  - The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
  - Psychology & Language Sciences, UCL, London, WC1N 1PF, United Kingdom
- Evelina Fedorenko
  - Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
  - McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
  - The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
9. Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023;33:6257-6272. PMID: 36562994. PMCID: PMC10183742. DOI: 10.1093/cercor/bhac501.
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from behavioral musicality ratings, as well as with models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas the right IPS correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
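The Representational Similarity Analysis step can be sketched as follows. The stimulus count, voxel count, and the way "musicality" is injected into the fake ROI data are invented for illustration; only the RDM-plus-rank-correlation recipe is the standard RSA procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between the
    multivoxel patterns of every pair of conditions (rows)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle: the unique pairwise dissimilarities."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Rank correlation without SciPy (ties get arbitrary distinct ranks,
    which is acceptable for this demo)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Fake ROI data: 10 tone sequences x 100 voxels, where patterns drift
# along one direction in voxel space as latent "musicality" increases.
n_seq, n_vox = 10, 100
musicality = np.linspace(0.0, 1.0, n_seq)  # stand-in behavioral ratings
base = rng.standard_normal(n_vox)
direction = rng.standard_normal(n_vox)
roi = base + np.outer(musicality, direction) \
    + 0.05 * rng.standard_normal((n_seq, n_vox))

# Behavioral model RDM: dissimilarity = |rating_i - rating_j|.
model_rdm = np.abs(musicality[:, None] - musicality[None, :])

# RSA score: rank correlation between the ROI's geometry and the model.
rsa_score = spearman(upper(rdm(roi)), upper(model_rdm))
```

A high `rsa_score` for an ROI, as constructed here, is what it means for that region's functional profile to "correlate with musicality ratings".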
Affiliation(s)
- Gennadiy Gurariy
  - Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
  - School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
  - Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
10. Kasdan AV, Burgess AN, Pizzagalli F, Scartozzi A, Chern A, Kotz SA, Wilson SM, Gordon RL. Identifying a brain network for musical rhythm: A functional neuroimaging meta-analysis and systematic review. Neurosci Biobehav Rev 2022;136:104588. PMID: 35259422. PMCID: PMC9195154. DOI: 10.1016/j.neubiorev.2022.104588.
Abstract
We conducted a systematic review and meta-analysis of 30 functional magnetic resonance imaging studies investigating the processing of musical rhythms in neurotypical adults. First, we identified a general network for musical rhythm, encompassing all relevant sensory and motor processes (beat-based rhythms vs. rest baseline, 12 contrasts), which revealed a large network involving auditory and motor regions. This network included the bilateral superior temporal cortices, supplementary motor area (SMA), putamen, and cerebellum. Second, we identified more precise loci for beat-based musical rhythms (beat-based rhythms vs. audio-motor control, 8 contrasts) in the bilateral putamen. Third, we identified regions modulated by beat-based rhythmic complexity (complexity, 16 contrasts), which included the bilateral SMA-proper/pre-SMA, cerebellum, inferior parietal regions, and right temporal areas. This meta-analysis suggests that musical rhythm is largely represented in a bilateral cortico-subcortical network. Our findings align with existing theoretical frameworks about auditory-motor coupling to a musical beat and provide a foundation for studying how the neural bases of musical rhythm may overlap with other cognitive domains.
Affiliation(s)
- Anna V Kasdan
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
  - Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
- Andrea N Burgess
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Alyssa Scartozzi
  - Department of Otolaryngology - Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Alexander Chern
  - Department of Otolaryngology - Head & Neck Surgery, New York-Presbyterian/Columbia University Irving Medical Center and Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
  - Department of Otolaryngology - Head and Neck Surgery, New York-Presbyterian/Weill Cornell Medical Center, New York, NY, USA
- Sonja A Kotz
  - Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
  - Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Stephen M Wilson
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Reyna L Gordon
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
  - Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
  - Department of Otolaryngology - Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
11. Whitehead JC, Armony JL. Intra-individual reliability of voice- and music-elicited responses and their modulation by expertise. Neuroscience 2022;487:184-197. PMID: 35182696. DOI: 10.1016/j.neuroscience.2022.02.011.
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported using a variety of stimulus sets, paradigms and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of the intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that the music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Collapse
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada.
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
Collapse
12
Tuckute G, Paunov A, Kean H, Small H, Mineroff Z, Blank I, Fedorenko E. Frontal language areas do not emerge in the absence of temporal language areas: A case study of an individual born without a left temporal lobe. Neuropsychologia 2022; 169:108184. [DOI: 10.1016/j.neuropsychologia.2022.108184] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 12/07/2021] [Accepted: 02/15/2022] [Indexed: 10/19/2022]
13
Murai S, Yang AN, Hiryu S, Kobayasi KI. Music in Noise: Neural Correlates Underlying Noise Tolerance in Music-Induced Emotion. Cereb Cortex Commun 2021; 2:tgab061. [PMID: 34746792 PMCID: PMC8564766 DOI: 10.1093/texcom/tgab061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Revised: 09/25/2021] [Accepted: 09/26/2021] [Indexed: 11/14/2022] Open
Abstract
Music can be experienced in various acoustic qualities. In this study, we investigated how the acoustic quality of music can influence strong emotional experiences, such as musical chills, and the associated neural activity. The music's acoustic quality was controlled by adding noise to musical pieces. Participants listened to clear and noisy musical pieces and pressed a button when they experienced chills. We estimated neural activity in response to chills under both clear and noisy conditions using functional magnetic resonance imaging (fMRI). The behavioral data revealed that, compared with the clear condition, the noisy condition dramatically decreased the number and duration of chills. The fMRI results showed that under both noisy and clear conditions the supplementary motor area, insula, and superior temporal gyrus were similarly activated when participants experienced chills. The involvement of these brain regions may be crucial for music-induced emotional processes under the noisy as well as the clear condition. In addition, we found a decrease in the activation of the right superior temporal sulcus when experiencing chills under the noisy condition, which suggests that music-induced emotional processing is sensitive to acoustic quality.
Collapse
Affiliation(s)
- Shota Murai
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Ae Na Yang
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Shizuko Hiryu
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
- Kohta I Kobayasi
- Graduate School of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto 610-0321, Japan
Collapse
14
Falcon C, Navarro-Plaza MC, Gramunt N, Arenaza-Urquijo EM, Grau-Rivera O, Cacciaglia R, González-de-Echavarria JM, Sánchez-Benavides G, Operto G, Knezevic I, Molinuevo JL, Gispert JD. Soundtrack of life: An fMRI study. Behav Brain Res 2021; 418:113634. [PMID: 34710508 DOI: 10.1016/j.bbr.2021.113634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 10/01/2021] [Accepted: 10/15/2021] [Indexed: 11/29/2022]
Abstract
Most people have a soundtrack of life, a set of special musical pieces closely linked to certain biographical experiences. Autobiographical memories (AM) and music listening (ML) involve complex mental processes governed by distinct brain networks. The aim of this paper was to determine how these two networks interact during linked events. We performed an fMRI experiment on 31 healthy participants (age: 32.4 ± 7.6, 11 men, 4 left-handers). Participants had to recall AMs prompted by music they reported to be associated with personal biographical events (LMM: linked AM-ML events). In the main control task, participants were prompted to recall emotional AMs while listening to known tracks from a pool of popular music (UMM: unlinked AM-ML events). We investigated to what extent the LMM network exceeded the overlap of the AM and ML networks by contrasting the activation obtained in LMM versus UMM. The contrast LMM>UMM showed the following areas (at P < 0.05 FWE corrected at voxel level and cluster size > 20): right frontal inferior operculum, frontal middle gyrus, pars triangularis of inferior frontal gyrus, occipital superior gyrus and bilateral basal ganglia (caudate, putamen and pallidum), occipital (middle and inferior), parietal (inferior and superior), precentral and cerebellum (6, 7 L, 8 and vermis 6 and 7). Complementary results were obtained from additional control tasks. Given that part of the LMM>UMM areas might not be related to ML-AM linkage, we assessed the LMM brain network with an independent component analysis (ICA) on contrast images. Results from the ICA suggest the existence of a cortico-ponto-cerebellar network including left precuneus, bilateral anterior cingulum, parahippocampal gyri, frontal inferior operculum, ventral anterior part of the insula, frontal medial orbital gyri, caudate nuclei, cerebellum 6 and vermis, which may govern the ML-induced retrieval of AM in closely linked AM-ML events. This topography suggests that the pathway by which ML is linked to AM is attentional and directly related to perceptual processing, involving the salience network, rather than the route of natural remembering typically associated with the default mode network.
Collapse
Affiliation(s)
- Carles Falcon
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina, Madrid, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona.
- Eider M Arenaza-Urquijo
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain
- Oriol Grau-Rivera
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina, Madrid, Spain; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain; Servei de Neurologia, Hospital del Mar, Barcelona, Spain
- Raffaele Cacciaglia
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain
- José María González-de-Echavarria
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona
- Gonzalo Sánchez-Benavides
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain
- Grégory Operto
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain
- Iva Knezevic
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain
- José Luis Molinuevo
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain
- Juan Domingo Gispert
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina, Madrid, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona; Centro de Investigación Biomédica en Red de Fragilidad y Envejecimiento Saludable (CIBERFES), Madrid, Spain.
Collapse
15
White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251 DOI: 10.1016/j.actpsy.2021.103403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 01/29/2023] Open
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that the information in it is thematically connected, both internally and to currently attended perceptual input; that it is organised in a hierarchical structure; and that all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries of the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed, and a possible mechanism is proposed for representing ordinal and duration information on the time scale of the extended present. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for the perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
Collapse
16
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153 DOI: 10.1016/j.cognition.2021.104847] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 05/14/2021] [Accepted: 07/11/2021] [Indexed: 12/16/2022]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferentially engages the abstract rule-based control circuit, musical syntax instead employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Collapse
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany.
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
Collapse
17
Kuhnke P, Kiefer M, Hartwigsen G. Task-Dependent Functional and Effective Connectivity during Conceptual Processing. Cereb Cortex 2021; 31:3475-3493. [PMID: 33677479 PMCID: PMC8196308 DOI: 10.1093/cercor/bhab026] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 01/21/2021] [Accepted: 01/22/2021] [Indexed: 11/13/2022] Open
Abstract
Conceptual knowledge is central to cognition. Previous neuroimaging research indicates that conceptual processing involves both modality-specific perceptual-motor areas and multimodal convergence zones. For example, our previous functional magnetic resonance imaging (fMRI) study revealed that both modality-specific and multimodal regions respond to sound and action features of concepts in a task-dependent fashion (Kuhnke P, Kiefer M, Hartwigsen G. 2020b. Task-dependent recruitment of modality-specific and multimodal regions during conceptual processing. Cereb Cortex. 30:3938–3959.). However, it remains unknown whether and how modality-specific and multimodal areas interact during conceptual tasks. Here, we asked 1) whether multimodal and modality-specific areas are functionally coupled during conceptual processing, 2) whether their coupling depends on the task, 3) whether information flows top-down, bottom-up or both, and 4) whether their coupling is behaviorally relevant. We combined psychophysiological interaction analyses with dynamic causal modeling on the fMRI data of our previous study. We found that functional coupling between multimodal and modality-specific areas strongly depended on the task, involved both top-down and bottom-up information flow, and predicted conceptually guided behavior. Notably, we also found coupling between different modality-specific areas and between different multimodal areas. These results suggest that functional coupling in the conceptual system is extensive, reciprocal, task-dependent, and behaviorally relevant. We propose a new model of the conceptual system that incorporates task-dependent functional interactions between modality-specific and multimodal areas.
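The psychophysiological interaction (PPI) analyses mentioned in this abstract rest on a simple construction: an interaction regressor formed as the element-wise product of a (centered) task regressor and a seed region's timecourse, entered alongside the main effects. The following is only an illustrative sketch, not the authors' pipeline, and all numbers are made-up toy data.

```python
# Toy sketch of the core of a PPI analysis: build the interaction term
# between a psychological (task) regressor and a physiological (seed)
# timecourse. The data below are invented for illustration.

task = [0, 0, 1, 1, 0, 0, 1, 1]                   # task on/off per scan
seed = [0.1, 0.2, 0.9, 1.1, 0.0, 0.1, 1.0, 1.2]   # seed-region signal

# Center the task regressor so the interaction is orthogonal to the mean.
mean_task = sum(task) / len(task)
centered_task = [t - mean_task for t in task]

# PPI regressor: seed coupling that is specific to the task condition.
ppi = [ct * s for ct, s in zip(centered_task, seed)]

print(ppi)
```

A target region whose timecourse loads on `ppi` over and above `task` and `seed` alone shows task-dependent coupling with the seed, which is the sense in which the study's functional coupling "strongly depended on the task".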
Collapse
Affiliation(s)
- Philipp Kuhnke
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Markus Kiefer
- Department of Psychiatry, Ulm University, Ulm 89081, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
Collapse
18
Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. [PMID: 33596723 PMCID: PMC8285655 DOI: 10.1152/jn.00588.2020] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 02/12/2021] [Accepted: 02/12/2021] [Indexed: 11/22/2022] Open
Abstract
Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.
NEW & NOTEWORTHY We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Collapse
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
Collapse
19
Kuhnke P, Kiefer M, Hartwigsen G. Task-Dependent Recruitment of Modality-Specific and Multimodal Regions during Conceptual Processing. Cereb Cortex 2020; 30:3938-3959. [PMID: 32219378 PMCID: PMC7264643 DOI: 10.1093/cercor/bhaa010] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 01/08/2020] [Accepted: 01/15/2020] [Indexed: 01/12/2023] Open
Abstract
Conceptual knowledge is central to cognitive abilities such as word comprehension. Previous neuroimaging evidence indicates that concepts are at least partly composed of perceptual and motor features that are represented in the same modality-specific brain regions involved in actual perception and action. However, it is unclear to what extent the retrieval of perceptual-motor features and the resulting engagement of modality-specific regions depend on the concurrent task. To address this issue, we measured brain activity in 40 young and healthy participants using functional magnetic resonance imaging, while they performed three different tasks-lexical decision, sound judgment, and action judgment-on words that independently varied in their association with sounds and actions. We found neural activation for sound and action features of concepts selectively when they were task-relevant in brain regions also activated during auditory and motor tasks, respectively, as well as in higher-level, multimodal regions which were recruited during both sound and action feature retrieval. For the first time, we show that not only modality-specific perceptual-motor areas but also multimodal regions are engaged in conceptual processing in a flexible, task-dependent fashion, responding selectively to task-relevant conceptual features.
Collapse
Affiliation(s)
- Philipp Kuhnke
- Lise Meitner Research Group ‘Cognition and Plasticity’, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Department of Neuropsychology, Research Group ‘Modulation of Language Networks’, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Markus Kiefer
- Department of Psychiatry, Ulm University, Leimgrubenweg 12, 89075 Ulm, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group ‘Cognition and Plasticity’, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Department of Neuropsychology, Research Group ‘Modulation of Language Networks’, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
Collapse
20
Musicians use speech-specific areas when processing tones: The key to their superior linguistic competence? Behav Brain Res 2020; 390:112662. [PMID: 32442547 DOI: 10.1016/j.bbr.2020.112662] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 04/21/2020] [Accepted: 04/22/2020] [Indexed: 11/23/2022]
Abstract
It is known that musicians, compared to non-musicians, have some superior speech and language competence, yet the mechanisms by which musical training leads to this advantage are not well specified. This event-related fMRI study confirmed that musicians outperformed non-musicians in processing not only musical tones but also syllables, and identified a network differentiating musicians from non-musicians during processing of linguistic sounds. Within this network, the activation of bilateral superior temporal gyrus was shared by all subjects during processing of the acoustically well-matched musical and linguistic sounds, and with the activation distinguishing tones with a complex harmonic spectrum (bowed tone) from a simpler one (plucked tone). These results confirm that better speech processing in musicians relies on improved cross-domain spectral analysis. Activation of left posterior superior temporal sulcus (pSTS), premotor cortex, inferior frontal and fusiform gyrus (FG), which also distinguished musicians from non-musicians during syllable processing, overlapped with the activation segregating linguistic from musical sounds in all subjects. Since these brain regions were not involved during tone processing in non-musicians, they may code for functions that are specialized for speech. Musicians recruited pSTS and FG during tone processing; thus, these speech-specialized brain areas processed musical sounds in the presence of musical training. This study shows that the linguistic advantage of musicians is linked not only to improved cross-domain spectral analysis, but also to the functional adaptation of brain resources that are specialized for speech, yet accessible to the domain of music in the presence of musical training.
Collapse
21
Lawless MS, Vigeant MC. Sensitivity of the human auditory cortex and reward network to reverberant musical stimuli. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:2121. [PMID: 32359334 DOI: 10.1121/10.0000984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Accepted: 03/12/2020] [Indexed: 06/11/2023]
Abstract
A room's acoustics can alter subjective impressions of music, including preference. However, little research has characterized the brain's response to room conditions. Functional magnetic resonance imaging (fMRI) was used to investigate the auditory and reward responses to concert hall stimuli. Before the fMRI testing, 18 participants rated their preferences for a solo-instrumental passage and an orchestral motif simulated in eight room acoustic conditions outside an MRI scanner to identify their most-liked and most-disliked conditions. In the MRI, the most-liked (reverberation time, RT = 1.0-2.8 s) and most-disliked (RT = 7.2 s) conditions, along with the anechoic and scrambled versions of the musical passages, were presented. The auditory cortex was found to be sensitive to the temporal coherence of the stimuli, as it exhibited stronger activations for simpler stimuli, i.e., the solo-instrumental and anechoic conditions, than for stimuli containing temporally incoherent auditory objects, i.e., the orchestral and reverberant conditions. In contrasts between liked and disliked reverberant stimuli, a reward response in the basal ganglia was detected in a region-of-interest analysis using a temporal derivative model of the hemodynamic response function. This response may indicate differences in preference between subtle variations in room acoustics applied to the same musical passage.
Collapse
Affiliation(s)
- Martin S Lawless
- Graduate Program in Acoustics, The Pennsylvania State University, 201 Applied Science Building, University Park, Pennsylvania 16802, USA
- Michelle C Vigeant
- Graduate Program in Acoustics, The Pennsylvania State University, 201 Applied Science Building, University Park, Pennsylvania 16802, USA
Collapse
22
Bonnasse-Gahot L. Efficient Communication in Written and Performed Music. Cogn Sci 2020; 44:e12826. [PMID: 32215961 DOI: 10.1111/cogs.12826] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Revised: 02/05/2020] [Accepted: 02/13/2020] [Indexed: 11/29/2022]
Abstract
Since its inception, Shannon's information theory has attracted interest for the study of language and music. Recently, a wide range of converging studies have shown how efficient communication pervades language, from phonetics to syntax. Efficient principles imply that more resources should be assigned to highly informative items. For instance, average information content was shown to be a better predictor of word length than frequency, revisiting the famous Zipf's law. However, in spite of the success of the efficient communication framework in the study of language and speech, very little work has investigated its relevance in the analysis of music. Here, we examine the organization of harmonic information in two large corpora of Western music, one made of MIDI files directly sequenced from scores, and the other made of MIDI recordings of live performances of highly skilled piano players. We show that there is a clear positive relationship between (contextual) information content of harmonic sequences and two essential musical properties, namely duration and loudness: the more unexpected a harmonic event is, the longer and the louder it is.
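The quantity at the heart of this abstract, contextual information content, is the surprisal of an event given its context: -log2 P(event | context). A minimal sketch of that computation, using an invented toy chord sequence and a bigram model rather than the paper's corpora or model, might look like this:

```python
# Sketch (not the paper's code): contextual information content of harmonic
# events under a bigram model. The chord sequence below is invented toy data.
import math
from collections import Counter

sequence = ["C", "G", "Am", "F", "C", "G", "F", "C", "Dm", "G", "C"]

# Estimate bigram counts and per-context totals from the corpus itself.
bigrams = Counter(zip(sequence, sequence[1:]))
context_totals = Counter(sequence[:-1])

def surprisal(prev, nxt):
    """Information content in bits: -log2 P(nxt | prev)."""
    p = bigrams[(prev, nxt)] / context_totals[prev]
    return -math.log2(p)

for prev, nxt in zip(sequence, sequence[1:]):
    print(f"{prev:>3} -> {nxt:<3} {surprisal(prev, nxt):.2f} bits")
```

The paper's finding is a positive relationship between this surprisal and a chord's duration and loudness: rarer continuations (here, C -> Dm) carry more bits than common ones (C -> G).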
Collapse
23
The Rapid Emergence of Musical Pitch Structure in Human Cortex. J Neurosci 2020; 40:2108-2118. [PMID: 32001611 DOI: 10.1523/jneurosci.1399-19.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2019] [Revised: 01/06/2020] [Accepted: 01/07/2020] [Indexed: 11/21/2022] Open
Abstract
In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged, internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low- to high-level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.
SIGNIFICANCE STATEMENT Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones. Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.
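The time-resolved decoding described in this abstract can be sketched in miniature: at each peristimulus time sample, a classifier is fit to sensor patterns and asked to tell tones apart. The toy below stands in for the study's actual pipeline, using a nearest-centroid classifier and fabricated two-sensor "MEG" patterns rather than real data or the paper's classifiers.

```python
# Toy sketch of time-resolved neural decoding (illustrative only).
# trials[tone] = list of trials; each trial = per-timepoint sensor vectors.
trials = {
    "C4": [[(0.9, 0.1), (0.2, 0.8)], [(1.1, 0.0), (0.3, 0.7)]],
    "G4": [[(0.1, 0.9), (0.7, 0.3)], [(0.0, 1.1), (0.8, 0.2)]],
}

def centroid(vectors):
    """Mean sensor pattern across trials."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def decode_at(t, test_pattern):
    """Classify one pattern at time t by its nearest class centroid."""
    cents = {tone: centroid([trial[t] for trial in tr])
             for tone, tr in trials.items()}
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda tone: dist(cents[tone], test_pattern))

print(decode_at(0, (1.0, 0.05)))  # -> C4
```

Running such a classifier separately at every time sample, and scoring it on held-out trials, yields the time course of pairwise tone discriminability that the authors compared with acoustic and perceptual models.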
Collapse
24
Tavor I, Botvinik‐Nezer R, Bernstein‐Eliav M, Tsarfaty G, Assaf Y. Short-term plasticity following motor sequence learning revealed by diffusion magnetic resonance imaging. Hum Brain Mapp 2019; 41:442-452. [PMID: 31596547 PMCID: PMC7267908 DOI: 10.1002/hbm.24814] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Revised: 09/12/2019] [Accepted: 09/22/2019] [Indexed: 01/06/2023] Open
Abstract
Current noninvasive methods to detect structural plasticity in humans are mainly used to study long-term changes. Diffusion magnetic resonance imaging (MRI) was recently proposed as a novel approach to reveal gray matter changes following spatial navigation learning and object-location memory tasks. In the present work, we used diffusion MRI to investigate the short-term neuroplasticity that accompanies motor sequence learning. Following a 45-min training session in which participants learned to accurately play a short sequence on a piano keyboard, changes in diffusion properties were revealed mainly in motor system regions such as the premotor cortex and cerebellum. In a second learning session taking place immediately afterward, feedback was given on the timing of key pressing instead of accuracy, while participants continued to learn. This second session induced a different plasticity pattern, demonstrating the dynamic nature of learning-induced plasticity, formerly thought to require months of training in order to be detectable. These results provide us with an important reminder that the brain is an extremely dynamic structure. Furthermore, diffusion MRI offers a novel measure to follow tissue plasticity particularly over short timescales, allowing new insights into the dynamics of structural brain plasticity.
Affiliation(s)
- Ido Tavor
- Department of Anatomy and Anthropology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, affiliated to the Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Rotem Botvinik-Nezer
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Department of Neurobiology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel
- Michal Bernstein-Eliav
- Department of Anatomy and Anthropology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Galia Tsarfaty
- Department of Diagnostic Imaging, Sheba Medical Center, Tel Hashomer, affiliated to the Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Yaniv Assaf
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Department of Neurobiology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, Israel
25
Ogg M, Moraczewski D, Kuchinsky SE, Slevc LR. Separable neural representations of sound sources: Speaker identity and musical timbre. Neuroimage 2019; 191:116-126. [PMID: 30731247 DOI: 10.1016/j.neuroimage.2019.01.075]
Abstract
Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on the basis of their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
Affiliation(s)
- Mattson Ogg
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA; Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Dustin Moraczewski
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA; Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Stefanie E Kuchinsky
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA; Center for Advanced Study of Language, University of Maryland, College Park, MD 20742, USA; Maryland Neuroimaging Center, University of Maryland, College Park, MD 20742, USA
- L Robert Slevc
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD 20742, USA; Department of Psychology, University of Maryland, College Park, MD 20742, USA
26
Pritchett BL, Hoeflin C, Koldewyn K, Dechter E, Fedorenko E. High-level language processing regions are not engaged in action observation or imitation. J Neurophysiol 2018; 120:2555-2570. [PMID: 30156457 DOI: 10.1152/jn.00222.2018]
Abstract
A set of left frontal, temporal, and parietal brain regions respond robustly during language comprehension and production (e.g., Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010; Menenti L, Gierhan SM, Segaert K, Hagoort P. Psychol Sci 22: 1173-1182, 2011). These regions have been further shown to be selective for language relative to other cognitive processes, including arithmetic, aspects of executive function, and music perception (e.g., Fedorenko E, Behr MK, Kanwisher N. Proc Natl Acad Sci USA 108: 16428-16433, 2011; Monti MM, Osherson DN. Brain Res 1428: 33-42, 2012). However, one claim about overlap between language and nonlinguistic cognition remains prominent. In particular, some have argued that language processing shares computational demands with action observation and/or execution (e.g., Rizzolatti G, Arbib MA. Trends Neurosci 21: 188-194, 1998; Koechlin E, Jubault T. Neuron 50: 963-974, 2006; Tettamanti M, Weniger D. Cortex 42: 491-494, 2006). However, the evidence for these claims is indirect, based on observing activation for language and action tasks within the same broad anatomical areas (e.g., on the lateral surface of the left frontal lobe). To test whether language indeed shares machinery with action observation/execution, we examined the responses of language brain regions, defined functionally in each individual participant (Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010), to action observation (experiments 1, 2, and 3a) and action imitation (experiment 3b). With the exception of the language region in the angular gyrus, all language regions, including those in the inferior frontal gyrus (within "Broca's area"), showed little or no response during action observation/imitation. These results add to the growing body of literature suggesting that high-level language regions are highly selective for language processing (see Fedorenko E, Varley R. Ann NY Acad Sci 1369: 132-153, 2016 for a review). NEW & NOTEWORTHY Many have argued for overlap in the machinery used to interpret language and others' actions, either because action observation was a precursor to linguistic communication or because both require interpreting hierarchically structured stimuli. However, existing evidence is indirect, relying on group analyses or reverse inference. We examined responses to action observation in language regions defined functionally in individual participants and found no response. Thus language comprehension and action observation recruit distinct circuits in the modern brain.
Affiliation(s)
- Brianna L Pritchett
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Caitlyn Hoeflin
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Kami Koldewyn
- School of Psychology, Bangor University, Gwynedd, United Kingdom
- Eyal Dechter
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts
- Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
27
Whitehead JC, Armony JL. Singing in the brain: Neural representation of music and voice as revealed by fMRI. Hum Brain Mapp 2018; 39:4913-4924. [PMID: 30120854 DOI: 10.1002/hbm.24333]
Abstract
The ubiquity of music across cultures as a means of emotional expression, and its proposed evolutionary relation to speech, motivated researchers to attempt a characterization of its neural representation. Several neuroimaging studies have reported that specific regions in the anterior temporal lobe respond more strongly to music than to other auditory stimuli, including spoken voice. Nonetheless, because most studies have employed instrumental music, which has important acoustic distinctions from human voice, questions still exist as to the specificity of the observed "music-preferred" areas. Here, we sought to address this issue by testing 24 healthy young adults with fast, high-resolution fMRI, to record neural responses to a large and varied set of musical stimuli, which, critically, included a cappella singing as well as purely instrumental excerpts. Our results confirmed that music, vocal or instrumental, preferentially engaged regions in the superior temporal gyrus (STG), particularly in the anterior planum polare, bilaterally. In contrast, human voice, either spoken or sung, more strongly activated a large area along the superior temporal sulcus. Findings were consistent between univariate and multivariate analyses, as well as with the use of a "silent" sparse acquisition sequence that minimizes any potential influence of scanner noise on the resulting activations. Activity in music-preferred regions could not be accounted for by any basic acoustic parameter tested, suggesting these areas integrate, likely in a nonlinear fashion, a combination of acoustic attributes that, together, result in the perceived musicality of the stimuli, consistent with proposed hierarchical processing of complex auditory information within the temporal lobes.
Affiliation(s)
- Jocelyne C Whitehead
- Douglas Mental Health University Institute, Verdun, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony
- Douglas Mental Health University Institute, Verdun, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada
- Department of Psychiatry, McGill University, Montreal, Canada
28
Decoding the dynamic representation of musical pitch from human brain activity. Sci Rep 2018; 8:839. [PMID: 29339790 PMCID: PMC5770452 DOI: 10.1038/s41598-018-19222-3]
Abstract
In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using Magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to “decode” the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain’s representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
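The general logic of the MVPA approach described in this abstract, cross-validated pairwise decoding of stimuli from neural activity patterns, with decoding accuracy serving as a proxy for representational distance, can be sketched as follows. Everything here (data shapes, the simulated responses, the choice of a logistic-regression classifier) is a hypothetical illustration, not the authors' actual pipeline:

```python
# Hedged sketch: pairwise cross-validated decoding of "tones" from
# simulated sensor patterns; accuracy fills a representational
# distance matrix (RDM). Toy data, not real MEG recordings.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tones, n_trials, n_sensors = 4, 40, 32
# simulated evoked responses: for each tone, trials x sensors,
# with a small mean shift per tone so decoding is possible
data = {t: rng.normal(t * 0.1, 1.0, (n_trials, n_sensors)) for t in range(n_tones)}

rdm = np.zeros((n_tones, n_tones))
for i, j in combinations(range(n_tones), 2):
    X = np.vstack([data[i], data[j]])
    y = np.array([0] * n_trials + [1] * n_trials)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    rdm[i, j] = rdm[j, i] = acc  # higher accuracy = more distinct representations

print(np.round(rdm, 2))
```

The resulting RDM could then be compared (e.g., by rank correlation) against candidate models such as the tonal-hierarchy model the abstract mentions.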
29
Proverbio AM, De Benedetto F. Auditory enhancement of visual memory encoding is driven by emotional content of the auditory material and mediated by superior frontal cortex. Biol Psychol 2017; 132:164-175. [PMID: 29292233 DOI: 10.1016/j.biopsycho.2017.12.003]
Abstract
BACKGROUND The aim of the present study was to investigate how auditory background interacts with learning and memory. Both facilitatory (e.g., "Mozart effect") and interfering effects of background have been reported, depending on the type of auditory stimulation and of concurrent cognitive tasks. METHOD Here we recorded event-related potentials (ERPs) during face encoding followed by an old/new memory test to investigate the effect of listening to classical music (Čajkovskij; dramatic), environmental sounds (rain), or silence on learning. Participants were 15 healthy non-musician university students. Almost 400 (previously unknown) faces of women and men of various ages were presented. RESULTS Listening to music during study led to better encoding of faces, as indexed by an increased Anterior Negativity. The FN400 response recorded during the memory test showed a gradient in its amplitude reflecting face familiarity. FN400 was larger to new than old faces, and to faces studied during rain-sound listening and silence than music listening. CONCLUSION The results indicate that listening to music enhances memory recollection of faces by merging with visual information. A swLORETA analysis showed the main involvement of the superior temporal gyrus (STG) and the medial frontal gyrus in the integration of audio-visual information.
Affiliation(s)
- A M Proverbio
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy
- F De Benedetto
- NeuroMI Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Italy
30
Blank IA, Kiran S, Fedorenko E. Can neuroimaging help aphasia researchers? Addressing generalizability, variability, and interpretability. Cogn Neuropsychol 2017; 34:377-393. [PMID: 29188746 PMCID: PMC6157596 DOI: 10.1080/02643294.2017.1402756]
Abstract
Neuroimaging studies of individuals with brain damage seek to link brain structure and activity to cognitive impairments, spontaneous recovery, or treatment outcomes. To date, such studies have relied on the critical assumption that a given anatomical landmark corresponds to the same functional unit(s) across individuals. However, this assumption is fallacious even across neurologically healthy individuals. Here, we discuss the severe implications of this issue, and argue for an approach that circumvents it, whereby: (i) functional brain regions are defined separately for each subject using fMRI, allowing for inter-individual variability in their precise location; (ii) the response profiles of these subject-specific regions are characterized using various other tasks; and (iii) the results are averaged across individuals, guaranteeing generalizability. This method harnesses the complementary strengths of single-case studies and group studies, and it eliminates the need for post hoc "reverse inference" from anatomical landmarks back to cognitive operations, thus improving data interpretability.
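The three-step subject-specific functional ROI logic in this abstract can be sketched numerically. All particulars below (array shapes, the top-10% voxel threshold, the simulated contrast values) are hypothetical, chosen only to make the steps concrete:

```python
# Hedged sketch of the subject-specific fROI pipeline (steps i-iii):
# (i) define each subject's region from a localizer contrast,
# (ii) measure that region's response in an independent task,
# (iii) average across subjects. Toy random data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 5, 1000

def froi_mask(localizer_contrast, top_frac=0.10):
    """(i) Subject-specific fROI: top 10% of voxels by localizer contrast."""
    k = int(len(localizer_contrast) * top_frac)
    thresh = np.sort(localizer_contrast)[-k]
    return localizer_contrast >= thresh

subject_means = []
for s in range(n_subjects):
    localizer = rng.normal(0.0, 1.0, n_voxels)       # e.g., language > control contrast
    critical_task = rng.normal(0.3, 1.0, n_voxels)   # responses in an independent task
    mask = froi_mask(localizer)                      # (i) this subject's region
    subject_means.append(critical_task[mask].mean()) # (ii) its response profile

group_effect = np.mean(subject_means)                # (iii) average across subjects
print(round(float(group_effect), 3))
```

The key design point, matching step (i), is that each subject's mask is defined from that subject's own localizer data, so regions need not align anatomically across individuals.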
Affiliation(s)
- Idan A Blank
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Swathi Kiran
- Department of Speech, Language and Hearing Sciences, Aphasia Research Laboratory, Sargent College, Boston University, Boston, MA, USA
- Evelina Fedorenko
- Department of Psychiatry, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
31
Garcea FE, Chernoff BL, Diamond B, Lewis W, Sims MH, Tomlinson SB, Teghipco A, Belkhir R, Gannon SB, Erickson S, Smith SO, Stone J, Liu L, Tollefson T, Langfitt J, Marvin E, Pilcher WH, Mahon BZ. Direct Electrical Stimulation in the Human Brain Disrupts Melody Processing. Curr Biol 2017; 27:2684-2691.e7. [PMID: 28844645 DOI: 10.1016/j.cub.2017.07.051]
Abstract
Prior research using functional magnetic resonance imaging (fMRI) [1-4] and behavioral studies of patients with acquired or congenital amusia [5-8] suggest that the right posterior superior temporal gyrus (STG) in the human brain is specialized for aspects of music processing (for review, see [9-12]). Intracranial electrical brain stimulation in awake neurosurgery patients is a powerful means to determine the computations supported by specific brain regions and networks [13-21] because it provides reversible causal evidence with high spatial resolution (for review, see [22, 23]). Prior intracranial stimulation or cortical cooling studies have investigated musical abilities related to reading music scores [13, 14] and singing familiar songs [24, 25]. However, individuals with amusia (whether congenital or acquired from a brain injury) have difficulty humming melodies but may retain the ability to sing familiar songs with familiar lyrics [26]. Here we report a detailed study of a musician with a low-grade tumor in the right temporal lobe. Functional MRI was used pre-operatively to localize music processing to the right STG, and the patient subsequently underwent awake intraoperative mapping using direct electrical stimulation during a melody repetition task. Stimulation of the right STG induced "music arrest" and errors in pitch but did not affect language processing. These findings provide causal evidence for the functional segregation of music and language processing in the human brain and confirm a specific role of the right STG in melody processing.
Affiliation(s)
- Frank E Garcea
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA; University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY 14627, USA; University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY 14627, USA
- Benjamin L Chernoff
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Bram Diamond
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Wesley Lewis
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Maxwell H Sims
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Samuel B Tomlinson
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Alexander Teghipco
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Raouf Belkhir
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA
- Sarah B Gannon
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Steve Erickson
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Susan O Smith
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Jonathan Stone
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Lynn Liu
- University of Rochester Medical Center, Department of Neurology, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Trenton Tollefson
- University of Rochester Medical Center, Department of Neurology, 601 Elmwood Avenue, Rochester, NY 14642, USA
- John Langfitt
- University of Rochester Medical Center, Department of Neurology, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Elizabeth Marvin
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA; University of Rochester, Eastman School of Music, 26 Gibbs Street, Rochester, NY 14604, USA
- Webster H Pilcher
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA
- Bradford Z Mahon
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY 14627, USA; University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY 14627, USA; University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY 14627, USA; University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY 14642, USA; University of Rochester Medical Center, Department of Neurology, 601 Elmwood Avenue, Rochester, NY 14642, USA
32
Merrill J, Bangert M, Sammler D, Friederici AD. Classifying song and speech: effects of focal temporal lesions and musical disorder. Neurocase 2016; 22:496-504. [PMID: 27726501 DOI: 10.1080/13554794.2016.1237660]
Abstract
Song and speech represent two auditory categories the brain usually classifies fairly easily. Functionally, this classification ability may depend to a great extent on characteristic features of pitch patterns present in song melody and speech prosody. Anatomically, the temporal lobe (TL) has been discussed as playing a prominent role in the processing of both. Here we tested individuals with congenital amusia and patients with unilateral left and right TL lesions in their ability to categorize song and speech. In a forced-choice paradigm, specifically designed sung, spoken, and "ambiguous" auditory stimuli (perceived as "halfway between" song and speech) had to be classified as either "song" or "speech". Congenital amusics and TL patients, unlike controls, exhibited a surprising bias toward classifying the ambiguous stimuli as "song" despite their apparent deficit in correctly processing features typical of song. This response bias possibly reflects a strategy where, based on available context information (here: a forced choice between speech and song), classification of non-processable items may be achieved through elimination of processable classes. This speech-based strategy masks the pitch processing deficit in congenital amusics and TL lesion patients.
Affiliation(s)
- Julia Merrill
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Institute of Music, University of Kassel, Kassel, Germany
- Marc Bangert
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Institute of Musicians' Medicine, Dresden University of Music, Dresden, Germany
- Daniela Sammler
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
33
Rosemann S, Brunner F, Kastrup A, Fahle M. Musical, visual and cognitive deficits after middle cerebral artery infarction. eNeurologicalSci 2016; 6:25-32. [PMID: 29260010 PMCID: PMC5721573 DOI: 10.1016/j.ensci.2016.11.006]
Abstract
The perception of music can be impaired after a stroke. This dysfunction is called amusia, and amusia patients often also show deficits in visual abilities, language, memory, learning, and attention. The current study investigated whether deficits in music perception are selective for musical input or generalize to other perceptual abilities. Additionally, we tested the hypothesis that deficits in working memory or attention account for impairments in music perception. Twenty stroke patients with small infarctions in the supply area of the middle cerebral artery were investigated with tests for music and visual perception, categorization, neglect, working memory and attention. Two amusia patients with selective deficits in music perception and pronounced lesions were identified. Working memory and attention deficits were highly correlated across the patient group, but no correlation with musical abilities was obtained. Lesion analysis revealed that lesions in small areas of the putamen and globus pallidus were associated with a rhythm perception deficit. We conclude that neither a general perceptual deficit nor a minor domain-general deficit can account for impairments in the music perception task. We do, however, find support for the modular organization of the music perception network, with brain areas specialized for musical functions, as musical deficits were not correlated with any other impairment.
Affiliation(s)
- Manfred Fahle
- Department of Human-Neurobiology, University of Bremen, Germany
34
Reliable individual-level neural markers of high-level language processing: A necessary precursor for relating neural variability to behavioral and genetic variability. Neuroimage 2016; 139:74-93. [DOI: 10.1016/j.neuroimage.2016.05.073]
35
Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697 DOI: 10.1111/ejn.13391]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
Affiliation(s)
- Simon Rigoulot
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC H4H 1R3, Canada
- Jorge L Armony
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC H4H 1R3, Canada
36
Processing structure in language and music: a case for shared reliance on cognitive control. Psychon Bull Rev 2016; 22:637-652. [PMID: 25092390 DOI: 10.3758/s13423-014-0712-4]
Abstract
The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674-681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.
37
Norman-Haignere S, Kanwisher NG, McDermott JH. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition. Neuron 2016; 88:1281-1296. [PMID: 26687225 DOI: 10.1016/j.neuron.2015.11.035]
Abstract
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
Affiliation(s)
- Nancy G Kanwisher
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
38
Fedorenko E, Varley R. Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Ann N Y Acad Sci 2016; 1369:132-53. [PMID: 27096882 PMCID: PMC4874898 DOI: 10.1111/nyas.13046] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 02/18/2016] [Accepted: 02/25/2016] [Indexed: 01/29/2023]
Abstract
Is thought possible without language? Individuals with global aphasia, who have almost no ability to understand or produce language, provide a powerful opportunity to find out. Surprisingly, despite their near-total loss of language, these individuals are nonetheless able to add and subtract, solve logic problems, think about another person's thoughts, appreciate music, and successfully navigate their environments. Further, neuroimaging studies show that healthy adults strongly engage the brain's language areas when they understand a sentence, but not when they perform nonlinguistic tasks such as arithmetic, storing information in working memory, inhibiting prepotent responses, or listening to music. Together, these two complementary lines of evidence provide a clear answer: many aspects of thought engage brain regions distinct from, and do not depend on, language.
Affiliation(s)
- Evelina Fedorenko
- Psychiatry Department, Massachusetts General Hospital, Charlestown, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Center for Academic Research and Training in Anthropogeny (CARTA), University of California, San Diego, La Jolla, California
39
Peretz I, Vuvan D, Lagrois MÉ, Armony JL. Neural overlap in processing music and speech. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140090. [PMID: 25646513 DOI: 10.1098/rstb.2014.0090] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or, vice versa, that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.
Affiliation(s)
- Isabelle Peretz
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Dominique Vuvan
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Marie-Élaine Lagrois
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Jorge L Armony
- International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Quebec, Canada
40
Heffner CC, Slevc LR. Prosodic Structure as a Parallel to Musical Structure. Front Psychol 2015; 6:1962. [PMID: 26733930 PMCID: PMC4687474 DOI: 10.3389/fpsyg.2015.01962] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2015] [Accepted: 12/07/2015] [Indexed: 11/13/2022] Open
Abstract
What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.
Affiliation(s)
- Christopher C. Heffner
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Department of Linguistics, University of Maryland, College Park, MD, USA
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- L. Robert Slevc
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Department of Psychology, University of Maryland, College Park, MD, USA
41
Fogel AR, Rosenberg JC, Lehman FM, Kuperberg GR, Patel AD. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method. Front Psychol 2015; 6:1718. [PMID: 26617548 PMCID: PMC4641899 DOI: 10.3389/fpsyg.2015.01718] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Accepted: 10/26/2015] [Indexed: 11/13/2022] Open
Abstract
Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in music and language.
Affiliation(s)
- Jason C Rosenberg
- Department of Arts and Humanities, Yale-NUS College, Singapore
- Gina R Kuperberg
- Department of Psychology, Tufts University, Medford, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital, Charlestown, MA, USA
42
Kunert R, Willems RM, Casasanto D, Patel AD, Hagoort P. Music and Language Syntax Interact in Broca's Area: An fMRI Study. PLoS One 2015; 10:e0141069. [PMID: 26536026 PMCID: PMC4633113 DOI: 10.1371/journal.pone.0141069] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2014] [Accepted: 09/17/2015] [Indexed: 12/31/2022] Open
Abstract
Instrumental music and language are both syntactic systems, employing complex, hierarchically structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca’s area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca’s area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to: (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation; or (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains—music and language—might draw on the same high level syntactic integration resources in Broca’s area.
Affiliation(s)
- Richard Kunert
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Roel M. Willems
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Daniel Casasanto
- Psychology Department, University of Chicago, Chicago, Illinois, United States of America
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
43
Mather M, Cacioppo JT, Kanwisher N. How fMRI Can Inform Cognitive Theories. Perspect Psychol Sci 2013; 8:108-13. [PMID: 23544033 DOI: 10.1177/1745691612469037] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
How can functional magnetic resonance imaging (fMRI) advance cognitive theory? Some have argued that fMRI can do little beyond localizing brain regions that carry out certain cognitive functions (and may not even be able to do that). However, in this article, we argue that fMRI can inform theories of cognition by helping to answer at least four distinct kinds of questions. Which mental functions are performed in brain regions specialized for just that function (and which are performed in more general-purpose brain machinery)? When fMRI markers of a particular Mental Process X are found, is Mental Process X engaged when people perform Task Y? How distinct are the representations of different stimulus classes? Do specific pairs of tasks engage common or distinct processing mechanisms? Thus, fMRI data can be used to address theoretical debates that have nothing to do with where in the brain a particular process is carried out.
44
The specificity of neural responses to music and their relation to voice processing: An fMRI-adaptation study. Neurosci Lett 2015; 593:35-9. [DOI: 10.1016/j.neulet.2015.03.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2014] [Revised: 02/03/2015] [Accepted: 03/06/2015] [Indexed: 11/20/2022]
45
Fedorenko E, Hsieh PJ, Balewski Z. A possible functional localizer for identifying brain regions sensitive to sentence-level prosody. Lang Cogn Neurosci 2015; 30:120-148. [PMID: 25642425 PMCID: PMC4306436 DOI: 10.1080/01690965.2013.861917] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Investigations of how we produce and perceive prosodic patterns are not only interesting in their own right but can inform fundamental questions in language research. We here argue that functional magnetic resonance imaging (fMRI) in general - and the functional localization approach in particular (e.g., Kanwisher et al., 1997; Saxe et al., 2006; Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, 2012) - has the potential to help address open research questions in prosody research and at the intersection of prosody and other domains. Critically, this approach can go beyond questions like "where in the brain does mental process x produce activation" and toward questions that probe the nature of the representations and computations that subserve different mental abilities. We describe one way to functionally define regions sensitive to sentence-level prosody in individual subjects. This or similar "localizer" contrasts can be used in future studies to test hypotheses about the precise contributions of prosody-sensitive brain regions to prosodic processing and cognition more broadly.
Affiliation(s)
- Po-Jang Hsieh
- Neuroscience and Behavioral Disorders Program, Duke-NUS Graduate Medical School
46
Abstract
As has been found in nicotine research on animals, research on humans has shown that acute nicotine enhances reinforcement from rewards unrelated to nicotine intake, but this effect may be specific to rewards from stimuli that are "sensory" in nature. We assessed acute effects of nicotine via smoking on responding for music or video rewards (sensory), for monetary reward (nonsensory), or for no reward (control), to gauge the generalizability of nicotine's reinforcement-enhancing effects. Using a fully within-subjects design, dependent smokers (N = 20) participated in 3 similar experimental sessions, each following overnight abstinence (verified by carbon monoxide <10 ppm) and varying only in the smoking condition. Sessions involved no smoking or smoking "denicotinized" ("denic;" 0.05 mg) or nicotine (0.6 mg) Quest brand cigarettes in controlled fashion prior to responding on a simple operant computer task for each reward separately using a progressive ratio schedule. The reinforcing effects of music and video rewards, but not money, were significantly greater due to the nicotine versus denic cigarette (i.e., nicotine per se), whereas there were no differences between denic cigarette smoking and no smoking (i.e., smoking behavior per se), except for no reward. These effects were not influenced by withdrawal relief from either cigarette. Results that generalize from an auditory to a visual reward confirm that acute nicotine intake per se enhances the reinforcing value of sensory rewards, but its effects on the value of other (perhaps nonsensory) types of rewards may be more modest.
47
Elmer S, Hänggi J, Jäncke L. Interhemispheric transcallosal connectivity between the left and right planum temporale predicts musicianship, performance in temporal speech processing, and functional specialization. Brain Struct Funct 2014; 221:331-44. [DOI: 10.1007/s00429-014-0910-x] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2014] [Accepted: 09/29/2014] [Indexed: 12/01/2022]
48
Angulo-Perkins A, Aubé W, Peretz I, Barrios FA, Armony JL, Concha L. Music listening engages specific cortical regions within the temporal lobes: differences between musicians and non-musicians. Cortex 2014; 59:126-37. [PMID: 25173956 DOI: 10.1016/j.cortex.2014.07.013] [Citation(s) in RCA: 64] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2013] [Revised: 02/22/2014] [Accepted: 07/18/2014] [Indexed: 11/26/2022]
Abstract
Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (aSTG) (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
Affiliation(s)
- Arafat Angulo-Perkins
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- William Aubé
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Fernando A Barrios
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- Jorge L Armony
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada; Douglas Institute and Department of Psychiatry, McGill University, Montreal, Québec, Canada
- Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México; International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada.
49
Perrachione TK, Fedorenko EG, Vinke L, Gibson E, Dilley LC. Evidence for shared cognitive processing of pitch in music and language. PLoS One 2013; 8:e73372. [PMID: 23977386 PMCID: PMC3744486 DOI: 10.1371/journal.pone.0073372] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2012] [Accepted: 07/28/2013] [Indexed: 11/19/2022] Open
Abstract
Language and music epitomize the complex representational and computational capacities of the human mind. The two are strikingly similar in their structural and expressive features, and a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct, either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, conveying pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
Affiliation(s)
- Tyler K. Perrachione
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Evelina G. Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Louis Vinke
- Department of Psychology, Bowling Green State University, Bowling Green, Ohio, United States of America
- Edward Gibson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laura C. Dilley
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan, United States of America