1. Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024; 2:2. PMID: 39242963; PMCID: PMC11332030; DOI: 10.1038/s44271-023-00053-6.
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping) were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences. Finger-tapping held an advantage over whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception of speech and music was better at different rates and was predicted by a fast general synchronization component and a slow finger-tapping synchronization component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly related to a differential recruitment of cortical motor circuitry.
Affiliation(s)
- Alice Vivien Barchet: Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry: Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi: Music and Audio Research Laboratory, New York University, New York, NY, USA; Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele: Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
2. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820; DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris: Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven: Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin: Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
3. Belden A, Quinci MA, Geddes M, Donovan NJ, Hanser SB, Loui P. Functional Organization of Auditory and Reward Systems in Aging. J Cogn Neurosci 2023; 35:1570-1592. PMID: 37432735; PMCID: PMC10513766; DOI: 10.1162/jocn_a_02028.
Abstract
The intrinsic organization of functional brain networks is known to change with age and is affected by perceptual input and task conditions. Here, we compare functional activity and connectivity during music listening and rest between younger (n = 24) and older (n = 24) adults, using whole-brain regression, seed-based connectivity, and ROI-ROI connectivity analyses. As expected, activity and connectivity of auditory and reward networks scaled with liking during music listening in both groups. Younger adults showed higher within-network connectivity of auditory and reward regions compared with older adults, both at rest and during music listening, but this age-related difference at rest was reduced during music listening, especially in individuals who self-report high musical reward. Furthermore, younger adults showed higher functional connectivity between the auditory network and medial prefrontal cortex that was specific to music listening, whereas older adults showed a more globally diffuse pattern of connectivity, including higher connectivity between auditory regions and bilateral lingual and inferior frontal gyri. Finally, connectivity between auditory and reward regions was higher when listening to music selected by the participant. These results highlight the roles of aging and reward sensitivity in auditory and reward networks, and may inform the design of music-based interventions for older adults and improve our understanding of functional network dynamics of the brain at rest and during a cognitively engaging task.
Affiliation(s)
- Nancy J Donovan: Brigham and Women's Hospital and Harvard Medical School, Boston, MA
4. Olszewska AM, Droździel D, Gaca M, Kulesza A, Obrębski W, Kowalewski J, Widlarz A, Marchewka A, Herman AM. Unlocking the musical brain: A proof-of-concept study on playing the piano in MRI scanner with naturalistic stimuli. Heliyon 2023; 9:e17877. PMID: 37501960; PMCID: PMC10368778; DOI: 10.1016/j.heliyon.2023.e17877.
Abstract
Music is a universal human phenomenon, and can be studied for itself or as a window into the understanding of the brain. Few neuroimaging studies investigate actual playing in the MRI scanner, likely because of the lack of available experimental hardware and analysis tools. Here, we offer an innovative paradigm that addresses this issue in neuromusicology using naturalistic, polyphonic musical stimuli, present a commercially available MRI-compatible piano, and describe a flexible approach to quantifying participants' performance. We show how making errors while playing can be investigated using an altered auditory feedback paradigm. In the spirit of open science, we make our experimental paradigms and analysis tools available to other researchers studying pianists in MRI. Altogether, we present a proof-of-concept study that shows the feasibility of playing the piano in the MRI scanner and takes a step towards using more naturalistic stimuli.
Affiliation(s)
- Alicja M. Olszewska: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Dawid Droździel: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Maciej Gaca: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Agnieszka Kulesza: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Wojciech Obrębski: Department of Nuclear and Medical Electronics, Faculty of Electronics and Information Technology, Warsaw University of Technology, 1 Politechniki Square, 00-661 Warsaw, Poland; 10 Murarska Street, 08-110 Siedlce, Poland
- Agnieszka Widlarz: Chair of Rhythmics and Piano Improvisation, Department of Choir Conducting and Singing, Music Education and Rhythmics, The Chopin University of Music, Okolnik 2 Street, 00-368 Warsaw, Poland
- Artur Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Aleksandra M. Herman: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
5. Zamorano AM, Zatorre RJ, Vuust P, Friberg A, Birbaumer N, Kleber B. Singing training predicts increased insula connectivity with speech and respiratory sensorimotor areas at rest. Brain Res 2023:148418. PMID: 37217111; DOI: 10.1016/j.brainres.2023.148418.
Abstract
The insula contributes to the detection of salient events during goal-directed behavior and participates in the coordination of motor, multisensory, and cognitive systems. Recent task-fMRI studies with trained singers suggest that singing experience can enhance the access to these resources. However, the long-term effects of vocal training on insula-based networks are still unknown. In this study, we employed resting-state fMRI to assess experience-dependent differences in insula co-activation patterns between conservatory-trained singers and non-singers. Results indicate enhanced bilateral anterior insula connectivity in singers relative to non-singers with constituents of the speech sensorimotor network, specifically the cerebellum (lobules V-VI) and the superior parietal lobes. The reverse comparison showed no effects. The amount of accumulated singing training predicted enhanced bilateral insula co-activation with primary sensorimotor areas representing the diaphragm and the larynx/phonation area (regions crucial for cortico-motor control of complex vocalizations), as well as with the bilateral thalamus and the left putamen. Together, these findings highlight the neuroplastic effect of expert singing training on insula-based networks, as evidenced by the association between enhanced insula co-activation profiles in singers and components of the brain's speech motor system.
Affiliation(s)
- A M Zamorano: Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- R J Zatorre: McGill University-Montreal Neurological Institute, Neuropsychology and Cognitive Neuroscience, Montreal, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- P Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, & The Royal Academy of Music Aarhus/Aalborg, Denmark
- A Friberg: Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
- N Birbaumer: Institute for Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany
- B Kleber: Institute for Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, & The Royal Academy of Music Aarhus/Aalborg, Denmark
6. Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. PMID: 35569784; DOI: 10.1016/j.neuroimage.2022.119310.
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read) whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem processing towards the left hemisphere and a bias of song processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
Affiliation(s)
- Mathias Scharinger: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany
- Christine A Knoop: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
7. Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, Schalk G, Kanwisher N. A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. PMID: 35196507; PMCID: PMC9092957; DOI: 10.1016/j.cub.2022.01.069.
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music and other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
Affiliation(s)
- Sam V Norman-Haignere: Zuckerman Institute, Columbia University, New York, NY, USA; HHMI Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France; Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jenelle Feather: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
- Dana Boebinger: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Peter Brunner: Department of Neurology, Albany Medical College, Albany, NY, USA; National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
- Anthony Ritaccio: Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurology, Mayo Clinic, Jacksonville, FL, USA
- Josh H McDermott: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Gerwin Schalk: Department of Neurology, Albany Medical College, Albany, NY, USA
- Nancy Kanwisher: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
8. Dietziker J, Staib M, Frühholz S. Neural competition between concurrent speech production and other speech perception. Neuroimage 2020; 228:117710. PMID: 33385557; DOI: 10.1016/j.neuroimage.2020.117710.
Abstract
Understanding others' speech while individuals simultaneously produce speech utterances implies neural competition and requires specific mechanisms for a neural resolution, given that previous studies proposed opposing signal dynamics for both processes in the auditory cortex (AC). We here used neuroimaging in humans to investigate this neural competition by lateralized stimulations with other speech samples and ipsilateral or contralateral lateralized feedback of actively produced self speech utterances in the form of various speech vowels. In experiment 1, we show, first, that others' speech classifications during active self speech lead to activity in the planum temporale (PTe) when both self and other speech samples were presented together to only the left or right ear. The contralateral PTe also seemed to respond indifferently to single self and other speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulations (i.e., self and other speech presented to separate ears). Unlike previous studies, this left anterior STC activity supported self speech rather than other speech processing. Furthermore, right mid and anterior STC was more involved in other speech processing. These results signify specific mechanisms for self and other speech processing in the left and right STC, beyond a more general speech processing in PTe. Third, other speech recognition in the context of listening to recorded self speech in experiment 2 led to largely symmetric activity in STC and additionally in inferior frontal subregions. The latter was previously reported to be generally relevant for other speech perception and classification, but we found frontal activity only when other speech classification was challenged by recorded, but not by active, self speech samples. Altogether, unlike formerly established brain networks for uncompetitive other speech perception, active self speech during other speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal brain activations.
Affiliation(s)
- Joris Dietziker: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Matthias Staib: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz: Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Switzerland; Department of Psychology, University of Oslo, Norway
9. Haiduk F, Quigley C, Fitch WT. Song Is More Memorable Than Speech Prosody: Discrete Pitches Aid Auditory Working Memory. Front Psychol 2020; 11:586723. PMID: 33362651; PMCID: PMC7758421; DOI: 10.3389/fpsyg.2020.586723.
Abstract
Vocal music and spoken language both have important roles in human communication, but it is unclear why these two different modes of vocal communication exist. Although similar, speech and song differ in certain design features. One interesting difference is in the pitch intonation contour, which consists of discrete tones in song vs. gliding intonation contours in speech. Here, we investigated whether vocal phrases consisting of discrete pitches (song-like) or gliding pitches (speech-like) are remembered better, conducting three studies implementing auditory same-different tasks at three levels of difficulty. We tested two hypotheses: that discrete pitch contours aid auditory memory, independent of musical experience ("song memory advantage hypothesis"), or that the greater everyday experience of perceiving and producing speech makes speech intonation easier to remember ("experience advantage hypothesis"). We used closely matched stimuli, controlling for rhythm and timbre, and we included a stimulus intermediate between song-like and speech-like pitch contours (with partially gliding and partially discrete pitches). We also assessed participants' musicality to evaluate experience-dependent effects. We found that song-like vocal phrases are remembered better than speech-like vocal phrases, and that intermediate vocal phrases evoked a similar advantage to song-like vocal phrases. Participants with more musical experience were better at remembering all three types of vocal phrases. The precise roles of absolute and relative pitch perception and the influence of top-down vs. bottom-up processing should be clarified in future studies. However, our results suggest that one potential reason for the emergence of discrete pitch, a feature that characterises music across cultures, might be that it enhances auditory memory.
Affiliation(s)
- Felix Haiduk: Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Cliodhna Quigley: Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria; Konrad Lorenz Institute of Ethology, University of Veterinary Medicine Vienna, Vienna, Austria
- W. Tecumseh Fitch: Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
10. Hahn LE, Benders T, Snijders TM, Fikkert P. Six-month-old infants recognize phrases in song and speech. Infancy 2020; 25:699-718. PMID: 32794372; DOI: 10.1111/infa.12357.
Abstract
Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well-attested and is a cornerstone of the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six-month-old Dutch infants (n = 80) were tested in the song or speech modality in the head-turn preference procedure. First, infants were familiarized with two versions of the same word sequence: one version represented a well-formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented with two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well-formed sequence, but only in a more fine-grained analysis. The preference for well-formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between the stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues, while also providing a possible explanation for differences in effect sizes.
Affiliation(s)
- Laura E Hahn: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
- Titia Benders: Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Tineke M Snijders: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Paula Fikkert: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
11. Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum Brain Mapp 2020; 41:1842-1858. PMID: 31957928; PMCID: PMC7268089; DOI: 10.1002/hbm.24916.
Abstract
Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right-hemispheric regions, beyond the classical left-hemispheric language system. Whether or not this notion generalises across languages remains, however, unclear. Tonal languages are a particularly interesting test case because of the dual linguistic function of pitch, which conveys lexical meaning in the form of tone in addition to intonation. To date, only a few studies have explored how intonation is processed in tonal languages, how this compares to tone, and how processing differs between tonal and non-tonal language speakers. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised mono-syllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of brain activity of the two groups between the three tasks showed large cross-linguistic commonalities in the neural processing of intonation in left fronto-parietal, right frontal, and bilateral cingulo-opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision-making processes, respectively. Tone processing overlapped with intonation processing in left fronto-parietal areas in both groups, but evoked additional activity in bilateral temporo-parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross-linguistic commonalities in the neural implementation of intonation processing but dissociations for semantic processing of tone only in tonal language speakers.
Affiliation(s)
- Pei-Ju Chien: International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen: Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler: Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
12. Rossi S, Gugler MF, Rungger M, Galvan O, Zorowka PG, Seebacher J. How the Brain Understands Spoken and Sung Sentences. Brain Sci 2020; 10:E36. PMID: 31936356; PMCID: PMC7017195; DOI: 10.3390/brainsci10010036.
Abstract
The present study investigates whether meaning is similarly extracted from spoken and sung sentences. For this purpose, subjects listened to semantically correct and incorrect sentences while performing a correctness judgement task. In order to examine the underlying neural mechanisms, a multi-methodological approach was chosen, combining two neuroscientific methods with behavioral data. In particular, fast dynamic changes reflected in the semantically associated N400 component of the electroencephalography (EEG) were assessed simultaneously with the topographically more fine-grained vascular signals acquired by functional near-infrared spectroscopy (fNIRS). EEG results revealed a larger N400 for incorrect compared to correct sentences in both spoken and sung sentences. However, the N400 was delayed for sung sentences, potentially due to the longer sentence duration. fNIRS results revealed larger activations for spoken compared to sung sentences, irrespective of semantic correctness, in predominantly left-hemispheric areas, potentially suggesting a greater familiarity with spoken material. Furthermore, fNIRS revealed a widespread activation for correct compared to incorrect sentences irrespective of modality, potentially indicating successful processing of sentence meaning. The combined results indicate similar semantic processing in speech and song.
Affiliation(s)
- Sonja Rossi: ICONE-Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Manfred F Gugler: Department for Medical Psychology, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Markus Rungger: Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Oliver Galvan: Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Patrick G Zorowka: Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Josef Seebacher: Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
13. Tsai CG, Li CW. Is It Speech or Song? Effect of Melody Priming on Pitch Perception of Modified Mandarin Speech. Brain Sci 2019; 9:286. PMID: 31652522; PMCID: PMC6826721; DOI: 10.3390/brainsci9100286.
Abstract
Tonal languages make use of pitch variation for distinguishing lexical semantics, and their melodic richness seems comparable to that of music. The present study investigated a novel priming effect of melody on the pitch processing of Mandarin speech. When a spoken Mandarin utterance is preceded by a musical melody that mimics the melody of the utterance, the listener is likely to perceive this utterance as song. We used functional magnetic resonance imaging to examine the neural substrates of this speech-to-song transformation. Pitch contours of spoken utterances were modified so that these utterances could be perceived as either speech or song. When modified speech (target) was preceded by a musical melody (prime) that mimicked the speech melody, a task of judging the melodic similarity between the target and prime was associated with increased activity in the inferior frontal gyrus (IFG) and superior/middle temporal gyrus (STG/MTG) during target perception. We suggest that the pars triangularis of the right IFG may allocate attentional resources to the multi-modal processing of speech melody, and the STG/MTG may integrate the phonological and musical (melodic) information of this stimulus. These results are discussed in relation to subvocal rehearsal, the speech-to-song illusion, and song perception.
Affiliation(s)
- Chen-Gia Tsai: Graduate Institute of Musicology, National Taiwan University, Taipei 106, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei 106, Taiwan
- Chia-Wei Li: Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
14. Angulo-Perkins A, Concha L. Discerning the functional networks behind processing of music and speech through human vocalizations. PLoS One 2019; 14:e0222796. PMID: 31600231; PMCID: PMC6786620; DOI: 10.1371/journal.pone.0222796.
Abstract
A fundamental question regarding music processing is its degree of independence from speech processing, in terms of their underlying neuroanatomy and the influence of cognitive traits and abilities. Although a straight answer to that question is still lacking, a large number of studies have described where in the brain and in which contexts (tasks, stimuli, populations) this independence is, or is not, observed. We examined the independence between music and speech processing using functional magnetic resonance imaging and a stimulation paradigm with different human vocal sounds produced by the same voice. The stimuli were grouped as Speech (spoken sentences), Hum (hummed melodies), and Song (sung sentences); the sentences used in the Speech and Song categories were the same, as were the melodies used in the two musical categories. Each category had a scrambled counterpart, which allowed us to render speech and melody unintelligible while preserving global amplitude and frequency characteristics. Finally, we included a group of musicians to evaluate the influence of musical expertise. Similar global patterns of cortical activity were related to all sound categories compared to baseline, but important differences were evident. Regions more sensitive to musical sounds were located bilaterally in the anterior and posterior superior temporal gyrus (planum polare and temporale), the right supplementary and premotor areas, and the inferior frontal gyrus. However, only temporal areas and supplementary motor cortex remained music-selective after subtracting brain activity related to the scrambled stimuli. Speech-selective regions mainly affected by the intelligibility of the stimuli were observed in the left pars opercularis and the anterior portion of the middle temporal gyrus. We did not find differences between musicians and non-musicians. Our results confirmed music-selective cortical regions in associative cortices, independent of previous musical training.
Affiliation(s)
- Arafat Angulo-Perkins: Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México; Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
- Luis Concha: Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México; International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada
15. Bird LJ, Jackson GD, Wilson SJ. Music training is neuroprotective for verbal cognition in focal epilepsy. Brain 2019; 142:1973-1987. PMID: 31074775; DOI: 10.1093/brain/awz124.
Abstract
Focal epilepsy is a unilateral brain network disorder, providing an ideal neuropathological model with which to study the effects of focal neural disruption on a range of cognitive processes. While language and memory functions have been extensively investigated in focal epilepsy, music cognition has received less attention, particularly in patients with music training or expertise. This represents a critical gap in the literature. A better understanding of the effects of epilepsy on music cognition may provide greater insight into the mechanisms behind disease- and training-related neuroplasticity, which may have implications for clinical practice. In this cross-sectional study, we comprehensively profiled music and non-music cognition in 107 participants; musicians with focal epilepsy (n = 35), non-musicians with focal epilepsy (n = 39), and healthy control musicians and non-musicians (n = 33). Parametric group comparisons revealed a specific impairment in verbal cognition in non-musicians with epilepsy but not musicians with epilepsy, compared to healthy musicians and non-musicians (P = 0.029). This suggests a possible neuroprotective effect of music training against the cognitive sequelae of focal epilepsy, and implicates potential training-related cognitive transfer that may be underpinned by enhancement of auditory processes primarily supported by temporo-frontal networks. Furthermore, our results showed that musicians with an earlier age of onset of music training performed better on a composite score of melodic learning and memory compared to non-musicians (P = 0.037), while late-onset musicians did not differ from non-musicians. For most composite scores of music cognition, although no significant group differences were observed, a similar trend was apparent. We discuss these key findings in the context of a proposed model of three interacting dimensions (disease status, music expertise, and cognitive domain), and their implications for clinical practice, music education, and music neuroscience research.
Affiliation(s)
- Laura J Bird: Melbourne School of Psychological Sciences, The University of Melbourne, Grattan Street, Parkville, Victoria, Australia; The Florey Institute of Neuroscience and Mental Health, Melbourne Brain Centre, 245 Burgundy Street, Heidelberg, Victoria, Australia
- Graeme D Jackson: The Florey Institute of Neuroscience and Mental Health, Melbourne Brain Centre, 245 Burgundy Street, Heidelberg, Victoria, Australia; Department of Medicine, The University of Melbourne, Grattan Street, Parkville, Victoria, Australia
- Sarah J Wilson: Melbourne School of Psychological Sciences, The University of Melbourne, Grattan Street, Parkville, Victoria, Australia; The Florey Institute of Neuroscience and Mental Health, Melbourne Brain Centre, 245 Burgundy Street, Heidelberg, Victoria, Australia
16. Coumel M, Christiner M, Reiterer SM. Second Language Accent Faking Ability Depends on Musical Abilities, Not on Working Memory. Front Psychol 2019; 10:257. PMID: 30809178; PMCID: PMC6379457; DOI: 10.3389/fpsyg.2019.00257.
Abstract
Studies involving direct language imitation tasks have shown that pronunciation ability is related to musical competence and working memory capacities. However, this type of task may measure individual differences in many different linguistic dimensions, other than just phonetic ones. The present study uses an indirect imitation task, asking participants to fake a foreign accent, in order to specifically target individual differences in phonetic abilities. Its aim is to investigate whether musical expertise and working memory capacities relate to phonological awareness (i.e., participants' implicit knowledge about the phonological system of the target language and its structural properties at the segmental, suprasegmental, and phonotactic levels) as measured by this task. To this end, French native listeners (N = 36) graded how well German native imitators (N = 25) faked a French accent while speaking in German. The imitators also performed a musicality test, a self-assessment of their singing abilities, and working memory tasks. The results indicate that the ability to fake a French accent correlates with singing ability and musical perceptual abilities, but not with working memory capacities. This suggests that heightened musical abilities may lead to increased phonological awareness, probably by providing participants with highly efficient memorization strategies and highly accurate long-term phonetic representations of foreign sounds. Comparison with data from previous studies shows that working memory could be implicated in the pronunciation learning process targeted by direct imitation tasks, whereas musical expertise influences both the storing of knowledge and its later retrieval, here assessed via an indirect imitation task.
Affiliation(s)
- Marion Coumel: Department of Linguistics, University of Vienna, Vienna, Austria; Department of Psychology, University of Warwick, Coventry, United Kingdom
- Markus Christiner: Department of Linguistics, University of Vienna, Vienna, Austria; Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany
- Susanne Maria Reiterer: Department of Linguistics, University of Vienna, Vienna, Austria; Teacher Education Center, University of Vienna, Vienna, Austria
17. Kuang J, Liberman M. Integrating Voice Quality Cues in the Pitch Perception of Speech and Non-speech Utterances. Front Psychol 2018; 9:2147. PMID: 30555365; PMCID: PMC6281971; DOI: 10.3389/fpsyg.2018.02147.
Abstract
Pitch perception plays a crucial role in speech processing. Since F0 is highly ambiguous and variable in the speech signal, effective pitch-range perception is important for perceiving the intended linguistic pitch targets. This study argues that effective pitch-range perception can be achieved by taking advantage of other signal-internal information that co-varies with F0, such as voice quality cues. This study provides direct perceptual evidence that voice quality cues, as an indicator of pitch range, can effectively affect pitch-height perception. A series of forced-choice pitch classification experiments with four spectral conditions were conducted to investigate the degree to which manipulating spectral slope affects pitch-height perception. Both non-speech and speech stimuli were investigated. The results suggest that the pitch classification function is significantly shifted under different spectral conditions. Listeners are likely to perceive a higher pitch when the spectrum has higher high-frequency energy (i.e., tenser phonation). The direction of the shift is consistent with the correlation between voice quality and pitch range. Moreover, cue integration is affected by the speech mode: listeners are more sensitive to relative differences within an utterance when hearing speech stimuli. This study generally supports the hypothesis that voice quality is an important enhancement cue for pitch range.
Affiliation(s)
- Jianjing Kuang: Department of Linguistics, University of Pennsylvania, Philadelphia, PA, United States
18. Filippa M, Monaci MG, Grandjean D. Emotion Attribution in Nonverbal Vocal Communication Directed to Preterm Infants. J Nonverbal Behav 2018. DOI: 10.1007/s10919-018-0288-1.
19. Sammler D, Cunitz K, Gierhan SME, Anwander A, Adermann J, Meixensberger J, Friederici AD. White matter pathways for prosodic structure building: A case study. Brain Lang 2018; 183:1-10. PMID: 29758365; DOI: 10.1016/j.bandl.2018.05.001.
Abstract
The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Affiliation(s)
- Daniela Sammler: Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Katrin Cunitz: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital Ulm, Steinhövelstraße 5, 89075 Ulm, Germany
- Sarah M E Gierhan: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
- Alfred Anwander: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Jens Adermann: University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
- Jürgen Meixensberger: University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
- Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
20. Alain C, Du Y, Bernstein LJ, Barten T, Banai K. Listening under difficult conditions: An activation likelihood estimation meta-analysis. Hum Brain Mapp 2018. PMID: 29536592; DOI: 10.1002/hbm.24031.
Abstract
The brain networks supporting speech identification and comprehension under difficult listening conditions are not well specified. The networks hypothesized to underlie effortful listening include regions responsible for executive control. We conducted meta-analyses of auditory neuroimaging studies to determine whether a common activation pattern of the frontal lobe supports effortful listening under different speech manipulations. Fifty-three functional neuroimaging studies investigating speech perception were divided into three independent activation likelihood estimation (ALE) analyses based on the type of speech manipulation paradigm used: speech-in-noise (SIN; 16 studies, 224 participants); spectrally degraded speech using filtering techniques (15 studies, 270 participants); and linguistic complexity (i.e., levels of syntactic, lexical, and semantic intricacy/density; 22 studies, 348 participants). Meta-analysis of the SIN studies revealed that higher effort was associated with activation in the left inferior frontal gyrus (IFG), left inferior parietal lobule, and right insula. Studies using spectrally degraded speech demonstrated increased activation of the insula bilaterally and the left superior temporal gyrus (STG). Studies manipulating linguistic complexity showed activation in the left IFG, right middle frontal gyrus, left middle temporal gyrus, and bilateral STG. Planned contrasts revealed left IFG activation in linguistic complexity studies, which differed from activation patterns observed in SIN or spectral degradation studies. Although there was no significant overlap in prefrontal activation across these three speech manipulation paradigms, SIN and spectral degradation showed overlapping regions in the left and right insula. These findings provide evidence that there is regional specialization within the left IFG and that differential executive networks underlie effortful listening.
Affiliation(s)
- Claude Alain: Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Yi Du: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Lori J Bernstein: Department of Supportive Care, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Thijs Barten: Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada
- Karen Banai: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
21. Ravignani A, Thompson B, Filippi P. The Evolution of Musicality: What Can Be Learned from Language Evolution Research? Front Neurosci 2018; 12:20. PMID: 29467601; PMCID: PMC5808206; DOI: 10.3389/fnins.2018.00020.
Abstract
Language and music share many commonalities, both as natural phenomena and as subjects of intellectual inquiry. Rather than exhaustively reviewing these connections, we focus on potential cross-pollination of methodological inquiries and attitudes. We highlight areas in which scholarship on the evolution of language may inform the evolution of music. We focus on the value of coupled empirical and formal methodologies, and on the futility of mysterianism, the declining view that the nature, origins and evolution of language cannot be addressed empirically. We identify key areas in which the evolution of language as a discipline has flourished historically, and suggest ways in which these advances can be integrated into the study of the evolution of music.
Affiliation(s)
- Andrea Ravignani: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium; Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands
- Bill Thompson: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Piera Filippi: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Institute of Language, Communication and the Brain, Aix-en-Provence, France; Laboratoire Parole et Langage LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Laboratoire de Psychologie Cognitive LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
Collapse
|
22
|
Graber E, Simchy-Gross R, Margulis EH. Musical and linguistic listening modes in the speech-to-song illusion bias timing perception and absolute pitch memory. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:3593. [PMID: 29289094 DOI: 10.1121/1.5016806] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
The speech-to-song (STS) illusion is a phenomenon in which some spoken utterances perceptually transform to song after repetition [Deutsch, Henthorn, and Lapidis (2011). J. Acoust. Soc. Am. 129, 2245-2252]. Tierney, Dick, Deutsch, and Sereno [(2013). Cereb. Cortex. 23, 249-254] developed a set of stimuli where half tend to transform to perceived song with repetition and half do not. Those that transform and those that do not can be understood to induce a musical or linguistic mode of listening, respectively. By comparing performance on perceptual tasks related to transforming and non-transforming utterances, the current study examines whether the musical mode of listening entails higher sensitivity to temporal regularity and better absolute pitch (AP) memory compared to the linguistic mode. In experiment 1, inter-stimulus intervals within STS trials were steady, slightly variable, or highly variable. Participants reported how temporally regular utterance entrances were. In experiment 2, participants performed an AP memory task after a blocked STS exposure phase. Utterances identically matching those used in the exposure phase were targets among transposed distractors in the test phase. Results indicate that listeners exhibit heightened awareness of temporal manipulations but reduced awareness of AP manipulations to transforming utterances. This methodology establishes a framework for implicitly differentiating musical from linguistic perception.
Affiliation(s)
- Emily Graber: Center for Computer Research in Music and Acoustics, Stanford University, 660 Lomita Court, Stanford, California 94305, USA
- Rhimmon Simchy-Gross: Department of Psychological Science, University of Arkansas, 216 Memorial Hall, Fayetteville, Arkansas 72701, USA
23
DePriest J, Glushko A, Steinhauer K, Koelsch S. Language and music phrase boundary processing in Autism Spectrum Disorder: An ERP study. Sci Rep 2017; 7:14465. [PMID: 29089535 PMCID: PMC5663964 DOI: 10.1038/s41598-017-14538-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Accepted: 10/12/2017] [Indexed: 11/08/2022] Open
Abstract
Autism spectrum disorder (ASD) is frequently associated with communicative impairment, regardless of intelligence level or mental age. Impairment of prosodic processing in particular is a common feature of ASD. Despite extensive overlap in the neural resources involved in prosody and music processing, music perception seems to be spared in this population. The present study is the first to investigate prosodic phrasing in ASD in both language and music, combining event-related brain potential (ERP) and behavioral methods. We tested phrase boundary processing in language and music in neurotypical adults and high-functioning individuals with ASD, targeting an ERP response associated with phrase boundary processing in both domains: the Closure Positive Shift (CPS). A language-CPS was observed in the neurotypical group, whereas in ASD participants a smaller response failed to reach statistical significance. In music, we found a boundary-onset music-CPS for both groups during pauses between musical phrases. Our results support the view of preserved processing of musical cues in ASD individuals, with a corresponding prosodic impairment. This suggests that, despite the existence of a domain-general processing mechanism (the CPS), key differences in the integration of features of language and music may lead to the prosodic impairment in ASD.
Affiliation(s)
- John DePriest: Freie Universität Berlin, Berlin, Germany; Program in Linguistics, Tulane University, New Orleans, Louisiana, United States of America
- Anastasia Glushko: Freie Universität Berlin, Berlin, Germany; The Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec, Canada
- Karsten Steinhauer: School of Communication Sciences and Disorders, McGill University, Montreal, Quebec, Canada; The Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec, Canada
- Stefan Koelsch: Freie Universität Berlin, Berlin, Germany; University of Bergen, Bergen, Norway
24
Tryfon A, Foster NEV, Sharda M, Hyde KL. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis. Behav Brain Res 2017; 338:118-127. [PMID: 29074403 DOI: 10.1016/j.bbr.2017.10.025] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2017] [Revised: 10/13/2017] [Accepted: 10/20/2017] [Indexed: 02/06/2023]
Abstract
Autism spectrum disorder (ASD) is often characterized by atypical language profiles and atypical auditory and speech processing, which can contribute to aberrant language and social communication skills in ASD. The neural basis of speech perception could serve as an early neurobiological marker of ASD, but mixed results across studies render it difficult to find a reliable neural characterization of speech processing in ASD. To this end, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation during speech processing in the bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends toward distinct activation in the TD group, with additional activation in higher-order brain areas including the left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD.
Affiliation(s)
- Ana Tryfon: International Laboratory for Brain, Music, and Sound Research (BRAMS), Pavillon 1420 Mont-Royal, Department of Psychology, University of Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada; Faculty of Medicine, McIntyre Medical Building, McGill University, 3655 Sir William Osler, Montreal, Quebec H3G 1Y6, Canada
- Nicholas E V Foster: International Laboratory for Brain, Music, and Sound Research (BRAMS), Pavillon 1420 Mont-Royal, Department of Psychology, University of Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada
- Megha Sharda: International Laboratory for Brain, Music, and Sound Research (BRAMS), Pavillon 1420 Mont-Royal, Department of Psychology, University of Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada
- Krista L Hyde: International Laboratory for Brain, Music, and Sound Research (BRAMS), Pavillon 1420 Mont-Royal, Department of Psychology, University of Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada; Faculty of Medicine, McIntyre Medical Building, McGill University, 3655 Sir William Osler, Montreal, Quebec H3G 1Y6, Canada
25
Neural correlates of infants' sensitivity to vocal expressions of peers. Dev Cogn Neurosci 2017; 26:39-44. [PMID: 28456088 PMCID: PMC6987768 DOI: 10.1016/j.dcn.2017.04.003] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Revised: 04/10/2017] [Accepted: 04/11/2017] [Indexed: 11/20/2022] Open
Abstract
Responding to others' emotional expressions is an essential and early-developing social skill among humans. Much research has focused on how infants process facial expressions, while much less is known about infants' processing of vocal expressions. We examined 8-month-old infants' processing of other infants' vocalizations by measuring event-related brain potentials (ERPs) to positive (infant laughter), negative (infant cries), and neutral (adult hummed speech) vocalizations. Our ERP results revealed that hearing another infant cry elicited an enhanced negativity (N200) at temporal electrodes around 200 ms, whereas listening to another infant laugh resulted in an enhanced positivity (P300) at central electrodes around 300 ms. This indicates that infants' brains rapidly respond to a crying peer during early auditory processing stages, but also selectively respond to a laughing peer during later stages associated with familiarity detection processes. These findings provide evidence for infants' sensitivity to vocal expressions of peers and shed new light on the neural processes underpinning emotion processing in infants.
26
A graded tractographic parcellation of the temporal lobe. Neuroimage 2017; 155:503-512. [PMID: 28411156 PMCID: PMC5518769 DOI: 10.1016/j.neuroimage.2017.04.016] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Revised: 04/06/2017] [Accepted: 04/06/2017] [Indexed: 02/06/2023] Open
Abstract
The temporal lobe has been implicated in multiple cognitive domains through lesion studies as well as cognitive neuroimaging research. There has recently been increased interest in the structural and connective architecture that underlies these functions. However, there has not yet been a comprehensive exploration of the patterns of connectivity that appear across the temporal lobe. This article uses a data-driven, spectral reordering approach to understand the general axes of structural connectivity within the temporal lobe. Two important findings emerge from the study. Firstly, the temporal lobe's overarching patterns of connectivity are organised along two key structural axes: medial to lateral and anteroventral to posterodorsal, mirroring findings in the functional literature. Secondly, the connective organisation of the temporal lobe is graded and transitional; this is reminiscent of the original work of 19th-century neuroanatomists, who posited the existence of some regions which transitioned between one another in a graded fashion. While regions with unique connectivity exist, the boundaries between these are not always sharp. Instead, there are zones of graded connectivity reflecting the influence and overlap of shared connectivity.
Highlights:
- A graded parcellation identified changes in connectivity across the temporal lobe.
- The connective organisation of the temporal lobe was graded and transitional.
- Two axes of organisation were found: medial-lateral and anteroventral-posterodorsal.
- While regions of distinct connectivity exist, their boundaries are not always sharp.
- Zones of graded connectivity exist, reflecting the influence of shared connectivity.
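Spectral reordering, the data-driven approach named here, sorts vertices by the Fiedler vector (the eigenvector of the graph Laplacian with the second-smallest eigenvalue) of a connectivity-similarity matrix, so that gradual transitions in connectivity appear as smooth changes along the ordering. A minimal sketch on simulated data follows; the similarity-matrix construction and preprocessing are assumptions, not the paper's exact pipeline.

```python
# Spectral reordering sketch: order items so similar connectivity profiles are adjacent.
import numpy as np

def spectral_reorder(similarity):
    """Order items by the Fiedler vector of the unnormalised graph Laplacian."""
    W = (similarity + similarity.T) / 2.0            # enforce symmetry
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                        # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                          # skip the trivial constant eigenvector
    return np.argsort(fiedler)

# Toy example: two noisy connectivity 'communities' get separated in the ordering.
rng = np.random.default_rng(0)
block = np.kron(np.eye(2), np.ones((5, 5)))          # block-diagonal similarity structure
sim = block + 0.1 * rng.random((10, 10))
print(spectral_reorder(sim))
```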
27
Markiewicz CJ, Bohland JW. Mapping the cortical representation of speech sounds in a syllable repetition task. Neuroimage 2016; 141:174-190. [DOI: 10.1016/j.neuroimage.2016.07.023] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Revised: 07/08/2016] [Accepted: 07/10/2016] [Indexed: 11/17/2022] Open
28
Filippi P. Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Front Psychol 2016; 7:1393. [PMID: 27733835 PMCID: PMC5039945 DOI: 10.3389/fpsyg.2016.01393] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 08/31/2016] [Indexed: 01/29/2023] Open
Abstract
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody, and perhaps also of music, and that it continues to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
Affiliation(s)
- Piera Filippi: Department of Artificial Intelligence, Vrije Universiteit Brussel, Brussels, Belgium
29
Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697 DOI: 10.1111/ejn.13391] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Revised: 08/29/2016] [Accepted: 08/31/2016] [Indexed: 11/27/2022]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music. No such decrease was found when we compared the responses to music sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli already occurs at early stages of auditory processing, possibly subserved by partly separated neuronal populations.
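The adaptation logic reduces to comparing component amplitudes as a function of the preceding stimulus category. A toy sketch follows, with a placeholder N100 time window and simulated epochs rather than the study's actual recording parameters or channel selection.

```python
# Toy adaptation analysis: repetition suppression of the N100 as a same-minus-different contrast.
import numpy as np

def n100_amplitude(epochs, times, win=(0.08, 0.12)):
    """Mean amplitude in the N100 window for each epoch (epochs: trials x samples)."""
    mask = (times >= win[0]) & (times <= win[1])
    return epochs[:, mask].mean(axis=1)

def adaptation_effect(epochs, times, preceded_by_same):
    """Response to sounds preceded by the same category minus response when
    preceded by a different category (negative values = suppression for N100)."""
    amp = n100_amplitude(epochs, times)
    return amp[preceded_by_same].mean() - amp[~preceded_by_same].mean()

# Simulated data: 100 trials, 256 samples spanning -0.1 to 0.4 s.
rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.4, 256)
epochs = rng.normal(0, 1, (100, 256))
same = rng.random(100) < 0.5
print(adaptation_effect(epochs, times, same))
```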
Affiliation(s)
- Simon Rigoulot: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC H4H 1R3, Canada
- Jorge L Armony: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC H4H 1R3, Canada
30
Ong JH, Burnham D, Stevens CJ, Escudero P. Naïve Learners Show Cross-Domain Transfer after Distributional Learning: The Case of Lexical and Musical Pitch. Front Psychol 2016; 7:1189. [PMID: 27551272 PMCID: PMC4976504 DOI: 10.3389/fpsyg.2016.01189] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Accepted: 07/27/2016] [Indexed: 11/13/2022] Open
Abstract
Experienced listeners of a particular acoustic cue in either speech or music appear to have an advantage when perceiving a similar cue in the other domain (i.e., they exhibit cross-domain transfer). One explanation for cross-domain transfer relates to the acquisition of the foundations of speech and music: if acquiring pitch-based elements in speech or music results in heightened attention to pitch in general, then cross-domain transfer of pitch may be observed, which may explain the cross-domain phenomenon seen among listeners of a tone language and listeners with musical training. Here, we investigate this possibility in naïve adult learners, who were trained to acquire pitch-based elements using a distributional learning paradigm, to provide a proof-of-concept for the explanation. Learners were exposed to a stimulus distribution spanning either a Thai lexical tone minimal pair or a novel musical chord minimal pair. Within each domain, the distribution highlights pitch to facilitate learning of two different sounds (Bimodal distribution) or the distribution minimizes pitch so that the input is inferred to be from a single sound (Unimodal distribution). Learning was assessed before and after exposure to the distribution using discrimination tasks with both Thai tone and musical chord minimal pairs. We hypothesize: (i) distributional learning for learners in both the tone and the chord distributions, that is, pre-to-post improvement in discrimination after exposure to the Bimodal but not the Unimodal distribution; and (ii) for both the tone and chord conditions, learners in the Bimodal conditions but not those in the Unimodal conditions will show cross-domain transfer, as indexed by improvement in discrimination of test items in the domain other than what they were trained on. The results support both hypotheses, suggesting that distributional learning is not only used to acquire the foundations of speech and music, but may also play a role in cross-domain transfer: as a result of learning primitives based on a particular cue, learners show heightened attention to that cue in any auditory signal.
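The distributional learning manipulation amounts to sampling training tokens from a continuum with either a two-peaked (bimodal) or a one-peaked (unimodal) frequency profile. A minimal sketch is below; the 8-step continuum and token frequencies are generic placeholders, not the study's exact stimulus steps or counts.

```python
# Bimodal vs. unimodal training distributions over a pitch continuum (toy values).
import numpy as np

steps = np.arange(1, 9)                             # 8-step continuum between the minimal pair
bimodal_freq = np.array([1, 2, 4, 1, 1, 4, 2, 1])   # two peaks -> input implies two sounds
unimodal_freq = np.array([1, 2, 3, 4, 4, 3, 2, 1])  # single central peak -> one sound

def training_sequence(freqs, n_trials=128, seed=0):
    """Sample continuum steps in proportion to the distribution's token frequencies."""
    rng = np.random.default_rng(seed)
    p = freqs / freqs.sum()
    return rng.choice(steps, size=n_trials, p=p)

print(training_sequence(bimodal_freq)[:16])
print(training_sequence(unimodal_freq)[:16])
```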
Affiliation(s)
- Jia Hoong Ong: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Denis Burnham: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Catherine J Stevens: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
31
Karmonik C, Brandt A, Anderson J, Brooks F, Lytle J, Silverman E, Frazier JT. Music Listening modulates Functional Connectivity and Information Flow in the Human Brain. Brain Connect 2016; 6:632-641. [PMID: 27464741 DOI: 10.1089/brain.2016.0428] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Listening to familiar music has recently been reported to be beneficial during recovery from stroke. A better understanding of changes in functional connectivity and information flow is warranted in order to further optimize and target this approach through music therapy. Twelve healthy volunteers listened to seven different auditory samples during an fMRI scanning session: a musical piece chosen by the volunteer that evokes a strong emotional response (referred to as "self-selected emotional"), two unfamiliar music pieces (Invention #1 by J. S. Bach and Gagaku, Japanese classical opera, referred to as "unfamiliar"), the Bach piece repeated with visual guidance (DML: Directed Music Listening), and three spoken language pieces (an unfamiliar African click language, an excerpt of emotionally charged language, and an unemotional reading of a news bulletin). Functional connectivity and betweenness (BTW) maps, a measure of information flow, were created with a graph-theoretical approach. Distinct variations in functional connectivity were found consistently across all subjects for the different music pieces. The largest brain areas were recruited for processing self-selected music with emotional attachment or culturally unfamiliar music. Maps of information flow correlated significantly with fMRI BOLD activation maps (p<0.05). The observed differences in BOLD activation and functional connectivity may help explain previously observed beneficial effects in stroke recovery, as increased blood flow to damaged brain areas, stimulated by active engagement through music listening, may have supported a state more conducive to therapy.
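Betweenness, the information-flow measure used here, counts how often a node lies on shortest paths between other nodes of the connectivity graph. The sketch below shows one plausible construction from ROI time series; the correlation threshold is an arbitrary placeholder and the paper's exact graph construction may differ.

```python
# Betweenness centrality on a thresholded functional-connectivity graph (toy data).
import numpy as np
import networkx as nx

def betweenness_map(roi_timeseries, threshold=0.3):
    """Correlate ROI time series, keep supra-threshold edges, and compute
    betweenness centrality as a proxy for information flow through each ROI."""
    corr = np.corrcoef(roi_timeseries)              # ROIs x ROIs connectivity matrix
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                g.add_edge(i, j)
    return nx.betweenness_centrality(g)

# Toy example: 20 ROIs x 200 time points of simulated BOLD data.
rng = np.random.default_rng(2)
ts = rng.normal(size=(20, 200))
print(betweenness_map(ts))
```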
Affiliation(s)
- Christof Karmonik: Houston Methodist Research Institute, Houston, Texas, United States
- Anthony Brandt: Shepard School of Music, Rice University, Houston, Texas, United States
- Jeff Anderson: Houston Methodist Research Institute, Houston, Texas, United States
- Forrest Brooks: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, Texas, United States
- Julie Lytle: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, Texas, United States
- Elliott Silverman: Lahey Hospital and Medical Center, Burlington, Massachusetts, United States
- Jeff T Frazier: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, Texas, United States
32
Beck Lidén C, Krüger O, Schwarz L, Erb M, Kardatzki B, Scheffler K, Ethofer T. Neurobiology of knowledge and misperception of lyrics. Neuroimage 2016; 134:12-21. [PMID: 27085504 DOI: 10.1016/j.neuroimage.2016.03.080] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2015] [Revised: 03/30/2016] [Accepted: 03/31/2016] [Indexed: 10/21/2022] Open
Abstract
We conducted two functional magnetic resonance imaging (fMRI) experiments to investigate the neural underpinnings of knowledge and misperception of lyrics. In fMRI experiment 1, a linear relationship between familiarity with lyrics and activation was found in left-hemispheric speech-related as well as bilateral striatal areas, which is in line with previous research on the generation of lyrics. In fMRI experiment 2, we employed so-called Mondegreens and Soramimi to induce misperceptions of lyrics, revealing a bilateral network including middle temporal and inferior frontal areas as well as the anterior cingulate cortex (ACC) and mediodorsal thalamus. ACC activation also correlated with the extent to which misperceptions were judged as amusing, corroborating previous neuroimaging results on the role of this area in mediating the pleasant experience of chills during music perception. Finally, we examined the areas engaged during misperception of lyrics using diffusion-weighted imaging (DWI) to determine their structural connectivity. These combined fMRI/DWI results could serve as a neurobiological model for future studies on other types of misunderstanding, events with a potentially strong impact on our social life.
Affiliation(s)
- Claudia Beck Lidén: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany
- Oliver Krüger: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany
- Lena Schwarz: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany; University Clinic for Psychiatry and Psychotherapy, University of Tübingen, Calwer Str. 14, 72076 Tübingen, Germany
- Michael Erb: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany
- Bernd Kardatzki: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany
- Klaus Scheffler: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany; Max-Planck-Institute for Biological Cybernetics, Speemannstraße 38-40, 72076 Tübingen, Germany
- Thomas Ethofer: Department of Biomedical Magnetic Resonance, University of Tübingen, Otfried-Müller-Str. 51, 72076 Tübingen, Germany; University Clinic for Psychiatry and Psychotherapy, University of Tübingen, Calwer Str. 14, 72076 Tübingen, Germany; Max-Planck-Institute for Biological Cybernetics, Speemannstraße 38-40, 72076 Tübingen, Germany
33
Woolgar A, Jackson J, Duncan J. Coding of Visual, Auditory, Rule, and Response Information in the Brain: 10 Years of Multivoxel Pattern Analysis. J Cogn Neurosci 2016; 28:1433-54. [PMID: 27315269 DOI: 10.1162/jocn_a_00981] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
How is the processing of task information organized in the brain? Many views of brain function emphasize modularity, with different regions specialized for processing different types of information. However, recent accounts also highlight flexibility, pointing especially to the highly consistent pattern of frontoparietal activation across many tasks. Although early insights from functional imaging were based on overall activation levels during different cognitive operations, in the last decade many researchers have used multivoxel pattern analyses to interrogate the representational content of activations, mapping out the brain regions that make particular stimulus, rule, or response distinctions. Here, we drew on 100 searchlight decoding analyses from 57 published papers to characterize the information coded in different brain networks. The outcome was highly structured. Visual, auditory, and motor networks predominantly (but not exclusively) coded visual, auditory, and motor information, respectively. By contrast, the frontoparietal multiple-demand network was characterized by domain generality, coding visual, auditory, motor, and rule information. The contribution of the default mode network and voxels elsewhere was minor. The data suggest a balanced picture of brain organization in which sensory and motor networks are relatively specialized for information in their own domain, whereas a specific frontoparietal network acts as a domain-general "core" with the capacity to code many different aspects of a task.
Affiliation(s)
- Alexandra Woolgar: Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
- Jade Jackson: Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, Australia
- John Duncan: MRC Cognition and Brain Sciences Unit, Cambridge, UK; University of Oxford
34
Weidema JL, Roncaglia-Denissen MP, Honing H. Top-Down Modulation on the Perception and Categorization of Identical Pitch Contours in Speech and Music. Front Psychol 2016; 7:817. [PMID: 27313552 PMCID: PMC4889578 DOI: 10.3389/fpsyg.2016.00817] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Accepted: 05/17/2016] [Indexed: 11/18/2022] Open
Abstract
Whether pitch in language and music is governed by domain-specific or domain-general cognitive mechanisms is contentiously debated. The aim of the present study was to investigate whether mechanisms governing pitch contour perception operate differently when pitch information is interpreted as either speech or music. By modulating listening mode, this study aspired to demonstrate that pitch contour perception relies on domain-specific cognitive mechanisms, which are regulated by top-down influences from language and music. Three groups of participants (Mandarin speakers, Dutch speaking non-musicians, and Dutch musicians) were exposed to identical pitch contours, and tested on their ability to identify these contours in a language and musical context. Stimuli consisted of disyllabic words spoken in Mandarin, and melodic tonal analogs, embedded in a linguistic and melodic carrier phrase, respectively. Participants classified identical pitch contours as significantly different depending on listening mode. Top-down influences from language appeared to alter the perception of pitch contour in speakers of Mandarin. This was not the case for non-musician speakers of Dutch. Moreover, this effect was lacking in Dutch speaking musicians. The classification patterns of pitch contours in language and music seem to suggest that domain-specific categorization is modulated by top-down influences from language and music.
Affiliation(s)
- Joey L. Weidema: Music Cognition Group, Amsterdam Brain and Cognition, Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, Netherlands
35
Jaisin K, Suphanchaimat R, Figueroa Candia MA, Warren JD. The Speech-to-Song Illusion Is Reduced in Speakers of Tonal (vs. Non-Tonal) Languages. Front Psychol 2016; 7:662. [PMID: 27242580 PMCID: PMC4860502 DOI: 10.3389/fpsyg.2016.00662] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2016] [Accepted: 04/21/2016] [Indexed: 11/13/2022] Open
Abstract
The speech-to-song illusion has attracted interest as a probe of the perceptual interface between language and music. One might anticipate differential speech-to-song effects in tonal vs. non-tonal languages, since these language classes differ importantly in the linguistic value they assign to tones. Here we addressed this issue for the first time in a cohort of 20 healthy younger adults whose native language was either tonal (Thai, Mandarin) or non-tonal (German, Italian) and all of whom were also fluent in English. All participants were assessed using a protocol designed to induce the speech-to-song illusion on speech excerpts presented in each of the five study languages. Over the combined participant group, there was evidence of a speech-to-song illusion effect for all language stimuli and the extent to which individual participants rated stimuli as "song-like" at baseline was significantly positively correlated with the strength of the speech-to-song effect. However, tonal and non-tonal language stimuli elicited comparable speech-to-song effects and no acoustic language parameter was found to predict the effect. Examining the effect of the listener's native language, tonal language native speakers experienced significantly weaker speech-to-song effects than non-tonal native speakers across languages. Both non-tonal native language and inability to understand the stimulus language significantly predicted the speech-to-song illusion. These findings together suggest that relative propensity to perceive prosodic structures as inherently linguistic vs. musical may modulate the speech-to-song illusion.
Affiliation(s)
- Kankamol Jaisin: Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK; Department of Psychiatry, Faculty of Medicine, Thammasat University, Bangkok, Thailand
- Rapeepong Suphanchaimat: Department of Global Health and Development, London School of Hygiene and Tropical Medicine, London, UK; International Health Policy Program, Ministry of Public Health, Bangkok, Thailand
- Mauricio A Figueroa Candia: Department of Speech, Hearing and Phonetic Sciences, Faculty of Brain Sciences, University College London, London, UK
- Jason D Warren: Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
36
Abstract
The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception.
Significance Statement: Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception.
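The generalization test described here is a cross-classification: a decoder trained to separate an articulatory feature within one acoustic class is evaluated on a different acoustic class, so above-chance accuracy cannot be explained by the acoustic surface form. A schematic sketch on simulated response patterns follows; syllable labels, dimensions, and the planted signal are illustrative stand-ins, not the study's data or pipeline.

```python
# Cross-class generalization decoding: train on stops, test on fricatives (simulated).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 60, 50

# Labels: 0 = labial place (/pa/, /fa/), 1 = alveolar place (/ta/, /sa/).
y_stop = rng.integers(0, 2, n_trials)          # stop syllables (training set)
y_fric = rng.integers(0, 2, n_trials)          # fricative syllables (test set)

# Plant a weak shared 'place of articulation' signal in both syllable classes.
signal = rng.normal(size=n_voxels)
X_stop = rng.normal(size=(n_trials, n_voxels)) + np.outer(y_stop - 0.5, signal)
X_fric = rng.normal(size=(n_trials, n_voxels)) + np.outer(y_fric - 0.5, signal)

# Above-chance accuracy here implies a representation of place of articulation
# that is abstracted away from the acoustic surface form.
clf = SVC(kernel="linear").fit(X_stop, y_stop)
print("cross-class accuracy:", clf.score(X_fric, y_fric))
```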
37
Neural correlates of binding lyrics and melodies for the encoding of new songs. Neuroimage 2016; 127:333-345. [DOI: 10.1016/j.neuroimage.2015.12.018] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Revised: 12/07/2015] [Accepted: 12/11/2015] [Indexed: 01/19/2023] Open
38
Peretz I, Vuvan D, Lagrois MÉ, Armony JL. Neural overlap in processing music and speech. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140090. [PMID: 25646513 DOI: 10.1098/rstb.2014.0090] [Citation(s) in RCA: 116] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality or, vice versa, that musicality served as a springboard for the emergence of language. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training influencing language acquisition and literacy. However, neural overlap in processing music and speech does not entail shared neural circuitry. Neural separability between music and speech may occur within overlapping brain regions. In this paper, we review the evidence, outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing.
Affiliation(s)
- Isabelle Peretz: International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Dominique Vuvan: International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Marie-Élaine Lagrois: International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Jorge L Armony: International Laboratory of Brain, Music and Sound Research (BRAMS), and Center for Research on Brain, Language and Music (CRBLM), University of Montreal, Montreal, Quebec, Canada; Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Quebec, Canada
39
Nikolsky A. Evolution of tonal organization in music mirrors symbolic representation of perceptual reality. Part-1: Prehistoric. Front Psychol 2015; 6:1405. [PMID: 26528193 PMCID: PMC4607869 DOI: 10.3389/fpsyg.2015.01405] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2015] [Accepted: 09/03/2015] [Indexed: 11/21/2022] Open
Abstract
This paper reveals the way in which musical pitch works as a peculiar form of cognition that reflects the organization of the surrounding world as perceived by the majority of music users within a socio-cultural formation. Evidence from music theory, ethnography, archeology, organology, anthropology, psychoacoustics, and evolutionary biology is plotted against experimental evidence. Much of the methodology for this investigation comes from studies conducted within the territory of the former USSR. To date, this methodology has remained confined to Russian-speaking scholars. A brief overview of pitch-set theory demonstrates the need to distinguish between vertical and horizontal harmony, laying out the framework for a virtual music space that operates according to the perceptual laws of tonal gravity. Brought to life by the bifurcation of music and speech, tonal gravity passed through eleven discrete stages of development until the onset of tonality in the seventeenth century. Each stage presents its own method of integrating separate musical tones into an auditory-cognitive unity. The theory of “melodic intonation” is set forth as a counterpart to the harmonic theory of chords. Notions of tonality, modality, key, diatonicity, chromaticism, alteration, and modulation are defined in terms of their perception, and categorized according to the way in which they have developed historically. Tonal organization in music and perspective organization in fine arts are explained as products of the same underlying mental process. Music seems to act as a unique medium of symbolic representation of reality through the concept of pitch. Tonal organization of pitch reflects the culture of thinking adopted as a standard within a community of music users. Tonal organization might be a naturally formed system for optimizing individual perception of reality within a social group and its immediate environment, setting conventional standards of intellectual and emotional intelligence.
40
Hills CS, Pancaroglu R, Duchaine B, Barton JJS. Word and text processing in acquired prosopagnosia. Ann Neurol 2015; 78:258-71. [PMID: 25976067 DOI: 10.1002/ana.24437] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2014] [Revised: 05/05/2015] [Accepted: 05/06/2015] [Indexed: 11/07/2022]
Abstract
Objective: A novel hypothesis of object recognition asserts that multiple regions are engaged in processing an object type, and that cerebral regions participate in processing multiple types of objects. In particular, for high-level expert processing, it proposes shared rather than dedicated resources for word and face perception, and predicts that prosopagnosic subjects would have minor deficits in visual word processing, and alexic subjects would have subtle impairments in face perception. In this study, we evaluated whether prosopagnosic subjects had deficits in processing either the word content or the style of visual text.
Methods: Eleven prosopagnosic subjects, 6 with unilateral right lesions and 5 with bilateral lesions, participated. In the first study, we evaluated their word length effect in reading single words. In the second study, we assessed their time and accuracy for sorting text by word content independent of style, and for sorting text by handwriting or font style independent of word content.
Results: Only subjects with bilateral lesions showed mildly elevated word length effects. Subjects were not slowed in sorting text by word content, but were nearly uniformly impaired in accuracy for sorting text by style.
Interpretation: Our results show that prosopagnosic subjects are impaired not only in face recognition but also in perceiving stylistic aspects of text. This supports a modified version of the many-to-many hypothesis that incorporates hemispheric specialization for processing different aspects of visual text.
Affiliation(s)
- Charlotte S Hills: Human Vision and Eye Movement Laboratory, Departments of Ophthalmology and Visual Sciences, and of Medicine (Neurology), University of British Columbia, Vancouver, British Columbia, Canada
- Raika Pancaroglu: Human Vision and Eye Movement Laboratory, Departments of Ophthalmology and Visual Sciences, and of Medicine (Neurology), University of British Columbia, Vancouver, British Columbia, Canada
- Brad Duchaine: Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH
- Jason J S Barton: Human Vision and Eye Movement Laboratory, Departments of Ophthalmology and Visual Sciences, and of Medicine (Neurology), University of British Columbia, Vancouver, British Columbia, Canada
41
Voss P, Zatorre RJ. Early visual deprivation changes cortical anatomical covariance in dorsal-stream structures. Neuroimage 2015; 108:194-202. [PMID: 25562825 DOI: 10.1016/j.neuroimage.2014.12.063] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2014] [Revised: 10/17/2014] [Accepted: 12/24/2014] [Indexed: 11/19/2022] Open
Abstract
Early blind individuals possess thicker occipital cortex than sighted individuals. Occipital cortical thickness is also predictive of performance on several auditory discrimination tasks in the blind, which suggests that it can serve as a neuroanatomical marker of auditory behavioural abilities. In light of this atypical relationship between occipital thickness and auditory function, we investigated how cortical morphology in occipital areas covaries with that of all other areas across the cortical surface, to assess whether anatomical covariance with the occipital cortex differs between early blind and sighted individuals. We observed a reduction in anatomical covariance between the right occipital cortex and several areas of the visual dorsal stream in a group of early blind individuals relative to sighted controls. In a separate analysis, we show that the performance of the early blind in a transposed melody discrimination task was strongly predicted by the strength of the cortical covariance between the occipital cortex and the intraparietal sulcus, a region for which cortical thickness in the sighted was previously shown to predict performance in the same task. These findings therefore constitute the first evidence linking altered anatomical covariance to early sensory deprivation. Moreover, since covariation of cortical morphology could potentially be related to anatomical connectivity or driven by experience-dependent plasticity, it could help guide future functional connectivity and diffusion tractography studies.
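Anatomical covariance is computed across subjects rather than within them: a seed region's thickness is correlated with every other region's thickness over the group, and the resulting profiles are compared between groups. A minimal sketch on simulated thickness data follows; the region count and the occipital seed index are hypothetical placeholders.

```python
# Seed-based anatomical covariance from cortical thickness (simulated groups).
import numpy as np

def seed_covariance(thickness, seed_idx):
    """Pearson r between seed-region thickness and every region's thickness
    across subjects (thickness: subjects x regions)."""
    seed = thickness[:, seed_idx]
    z = (thickness - thickness.mean(0)) / thickness.std(0)   # z-score each region
    zs = (seed - seed.mean()) / seed.std()                   # z-score the seed
    return (z * zs[:, None]).mean(0)                         # mean product = correlation

rng = np.random.default_rng(4)
blind = rng.normal(2.5, 0.2, (30, 68))        # 30 subjects x 68 cortical regions
sighted = rng.normal(2.5, 0.2, (30, 68))
occipital_seed = 10                            # hypothetical right-occipital region index

# Group difference in covariance with the occipital seed:
diff = seed_covariance(blind, occipital_seed) - seed_covariance(sighted, occipital_seed)
print(diff[:5])
```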
Affiliation(s)
- Patrice Voss: Montreal Neurological Institute, McGill University, Montreal, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Robert J Zatorre: Montreal Neurological Institute, McGill University, Montreal, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
42
Hymers M, Prendergast G, Liu C, Schulze A, Young ML, Wastling SJ, Barker GJ, Millman RE. Neural mechanisms underlying song and speech perception can be differentiated using an illusory percept. Neuroimage 2014; 108:225-33. [PMID: 25512041 DOI: 10.1016/j.neuroimage.2014.12.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2014] [Revised: 10/27/2014] [Accepted: 12/04/2014] [Indexed: 11/16/2022] Open
Abstract
The issue of whether human perception of speech and song recruits integrated or dissociated neural systems is contentious. This issue is difficult to address directly since these stimulus classes differ in their physical attributes. We therefore used a compelling illusion (Deutsch et al. 2011) in which acoustically identical auditory stimuli are perceived as either speech or song. Deutsch's illusion was used in a functional MRI experiment to provide a direct, within-subject investigation of the brain regions involved in the perceptual transformation from speech into song, independent of the physical characteristics of the presented stimuli. An overall differential effect resulting from the perception of song compared with that of speech was revealed in right midposterior superior temporal sulcus/right middle temporal gyrus. A left frontotemporal network, previously implicated in higher-level cognitive analyses of music and speech, was found to co-vary with a behavioural measure of the subjective vividness of the illusion, and this effect was driven by the illusory transformation. These findings provide evidence that illusory song perception is instantiated by a network of brain regions that are predominantly shared with the speech perception network.
Affiliation(s)
- Mark Hymers: York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom
- Garreth Prendergast: York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom; Audiology and Deafness Group, School of Psychological Sciences, University of Manchester, Manchester, M13 9PL, UK
- Can Liu: Department of Psychology, University of York, YO10 5DD, United Kingdom
- Anja Schulze: Department of Psychology, University of York, YO10 5DD, United Kingdom
- Michellie L Young: Department of Psychology, University of York, YO10 5DD, United Kingdom
- Gareth J Barker: Institute of Psychiatry, King's College London, SE5 8AF, United Kingdom
- Rebecca E Millman: York Neuroimaging Centre, University of York, York Science Park, YO10 5NY, United Kingdom
43
Sturm I, Blankertz B, Potes C, Schalk G, Curio G. ECoG high gamma activity reveals distinct cortical representations of lyrics passages, harmonic and timbre-related changes in a rock song. Front Hum Neurosci 2014; 8:798. [PMID: 25352799 PMCID: PMC4195312 DOI: 10.3389/fnhum.2014.00798] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2014] [Accepted: 09/19/2014] [Indexed: 11/13/2022] Open
Abstract
Listening to music moves our minds and moods, stirring interest in its neural underpinnings. A multitude of compositional features drives the appeal of natural music. How such original music, in which a composer's opus is not manipulated for experimental purposes, engages a listener's brain has not been studied until recently. Here, we report an in-depth analysis of two electrocorticographic (ECoG) data sets obtained over the left hemisphere in ten patients during presentation of either a rock song or a read-out narrative. First, the time courses of five acoustic features (intensity, presence/absence of vocals with lyrics, spectral centroid, harmonic change, and pulse clarity) were extracted from the audio tracks and found to be correlated with each other to varying degrees. In a second step, we uncovered the specific impact of each musical feature on ECoG high-gamma power (70-170 Hz) by calculating partial correlations to remove the influence of the other four features. In the music condition, the onset and offset of vocal lyrics in ongoing instrumental music were consistently identified within the group as the dominant driver of ECoG high-gamma power changes over temporal auditory areas, while concurrently subject-specific activation spots were identified for sound intensity, timbral, and harmonic features. The distinct cortical activations to vocal speech-related content embedded in instrumental music directly demonstrate that song integrated into instrumental music represents a distinct dimension of complex music. In contrast, in the speech condition, the full sound envelope was reflected in the high-gamma response rather than the onset or offset of the vocal lyrics. This demonstrates how the contributions of stimulus features that modulate the brain response differ across these two examples of a full-length natural stimulus, suggesting context-dependent feature selection in the processing of complex auditory stimuli.
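The partial-correlation step isolates each feature's contribution by removing the other four feature time courses from both the feature and the neural signal before correlating the residuals. A sketch with simulated data follows; the feature names follow the abstract, but sampling, filtering, and everything else here is illustrative.

```python
# Partial correlation of one acoustic feature with a high-gamma envelope (toy signals).
import numpy as np

def partial_corr(x, y, confounds):
    """Pearson correlation between the residuals of x and y after linear
    removal of the confound time courses (confounds: samples x k)."""
    C = np.column_stack([confounds, np.ones(len(x))])   # design matrix with intercept
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(5)
n = 1000                                                # time samples
intensity, vocals, centroid, harmony, pulse = rng.normal(size=(5, n))
high_gamma = 0.5 * vocals + 0.2 * intensity + rng.normal(size=n)

others = np.column_stack([intensity, centroid, harmony, pulse])
print("vocals vs high-gamma (partial):", partial_corr(vocals, high_gamma, others))
```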
Affiliation(s)
- Irene Sturm: Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany
- Benjamin Blankertz: Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
- Cristhian Potes: National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA
- Gerwin Schalk: National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA; Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO, USA; Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA
- Gabriel Curio: Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
44
Méndez Orellana CP, van de Sandt-Koenderman ME, Saliasi E, van der Meulen I, Klip S, van der Lugt A, Smits M. Insight into the neurophysiological processes of melodically intoned language with functional MRI. Brain Behav 2014; 4:615-25. [PMID: 25328839 PMCID: PMC4107379 DOI: 10.1002/brb3.245] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Revised: 06/05/2014] [Accepted: 06/09/2014] [Indexed: 11/30/2022] Open
Abstract
Background: Melodic Intonation Therapy (MIT) uses the melodic elements of speech to improve language production in severe nonfluent aphasia. A crucial element of MIT is the melodically intoned auditory input: the patient listens to the therapist singing a target utterance. Such input of melodically intoned language facilitates production, whereas auditory input of spoken language does not.
Methods: Using a sparse sampling fMRI sequence, we examined the differential auditory processing of spoken and melodically intoned language. Nineteen right-handed healthy volunteers performed an auditory lexical decision task in an event-related design consisting of spoken and melodically intoned meaningful and meaningless items. The control conditions consisted of neutral utterances, either melodically intoned or spoken.
Results: Irrespective of whether the items were normally spoken or melodically intoned, meaningful items showed greater activation in the supramarginal gyrus and inferior parietal lobule, predominantly in the left hemisphere. Melodically intoned language activated both temporal lobes rather symmetrically, as well as the right frontal lobe cortices, indicating that these regions are engaged in the acoustic complexity of melodically intoned stimuli. Compared to spoken language, melodically intoned language activated sensory motor regions and articulatory language networks in the left hemisphere, but only when meaningful language was used.
Discussion: Our results suggest that the facilitatory effect of MIT may depend, in part, on an auditory input which combines melody and meaning.
Conclusion: Combined melody and meaning provide a sound basis for the further investigation of melodic language processing in aphasic patients, and eventually the neurophysiological processes underlying MIT.
Collapse
Affiliation(s)
- Carolina P Méndez Orellana
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands; Department of Neurology, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands
- Mieke E van de Sandt-Koenderman
- Rehabilitation Medicine, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands; Rijndam Rehabilitation Center, Rotterdam, The Netherlands
- Emi Saliasi
- Department of Neurology, University Medical Center Groningen, Groningen, The Netherlands
- Ineke van der Meulen
- Rehabilitation Medicine, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands; Rijndam Rehabilitation Center, Rotterdam, The Netherlands
- Simone Klip
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands
- Aad van der Lugt
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marion Smits
- Department of Radiology, Erasmus MC - University Medical Center Rotterdam, Rotterdam, The Netherlands
45
Flagmeier SG, Ray KL, Parkinson AL, Li K, Vargas R, Price LR, Laird AR, Larson CR, Robin DA. The neural changes in connectivity of the voice network during voice pitch perturbation. Brain Lang 2014; 132:7-13. [PMID: 24681401] [PMCID: PMC4526025] [DOI: 10.1016/j.bandl.2014.02.001]
Abstract
Voice control is critical to communication. To date, studies have used behavioral, electrophysiological, and functional imaging data to investigate the neural correlates of voice control using perturbation tasks, but have yet to examine the interactions among these neural regions. The goal of this study was to use structural equation modeling of functional neuroimaging data to examine network properties of voice control with and without perturbation. Results showed that the presence of a pitch shift, which was processed as an error in vocalization, altered connections between the right and left STG. Other regions that revealed differences in connectivity during error detection and correction included the bilateral inferior frontal gyrus and the primary and premotor cortices. Results indicated that the STG plays a critical role in voice control, specifically during error detection and correction. Additionally, pitch perturbation elicits changes in the voice network that suggest the right hemisphere is critical to pitch modulation.
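For readers unfamiliar with perturbation paradigms, the sketch below shows one way a transient pitch shift could be applied to the auditory feedback signal. It is illustrative only: the synthetic stimulus, timing, and shift size are assumptions, and librosa's offline phase-vocoder shifter stands in for the dedicated real-time hardware such experiments actually use.

```python
# Minimal sketch of a pitch-shift feedback perturbation (offline, illustrative).
import numpy as np
import librosa

def perturb_pitch(voice, sr, onset_s, dur_s, cents=100.0):
    """Return a copy of `voice` with a transient upward pitch shift."""
    y = voice.copy()
    i0 = int(onset_s * sr)
    i1 = int((onset_s + dur_s) * sr)
    # librosa measures shifts in semitones; 100 cents = 1 semitone
    y[i0:i1] = librosa.effects.pitch_shift(y[i0:i1], sr=sr, n_steps=cents / 100.0)
    return y

# Synthetic 'voice': a steady 220 Hz tone standing in for sustained phonation
sr = 22050
t = np.arange(0, 2.0, 1.0 / sr)
voice = (0.5 * np.sin(2 * np.pi * 220.0 * t)).astype(np.float32)

# Perturb the feedback with a 200 ms, +100 cent shift starting at 0.8 s
feedback = perturb_pitch(voice, sr, onset_s=0.8, dur_s=0.2)
```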
Affiliation(s)
- Sabina G Flagmeier
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Kimberly L Ray
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Amy L Parkinson
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Karl Li
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Robert Vargas
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States
- Larry R Price
- Department of Mathematics and College of Education, Texas State University, San Marcos, TX, United States
- Angela R Laird
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States; Department of Physics, Florida International University, Miami, FL, United States
- Charles R Larson
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, United States
- Donald A Robin
- Research Imaging Institute, University of Texas Health Science Center at San Antonio, United States; Neurology, University of Texas Health Science Center at San Antonio, United States; Biomedical Engineering, University of Texas Health Science Center at San Antonio, United States; Radiology, University of Texas Health Science Center at San Antonio, United States; Honors College, University of Texas San Antonio, San Antonio, United States.
46
Alonso I, Sammler D, Valabrègue R, Dinkelacker V, Dupont S, Belin P, Samson S. Hippocampal Sclerosis Affects fMR-Adaptation of Lyrics and Melodies in Songs. Front Hum Neurosci 2014; 8:111. [PMID: 24578688] [PMCID: PMC3936190] [DOI: 10.3389/fnhum.2014.00111]
Abstract
Songs constitute a natural combination of lyrics and melodies, but it is unclear whether and how these two song components are integrated during the emergence of a memory trace. Network theories of memory suggest a prominent role of the hippocampus, together with unimodal sensory areas, in the build-up of conjunctive representations. The present study tested the modulatory influence of the hippocampus on neural adaptation to songs in lateral temporal areas. Patients with unilateral hippocampal sclerosis and healthy matched controls were presented with blocks of short songs in which lyrics and/or melodies were varied or repeated in a crossed factorial design. Neural adaptation effects were taken as correlates of incidental emergent memory traces. We hypothesized that hippocampal lesions, particularly in the left hemisphere, would weaken adaptation effects, especially the integration of lyrics and melodies. Results revealed that lateral temporal lobe regions showed weaker adaptation to repeated lyrics as well as a reduced interaction of the adaptation effects for lyrics and melodies in patients with left hippocampal sclerosis. This suggests a deficient build-up of a sensory memory trace for lyrics and a reduced integration of lyrics with melodies, compared to healthy controls. Patients with right hippocampal sclerosis showed a similar profile of results although the effects did not reach significance in this population. We highlight the finding that the integrated representation of lyrics and melodies typically shown in healthy participants is likely tied to the integrity of the left medial temporal lobe. This novel finding provides the first neuroimaging evidence for the role of the hippocampus during repetitive exposure to lyrics and melodies and their integration into a song.
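As an illustration of the adaptation logic in this crossed factorial design (hypothetical numbers, not the study's data), the sketch below computes repetition suppression for each song component and their interaction from assumed condition-mean BOLD values; a reduced interaction term is the pattern reported for patients with left hippocampal sclerosis.

```python
# Minimal sketch of fMR-adaptation effects in a 2x2 repeat/vary design.
import numpy as np

# Rows: lyrics (0 = varied, 1 = repeated); columns: melodies (same coding).
# Values are hypothetical condition-mean % signal change in a temporal ROI.
bold = np.array([[1.00, 0.85],
                 [0.70, 0.45]])

# Adaptation (repetition suppression) = response to varied minus repeated
adapt_lyrics   = bold[0, :].mean() - bold[1, :].mean()
adapt_melodies = bold[:, 0].mean() - bold[:, 1].mean()

# Interaction: does melody adaptation depend on whether lyrics also repeat?
# A non-zero value is taken as a signature of an integrated representation.
interaction = (bold[0, 0] - bold[0, 1]) - (bold[1, 0] - bold[1, 1])

print(adapt_lyrics, adapt_melodies, interaction)
```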
Affiliation(s)
- Irene Alonso
- Laboratoire de Neurosciences Fonctionnelles et Pathologies (EA 4559), Université Lille-Nord de France, Lille, France; Epilepsy Unit, Hôpital de la Pitié-Salpêtrière, Paris, France; Centre de NeuroImagerie de Recherche, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Centre de Recherche de l'Institut du Cerveau et de la Moelle Épinière, UPMC - UMR 7225 CNRS - UMRS 975 INSERM, Paris, France
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Romain Valabrègue
- Centre de NeuroImagerie de Recherche, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Centre de Recherche de l'Institut du Cerveau et de la Moelle Épinière, UPMC - UMR 7225 CNRS - UMRS 975 INSERM, Paris, France
- Vera Dinkelacker
- Epilepsy Unit, Hôpital de la Pitié-Salpêtrière, Paris, France; Centre de Recherche de l'Institut du Cerveau et de la Moelle Épinière, UPMC - UMR 7225 CNRS - UMRS 975 INSERM, Paris, France
- Sophie Dupont
- Epilepsy Unit, Hôpital de la Pitié-Salpêtrière, Paris, France; Centre de Recherche de l'Institut du Cerveau et de la Moelle Épinière, UPMC - UMR 7225 CNRS - UMRS 975 INSERM, Paris, France
- Pascal Belin
- Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, Glasgow, UK; Laboratories for Brain, Music and Sound, Université de Montréal and McGill University, Montreal, QC, Canada; Institut des Neurosciences de la Timone, UMR 7289, CNRS - Université Aix-Marseille, Marseille, France
- Séverine Samson
- Laboratoire de Neurosciences Fonctionnelles et Pathologies (EA 4559), Université Lille-Nord de France, Lille, France; Epilepsy Unit, Hôpital de la Pitié-Salpêtrière, Paris, France
47
Frühholz S, Grandjean D. Processing of emotional vocalizations in bilateral inferior frontal cortex. Neurosci Biobehav Rev 2013; 37:2847-55. [PMID: 24161466] [DOI: 10.1016/j.neubiorev.2013.10.007]
Abstract
A current view proposes that the right inferior frontal cortex (IFC) is particularly responsible for attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Although some studies seem to support this view, an exhaustive review of all recent imaging studies points to an important functional role of both the right and the left IFC in processing vocal emotions. Moreover, besides the supposed predominant role of the IFC in attentive processing and evaluation of emotional voices, these recent studies also point to a possible role of the IFC in preattentive and implicit processing of vocal emotions. The studies specifically provide evidence that both the right and the left IFC show a similar anterior-to-posterior gradient of functional activity in response to emotional vocalizations. This bilateral IFC gradient depends both on the nature or medium of emotional vocalizations (emotional prosody versus nonverbal expressions) and on the level of attentive processing (explicit versus implicit processing), closely resembling the distribution of terminal regions of distinct auditory pathways, which provide either global or dynamic acoustic information. Here we suggest a functional distribution in which several IFC subregions process different acoustic information conveyed by emotional vocalizations: whereas the rostro-ventral IFC might categorize emotional vocalizations, the caudo-dorsal IFC might be specifically sensitive to their temporal features.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
48
Fedorenko E, McDermott JH, Norman-Haignere S, Kanwisher N. Sensitivity to musical structure in the human brain. J Neurophysiol 2012; 108:3289-300. [PMID: 23019005] [PMCID: PMC3544885] [DOI: 10.1152/jn.00209.2012]
Abstract
Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure of music. In contrast, neuroimaging studies targeting the representation of musical structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from both high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes.
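One simple way to implement the independent scrambling of pitch and rhythmic structure, assuming a note-list representation of the stimuli (an illustrative stand-in, not the authors' actual stimulus-generation method), is to permute one dimension while holding the other fixed:

```python
# Minimal sketch: destroy pitch or rhythmic structure independently by
# shuffling one dimension of a (pitch, inter-onset interval) note list.
# The example melody and random seed are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# A melody as parallel arrays: MIDI pitches and inter-onset intervals (s)
pitches = np.array([60, 62, 64, 65, 67, 65, 64, 62])
iois    = np.array([0.25, 0.25, 0.50, 0.25, 0.25, 0.50, 0.25, 0.75])

def scramble_pitches(pitches, iois):
    """Destroy pitch structure but preserve rhythm and the pitch distribution."""
    return rng.permutation(pitches), iois

def scramble_rhythm(pitches, iois):
    """Destroy rhythmic structure but preserve the pitch sequence."""
    return pitches, rng.permutation(iois)

pitch_scrambled  = scramble_pitches(pitches, iois)
rhythm_scrambled = scramble_rhythm(pitches, iois)
```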
Affiliation(s)
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA.
49
Brandt A, Gebrian M, Slevc LR. Music and early language acquisition. Front Psychol 2012; 3:327. [PMID: 22973254] [PMCID: PMC3439120] [DOI: 10.3389/fpsyg.2012.00327]
Abstract
Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability, one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability are essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development.
Affiliation(s)
- Anthony Brandt
- Shepherd School of Music, Rice University, Houston, TX, USA
- Molly Gebrian
- Shepherd School of Music, Rice University, Houston, TX, USA
- L. Robert Slevc
- Psychology, Language and Music Cognition Lab, University of Maryland, College Park, MD, USA