1
Burunat I, Levitin DJ, Toiviainen P. Breaking (musical) boundaries by investigating brain dynamics of event segmentation during real-life music-listening. Proc Natl Acad Sci U S A 2024; 121:e2319459121. PMID: 39186645; PMCID: PMC11388323; DOI: 10.1073/pnas.2319459121.
Abstract
The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using a general linear model, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
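The Granger-causality step mentioned in the abstract can be illustrated with a minimal sketch on synthetic time series (a toy illustration, not the authors' fMRI pipeline; the lag count, coupling strength, and noise level are invented for demonstration):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for 'x Granger-causes y': does adding x's past
    to an autoregressive model of y reduce the residual error?"""
    n = len(y)
    Y = y[lags:]
    # Lagged regressors y[t-1..t-lags] and x[t-1..t-lags] for t = lags..n-1
    own = np.column_stack([y[lags - k - 1 : n - k - 1] for k in range(lags)])
    cross = np.column_stack([x[lags - k - 1 : n - k - 1] for k in range(lags)])
    ones = np.ones((len(Y), 1))
    restricted = np.hstack([ones, own])   # y's own past only
    full = np.hstack([ones, own, cross])  # y's past plus x's past
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(restricted), rss(full)
    df_num, df_den = lags, len(Y) - full.shape[1]
    return ((rss_r - rss_f) / df_num) / (rss_f / df_den)

# Synthetic coupling: x drives y with a one-sample delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.2 * rng.standard_normal()

f_forward = granger_f(y, x)  # large: x's past predicts y
f_reverse = granger_f(x, y)  # near 1: y's past does not predict x
```

In the study this logic is applied to regional fMRI signals; dedicated implementations such as statsmodels' `grangercausalitytests` additionally report significance levels per lag.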
Affiliation(s)
- Iballa Burunat
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
- Daniel J Levitin
- School of Social Sciences, Minerva University, San Francisco, CA 94103
- Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Petri Toiviainen
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
2
Bonetti L, Fernández-Rubio G, Lumaca M, Carlomagno F, Risgaard Olsen E, Criscuolo A, Kotz SA, Vuust P, Brattico E, Kringelbach ML. Age-related neural changes underlying long-term recognition of musical sequences. Commun Biol 2024; 7:1036. PMID: 39209979; PMCID: PMC11362492; DOI: 10.1038/s42003-024-06587-7.
Abstract
Aging is often associated with decline in brain processing power and neural predictive capabilities. To challenge this notion, we used magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to record the whole-brain activity of 39 older adults (over 60 years old) and 37 young adults (aged 18-25 years) during recognition of previously memorised and varied musical sequences. Results reveal that when recognising memorised sequences, the brain of older compared to young adults reshapes its functional organisation. In fact, it shows increased early activity in sensory regions such as the left auditory cortex (100 ms and 250 ms after each note), and only moderately decreased activity (350 ms) in medial temporal lobe and prefrontal regions. When processing the varied sequences, older adults show a marked reduction of the fast-scale functionality (250 ms after each note) of higher-order brain regions including hippocampus, ventromedial prefrontal and inferior temporal cortices, while no differences are observed in the auditory cortex. Accordingly, young adults outperform older adults in the recognition of novel sequences, while no behavioural differences are observed with regard to memorised ones. Our findings show age-related neural changes in predictive and memory processes, integrating existing theories on compensatory neural mechanisms in non-pathological aging.
Affiliation(s)
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark.
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK.
- Department of Psychiatry, University of Oxford, Oxford, UK.
- Gemma Fernández-Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Francesco Carlomagno
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Emma Risgaard Olsen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Antonio Criscuolo
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
3
Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024; 163:105768. PMID: 38908730; DOI: 10.1016/j.neubiorev.2024.105768.
Abstract
Bayesian inference has recently gained momentum in explaining music perception and aging. A fundamental mechanism underlying Bayesian inference is the notion of prediction. This framework could explain how predictions pertaining to musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, expanding related concepts of music research, such as musical expectancies, groove, pleasure, and tension. Moreover, a Bayesian perspective of music perception may shed new light on the beneficial effects of music in aging. Aging could be framed as an optimization process of Bayesian inference. As predictive inferences refine over time, the reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may affect the ability of older adults to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inferences in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults through the lens of Bayesian inference.
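The attenuated-updating idea at the heart of this framework reduces, in the simplest conjugate-Gaussian case, to a precision-weighted average of prior and evidence (a toy sketch; the precision values are illustrative, not estimates from the aging literature):

```python
def gaussian_update(prior_mean, prior_precision, obs, obs_precision):
    """Conjugate Gaussian update: the posterior mean is a
    precision-weighted average of the prior mean and the observation."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, post_precision

# A surprising observation (obs = 10) against a prior centred on 0.
# Flexible prior (low precision): the estimate shifts strongly.
flexible_mean, _ = gaussian_update(0.0, 1.0, 10.0, 1.0)      # 5.0
# Consolidated prior (high precision): the same evidence shifts it little.
consolidated_mean, _ = gaussian_update(0.0, 9.0, 10.0, 1.0)  # 1.0
```

As prior precision grows relative to sensory precision, identical evidence moves the posterior less, which is the formal analogue of the increased reliance on consolidated priors described above.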
Affiliation(s)
- Jiamin Gladys Heng
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
- Jiayi Zhang
- Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark
- Kat Agres
- Centre for Music and Health, National University of Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
- School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; National Institute of Education, Nanyang Technological University, Singapore.
4
Teng X, Larrouy-Maestri P, Poeppel D. Segmenting and Predicting Musical Phrase Structure Exploits Neural Gain Modulation and Phase Precession. J Neurosci 2024; 44:e1331232024. PMID: 38926087; PMCID: PMC11270514; DOI: 10.1523/jneurosci.1331-23.2024.
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
Affiliation(s)
- Xiangbin Teng
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Pauline Larrouy-Maestri
- Music Department, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- David Poeppel
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Ernst Struengmann Institute for Neuroscience, Frankfurt 60528, Germany
- Music and Audio Research Laboratory (MARL), New York, New York 11201
5
Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179; DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research showed that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how the adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the musical piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. Results show that predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, which furthers our understanding of the perception and cognition of musical structure.
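The regression logic described above (predicting the theta envelope from the two dependency counts while controlling for the audio envelope) can be sketched with ordinary least squares on simulated data. Note the study itself used Bayesian mixed-effects models on source-reconstructed MEG; all coefficients and distributions below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
prolongation = rng.poisson(2.0, n).astype(float)  # open prolongation dependencies
preparation = rng.poisson(1.5, n).astype(float)   # open preparation dependencies
audio_env = rng.standard_normal(n)                # nuisance: acoustic envelope
theta_env = (0.5 * prolongation + 0.3 * preparation
             + 0.8 * audio_env + 0.1 * rng.standard_normal(n))

# Design matrix: intercept, two syntactic predictors, acoustic control
X = np.column_stack([np.ones(n), prolongation, preparation, audio_env])
beta, *_ = np.linalg.lstsq(X, theta_env, rcond=None)
# beta[1] and beta[2] recover the two dependency effects
# over and above the acoustic control absorbed by beta[3].
```

Including the acoustic envelope as a regressor is what lets the two syntactic predictors claim only variance that the stimulus acoustics cannot explain.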
Affiliation(s)
- Steffen A Herff
- Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
6
Saskovets M, Liang Z, Piumarta I, Saponkova I. Effects of Sound Interventions on the Mental Stress Response in Adults: Protocol for a Scoping Review. JMIR Res Protoc 2024; 13:e54030. PMID: 38935945; PMCID: PMC11240062; DOI: 10.2196/54030.
Abstract
BACKGROUND Sound therapy methods have seen a surge in popularity, with a predominant focus on music among all types of sound stimulation. There is substantial evidence documenting the integrative impact of music therapy on psycho-emotional and physiological outcomes, rendering it beneficial for addressing stress-related conditions such as pain syndromes, depression, and anxiety. Despite these advancements, the therapeutic aspects of sound, as well as the mechanisms underlying its efficacy, remain incompletely understood. Existing research on music as a holistic cultural phenomenon often overlooks crucial aspects of sound therapy mechanisms, particularly those related to speech acoustics or the so-called "music of speech." OBJECTIVE This study aims to provide an overview of empirical research on sound interventions to elucidate the mechanism underlying their positive effects. Specifically, we will focus on identifying therapeutic factors and mechanisms of change associated with sound interventions. Our analysis will compare the most prevalent types of sound interventions reported in clinical studies and experiments. Moreover, we will explore the therapeutic effects of sound beyond music, encompassing natural human speech and intermediate forms such as traditional poetry performances. METHODS This review adheres to the methodological guidance of the Joanna Briggs Institute and follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist for reporting review studies, which is adapted from the Arksey and O'Malley framework. Our search strategy encompasses PubMed, Web of Science, Scopus, and PsycINFO or EBSCOhost, covering literature from 1990 to the present. Among the different study types, randomized controlled trials, clinical trials, laboratory experiments, and field experiments were included. RESULTS Data collection began in October 2022. We found a total of 2027 items. Our initial search uncovered an asymmetry in the distribution of studies, with a larger number focused on music therapy compared with those exploring prosody in spoken interventions such as guided meditation or hypnosis. We extracted and selected papers using Rayyan software (Rayyan) and identified 41 eligible papers after title and abstract screening. The completion of the scoping review is anticipated by October 2024, with key steps comprising the analysis of findings by May 2024, drafting and revising the study by July 2024, and submitting the paper for publication in October 2024. CONCLUSIONS In the next step, we will conduct a quality evaluation of the papers and then chart and group the therapeutic factors extracted from them. This process aims to unveil conceptual gaps in existing studies. Gray literature sources, such as Google Scholar, ClinicalTrials.gov, nonindexed conferences, and reference list searches of retrieved studies, will be added to our search strategy to increase the number of relevant papers that we cover. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/54030.
Affiliation(s)
- Marina Saskovets
- Faculty of Engineering, Kyoto University of Advanced Science, Kyoto, Japan
- Zilu Liang
- Faculty of Engineering, Kyoto University of Advanced Science, Kyoto, Japan
- Ian Piumarta
- Faculty of Engineering, Kyoto University of Advanced Science, Kyoto, Japan
- Irina Saponkova
- Department of Psychology, St Petersburg University, St Petersburg, Russian Federation
7
Sihvonen AJ, Ferguson MA, Chen V, Soinila S, Särkämö T, Joutsa J. Focal Brain Lesions Causing Acquired Amusia Map to a Common Brain Network. J Neurosci 2024; 44:e1922232024. PMID: 38423761; PMCID: PMC11007473; DOI: 10.1523/jneurosci.1922-23.2024.
Abstract
Music is a universal human attribute. The study of amusia, a neurologic music processing deficit, has steadily refined our view of the neural organization of the musical brain. However, lesions causing amusia occur in multiple brain locations and often also cause aphasia, leaving the distinct neural networks for amusia unclear. Here, we utilized lesion network mapping to identify these networks. A systematic literature search was carried out to identify all published case reports of lesion-induced amusia. The reproducibility and specificity of the identified amusia network were then tested in an independent prospective cohort of 97 stroke patients (46 female and 51 male) with repeated structural brain imaging, specifically assessed for both music perception and language abilities. Lesion locations in the case reports were heterogeneous but connected to common brain regions, including bilateral temporoparietal and insular cortices, precentral gyrus, and cingulum. In the prospective cohort, lesions causing amusia mapped to a common brain network, centering on the right superior temporal cortex and clearly distinct from the network causally associated with aphasia. Lesion-induced longitudinal structural effects in the amusia circuit were confirmed as reduction of both gray and white matter volume, which correlated with the severity of amusia. We demonstrate that despite the heterogeneity of lesion locations disrupting music processing, there is a common brain network that is distinct from the language network. These results provide evidence for the distinct neural substrate of music processing, differentiating music-related functions from language, providing a testable target for noninvasive brain stimulation to treat amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
- Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki 00014, Finland
- Queensland Aphasia Research Centre, University of Queensland, Brisbane, Queensland 4072, Australia
- Department of Neurology, Neurocenter, Helsinki University Hospital, Helsinki 00029, Finland
- Michael A Ferguson
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Boston, Massachusetts 02115
- Harvard Medical School, Boston, Massachusetts 02115
- Center for the Study of World Religions, Harvard Divinity School, Cambridge, Massachusetts 02138
- Vicky Chen
- Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Seppo Soinila
- Division of Clinical Neurosciences, University of Turku and Neurocenter, Turku University Hospital, Turku 20521, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki 00014, Finland
- Centre of Excellence in Music, Mind, Body and Brain, University of Helsinki, Helsinki 00014, Finland
- Juho Joutsa
- Turku Brain and Mind Center, Clinical Neurosciences, University of Turku, Turku 20521, Finland
- Neurocenter and Turku PET Center, Turku University Hospital, Turku 20521, Finland
8
Rimmer C, Dahary H, Quintin EM. Links between musical beat perception and phonological skills for autistic children. Child Neuropsychol 2024; 30:361-380. PMID: 37104762; DOI: 10.1080/09297049.2023.2202902.
Abstract
Exploring non-linguistic predictors of phonological awareness, such as musical beat perception, is valuable for children who present with language difficulties and diverse support needs. Studies on the musical abilities of children on the autism spectrum show that they have average or above-average musical production and auditory processing abilities. This study aimed to explore the relationship between musical beat perception and phonological awareness skills of children on the autism spectrum with a wide range of cognitive abilities. A total of 21 autistic children between the ages of 6 and 11 years (M = 8.9, SD = 1.5) with full scale IQs ranging from 52 to 105 (M = 74, SD = 16) completed a beat perception and a phonological awareness task. Results revealed that phonological awareness and beat perception are positively correlated for children on the autism spectrum. Findings lend support to the potential use of beat and rhythm perception as a screening tool for early literacy skills, specifically for phonological awareness, for children with diverse support needs as an alternative to traditional verbal tasks that tend to underestimate the potential of children on the autism spectrum.
Affiliation(s)
- Charlotte Rimmer
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Hadas Dahary
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Eve-Marie Quintin
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
9
Ma W, Bowers L, Behrend D, Hellmuth Margulis E, Forde Thompson W. Child word learning in song and speech. Q J Exp Psychol (Hove) 2024; 77:343-362. PMID: 37073951; DOI: 10.1177/17470218231172494.
Abstract
Listening to sung words rather than spoken words can facilitate word learning and memory in adults and school-aged children. To explore the development of this effect in young children, this study examined word learning (assessed as forming word-object associations) in 1- to 2-year-olds and 3- to 4-year-olds, and word long-term memory (LTM) in 4- to 5-year-olds several days after the initial learning. In an intermodal preferential looking paradigm, children were taught a pair of words utilising adult-directed speech (ADS) and a pair of sung words. Word learning performance was better with sung words than with ADS words in 1- to 2-year-olds (Experiments 1a and 1b), 3- to 4-year-olds (Experiment 1a), and 4- to 5-year-olds (Experiment 2b), revealing a benefit of song in word learning across all age ranges recruited. We also examined whether children successfully learned the words by comparing their performance against chance. The 1- to 2-year-olds only learned sung words, but the 3- to 4-year-olds learned both sung and ADS words, suggesting that the reliance on music features in word learning observed at ages 1-2 decreased with age. Furthermore, song facilitated the word mapping-recognition processes. Results on children's LTM performance showed that the 4- to 5-year-olds' LTM performance did not differ between sung and ADS words. However, the 4- to 5-year-olds reliably recalled sung words but not spoken words. The reliable LTM of sung words arose from hearing sung words during the initial learning rather than at test. Finally, the benefit of song on word learning and the reliable LTM of sung words observed at ages 3-5 cannot be explained as an attentional effect.
Affiliation(s)
- Weiyi Ma
- School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Lisa Bowers
- Department of Rehabilitation, Human Resources and Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Douglas Behrend
- Department of Psychological Science, University of Arkansas, Fayetteville, AR, USA
10
Colverson A, Barsoum S, Cohen R, Williamson J. Rhythmic musical activities may strengthen connectivity between brain networks associated with aging-related deficits in timing and executive functions. Exp Gerontol 2024; 186:112354. PMID: 38176601; DOI: 10.1016/j.exger.2023.112354.
Abstract
Brain aging and common conditions of aging (e.g., hypertension) affect networks important in organizing information, processing speed and action programming (i.e., executive functions). Declines in these networks may affect timing and could have an impact on the ability to perceive and perform musical rhythms. There is evidence that participation in rhythmic musical activities may help to maintain and even improve executive functioning (near transfer), perhaps due to similarities in brain regions underlying timing, musical rhythm perception and production, and executive functioning. Rhythmic musical activities may present as a novel and fun activity for older adults to stimulate interacting brain regions that deteriorate with aging. However, relatively little is known about neurobehavioral interactions between aging, timing, rhythm perception and production, and executive functioning. In this review, we account for these brain-behavior interactions to suggest that deeper knowledge of overlapping brain regions associated with timing, rhythm, and cognition may assist in designing more targeted preventive and rehabilitative interventions to reduce age-related cognitive decline and improve quality of life in populations with neurodegenerative disease. Further research is needed to elucidate the functional relationships between brain regions associated with aging, timing, rhythm perception and production, and executive functioning to direct design of targeted interventions.
Affiliation(s)
- Aaron Colverson
- Memory and Aging Center, Weill Institute for Neurosciences, University of California, 1651 4th street, San Francisco, CA, United States of America.
- Stephanie Barsoum
- Center for Cognitive Aging and Memory, College of Medicine, University of Florida, PO Box 100277, Gainesville, FL 32610-0277, United States of America
| | - Ronald Cohen
- Center for Cognitive Aging and Memory, College of Medicine, University of Florida, PO Box 100277, Gainesville, FL 32610-0277, United States of America
| | - John Williamson
- Center for Cognitive Aging and Memory, College of Medicine, University of Florida, PO Box 100277, Gainesville, FL 32610-0277, United States of America
| |
Collapse
|
11
|
Shan T, Cappelloni MS, Maddox RK. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024; 14:789. [PMID: 38191488 PMCID: PMC10774448 DOI: 10.1038/s41598-023-50438-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Accepted: 12/20/2023] [Indexed: 01/10/2024] Open
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar but with remaining differences. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not subcortical level.
Collapse
Affiliation(s)
- Tong Shan
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
| | - Madeline S Cappelloni
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
| | - Ross K Maddox
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA.
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA.
- Center for Visual Science, University of Rochester, Rochester, NY, USA.
- Department of Neuroscience, University of Rochester, Rochester, NY, USA.
| |
Collapse
|
12
|
Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023; 47:e13389. [PMID: 38038624 DOI: 10.1111/cogs.13389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 11/10/2023] [Accepted: 11/13/2023] [Indexed: 12/02/2023]
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events, and, computationally, such musical interpretation represents a combinatorial task analogous to syntactic processing in language. While this perspective has primarily been addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from the dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while listening to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was found to be a significant predictor of the observed displacement between the reported and the objective location of the flashes. Overall, this study presents a theoretical approach and a first empirical proof-of-concept for modeling the cognitive process resulting in such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. Results from the present small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.
Collapse
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
| | - Cédric A Tomasini
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
| | - Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
| | - Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
| |
Collapse
|
13
|
Al Roumi F, Planton S, Wang L, Dehaene S. Brain-imaging evidence for compression of binary sound sequences in human memory. eLife 2023; 12:e84376. [PMID: 37910588 PMCID: PMC10619979 DOI: 10.7554/elife.84376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 10/14/2023] [Indexed: 11/03/2023] Open
Abstract
According to the language-of-thought hypothesis, regular sequences are compressed in human memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magneto-encephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. Occasional deviant sounds probed the participants' knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to the complexity derived from the minimal description length in our formal language. Furthermore, activity should increase with complexity for learned sequences, and decrease with complexity for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal, and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures.
Collapse
Affiliation(s)
- Fosca Al Roumi
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
| | - Samuel Planton
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
| | - Liping Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| | - Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Collège de France, Université Paris Sciences Lettres (PSL), Paris, France
| |
Collapse
|
14
|
Friederici AD. Evolutionary neuroanatomical expansion of Broca's region serving a human-specific function. Trends Neurosci 2023; 46:786-796. [PMID: 37596132 DOI: 10.1016/j.tins.2023.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 06/23/2023] [Accepted: 07/20/2023] [Indexed: 08/20/2023]
Abstract
The question concerning the evolution of language is directly linked to the debate on whether language and action are dependent on each other, and to what extent Broca's region serves as a common neural basis. The debate has resulted in two opposing views, one arguing for and one against the dependence of language and action, mainly based on neuroscientific data. This article presents an evolutionary neuroanatomical framework that may offer a solution to this dispute. It is proposed that in humans, Broca's region houses language and action independently in spatially separated subregions. This became possible due to an evolutionary expansion of Broca's region in the human brain, which was not paralleled by a similar expansion in the chimpanzee's brain, providing the additional space needed for the neural representation of language in humans.
Collapse
Affiliation(s)
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Stephanstraße 1A, 04103 Leipzig, Germany.
| |
Collapse
|
15
|
Cartocci G, Inguscio BMS, Giorgi A, Vozzi A, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Fetoni AR, Freni F, Ciodaro F, Galletti F, Albera R, Canale A, Piccioni LO, Babiloni F. Music in noise recognition: An EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 2023; 18:e0288461. [PMID: 37561758 PMCID: PMC10414671 DOI: 10.1371/journal.pone.0288461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 06/27/2023] [Indexed: 08/12/2023] Open
Abstract
Despite the plethora of studies investigating listening effort and the amount of research concerning music perception by cochlear implant (CI) users, the influence of background noise on music processing has not previously been investigated. Given that listening effort is typically assessed with a speech-in-noise recognition task, the aim of the present study was to investigate listening effort during an emotional categorization task on musical pieces with different levels of background noise. Listening effort was investigated, in addition to participants' ratings and performances, using EEG features known to be involved in this phenomenon, that is, alpha activity in parietal areas and in the left inferior frontal gyrus (IFG), which includes Broca's area. Results showed that CI users performed worse than normal hearing (NH) controls in recognizing the emotional content of the stimuli. Furthermore, when considering the alpha activity during the signal-to-noise ratio (SNR) 5 and SNR 10 conditions with the activity during the Quiet condition subtracted (ideally removing the emotional content of the music and isolating the difficulty level due to the SNRs), CI users showed higher levels of activity in parietal alpha and in the right-hemisphere homologue of the left IFG (EEG channel F8) than NH controls. Finally, these results offer a novel suggestion of a particular sensitivity of F8 to SNR-related listening effort in music.
Collapse
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
| | | | - Andrea Giorgi
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
| | | | - Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
| | - Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
| | - Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
| | - Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
| | - Anna Rita Fetoni
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
| | - Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
| | - Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
| | - Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
| | - Roberto Albera
- Department of Surgical Sciences, University of Turin, Turin, Italy
| | - Andrea Canale
- Department of Surgical Sciences, University of Turin, Turin, Italy
| | - Lucia Oriella Piccioni
- Department of Otolaryngology-Head and Neck Surgery, IRCCS San Raffaele Scientific Institute, Milan, Italy
| | - Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
| |
Collapse
|
16
|
Fiveash A, Ladányi E, Camici J, Chidiac K, Bush CT, Canette LH, Bedoin N, Gordon RL, Tillmann B. Regular rhythmic primes improve sentence repetition in children with developmental language disorder. NPJ SCIENCE OF LEARNING 2023; 8:23. [PMID: 37429839 PMCID: PMC10333339 DOI: 10.1038/s41539-023-00170-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Accepted: 06/07/2023] [Indexed: 07/12/2023]
Abstract
Recently reported links between rhythm and grammar processing have opened new perspectives for using rhythm in clinical interventions for children with developmental language disorder (DLD). Previous research using the rhythmic priming paradigm has shown improved performance on language tasks after regular rhythmic primes compared to control conditions. However, this research has been limited to effects of rhythmic priming on grammaticality judgments. The current study investigated whether regular rhythmic primes could also benefit sentence repetition, a task requiring proficiency in complex syntax, an area of difficulty for children with DLD. Regular rhythmic primes improved sentence repetition performance compared to irregular rhythmic primes in children with DLD and in children with typical development, an effect that did not occur with a non-linguistic control task. These findings suggest processing overlap for musical rhythm and linguistic syntax, with implications for the use of rhythmic stimulation in the treatment of children with DLD in clinical research and practice.
Collapse
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France.
- University of Lyon 1, Lyon, France.
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia.
| | - Enikő Ladányi
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Linguistics, University of Potsdam, Potsdam, Germany.
| | - Julie Camici
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France
- University of Lyon 1, Lyon, France
| | - Karen Chidiac
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France
- University of Lyon 1, Lyon, France
| | - Catherine T Bush
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Laure-Hélène Canette
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France
- University of Lyon 1, Lyon, France
| | - Nathalie Bedoin
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France
- University of Lyon 1, Lyon, France
- University of Lyon 2, Lyon, F-69000, France
| | - Reyna L Gordon
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM U1028, F-69000, Lyon, France
- University of Lyon 1, Lyon, France
- Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université de Bourgogne, Dijon, France
| |
Collapse
|
17
|
Olszewska AM, Droździel D, Gaca M, Kulesza A, Obrębski W, Kowalewski J, Widlarz A, Marchewka A, Herman AM. Unlocking the musical brain: A proof-of-concept study on playing the piano in MRI scanner with naturalistic stimuli. Heliyon 2023; 9:e17877. [PMID: 37501960 PMCID: PMC10368778 DOI: 10.1016/j.heliyon.2023.e17877] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 04/26/2023] [Accepted: 06/29/2023] [Indexed: 07/29/2023] Open
Abstract
Music is a universal human phenomenon and can be studied for itself or as a window into the understanding of the brain. Few neuroimaging studies investigate actual playing in the MRI scanner, likely because of the lack of available experimental hardware and analysis tools. Here, we offer an innovative paradigm that addresses this issue in neuromusicology using naturalistic, polyphonic musical stimuli, present a commercially available MRI-compatible piano, and describe a flexible approach to quantifying participants' performance. We show how making errors while playing can be investigated using an altered auditory feedback paradigm. In the spirit of open science, we make our experimental paradigms and analysis tools available to other researchers studying pianists in MRI. Altogether, we present a proof-of-concept study that shows the feasibility of playing the novel piano in the MRI scanner and takes a step toward using more naturalistic stimuli.
Collapse
Affiliation(s)
- Alicja M. Olszewska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| | - Dawid Droździel
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| | - Maciej Gaca
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| | - Agnieszka Kulesza
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| | - Wojciech Obrębski
- Department of Nuclear and Medical Electronics, Faculty of Electronics and Information Technology, Warsaw University of Technology, 1 Politechniki Square, 00-661 Warsaw, Poland
- 10 Murarska Street, 08-110 Siedlce, Poland
| | | | - Agnieszka Widlarz
- Chair of Rhythmics and Piano Improvisation, Department of Choir Conducting and Singing, Music Education and Rhythmics, The Chopin University of Music, Okolnik 2 Street, 00-368 Warsaw, Poland
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| | - Aleksandra M. Herman
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of the Polish Academy of Sciences, 3 Pasteur Street, 02-093, Warsaw, Poland
| |
Collapse
|
18
|
Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. [PMID: 37005063 PMCID: PMC10505454 DOI: 10.1093/cercor/bhad087] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023] [Indexed: 04/04/2023] Open
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Collapse
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
| | - Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
| | - Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
| | - Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
| | - Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Rosemary Varley
- Psychology & Language Sciences, UCL, London, WCN1 1PF, United Kingdom
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
| |
Collapse
|
19
|
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. [PMID: 36562994 PMCID: PMC10183742 DOI: 10.1093/cercor/bhac501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 11/29/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to sequences rated most and least musical, and the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
Collapse
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
| | - Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
| | - Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
| |
Collapse
|
20
|
León Méndez MDC, Fernández García L, Daza González MT. Effectiveness of rhythmic training on linguistics skill development in deaf children and adolescents with cochlear implants: A systematic review. Int J Pediatr Otorhinolaryngol 2023; 169:111561. [PMID: 37088038 DOI: 10.1016/j.ijporl.2023.111561] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 04/17/2023] [Accepted: 04/18/2023] [Indexed: 04/25/2023]
Abstract
OBJECTIVE: This review compiles the scientific evidence to date on the effectiveness of musical/rhythmic training for improving and/or enhancing the development of language skills in deaf children aged 6-16 years with cochlear implants. METHODS: PubMed, ScienceDirect, and Web of Science were searched following the PRISMA protocol. RESULTS: The reviewed studies indicate that rhythmic training can improve language skills (perception, production, and comprehension) in this population, as well as other cognitive skills. CONCLUSION: Although further research is still needed, the current evidence can help identify new and more effective early intervention methods for deaf children.
Collapse
Affiliation(s)
| | - Laura Fernández García
- Department of Psychology, University of Almería, Almería, Spain; Center for Neuropsychological Assessment and Rehabilitation (CERNEP), University of Almería, Almería, Spain
| | - María Teresa Daza González
- Department of Psychology, University of Almería, Almería, Spain; Center for Neuropsychological Assessment and Rehabilitation (CERNEP), University of Almería, Almería, Spain.
| |
Collapse
|
21
|
Markov I, Kharitonova K, Grigorenko EL. Language: Its Origin and Ongoing Evolution. J Intell 2023; 11:jintelligence11040061. [PMID: 37103246 PMCID: PMC10142271 DOI: 10.3390/jintelligence11040061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/17/2023] [Accepted: 03/23/2023] [Indexed: 03/31/2023] Open
Abstract
With the present paper, we sought to use research findings to illustrate the following thesis: the evolution of language follows the principles of human evolution. We argued that language does not exist for its own sake; it is one of a multitude of skills that developed to achieve a shared communicative goal, and all its features reflect this. Ongoing emerging language adaptations strive to better fit the present state of the human species. Theories of language have evolved from single-modality to multimodal, and from human-specific to usage-based and goal-driven. We proposed that language should be viewed as a multitude of communication techniques that have developed, and are developing, in response to selective pressure. The precise nature of language is shaped by the needs of the species (arguably, uniquely H. sapiens) utilizing it, and the emergence of new situational adaptations, as well as new forms and types of human language, demonstrates that language is an act driven by a communicative goal. This article serves as an overview of the current state of psycholinguistic research on the topic of language evolution.
Collapse
Affiliation(s)
- Ilia Markov
- Department of Psychology, University of Houston, Houston, TX 77204, USA
- Texas Institute for Measurement, Evaluation, and Statistics (TIMES), The University of Houston, Houston, TX 77204, USA
- Center for Cognitive Sciences, Sirius University for Science and Technology, Sochi 354340, Russia
| | | | - Elena L. Grigorenko
- Department of Psychology, University of Houston, Houston, TX 77204, USA
- Texas Institute for Measurement, Evaluation, and Statistics (TIMES), The University of Houston, Houston, TX 77204, USA
- Center for Cognitive Sciences, Sirius University for Science and Technology, Sochi 354340, Russia
- Baylor College of Medicine, Houston, TX 77030, USA
- Child Study Center and Haskins Laboratories, Yale University, New Haven, CT 06520, USA
- Rector’s Office, Moscow State University for Psychology and Education, Moscow 127051, Russia
- Correspondence:
| |
Collapse
|
22
|
Jiang L, Zhang R, Tao L, Zhang Y, Zhou Y, Cai Q. Neural mechanisms of musical structure and tonality, and the effect of musicianship. Front Psychol 2023; 14:1092051. [PMID: 36844277 PMCID: PMC9948014 DOI: 10.3389/fpsyg.2023.1092051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 01/16/2023] [Indexed: 02/11/2023] Open
Abstract
Introduction The neural basis for the processing of musical syntax has previously been examined almost exclusively in classical tonal music, which is characterized by a strictly organized hierarchical structure. Musical syntax may differ across music genres because of varieties in tonality. Methods The present study investigated the neural mechanisms for processing musical syntax across genres varying in tonality - classical, impressionist, and atonal music - and, in addition, examined how musicianship modulates such processing. Results Results showed that, first, the dorsal stream, including the bilateral inferior frontal gyrus and superior temporal gyrus, plays a key role in the perception of tonality. Second, right frontotemporal regions were crucial in allowing musicians to outperform non-musicians in musical syntactic processing; musicians also benefited from a cortical-subcortical network including the pallidum and cerebellum, suggesting more auditory-motor interaction in musicians than in non-musicians. Third, the left pars triangularis carries out online computations independently of tonality and musicianship, whereas the right pars triangularis is sensitive to tonality and partly dependent on musicianship. Finally, unlike tonal music, the processing of atonal music could not be differentiated from that of scrambled notes, either behaviorally or neurally, even among musicians. Discussion The present study highlights the importance of studying varying music genres and experience levels, provides a better understanding of musical syntax and tonality processing, and shows how such processing is modulated by music experience.
Collapse
Affiliation(s)
- Lei Jiang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; School of Music, East China Normal University, Shanghai, China
| | - Ruiqing Zhang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Lily Tao
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Yuxin Zhang
- Shanghai High School International Division, Shanghai, China
| | - Yongdi Zhou
- School of Psychology, Shenzhen University, Shenzhen, China; Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, United States; Yongdi Zhou, ✉
| | - Qing Cai
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China; *Correspondence: Qing Cai, ✉
| |
Collapse
|
23
|
Kappen PR, van den Brink J, Jeekel J, Dirven CMF, Klimek M, Donders-Kamphuis M, Docter-Kerkhof CS, Mooijman SA, Collee E, Nandoe Tewarie RDS, Broekman MLD, Smits M, Vincent AJPE, Satoer D. The effect of musicality on language recovery after awake glioma surgery. Front Hum Neurosci 2023; 16:1028897. [PMID: 36704093 PMCID: PMC9873262 DOI: 10.3389/fnhum.2022.1028897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Accepted: 12/21/2022] [Indexed: 01/11/2023] Open
Abstract
Introduction Awake craniotomy is increasingly used to resect intrinsic brain tumors while preserving language. The level of musical training might affect the speed and extent of postoperative language recovery, as increased white matter connectivity in the corpus callosum has been described in musicians compared with non-musicians. Methods In this cohort study, we included adult patients undergoing an awake resection procedure for glioma at two neurosurgical centers and assessed language preoperatively (T1) and postoperatively at three months (T2) and one year (T3) with the Diagnostic Instrument for Mild Aphasia (DIMA), converted to z-scores. Moreover, patients' musicality was classified into three groups based on the Musical Expertise Criterion (MEC), and automated volumetric measures of the corpus callosum were conducted. Results We enrolled forty-six patients between June 2015 and September 2021, divided into group A (non-musicians, n = 19, 41.3%), group B (amateur musicians, n = 17, 36.9%), and group C (trained musicians, n = 10, 21.7%). No significant differences in postoperative language course between the three musicality groups were observed in the main analyses. However, a trend towards less deterioration of language (mean/SD z-scores) was observed within the first three months on the phonological domain (A: -0.425/0.951 vs. B: -0.00100/1.14 vs. C: 0.0289/0.566, p = 0.19), with a significant effect between non-musicians and instrumentalists (A: -0.425/0.951 vs. B + C: 0.201/0.699, p = 0.04). Moreover, a non-significant trend towards a larger volume (mean/SD cm3) of the corpus callosum was observed between the three musicality groups (A: 6.67/1.35 vs. B: 7.09/1.07 vs. C: 8.30/2.30, p = 0.13), with the largest difference in size in the anterior corpus callosum between non-musicians and trained musicians (A: 3.28/0.621 vs. C: 4.90/1.41, p = 0.02).
Conclusion In this first study on the topic, we find support for the idea that musicality contributes to language recovery after awake glioma surgery, possibly attributable to higher white matter connectivity in the anterior part of the corpus callosum. Our conclusion should be handled with caution and interpreted as hypothesis-generating only, as most of our results were not significant. Future studies with larger sample sizes are needed to confirm our hypothesis.
Collapse
Affiliation(s)
- Pablo R. Kappen
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands; *Correspondence: Pablo R. Kappen,
| | - Jan van den Brink
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands
| | - Johannes Jeekel
- Department of Neuroscience, Erasmus University Medical Center, Rotterdam, Netherlands
| | - Clemens M. F. Dirven
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands
| | - Markus Klimek
- Department of Anesthesiology, Erasmus University Medical Center, Rotterdam, Netherlands
| | - Marike Donders-Kamphuis
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands; Department of Speech and Language Pathology, Haaglanden Medisch Centrum, The Hague, Netherlands
| | | | - Saskia A. Mooijman
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands
| | - Ellen Collee
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands
| | | | - Marike L. D. Broekman
- Department of Neurosurgery, Haaglanden Medisch Centrum, The Hague, Netherlands; Department of Neurosurgery, Leiden University Medical Center, Leiden, Netherlands
| | - Marion Smits
- Department of Radiology & Nuclear Medicine, Erasmus University Medical Center, Rotterdam, Netherlands; Medical Delta, Delft, Netherlands; Brain Tumor Center, Erasmus MC Cancer Institute, Rotterdam, Netherlands
| | | | - Djaina Satoer
- Department of Neurosurgery, Erasmus University Medical Center, Rotterdam, Netherlands
| |
Collapse
|
24
|
Order of statistical learning depends on perceptive uncertainty. Curr Res Neurobiol 2023; 4:100080. [PMID: 36926596 PMCID: PMC10011828 DOI: 10.1016/j.crneur.2023.100080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 02/02/2023] [Accepted: 02/06/2023] [Indexed: 03/05/2023] Open
Abstract
Statistical learning (SL) is an innate mechanism by which the brain automatically encodes the n-th order transition probability (TP) of a sequence and grasps the uncertainty of the TP distribution. Through SL, the brain predicts a subsequent event (e_{n+1}) based on the n preceding events. It is now known that uncertainty modulates prediction in top-down processing by the human predictive brain. However, how the human brain modulates the order of SL strategies based on the degree of uncertainty remains an open question. The present study examined how uncertainty modulates the neural effects of SL and whether differences in uncertainty alter the order of SL strategies. It used auditory sequences in which the uncertainty of sequential information was manipulated via the conditional entropy. Three sequences with TP ratios of 90:10, 80:20, and 67:33 were prepared as low-, intermediate-, and high-uncertainty sequences, respectively (conditional entropy: 0.47, 0.72, and 0.92 bit, respectively). Neural responses were recorded while the participants listened to the three sequences. The results showed that stimuli with lower TPs elicited a stronger neural response than those with higher TPs, as demonstrated by a number of previous studies. Furthermore, we found that participants adopted higher-order SL strategies in the high-uncertainty sequence. These results may indicate that the human brain can flexibly alter the order of SL based on uncertainty, and that uncertainty may be an important factor determining the order of SL strategies. In particular, considering that a higher-order SL strategy mathematically allows the reduction of uncertainty in information, we propose that the brain may adopt higher-order SL strategies when encountering highly uncertain information, in order to reduce that uncertainty.
The present study may shed new light on understanding individual differences in SL performance across different uncertain situations.
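The conditional-entropy values quoted in this abstract follow directly from the Shannon formula H = -Σ p·log2(p) applied to each TP ratio. As a quick check (a minimal sketch, not the authors' code; the 67:33 ratio is treated as 2/3:1/3, which reproduces the reported 0.92 bit):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Transition-probability ratios of the low-, intermediate-, and
# high-uncertainty sequences described in the abstract
for ratio in [(0.90, 0.10), (0.80, 0.20), (2/3, 1/3)]:
    print(ratio, round(entropy_bits(ratio), 2))  # 0.47, 0.72, 0.92 bit
```

The monotonic rise in entropy across the three sequences is what operationalizes "uncertainty" in the study's design.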
Collapse
|
25
|
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784 DOI: 10.1016/j.neuroimage.2022.119310] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 04/26/2022] [Accepted: 05/11/2022] [Indexed: 11/30/2022] Open
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read) whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a left-hemisphere bias for poem processing and a right-hemisphere bias for song processing. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
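The PSA measure described in this abstract can be illustrated with a small sketch. This is an illustrative approximation, not the authors' pipeline: `autocorr` and `psa` are hypothetical helper names, and "significant" is taken here to mean exceeding the approximate 95% white-noise bound of ±1.96/√N, which may differ from the criterion used in the study:

```python
import math

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def psa(pitch, max_lag):
    """Proportion of lags 1..max_lag whose autocorrelation exceeds
    the approximate 95% white-noise bound of +/- 1.96 / sqrt(N)."""
    bound = 1.96 / math.sqrt(len(pitch))
    sig = sum(1 for lag in range(1, max_lag + 1)
              if abs(autocorr(pitch, lag)) > bound)
    return sig / max_lag

# A strongly periodic pitch contour (a proxy for a clear melody)
# yields a high PSA, since many lags show significant autocorrelation.
periodic = [math.sin(2 * math.pi * i / 8) for i in range(200)]
print(psa(periodic, 20))
```

The intuition is that more "melodic" pitch contours repeat themselves and so show significant autocorrelation at many lags, pushing PSA toward 1.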
Collapse
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany.
| | - Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
| | - Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
| | - Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
| |
Collapse
|
26
|
Bulut T. Meta-analytic connectivity modeling of the left and right inferior frontal gyri. Cortex 2022; 155:107-131. [DOI: 10.1016/j.cortex.2022.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 05/21/2022] [Accepted: 07/15/2022] [Indexed: 11/03/2022]
|
27
|
Chiappetta B, Patel AD, Thompson CK. Musical and linguistic syntactic processing in agrammatic aphasia: An ERP study. J Neurolinguistics 2022; 62:101043. [PMID: 35002061 PMCID: PMC8740885 DOI: 10.1016/j.jneuroling.2021.101043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did however present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH, and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. 
However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia, and motivates more research on the relationship between these two domains.
Collapse
Affiliation(s)
- Brianne Chiappetta
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
| | - Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, CA
| | - Cynthia K. Thompson
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Northwestern University, Chicago, IL, USA
- Department of Neurology, Northwestern University, Chicago, IL, USA
| |
Collapse
|
28
|
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5] [Citation(s) in RCA: 101] [Impact Index Per Article: 50.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/22/2022] [Indexed: 02/06/2023]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature on music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Collapse
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
| | - Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
| | - Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
| | - Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
| |
Collapse
|
29
|
Grzywacz NM, Aleem H. Does Amount of Information Support Aesthetic Values? Front Neurosci 2022; 16:805658. [PMID: 35392414 PMCID: PMC8982361 DOI: 10.3389/fnins.2022.805658] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/16/2022] [Indexed: 11/24/2022] Open
Abstract
Obtaining information from the world is important for survival. The brain, therefore, has special mechanisms to extract as much information as possible from sensory stimuli. Given this importance, the amount of available information may underlie aesthetic values. Such information-based aesthetic values would be significant because they would compete with others to drive decision-making. In this article, we ask, "What is the evidence that amount of information supports aesthetic values?" An important concept in the measurement of informational volume is entropy. Research on aesthetic values has thus used Shannon entropy to evaluate the contribution of quantity of information. We review here the concepts of information and aesthetic values, and research on the visual and auditory systems, to probe whether the brain uses entropy or other relevant measures, especially Fisher information, in aesthetic decisions. We conclude that information measures contribute to these decisions in two ways. First, the absolute quantity of information can modulate aesthetic preferences for certain sensory patterns. However, the preference for volume of information is highly individualized, with information measures competing with organizing principles such as rhythm and symmetry. In addition, people tend to be resistant to too much entropy, but not necessarily to high amounts of Fisher information. We show that this resistance may stem in part from the distribution of amount of information in natural sensory stimuli. Second, the measurement of entropy-like quantities over time reveals that they can modulate aesthetic decisions by varying degrees of surprise given temporally integrated expectations. We propose that amount of information underpins complex aesthetic values, possibly informing the brain on the allocation of resources or the situational appropriateness of some cognitive models.
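The contrast drawn here between Shannon entropy and Fisher information can be made concrete for a simple Bernoulli source, where the two measures move in opposite directions. A minimal sketch of the standard formulas (not taken from the article): for a Bernoulli parameter p, H(p) = -p·log2(p) - (1-p)·log2(1-p) and the Fisher information is I(p) = 1/(p(1-p)).

```python
import math

def shannon_entropy(p):
    """Entropy of a Bernoulli(p) source, in bits."""
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def fisher_info(p):
    """Fisher information of the Bernoulli parameter p: 1 / (p(1-p))."""
    return 1.0 / (p * (1 - p))

# The two measures need not move together: entropy peaks at p = 0.5,
# exactly where Fisher information is at its minimum.
for p in (0.5, 0.9):
    print(p, round(shannon_entropy(p), 2), round(fisher_info(p), 2))
```

This dissociation is why a stimulus can carry little entropy yet high Fisher information, consistent with the abstract's claim that people resist high entropy but not necessarily high Fisher information.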
Collapse
Affiliation(s)
- Norberto M. Grzywacz
- Department of Psychology, Loyola University Chicago, Chicago, IL, United States
- Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago, Chicago, IL, United States
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
| | - Hassan Aleem
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
| |
Collapse
|
30
|
Chabin T, Pazart L, Gabriel D. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Ann N Y Acad Sci 2022; 1512:126-140. [PMID: 35229293 DOI: 10.1111/nyas.14755] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 01/18/2022] [Indexed: 12/18/2022]
Abstract
Musical pleasure is related to the capacity to predict and anticipate the music. By recording early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with vocal melody, only background instrumental material, or both attributes grouped together. When participants were listening to chords and lyrics for known songs, the brain responses related to musical violation produced event-related potential responses around 150-200 ms that were of a larger amplitude than for chords or lyrics only. Microstate analysis also revealed that for chords and lyrics, the global field power showed increased stability and a longer duration. The source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely owing to the increased complexity of the stimuli. We conclude that, when several musical attributes are grouped together, their broader, simultaneous integration and retrieval recruits larger neuronal networks, leading to more accurate predictions.
Collapse
Affiliation(s)
- Thibault Chabin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France
| | - Lionel Pazart
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Bourgogne Franche-Comté, France
| | - Damien Gabriel
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
| |
Collapse
|
31
|
Cui AX, Troje NF, Cuddy LL. Electrophysiological and behavioral indicators of musical knowledge about unfamiliar music. Sci Rep 2022; 12:441. [PMID: 35013467 PMCID: PMC8748445 DOI: 10.1038/s41598-021-04211-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 12/14/2021] [Indexed: 11/25/2022] Open
Abstract
Most listeners possess sophisticated knowledge about the music around them without being aware of it or its intricacies. Previous research shows that we develop such knowledge through exposure. This knowledge can then be assessed using behavioral and neurophysiological measures. It remains unknown, however, which neurophysiological measures accompany the development of musical long-term knowledge. In this series of experiments, we first identified a potential ERP marker of musical long-term knowledge by comparing EEG activity following musically unexpected and expected tones within the context of known music (n = 30). We then validated the marker by showing that it does not differentiate between such tones within the context of unknown music (n = 34). In a third experiment, we exposed participants to unknown music (n = 40) and compared EEG data before and after exposure to explore effects of time. Although listeners' behavior indicated musical long-term knowledge, we did not find any effects of time on the ERP marker. Instead, the relationship between behavioral and EEG data suggests musical long-term knowledge may have formed before we could confirm its presence through behavioral measures. Listeners are thus not only knowledgeable about music but also seem to be incredibly fast music learners.
Collapse
Affiliation(s)
- Anja-Xiaoxing Cui
- Queen's University, Kingston, Canada; University of British Columbia, Vancouver, Canada.
| | | | | |
Collapse
|
32
|
Al-Zubaidi A, Bräuer S, Holdgraf CR, Schepers IM, Rieger JW. OUP accepted manuscript. Cereb Cortex Commun 2022; 3:tgac007. [PMID: 35281216 PMCID: PMC8914075 DOI: 10.1093/texcom/tgac007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 01/26/2022] [Accepted: 01/29/2022] [Indexed: 11/14/2022] Open
Affiliation(s)
- Arkan Al-Zubaidi
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Research Center Neurosensory Science, Oldenburg University, 26129 Oldenburg, Germany
| | - Susann Bräuer
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
| | - Chris R Holdgraf
- Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA
- International Interactive Computing Collaboration
| | - Inga M Schepers
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
| | - Jochem W Rieger
- Corresponding author: Department of Psychology, Faculty VI, Oldenburg University, 26129 Oldenburg, Germany.
| |
Collapse
|
33
|
Bianco R, Novembre G, Ringer H, Kohler N, Keller PE, Villringer A, Sammler D. Lateral Prefrontal Cortex Is a Hub for Music Production from Structural Rules to Movements. Cereb Cortex 2021; 32:3878-3895. [PMID: 34965579 PMCID: PMC9476625 DOI: 10.1093/cercor/bhab454] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 11/13/2022] Open
Abstract
Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Collapse
Affiliation(s)
- Roberta Bianco
- UCL Ear Institute, University College London, London WC1X 8EE, UK; Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
| | - Hanna Ringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Psychology, University of Leipzig, Leipzig 04109, Germany
| | - Natalie Kohler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany.,Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
| | - Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark.,The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
| | - Arno Villringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Daniela Sammler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany.,Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
| |
Collapse
|
34
|
Sakai KL, Oshiba Y, Horisawa R, Miyamae T, Hayano R. Music-Experience-Related and Musical-Error-Dependent Activations in the Brain. Cereb Cortex 2021; 32:4229-4242. [PMID: 34937087 PMCID: PMC9528789 DOI: 10.1093/cercor/bhab478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 11/20/2021] [Accepted: 11/21/2021] [Indexed: 11/29/2022] Open
Abstract
Although music, like language, is a uniquely human trait, the cortical organization of its neural basis remains poorly understood. In the present functional magnetic resonance imaging study, we tested an error-detection task with different types of musical error (pitch, tempo, stress, and articulation conditions) and examined three groups of secondary school students with different levels of music experience. First, we observed distinct activation patterns under these music conditions, such that specific activations under the pitch condition were consistently replicated for all tested groups in the auditory areas, as well as in the left language areas under the articulation condition. Second, music-experience-related activations were observed in multiple regions, including the right sensorimotor area under the pitch condition, as well as in the right premotor cortex under the articulation condition. Indeed, the right homologs of the language areas were specifically activated under the stress and articulation conditions. Third, activations specific to the group with the highest proficiency in music were observed under the tempo condition, mostly in right-hemisphere regions. These results demonstrate the existence of music-related signatures in brain activation, including both universal and experience-related mechanisms.
Collapse
Affiliation(s)
- Kuniyoshi L Sakai
- Department of Basic Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
| | - Yoshiaki Oshiba
- Department of Basic Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
| | - Reiya Horisawa
- Department of Basic Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
| | - Takeaki Miyamae
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA 15261, USA.,Suzuki School of Music, The Talent Education Research Institute, Matsumoto-shi 390-8511, Japan
| | - Ryugo Hayano
- Suzuki School of Music, The Talent Education Research Institute, Matsumoto-shi 390-8511, Japan.,Department of Physics, School of Science, The University of Tokyo, Tokyo 113-0033, Japan
| |
Collapse
|
35
|
Bonetti L, Brattico E, Carlomagno F, Donati G, Cabral J, Haumann NT, Deco G, Vuust P, Kringelbach ML. Rapid encoding of musical tones discovered in whole-brain connectivity. Neuroimage 2021; 245:118735. [PMID: 34813972 DOI: 10.1016/j.neuroimage.2021.118735] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 09/30/2021] [Accepted: 11/14/2021] [Indexed: 11/26/2022] Open
Abstract
Information encoding has received wide neuroscientific attention, but the underlying rapid spatiotemporal brain dynamics remain largely unknown. Here, we investigated the rapid brain mechanisms for encoding of sounds forming a complex temporal sequence. Specifically, we used magnetoencephalography (MEG) to record the brain activity of 68 participants while they listened to a highly structured musical prelude. Functional connectivity analyses performed using phase synchronisation and graph theoretical measures showed a large network of brain areas recruited during encoding of sounds, comprising primary and secondary auditory cortices, frontal operculum, insula, hippocampus and basal ganglia. Moreover, our results highlighted the rapid transition of brain activity from primary auditory cortex to higher order association areas including insula and superior temporal pole within a whole-brain network, occurring during the first 220 ms of the encoding process. Further, we discovered that individual differences in cognitive abilities and musicianship modulated the degree centrality of the brain areas implicated in the encoding process. Indeed, participants with higher musical expertise showed stronger centrality of the superior temporal gyrus and insula, while individuals with high working memory abilities showed stronger centrality of the frontal operculum. In conclusion, our study revealed the rapid unfolding of brain network dynamics responsible for the encoding of sounds and their relationship with individual differences, showing a complex picture which extends beyond the well-known involvement of auditory areas. Indeed, our results expand our understanding of the general mechanisms underlying auditory pattern encoding in the human brain.
Collapse
Affiliation(s)
- L Bonetti
- Centre for Eudaimonia and Human Flourishing, University of Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark; Department of Psychiatry, University of Oxford, Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy.
| | - E Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy
| | - F Carlomagno
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - G Donati
- Department of Psychology, University of Bologna, Italy; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - J Cabral
- Centre for Eudaimonia and Human Flourishing, University of Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, 4710-057 Braga, Portugal
| | - N T Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - G Deco
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, Barcelona, 08010, Spain; Computational and Theoretical Neuroscience Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
| | - P Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - M L Kringelbach
- Centre for Eudaimonia and Human Flourishing, University of Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
36
|
Sihvonen AJ, Pitkäniemi A, Leo V, Soinila S, Särkämö T. Resting-state language network neuroplasticity in post-stroke music listening: A randomized controlled trial. Eur J Neurosci 2021; 54:7886-7898. [PMID: 34763370 DOI: 10.1111/ejn.15524] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 10/13/2021] [Accepted: 11/08/2021] [Indexed: 01/31/2023]
Abstract
Recent evidence suggests that post-stroke vocal music listening can aid language recovery, but the network-level functional neuroplasticity mechanisms of this effect are unknown. Here, we sought to determine whether the improved language recovery observed after post-stroke listening to vocal music is driven by changes in longitudinal resting-state functional connectivity within the language network. Using data from a single-blind randomized controlled trial on stroke patients (N = 38), we compared the effects of daily listening to self-selected vocal music, instrumental music and audio books on changes in resting-state functional connectivity within the language network and their correlation with improved language skills and verbal memory during the first 3 months post-stroke. From the acute to the 3-month stage, the vocal music and instrumental music groups increased functional connectivity between a cluster comprising the left inferior parietal areas and the language network more than the audio book group. However, the functional connectivity increase correlated with improved verbal memory only in the vocal music group. This study shows that listening to vocal music post-stroke promotes recovery of verbal memory by inducing changes in longitudinal functional connectivity in the language network. Our results conform to the variable neurodisplacement theory underpinning aphasia recovery.
Collapse
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland.,Centre for Clinical Research, The University of Queensland, Brisbane, Queensland, Australia
| | - Anni Pitkäniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| |
Collapse
|
37
|
Planton S, Dehaene S. Cerebral representation of sequence patterns across multiple presentation formats. Cortex 2021; 145:13-36. [PMID: 34673292 DOI: 10.1016/j.cortex.2021.09.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 08/06/2021] [Accepted: 09/01/2021] [Indexed: 01/29/2023]
Abstract
The ability to detect the abstract pattern underlying a temporal sequence of events is crucial to many human activities, including language and mathematics, but its cortical correlates remain poorly understood. It is also unclear whether repeated exposure to the same sequence of sensory stimuli is sufficient to induce the encoding of an abstract amodal representation of the pattern. Using functional MRI, we probed the existence of such abstract codes for sequential patterns, their localization in the human brain, and their relation to existing language and math-responsive networks. We used a passive sequence violation paradigm, in which a given sequence is repeatedly presented before rare deviant sequences are introduced. We presented two binary patterns, AABB and ABAB, in four presentation formats, either visual or auditory, and either cued by the identity of the stimuli or by their spatial location. Regardless of the presentation format, a habituation to the repeated pattern and a response to pattern violations were seen in a set of inferior frontal, intraparietal and temporal areas. Within language areas, such pattern-violation responses were only found in the inferior frontal gyrus (IFG), whereas all math-responsive regions responded to pattern changes. Most of these regions also responded whenever the modality or the cue changed, suggesting a general sensitivity to violation detection. Thus, the representation of sequence patterns appears to be distributed, yet to include a core set of abstract amodal regions, particularly the IFG.
Collapse
Affiliation(s)
- Samuel Planton
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France.
| | - Stanislas Dehaene
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France; Collège de France, Université PSL Paris Sciences Lettres, Paris, France
| |
Collapse
|
38
|
White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251 DOI: 10.1016/j.actpsy.2021.103403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 01/29/2023] Open
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
Collapse
|
39
|
Kim CH, Jin SH, Kim JS, Kim Y, Yi SW, Chung CK. Dissociation of Connectivity for Syntactic Irregularity and Perceptual Ambiguity in Musical Chord Stimuli. Front Neurosci 2021; 15:693629. [PMID: 34526877 PMCID: PMC8435864 DOI: 10.3389/fnins.2021.693629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 07/30/2021] [Indexed: 11/18/2022] Open
Abstract
Musical syntax has been studied mainly in terms of “syntactic irregularity” in harmonic/melodic sequences. However, “perceptual ambiguity”, referring to uncertainty in the judgment/classification of presented stimuli, can additionally be involved; we examined this using three different chord sequences. The present study addresses how “syntactic irregularity” and “perceptual ambiguity” in musical syntax are dissociated, in terms of effective connectivity between the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs) measured by linearized time-delayed mutual information (LTDMI). The three conditions were five-chord sequences ending in dominant-to-tonic, dominant-to-submediant, and dominant-to-supertonic progressions. The dominant-to-supertonic ending is the most irregular, compared with the regular dominant-to-tonic ending, whereas the less irregular dominant-to-submediant ending is the most ambiguous condition. In the LTDMI results, connectivity from the right to the left IFG (IFG-LTDMI) was enhanced for the most irregular condition, whereas that from the right to the left STG (STG-LTDMI) was enhanced for the most ambiguous condition (p = 0.024 for IFG-LTDMI, p < 0.001 for STG-LTDMI, false discovery rate (FDR) corrected). The correct-response rate was negatively correlated with STG-LTDMI, further reflecting perceptual ambiguity (p = 0.026). We found for the first time that syntactic irregularity and perceptual ambiguity coexist in chord stimuli testing musical syntax, and that the two processes are dissociated in interhemispheric connectivity in the IFG and STG, respectively.
Collapse
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, College of Natural Science, Seoul National University, Seoul, South Korea.,Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea
| | - Seung-Hyun Jin
- Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea
| | - June Sic Kim
- Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea.,Research Institute of Basic Sciences, Seoul National University, Seoul, South Korea
| | - Youn Kim
- Department of Music, School of Humanities, The University of Hong Kong, Hong Kong, Hong Kong SAR China
| | - Suk Won Yi
- College of Music, Seoul National University, Seoul, South Korea.,Western Music Research Institute, Seoul National University, Seoul, South Korea
| | - Chun Kee Chung
- Interdisciplinary Program in Neuroscience, College of Natural Science, Seoul National University, Seoul, South Korea.,Department of Neurosurgery, MEG Center, Seoul National University Hospital, Seoul, South Korea.,Department of Brain and Cognitive Science, College of Natural Science, Seoul National University, Seoul, South Korea.,Department of Neurosurgery, Seoul National University Hospital, Seoul, South Korea
| |
Collapse
|
40
|
Sivasathiaseelan H, Marshall CR, Benhamou E, van Leeuwen JEP, Bond RL, Russell LL, Greaves C, Moore KM, Hardy CJD, Frost C, Rohrer JD, Scott SK, Warren JD. Laughter as a paradigm of socio-emotional signal processing in dementia. Cortex 2021; 142:186-203. [PMID: 34273798 PMCID: PMC8438290 DOI: 10.1016/j.cortex.2021.05.020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Revised: 04/01/2021] [Accepted: 05/21/2021] [Indexed: 11/03/2022]
Abstract
Laughter is a fundamental communicative signal in our relations with other people and is used to convey a diverse repertoire of social and emotional information. It is therefore potentially a useful probe of impaired socio-emotional signal processing in neurodegenerative diseases. Here we investigated the cognitive and affective processing of laughter in forty-seven patients representing all major syndromes of frontotemporal dementia, a disease spectrum characterised by severe socio-emotional dysfunction (twenty-two with behavioural variant frontotemporal dementia, twelve with semantic variant primary progressive aphasia, thirteen with nonfluent-agrammatic variant primary progressive aphasia), in relation to fifteen patients with typical amnestic Alzheimer's disease and twenty healthy age-matched individuals. We assessed cognitive labelling (identification) and valence rating (affective evaluation) of samples of spontaneous (mirthful and hostile) and volitional (posed) laughter versus two auditory control conditions (a synthetic laughter-like stimulus and spoken numbers). Neuroanatomical associations of laughter processing were assessed using voxel-based morphometry of patients' brain MR images. While all dementia syndromes were associated with impaired identification of laughter subtypes relative to healthy controls, this was significantly more severe overall in frontotemporal dementia than in Alzheimer's disease and particularly in the behavioural and semantic variants, which also showed abnormal affective evaluation of laughter. Over the patient cohort, laughter identification accuracy was correlated with measures of daily-life socio-emotional functioning. 
Certain striking syndromic signatures emerged, including enhanced liking for hostile laughter in behavioural variant frontotemporal dementia, impaired processing of synthetic laughter in the nonfluent-agrammatic variant (consistent with a generic complex auditory perceptual deficit) and enhanced liking for numbers ('numerophilia') in the semantic variant. Across the patient cohort, overall laughter identification accuracy correlated with regional grey matter in a core network encompassing inferior frontal and cingulo-insular cortices; and more specific correlates of laughter identification accuracy were delineated in cortical regions mediating affective disambiguation (identification of hostile and posed laughter in orbitofrontal cortex) and authenticity (social intent) decoding (identification of mirthful and posed laughter in anteromedial prefrontal cortex) (all p < .05 after correction for multiple voxel-wise comparisons over the whole brain). These findings reveal a rich diversity of cognitive and affective laughter phenotypes in canonical dementia syndromes and suggest that laughter is an informative probe of neural mechanisms underpinning socio-emotional dysfunction in neurodegenerative disease.
Collapse
Affiliation(s)
- Harri Sivasathiaseelan
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom.
| | - Charles R Marshall
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London, United Kingdom
| | - Elia Benhamou
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Janneke E P van Leeuwen
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Rebecca L Bond
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Lucy L Russell
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Caroline Greaves
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Katrina M Moore
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Chris J D Hardy
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Chris Frost
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Department of Medical Statistics, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, United Kingdom
| | - Jonathan D Rohrer
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Jason D Warren
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| |
Collapse
|
41
|
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153 DOI: 10.1016/j.cognition.2021.104847] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 05/14/2021] [Accepted: 07/11/2021] [Indexed: 12/16/2022]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferentially engages the abstract rule-based control circuit, musical syntax instead employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Collapse
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany.
| | - Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
| | - Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
| |
Collapse
|
42
|
Tervaniemi M, Putkinen V, Nie P, Wang C, Du B, Lu J, Li S, Cowley BU, Tammi T, Tao S. Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference? Cereb Cortex 2021; 32:63-75. [PMID: 34265850 PMCID: PMC8634570 DOI: 10.1093/cercor/bhab194] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 05/28/2021] [Accepted: 05/28/2021] [Indexed: 12/03/2022] Open
Abstract
In adults, music and speech share many neurocognitive functions, but how do they interact in a developing brain? We compared the effects of music and foreign language training on auditory neurocognition in Chinese children aged 8–11 years. We delivered group-based training programs in music and foreign language using a randomized controlled trial. A passive control group was also included. Before and after these year-long extracurricular programs, auditory event-related potentials were recorded (n = 123 and 85 before and after the program, respectively). Through these recordings, we probed early auditory predictive brain processes. To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant. When these processes were probed by a paradigm more focused on basic sound features, we found early predictive pitch encoding to be facilitated by music training. Thus, a foreign language program is able to foster auditory and music neurocognition, at least in tonal language speakers, in a manner comparable to that of a music program. Our results support the tight coupling of musical and linguistic brain functions in the developing brain as well.
Collapse
Affiliation(s)
- Mari Tervaniemi
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland.,Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland.,Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China
| | - Vesa Putkinen
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland.,Turku PET Centre, University of Turku, Turku, Finland
| | - Peixin Nie
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland.,Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Cuicui Wang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Bin Du
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Shuting Li
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| | - Benjamin Ultan Cowley
- Faculty of Educational Sciences, University of Helsinki, Finland.,Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
| | - Tuisku Tammi
- Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
| | - Sha Tao
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
| |
Collapse
|
43
|
Sihvonen AJ, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S, Särkämö T. Vocal music listening enhances post-stroke language network reorganization. eNeuro 2021; 8:ENEURO.0158-21.2021. [PMID: 34140351 PMCID: PMC8266215 DOI: 10.1523/eneuro.0158-21.2021] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 05/24/2021] [Accepted: 06/06/2021] [Indexed: 11/25/2022] Open
Abstract
Listening to vocal music has recently been shown to improve language recovery in stroke survivors. The neuroplasticity mechanisms supporting this effect are, however, still unknown. Using data from a three-arm single-blind randomized controlled trial including acute stroke patients (N=38) and a 3-month follow-up, we set out to compare the neuroplasticity effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on both brain activity and structural connectivity of the language network. Using deterministic tractography, we show that the 3-month intervention enhanced the microstructural properties of the left frontal aslant tract (FAT) in the vocal music group as compared to the audiobook group. Importantly, this increase in the strength of the structural connectivity of the left FAT correlated with improved language skills. Analyses of stimulus-specific activation changes showed that, from the acute to the 3-month post-stroke stage, the vocal music group exhibited increased activation in the frontal termination points of the left FAT during vocal music listening as compared to the audiobook group. The increased activity correlated with the structural neuroplasticity changes in the left FAT. These results suggest that the beneficial effects of vocal music listening on post-stroke language recovery are underpinned by structural neuroplasticity changes within the language network and extend our understanding of music-based interventions in stroke rehabilitation.
Significance statement: Post-stroke language deficits have a devastating effect on patients and their families. Current treatments yield highly variable outcomes, and the evidence for their long-term effects is limited. Patients often receive insufficient treatment, predominantly given outside the optimal time window for brain plasticity. Post-stroke vocal music listening improves language outcomes, an effect underpinned by neuroplasticity changes within the language network. Vocal music listening provides a complementary rehabilitation strategy that could be safely implemented in the early stages of stroke rehabilitation and seems to specifically target language symptoms and the recovering language network.
Collapse
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Centre for Clinical Research, The University of Queensland, Australia
| | - Pablo Ripollés
- Department of Psychology, New York University, USA
- Music and Audio Research Laboratory, New York University, USA
- Center for Language, Music and Emotion, New York University, USA
| | - Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| | - Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, Turku, Finland
| | - Riitta Parkkola
- Department of Radiology, Turku University Hospital and University of Turku, Finland
| | - Antoni Rodríguez-Fornells
- Department of Cognition, Development and Education Psychology, University of Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
| | - Seppo Soinila
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
| | - Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
| |
Collapse
|
44
|
Li CW, Guo FY, Tsai CG. Predictive processing, cognitive control, and tonality stability of music: An fMRI study of chromatic harmony. Brain Cogn 2021; 151:105751. [PMID: 33991840 DOI: 10.1016/j.bandc.2021.105751] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 05/01/2021] [Accepted: 05/03/2021] [Indexed: 10/21/2022]
Abstract
The present study aimed at identifying the brain regions which preferentially respond to music with medium degrees of key stability. There were three types of auditory stimuli. Diatonic music based strictly on major and minor scales has the highest key stability, whereas atonal music has the lowest. Between these two extremes, chromatic music is characterized by sophisticated uses of out-of-key notes, which challenge the internal model of musical pitch and lead to higher precision-weighted prediction error compared to diatonic and atonal music. The brain activity of 29 adults with excellent relative pitch was measured with functional magnetic resonance imaging while they listened to diatonic music, chromatic music, and atonal random note sequences. Several frontoparietal regions showed significantly greater responses to chromatic music than to diatonic music and atonal sequences, including the pre-supplementary motor area (extending into the dorsal anterior cingulate cortex), dorsolateral prefrontal cortex, rostrolateral prefrontal cortex, intraparietal sulcus, and precuneus. We suggest that these frontoparietal regions may support working memory processes, hierarchical sequencing, and conflict resolution of remotely related harmonic elements during the predictive processing of chromatic music. This finding suggests a possible link between precision-weighted prediction error and the frontoparietal regions implicated in cognitive control.
Collapse
Affiliation(s)
- Chia-Wei Li
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
| | - Fong-Yi Guo
- Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan
| | - Chen-Gia Tsai
- Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan.
| |
Collapse
|
45
|
Araneda R, Silva Moura S, Dricot L, De Volder AG. Beat Detection Recruits the Visual Cortex in Early Blind Subjects. Life (Basel) 2021; 11:life11040296. [PMID: 33807372 PMCID: PMC8066101 DOI: 10.3390/life11040296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 03/25/2021] [Accepted: 03/29/2021] [Indexed: 11/16/2022] Open
Abstract
Using functional magnetic resonance imaging, we monitored the brain activity of 12 early blind subjects and 12 blindfolded control subjects, matched for age, gender, and musical experience, during a beat detection task. Subjects were required to discriminate regular ("beat") from irregular ("no beat") rhythmic sequences composed of sounds or vibrotactile stimulations. In both sensory modalities, the brain activity differences between the two groups involved heteromodal brain regions, including parietal and frontal cortical areas, as well as occipital brain areas that were recruited in the early blind group only. Accordingly, early blindness induced brain plasticity changes in the cerebral pathways involved in rhythm perception, with participation of the visually deprived occipital brain areas regardless of the input sensory modality. We conclude that the visually deprived cortex switches its input modality from vision to audition and the vibrotactile sense to perform this temporal processing task, supporting the concept of a metamodal, multisensory organization of this cortex.
Collapse
Affiliation(s)
- Rodrigo Araneda
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium; (R.A.); (S.S.M.)
| | - Sandra Silva Moura
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium; (R.A.); (S.S.M.)
| | - Laurence Dricot
- Institute of Neuroscience (IoNS; NEUR Section), Université Catholique de Louvain, 1200 Brussels, Belgium;
| | - Anne G. De Volder
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium; (R.A.); (S.S.M.)
- Correspondence: ; Tel.: +32-2-764-54-82
| |
Collapse
|
46
|
Marian V, Hayakawa S. Measuring Bilingualism: The Quest for a "Bilingualism Quotient". APPLIED PSYCHOLINGUISTICS 2021; 42:527-548. [PMID: 34054162 PMCID: PMC8158058 DOI: 10.1017/s0142716420000533] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The study of bilingualism has a history that extends from deciphering ancient multilingual texts to mapping the structure of the multilingual brain. The language experiences of individual bilinguals are equally diverse and characterized by unique contexts of acquisition and use that can shape not only sociocultural identity, but also cognitive and neural function. Perhaps unsurprisingly, this variability in scholarly perspectives and language experiences has given rise to a range of methods for defining bilingualism. The goal of this paper is to initiate a conversation about the utility of a more unified approach to how we think about, study, and measure bilingualism. Using concrete case studies, we illustrate the value of enhancing communication and streamlining terminology across researchers with different methodologies within questions, different questions within domains, and different domains within scientific inquiry. We specifically consider the utility and feasibility of a Bilingualism Quotient (BQ) construct, discuss the idea of a BQ relative to the well-established Intelligence Quotient (IQ), and include recommendations for next steps. We conclude that though the variability in language backgrounds and approaches to defining bilingualism presents significant challenges, concerted efforts to systematize and synthesize research across the field may enable the construction of a valid and generalizable index of multilingual experience.
Collapse
|
47
|
Klarendić M, Gorišek VR, Granda G, Avsenik J, Zgonc V, Kojović M. Auditory agnosia with anosognosia. Cortex 2021; 137:255-270. [PMID: 33647851 DOI: 10.1016/j.cortex.2020.12.025] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 04/17/2020] [Accepted: 12/14/2020] [Indexed: 10/22/2022]
Abstract
A 66-year-old right-handed female medical doctor suffered two consecutive cardioembolic strokes, initially affecting the right frontal lobe and the right insula, followed by a lesion in the left temporal lobe. The patient presented with the distinctive phenomenology of general auditory agnosia with anosognosia for the deficit. She did not understand verbal requests, and her answers to oral questions were fluent but unrelated to the topic. However, she was able to correctly answer written questions, name objects, and fluently describe their purpose, which is characteristic of verbal auditory agnosia. She was also unable to recognise environmental sounds or to recognise and repeat any melody. These inabilities represent environmental sound agnosia and amusia, respectively. Surprisingly, she was not aware of the problem, not asking any questions regarding her symptoms and avoiding discussing her inability to understand spoken language, which is indicative of anosognosia. The deficits in our patient followed a distinct pattern of recovery. The verbal auditory agnosia was the first to resolve, followed by environmental sound agnosia. Amusia persisted the longest. The patient was clinically assessed from the first day of symptom onset, and the evolution of symptoms was video documented. We give a detailed account of the patient's behaviour and provide results of audiological and neuropsychological evaluations. We discuss the anatomy of auditory agnosia and anosognosia relevant to the case. This case study may serve to better understand auditory agnosia in clinical settings. It is important to distinguish auditory agnosia from Wernicke's aphasia, because the use of written language may enable normal communication.
Collapse
Affiliation(s)
- Maja Klarendić
- Department of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia
| | - Veronika R Gorišek
- Department of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia
| | - Gal Granda
- Department of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia
| | - Jernej Avsenik
- Department of Neuroradiology, University Medical Centre Ljubljana, Ljubljana, Slovenia
| | - Vid Zgonc
- Department of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia
| | - Maja Kojović
- Department of Neurology, University Medical Centre Ljubljana, Ljubljana, Slovenia.
| |
Collapse
|
48
|
Musical Training and Brain Volume in Older Adults. Brain Sci 2021; 11:brainsci11010050. [PMID: 33466337 PMCID: PMC7824792 DOI: 10.3390/brainsci11010050] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 12/24/2020] [Accepted: 12/28/2020] [Indexed: 12/14/2022] Open
Abstract
Musical practice, including musical training and musical performance, has been found to benefit cognitive function in older adults. Less is known about the role of musical experiences on brain structure in older adults. The present study examined the role of different types of musical behaviors on brain structure in older adults. We administered the Goldsmiths Musical Sophistication Index, a questionnaire that includes questions about a variety of musical behaviors, including performance on an instrument, musical practice, allocation of time to music, musical listening expertise, and emotional responses to music. We demonstrated that musical training, defined as the extent of musical training, musical practice, and musicianship, was positively and significantly associated with the volume of the inferior frontal cortex and parahippocampus. In addition, musical training was positively associated with volume of the posterior cingulate cortex, insula, and medial orbitofrontal cortex. Together, the present study suggests that musical behaviors relate to a circuit of brain regions involved in executive function, memory, language, and emotion. As gray matter often declines with age, our study has promising implications for the positive role of musical practice on aging brain health.
Collapse
|
49
|
Barron HC, Mars RB, Dupret D, Lerch JP, Sampaio-Baptista C. Cross-species neuroscience: closing the explanatory gap. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190633. [PMID: 33190601 PMCID: PMC7116399 DOI: 10.1098/rstb.2019.0633] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/20/2020] [Indexed: 12/17/2022] Open
Abstract
Neuroscience has seen substantial development in non-invasive methods available for investigating the living human brain. However, these tools are limited to coarse macroscopic measures of neural activity that aggregate the diverse responses of thousands of cells. To access neural activity at the cellular and circuit level, researchers instead rely on invasive recordings in animals. Recent advances in invasive methods now permit large-scale recording and circuit-level manipulations with exquisite spatio-temporal precision. Yet, there has been limited progress in relating these microcircuit measures to complex cognition and behaviour observed in humans. Contemporary neuroscience thus faces an explanatory gap between macroscopic descriptions of the human brain and microscopic descriptions in animal models. To close the explanatory gap, we propose adopting a cross-species approach. Despite dramatic differences in the size of mammalian brains, this approach is broadly justified by preserved homology. Here, we outline a three-armed approach for effective cross-species investigation that highlights the need to translate different measures of neural activity into a common space. We discuss how a cross-species approach has the potential to transform basic neuroscience while also benefiting neuropsychiatric drug development where clinical translation has, to date, seen minimal success. This article is part of the theme issue 'Key relationships between non-invasive functional neuroimaging and the underlying neuronal activity'.
Collapse
Affiliation(s)
- Helen C. Barron
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Mansfield Road, Oxford OX1 3TH, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
| | - Rogier B. Mars
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
| | - David Dupret
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Mansfield Road, Oxford OX1 3TH, UK
| | - Jason P. Lerch
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
| | - Cassandra Sampaio-Baptista
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| |
Collapse
|
50
|
Pesek M, Medvešek Š, Podlesek A, Tkalčič M, Marolt M. A Comparison of Human and Computational Melody Prediction Through Familiarity and Expertise. Front Psychol 2020; 11:557398. [PMID: 33362622 PMCID: PMC7756065 DOI: 10.3389/fpsyg.2020.557398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 11/13/2020] [Indexed: 11/16/2022] Open
Abstract
Melody prediction is an important aspect of music listening. The success of prediction, i.e., whether the next note played in a song is the same as the one predicted by the listener, depends on various factors. In this paper, we present two studies assessing how music familiarity and music expertise influence melody prediction in human listeners and, through appropriate data and algorithmic analogues, in computational models. To gather data on human listeners, we designed a melody prediction user study in which familiarity was controlled by two different music collections, while expertise was assessed by adapting the Music Sophistication Index instrument to the Slovenian language. In the second study, we evaluated the melody prediction accuracy of two computational models, the SymCHM and the Implication-Realization model, which differ substantially in how they approach melody prediction. Our results show that both music familiarity and expertise affect the prediction accuracy of human listeners, as well as of computational models.
Collapse
Affiliation(s)
- Matevž Pesek
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
| | - Špela Medvešek
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
| | - Anja Podlesek
- Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
| | - Marko Tkalčič
- Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Koper, Slovenia
| | - Matija Marolt
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|