1. Jeon EK, Driscoll V, Mussoi BS, Scheperle R, Guthe E, Gfeller K, Abbas PJ, Brown CJ. Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training. Ear Hear 2024:00003446-990000000-00316. [PMID: 39044323] [DOI: 10.1097/aud.0000000000001569]
Abstract
OBJECTIVES To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures. DESIGN This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in a music-based auditory training. The training program included both short training sessions completed in the laboratory as well as a set of 12 training sessions that participants completed at home over the course of a month. As part of the training program, study participants listened to a range of different musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training was completed. In Experiment II, 20 CI users participated in a more focused auditory training task: the detection of spectral ripple modulation depth. Training consisted of a single 40-minute session that took place in the laboratory under the supervision of the investigators. Behavioral and physiologic measures of spectral ripple modulation depth detection were obtained immediately pre- and post-training. Data from both experiments were analyzed using mixed linear regressions, paired t tests, correlations, and descriptive statistics. RESULTS In Experiment I, there was a significant improvement in behavioral measures of pitch discrimination after the study participants completed the laboratory and home-based training sessions. There was no significant effect of training on electrophysiologic measures of the auditory N1-P2 onset response and acoustic change complex (ACC). 
There were no significant relationships between electrophysiologic measures and behavioral outcomes after the month-long training. In Experiment II, there was no significant effect of training on the ACC, although there was a small but significant improvement in behavioral spectral ripple modulation depth thresholds after the short-term training. CONCLUSIONS This study demonstrates that auditory training improves spectral cue perception in CI users, with significant perceptual gains observed despite cortical electrophysiological responses like the ACC not reliably predicting training benefits across short- and long-term interventions. Future research should further explore individual factors that may lead to greater benefit from auditory training, in addition to optimization of training protocols and outcome measures, as well as demonstrate the generalizability of these findings.
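Spectral ripple modulation depth detection, the trained task in Experiment II, is typically measured with noise stimuli whose spectral envelope is sinusoidally modulated along a log-frequency axis. A minimal sketch of such a stimulus generator; the parameter names and defaults here are illustrative assumptions, not the study's actual stimulus code:

```python
import numpy as np

def spectral_ripple(dur=0.5, fs=44100, ripples_per_octave=1.0,
                    depth_db=10.0, f_lo=100.0, f_hi=8000.0, seed=0):
    """Noise whose log-magnitude spectrum is sinusoidally rippled
    along a log-frequency (octave) axis.

    depth_db is the peak-to-trough modulation depth in dB; detecting
    this depth is the behavioral measure described in Experiment II.
    """
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)          # position in octaves above f_lo
    ripple_db = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves)
    mag = 10 ** (ripple_db / 20)                   # dB ripple -> linear magnitude
    phase = rng.uniform(0, 2 * np.pi, band.sum())  # random phase = noise carrier
    spec[band] = mag * np.exp(1j * phase)
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))                   # normalize to unit peak
```

Lowering `depth_db` toward 0 dB flattens the spectral envelope; a detection threshold is the smallest depth a listener can distinguish from unmodulated noise.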
Affiliation(s)
- Eun Kyung Jeon
  - Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Virginia Driscoll
  - Department of Music Education and Therapy, East Carolina University, Greenville, North Carolina, USA
- Bruna S Mussoi
  - Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Rachel Scheperle
  - Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Emily Guthe
  - Department of Music Therapy, Cleveland State University, Cleveland, Ohio, USA
- Kate Gfeller
  - Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Paul J Abbas
  - Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
  - Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Carolyn J Brown
  - Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
  - Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
2. Vigl J, Talamini F, Strauss H, Zentner M. Prosodic discrimination skills mediate the association between musical aptitude and vocal emotion recognition ability. Sci Rep 2024; 14:16462. [PMID: 39014043] [PMCID: PMC11252295] [DOI: 10.1038/s41598-024-66889-y]
Abstract
The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
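The mediation claim in Study 3 rests on the standard product-of-coefficients logic: musical aptitude predicts prosodic discrimination (path a), which in turn predicts vocal emotion recognition when controlling for aptitude (path b). A minimal ordinary-least-squares sketch of that indirect effect; the study itself used formal mediation models with significance testing, which this toy version omits:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation estimate.

    a = slope of mediator m regressed on predictor x;
    b = slope of outcome y on m, controlling for x;
    the indirect (mediated) effect is a * b.
    """
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    X1 = np.column_stack([np.ones_like(x), x])       # intercept + x
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])    # intercept + x + m
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b
```

With simulated full mediation (m carries all of x's effect on y), the estimate recovers the product of the generating slopes.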
Affiliation(s)
- Julia Vigl
  - Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Francesca Talamini
  - Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Hannah Strauss
  - Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Marcel Zentner
  - Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
3. Loukas S, Filippa M, de Almeida JS, Boehringer AS, Tolsa CB, Barcos-Munoz F, Grandjean DM, van de Ville D, Hüppi PS. Newborn's neural representation of instrumental and vocal music as revealed by fMRI: A dynamic effective brain connectivity study. Hum Brain Mapp 2024; 45:e26724. [PMID: 39001584] [PMCID: PMC11245569] [DOI: 10.1002/hbm.26724]
Abstract
Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been the focus of an extensive body of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, rather than the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned (10 full-term infants and 35 preterm infants at term-equivalent age; mean gestational age at test = 40.17 weeks, SD = 0.44) using functional magnetic resonance imaging while they listened to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as the seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, whereas during the instrumental condition it co-activates with the visual cortex and the superior frontal cortex. These results show that the vocal stimulus engages sensorimotor aspects of auditory perception and is processed as a more salient stimulus, while the instrumental condition activates higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge about the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches for studying brain function in newborn populations.
Affiliation(s)
- Serafeim Loukas
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
  - Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Manuela Filippa
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
  - Swiss Center for Affective Sciences, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Joana Sa de Almeida
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
- Andrew S Boehringer
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
  - Lemanic Neuroscience Doctoral School, University of Geneva, Geneva, Switzerland
- Cristina Borradori Tolsa
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
- Francisca Barcos-Munoz
  - Division of Pediatric Intensive Care and Neonatology, Department of Women, Children and Adolescents, University Hospital of Geneva, Geneva, Switzerland
- Didier M Grandjean
  - Swiss Center for Affective Sciences, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Dimitri van de Ville
  - Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
  - Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland
- Petra S Hüppi
  - Division of Development and Growth, Department of Pediatrics, University of Geneva, Geneva, Switzerland
4. Zipse L, Gallée J, Shattuck-Hufnagel S. A targeted review of prosodic production in agrammatic aphasia. Neuropsychol Rehabil 2024:1-41. [PMID: 38848458] [DOI: 10.1080/09602011.2024.2362243]
Abstract
It is unclear whether individuals with agrammatic aphasia have particularly disrupted prosody or, in fact, relatively preserved prosody that they can use in a compensatory way. A targeted literature review was undertaken to examine the evidence regarding the capacity of speakers with agrammatic aphasia to produce prosody. The aim was to answer the question: how much prosody can a speaker "do" with limited syntax? The literature was systematically searched for articles examining the production of grammatical prosody in people with agrammatism, yielding 16 studies that were ultimately included in this review. Participant inclusion criteria, spoken language tasks, and analysis procedures vary widely across studies. The evidence indicates that timing aspects of prosody are disrupted in people with agrammatic aphasia, while the use of pitch and amplitude cues is more likely to be preserved in this population. Some, but not all, of these timing differences may be attributable to motor speech programming deficits (apraxia of speech, AOS) rather than to aphasia, as these conditions frequently co-occur. Many of the included studies do not address AOS and its possible role in the observed effects. Finally, the available evidence indicates that even speakers with severe aphasia show a degree of preserved prosody in functional communication.
Affiliation(s)
- Lauryn Zipse
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
- Jeanne Gallée
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
  - Department of Medicine, University of Washington, Seattle, WA, USA
5. Jin X, Zhang L, Wu G, Wang X, Du Y. Compensation or Preservation? Different Roles of Functional Lateralization in Speech Perception of Older Non-musicians and Musicians. Neurosci Bull 2024. [PMID: 38839688] [DOI: 10.1007/s12264-024-01234-x]
Abstract
Musical training can counteract age-related decline in speech perception in noisy environments. However, it remains unclear whether older non-musicians and musicians rely on functional compensation or functional preservation to counteract the adverse effects of aging. This study utilized resting-state functional connectivity (FC) to investigate functional lateralization, a fundamental organizational feature of the brain, in older musicians (OM), older non-musicians (ONM), and young non-musicians (YNM). Results showed that OM outperformed ONM and achieved comparable performance to YNM in speech-in-noise and speech-in-speech tasks. ONM exhibited reduced lateralization relative to YNM in the lateralization index (LI) of intrahemispheric FC (LI_intra) in the cingulo-opercular network (CON) and the LI of interhemispheric heterotopic FC (LI_he) in the language network (LAN). Conversely, OM showed higher neural alignment to YNM (i.e., a more similar lateralization pattern) than ONM in the CON, LAN, frontoparietal network (FPN), dorsal attention network (DAN), and default mode network (DMN), indicating preservation of youth-like lateralization patterns due to musical experience. Furthermore, in ONM, stronger left-lateralized and lower alignment-to-young LI_intra in the somatomotor network (SMN) and DAN, and LI_he in the DMN, correlated with better speech performance, indicating a functional compensation mechanism. In contrast, stronger right-lateralized LI_intra in the FPN and DAN and higher alignment-to-young LI_he in the LAN correlated with better performance in OM, suggesting a functional preservation mechanism. These findings highlight the differential roles of functional preservation and compensation of lateralization in speech perception in noise among elderly individuals with and without musical expertise, offering insights into theories of successful aging through the lens of functional lateralization and speech perception.
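A lateralization index is, in its conventional form, a normalized left-right contrast. The study's specific variants (LI_intra, LI_he) are defined over particular intra- and interhemispheric connection sets, but the generic formula can be sketched, assuming the conventional (L − R) / (L + R) definition:

```python
import numpy as np

def lateralization_index(fc_left, fc_right):
    """Conventional lateralization index: (L - R) / (L + R).

    Returns a value in [-1, 1]; positive = left-lateralized,
    negative = right-lateralized, 0 = fully bilateral.
    Inputs are (arrays of) connectivity strengths for homologous
    left- and right-hemisphere measures.
    """
    fc_left = np.asarray(fc_left, float)
    fc_right = np.asarray(fc_right, float)
    return (fc_left - fc_right) / (fc_left + fc_right)
```

Element-wise operation on arrays makes it easy to compute one LI per network and then correlate the LIs with behavioral scores, as the study does.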
Affiliation(s)
- Xinhu Jin
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Lei Zhang
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Guowei Wu
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiuyi Wang
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Du
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
  - CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China
  - Chinese Institute for Brain Research, Beijing, 102206, China
6. Michel L, Ricou C, Bonnet-Brilhault F, Houy-Durand E, Latinus M. Sounds Pleasantness Ratings in Autism: Interaction Between Social Information and Acoustical Noise Level. J Autism Dev Disord 2024; 54:2148-2157. [PMID: 37118645] [DOI: 10.1007/s10803-023-05989-6]
Abstract
A lack of response to voices and a great interest in music are among the behavioral expressions commonly (self-)reported in Autism Spectrum Disorder (ASD). These atypical interests in vocal and musical sounds could be attributable to different levels of acoustical noise, quantified by the harmonic-to-noise ratio (HNR). No previous study has investigated explicit auditory pleasantness in ASD by comparing vocal and non-vocal sounds in relation to acoustic noise level. The aim of this study was to objectively evaluate auditory pleasantness. Sixteen adults on the autism spectrum and 16 matched neurotypical (NT) adults rated the likeability of vocal and non-vocal sounds with varying HNR levels. A group-by-category interaction in pleasantness judgments revealed that participants on the autism spectrum judged vocal sounds as less pleasant than non-vocal sounds, an effect not found for NT participants. A category-by-HNR-level interaction revealed that participants in both groups rated non-vocal sounds with a high HNR as more pleasant. A significant group-by-HNR interaction revealed that, compared with NT participants, people on the autism spectrum tended to judge sounds with a high HNR as less pleasant and those with a low HNR as more pleasant. The acoustical noise level of sounds alone does not appear to explain the atypical interest in voices and the greater interest in music in ASD.
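The harmonic-to-noise ratio quantifies how much of a sound's energy is periodic versus noisy. A simplified autocorrelation-based estimator in the spirit of the standard (Praat-style) method, not the exact implementation used in the study:

```python
import numpy as np

def hnr_db(x, fs, f0_min=75.0, f0_max=500.0):
    """Rough HNR estimate in dB from the peak of the normalized
    autocorrelation within the plausible pitch-lag range.

    If r is the autocorrelation peak (0..1), the harmonic fraction
    of the energy is r and the noise fraction is 1 - r, giving
    HNR = 10 * log10(r / (1 - r)).
    """
    x = np.asarray(x, float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                               # normalize so r(0) = 1
    lo, hi = int(fs / f0_max), int(fs / f0_min)   # lag range for f0 search
    r_max = float(np.max(ac[lo:hi]))
    r_max = min(max(r_max, 1e-6), 1 - 1e-12)      # keep log argument valid
    return 10 * np.log10(r_max / (1 - r_max))
```

A pure tone yields a high HNR (strongly periodic), white noise a low or negative one, matching the high- versus low-HNR stimulus manipulation described above.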
Affiliation(s)
- Lisa Michel
  - UMR 1253, iBrain, Université de Tours, INSERM, 37000, Tours, France
- Camille Ricou
  - UMR 1253, iBrain, Université de Tours, INSERM, 37000, Tours, France
- Frédérique Bonnet-Brilhault
  - UMR 1253, iBrain, Université de Tours, INSERM, 37000, Tours, France
  - EXAC.T, Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand
  - UMR 1253, iBrain, Université de Tours, INSERM, 37000, Tours, France
  - EXAC.T, Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
- Marianne Latinus
  - UMR 1253, iBrain, Université de Tours, INSERM, 37000, Tours, France
  - Centro de Estudios en Neurociencia Humana y Neuropsicología, Facultad de Psicología, Universidad Diego Portales, Santiago, Chile
7. Arnold CA, Bagg MK, Harvey AR. The psychophysiology of music-based interventions and the experience of pain. Front Psychol 2024; 15:1361857. [PMID: 38800683] [PMCID: PMC11122921] [DOI: 10.3389/fpsyg.2024.1361857]
Abstract
In modern times there is increasing acceptance that music-based interventions are useful aids in the clinical treatment of a range of neurological and psychiatric conditions, including helping to reduce the perception of pain. Indeed, the belief that music, whether listening or performing, can alter human pain experiences has a long history, dating back to the ancient Greeks, and its potential healing properties have long been appreciated by indigenous cultures around the world. The subjective experience of acute or chronic pain is complex, influenced by many intersecting physiological and psychological factors, and it is therefore to be expected that the impact of music therapy on the pain experience may vary from one situation to another, and from one person to another. Where pain persists and becomes chronic, aberrant central processing is a key feature associated with the ongoing pain experience. Nonetheless, beneficial effects of exposure to music on pain relief have been reported across a wide range of acute and chronic conditions, and it has been shown to be effective in neonates, children and adults. In this comprehensive review we examine the various neurochemical, physiological and psychological factors that underpin the impact of music on the pain experience, factors that potentially operate at many levels - the periphery, spinal cord, brainstem, limbic system and multiple areas of cerebral cortex. We discuss the extent to which these factors, individually or in combination, influence how music affects both the quality and intensity of pain, noting that there remains controversy about the respective roles that diverse central and peripheral processes play in this experience. Better understanding of the mechanisms that underlie music's impact on pain perception together with insights into central processing of pain should aid in developing more effective synergistic approaches when music therapy is combined with clinical treatments. 
The ubiquitous nature of music also facilitates application from the therapeutic environment into daily life, for ongoing individual and social benefit.
Affiliation(s)
- Carolyn A. Arnold
  - Department of Anaesthesiology and Perioperative Medicine, Monash University, Melbourne, VIC, Australia
  - Caulfield Pain Management and Research Centre, Alfred Health, Melbourne, VIC, Australia
- Matthew K. Bagg
  - School of Health Sciences, University of Notre Dame Australia, Fremantle, WA, Australia
  - Perron Institute for Neurological and Translational Science, Perth, WA, Australia
  - Centre for Pain IMPACT, Neuroscience Research Institute, Sydney, NSW, Australia
  - Curtin Health Innovation Research Institute, Faculty of Health Sciences, Curtin University, Bentley, WA, Australia
- Alan R. Harvey
  - Perron Institute for Neurological and Translational Science, Perth, WA, Australia
  - School of Human Sciences and Conservatorium of Music, The University of Western Australia, Perth, WA, Australia
8. Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. [PMID: 37851369] [DOI: 10.1111/bjop.12684]
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, remains unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum
  - Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
  - Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer
  - Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
  - Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger
  - Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
  - Voice Research Unit, Friedrich Schiller University, Jena, Germany
  - Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
9. Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. [PMID: 38805517] [PMCID: PMC11132470] [DOI: 10.1371/journal.pbio.3002631]
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially synthesized, ambiguous noise signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
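The peak AM frequency that drove listeners' judgments can be estimated from a signal's modulation spectrum: extract the amplitude envelope and locate its dominant frequency. A sketch assuming a simple Hilbert-envelope analysis (the study's noise-synthesis and analysis pipeline is more elaborate):

```python
import numpy as np
from scipy.signal import hilbert

def peak_am_frequency(x, fs, am_max=32.0):
    """Peak of the modulation spectrum: magnitude FFT of the Hilbert
    amplitude envelope, searched below am_max Hz.

    Speech envelopes typically peak around 4-5 Hz, musical ones
    around 1-2 Hz, which is the contrast the abstract describes.
    """
    env = np.abs(hilbert(x))          # amplitude envelope
    env = env - np.mean(env)          # remove DC so it doesn't dominate
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    mask = (freqs > 0.5) & (freqs < am_max)
    return freqs[mask][np.argmax(spec[mask])]
```

Applied to a carrier amplitude-modulated at a known rate, the function recovers that rate.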
Affiliation(s)
- Andrew Chang
  - Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
  - Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
  - Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- David Poeppel
  - Department of Psychology, New York University, New York, New York, United States of America
  - Ernst Struengmann Institute for Neuroscience, Frankfurt am Main, Germany
  - Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
  - Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
10. Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. Sci Adv 2024; 10:eadk0010. [PMID: 38363839] [PMCID: PMC10871532] [DOI: 10.1126/sciadv.adk0010]
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
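The three melodic dimensions can be illustrated on a symbolic note sequence: absolute pitch is the note itself, pitch change is the interval from the preceding note, and expectation can be proxied by each note's surprisal under a statistical model of prior context. A toy sketch; the study used continuous pitch and a trained expectation model, so every choice here, including the add-one-smoothed bigram surprisal, is an illustrative stand-in:

```python
import numpy as np
from collections import defaultdict

def melodic_dimensions(midi_notes):
    """Per-note features analogous to the study's three dimensions:
    absolute pitch, pitch change (interval from the previous note),
    and expectation (bigram surprisal in bits, estimated from the
    melody itself with add-one smoothing)."""
    notes = list(midi_notes)
    pitch = notes
    change = [0] + [b - a for a, b in zip(notes, notes[1:])]
    # bigram counts over observed transitions
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(notes, notes[1:]):
        counts[a][b] += 1
    vocab = set(notes)
    surprisal = [0.0]                      # no context for the first note
    for a, b in zip(notes, notes[1:]):
        p = (counts[a][b] + 1) / (sum(counts[a].values()) + len(vocab))
        surprisal.append(-float(np.log2(p)))
    return pitch, change, surprisal
```

For the phrase C-D-E-D-C (MIDI 60, 62, 64, 62, 60), the interval track is [0, 2, 2, -2, -2], and the first transition has surprisal 1 bit under the smoothed bigram model.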
Affiliation(s)
- Narayan Sankaran
  - Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Matthew K. Leonard
  - Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Frederic Theunissen
  - Department of Psychology, University of California, Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
- Edward F. Chang
  - Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
|
11
|
T. Zaatar M, Alhakim K, Enayeh M, Tamer R. The transformative power of music: Insights into neuroplasticity, health, and disease. Brain Behav Immun Health 2024; 35:100716. [PMID: 38178844 PMCID: PMC10765015 DOI: 10.1016/j.bbih.2023.100716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 12/04/2023] [Accepted: 12/08/2023] [Indexed: 01/06/2024] Open
Abstract
Music is a universal language that can elicit profound emotional and cognitive responses. In this literature review, we explore the intricate relationship between music and the brain, from how it is decoded by the nervous system to its therapeutic potential in various disorders. Music engages a diverse network of brain regions and circuits, including sensory-motor processing, cognitive, memory, and emotional components. Music-induced brain network oscillations occur in specific frequency bands, and listening to one's preferred music can grant easier access to these brain functions. Moreover, music training can bring about structural and functional changes in the brain, and studies have shown its positive effects on social bonding, cognitive abilities, and language processing. We also discuss how music therapy can be used to retrain impaired brain circuits in different disorders. Understanding how music affects the brain can open up new avenues for music-based interventions in healthcare, education, and wellbeing.
Affiliation(s)
- Muriel T. Zaatar
- Department of Biological and Physical Sciences, American University in Dubai, Dubai, United Arab Emirates
12
Shan T, Cappelloni MS, Maddox RK. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024; 14:789. PMID: 38191488; PMCID: PMC10774448; DOI: 10.1038/s41598-023-50438-0.
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar but with remaining differences. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not subcortical level.
Affiliation(s)
- Tong Shan
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Madeline S Cappelloni
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ross K Maddox
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
13
Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024; 2:2. PMID: 39242963; PMCID: PMC11332030; DOI: 10.1038/s44271-023-00053-6.
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping) were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences. Finger-tapping was advantaged compared to whispering at slow but not at faster rates, with synchronization being effector-dependent at slow, but highly correlated at faster rates. Perception of speech and music was better at different rates and predicted by a fast general and a slow finger-tapping synchronization component. Our data suggests partially independent rhythmic timing mechanisms for speech and music, possibly related to a differential recruitment of cortical motor circuitry.
Affiliation(s)
- Alice Vivien Barchet
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Molly J Henry
- Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi
- Music and Audio Research Laboratory, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA.
14
Wesseldijk LW, Gordon RL, Mosing MA, Ullén F. Music and verbal ability - a twin study of genetic and environmental associations. Psychol Aesthet Creat Arts 2023; 17:675-681. PMID: 38269365; PMCID: PMC10805386; DOI: 10.1037/aca0000401.
Abstract
Musical aptitude and music training are associated with language-related cognitive outcomes, even when controlling for general intelligence. However, genetic and environmental influences on these associations have not been studied, and it remains unclear whether music training can causally increase verbal ability. In a sample of 1,336 male twins, we tested the associations between verbal ability measured at time of conscription at age 18 and two music related variables: overall musical aptitude and total amount of music training before the age of 18. We estimated the amount of specific genetic and environmental influences on the association between verbal ability and musical aptitude, over and above the factors shared with general intelligence, using classical twin modelling. Further, we tested whether music training could causally influence verbal ability using a co-twin-control analysis. Musical aptitude and music training were significantly associated with verbal ability. Controlling for general intelligence only slightly attenuated the correlations. The partial association between musical aptitude and verbal ability, corrected for general intelligence, was mostly explained by shared genetic factors (50%) and non-shared environmental influences (35%). The co-twin-control-analysis gave no support for causal effects of early music training on verbal ability at age 18. Overall, our findings in a sizeable population sample converge with known associations between the music and language domains, while results from twin modelling suggested that this reflected a shared underlying aetiology rather than causal transfer.
Affiliation(s)
- Laura W. Wesseldijk
- Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden
- Department of Psychiatry, Amsterdam UMC, University of Amsterdam, Meibergdreef 5, 1105 AZ Amsterdam, The Netherlands
- Reyna L. Gordon
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center
- Department of Psychology, Vanderbilt University
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center
- Miriam A. Mosing
- Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Nobels v 12A, 171 77 Stockholm, Sweden
- Fredrik Ullén
- Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden
15
Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820; DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK.
16
Al Roumi F, Planton S, Wang L, Dehaene S. Brain-imaging evidence for compression of binary sound sequences in human memory. eLife 2023; 12:e84376. PMID: 37910588; PMCID: PMC10619979; DOI: 10.7554/elife.84376.
Abstract
According to the language-of-thought hypothesis, regular sequences are compressed in human memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magneto-encephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. Occasional deviant sounds probed the participants' knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to the complexity derived from the minimal description length in our formal language. Furthermore, activity should increase with complexity for learned sequences, and decrease with complexity for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal, and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures.
Affiliation(s)
- Fosca Al Roumi
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Samuel Planton
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Liping Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Collège de France, Université Paris Sciences Lettres (PSL), Paris, France
17
Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. bioRxiv 2023:2023.10.17.562771. PMID: 37905047; PMCID: PMC10614915; DOI: 10.1101/2023.10.17.562771.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex.
Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.
18
Moisseinen N, Särkämö T, Kauramäki J, Kleber B, Sihvonen AJ, Martínez-Molina N. Differential effects of ageing on the neural processing of speech and singing production. Front Aging Neurosci 2023; 15:1236971. PMID: 37731954; PMCID: PMC10507273; DOI: 10.3389/fnagi.2023.1236971.
Abstract
Background: Understanding healthy brain ageing has become vital as populations are ageing rapidly and age-related brain diseases are becoming more common. In normal brain ageing, speech processing undergoes functional reorganisation involving reductions of hemispheric asymmetry and overactivation in the prefrontal regions. However, little is known about how these changes generalise to other vocal production, such as singing, and how they are affected by associated cognitive demands.
Methods: The present cross-sectional fMRI study systematically maps the neural correlates of vocal production across adulthood (N=100, age 21-88 years) using a balanced 2×3 design in which tasks varied in modality (speech: proverbs / singing: song phrases) and cognitive demand (repetition / completion from memory / improvisation).
Results: In speech production, ageing was associated with decreased left pre- and postcentral activation across tasks and with increased bilateral angular and right inferior temporal and fusiform activation in the improvisation task. In singing production, ageing was associated with increased activation in medial and bilateral prefrontal and parietal regions in the completion task, whereas the other tasks showed no ageing effects. Direct comparisons between the modalities showed larger age-related activation changes in speech than in singing across tasks, including a larger left-to-right shift in lateral prefrontal regions in the improvisation task.
Conclusion: The present results suggest that the brain's singing network undergoes differential functional reorganisation in normal ageing compared to the speech network, particularly during a task with high executive demand. These findings are relevant for understanding the effects of ageing on vocal production as well as how singing can support communication in healthy ageing and neurological rehabilitation.
Affiliation(s)
- Nella Moisseinen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, Centre of Excellence in Music, Mind, Body and the Brain, University of Helsinki, Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, Centre of Excellence in Music, Mind, Body and the Brain, University of Helsinki, Helsinki, Finland
- Jaakko Kauramäki
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, Centre of Excellence in Music, Mind, Body and the Brain, University of Helsinki, Helsinki, Finland
- Boris Kleber
- Centre for Music in the Brain, Department of Clinical Medicine, Faculty of Health, Aarhus University, Aarhus, Denmark
- Aleksi J. Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, Centre of Excellence in Music, Mind, Body and the Brain, University of Helsinki, Helsinki, Finland
- School of Health and Rehabilitation Sciences, Centre for Clinical Research, University of Queensland, Brisbane, QLD, Australia
- Department of Neurology, Helsinki University Hospital, Helsinki, Finland
- Noelia Martínez-Molina
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, Centre of Excellence in Music, Mind, Body and the Brain, University of Helsinki, Helsinki, Finland
- Department of Information and Communication Technologies, Centre for Brain and Cognition, University Pompeu Fabra, Barcelona, Spain
19
Bellier L, Llorens A, Marciano D, Gunduz A, Schalk G, Brunner P, Knight RT. Music can be reconstructed from human auditory cortex activity using nonlinear decoding models. PLoS Biol 2023; 21:e3002176. PMID: 37582062; PMCID: PMC10427021; DOI: 10.1371/journal.pbio.3002176.
Abstract
Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
Affiliation(s)
- Ludovic Bellier
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Anaïs Llorens
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Déborah Marciano
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Aysegul Gunduz
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida, United States of America
- Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, New York, United States of America
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, New York, United States of America
- Department of Neurosurgery, Washington University School of Medicine, St. Louis, Missouri, United States of America
- National Center for Adaptive Neurotechnologies, Albany, New York, United States of America
- Robert T. Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America
20
Izen SC, Cassano-Coleman RY, Piazza EA. Music as a window into real-world communication. Front Psychol 2023; 14:1012839. PMID: 37496799; PMCID: PMC10368476; DOI: 10.3389/fpsyg.2023.1012839.
Abstract
Communication has been studied extensively in the context of speech and language. While speech is tremendously effective at transferring ideas between people, music is another communicative mode that has a unique power to bring people together and transmit a rich tapestry of emotions, through joint music-making and listening in a variety of everyday contexts. Research has begun to examine the behavioral and neural correlates of the joint action required for successful musical interactions, but it has yet to fully account for the rich, dynamic, multimodal nature of musical communication. We review the current literature in this area and propose that naturalistic musical paradigms will open up new ways to study communication more broadly.
21
Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. PMID: 37005063; PMCID: PMC10505454; DOI: 10.1093/cercor/bhad087.
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
- Psychology & Language Sciences, UCL, London, WCN1 1PF, United Kingdom
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
22
Liu J, Hilton CB, Bergelson E, Mehr SA. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol 2023; 33:1916-1925.e4. PMID: 37105166; PMCID: PMC10306420; DOI: 10.1016/j.cub.2023.03.067.
Abstract
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.
Affiliation(s)
- Jingxuan Liu
- Columbia Business School, Columbia University, 665 W 130th Street, New York, NY 10027, USA; Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA.
- Courtney B Hilton
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand
- Elika Bergelson
- Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Samuel A Mehr
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand.
23
Theunissen F. Language and music: Singing voices and music talent. Curr Biol 2023; 33:R418-R420. PMID: 37220737; DOI: 10.1016/j.cub.2023.03.086.
Abstract
Native speakers of tonal languages show enhanced musical melody perception but diminished rhythm abilities. This effect has now been rigorously demonstrated in a new study that tested the musical IQ of half a million human participants across the globe.
Affiliation(s)
- Frédéric Theunissen
- University of California Berkeley, Department of Psychology, Integrative Biology and Helen Wills Neuroscience Institute, Berkeley, CA 94720, USA.
24
Wang L, Ong JH, Ponsot E, Hou Q, Jiang C, Liu F. Mental representations of speech and musical pitch contours reveal a diversity of profiles in autism spectrum disorder. Autism 2023; 27:629-646. PMID: 35848413; PMCID: PMC10074762; DOI: 10.1177/13623613221111207.
Abstract
LAY ABSTRACT: As a key auditory attribute of sounds, pitch is ubiquitous in our everyday listening experience involving language, music and environmental sounds. Given its critical role in auditory processing related to communication, numerous studies have investigated pitch processing in autism spectrum disorder. However, the findings have been mixed, reporting either enhanced, typical or impaired performance among autistic individuals. By investigating top-down comparisons of internal mental representations of pitch contours in speech and music, this study shows for the first time that, while autistic individuals exhibit diverse profiles of pitch processing compared to non-autistic individuals, their mental representations of pitch contours are typical across domains. These findings suggest that pitch-processing mechanisms are shared across domains in autism spectrum disorder and provide theoretical implications for using music to improve speech for those autistic individuals who have language problems.
Affiliation(s)
- Li Wang
- University of Reading, UK
- The Chinese University of Hong Kong, Hong Kong
- Qingqi Hou
- Nanjing Normal University of Special Education, China
25
Auditory Electrophysiological and Perceptual Measures in Student Musicians with High Sound Exposure. Diagnostics (Basel) 2023; 13:934. [PMID: 36900080 PMCID: PMC10000734 DOI: 10.3390/diagnostics13050934]
Abstract
This study aimed to determine (a) the influence of noise exposure background (NEB) on peripheral and central auditory system functioning and (b) the influence of NEB on speech recognition in noise in student musicians. Twenty non-musician students with self-reported low NEB and 18 student musicians with self-reported high NEB completed a battery of tests consisting of physiological measures, including auditory brainstem responses (ABRs) at three stimulus rates (11.3, 51.3, and 81.3 Hz) and the P300, and behavioral measures, including conventional and extended high-frequency audiometry and the consonant-nucleus-consonant (CNC) word test and AzBio sentence test for assessing speech perception in noise at -9, -6, -3, 0, and +3 dB signal-to-noise ratios (SNRs). NEB was negatively associated with performance on the CNC test at all five SNRs and with performance on the AzBio test at 0 dB SNR. No effect of NEB was found on the amplitude or latency of the P300 or on ABR wave I amplitude. Larger datasets spanning a wider range of NEB, together with longitudinal measurements, are needed to clarify the influence of NEB on word recognition in noise and the specific cognitive processes that contribute to it.
26
MacGregor C, Ruth N, Müllensiefen D. Development and validation of the first adaptive test of emotion perception in music. Cogn Emot 2023; 37:284-302. [PMID: 36592153 DOI: 10.1080/02699931.2022.2162003]
Abstract
The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is "happier", for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
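The adaptive framework described here rests on Item Response Theory. As a hedged illustration of that machinery (hypothetical item bank and function names, not the authors' implementation), a two-parameter logistic (2PL) model gives the probability of a correct response, and an adaptive test presents the item whose difficulty best matches the current ability estimate:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a test-taker with ability theta
    answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, items):
    """Adaptive selection: choose the item whose difficulty is closest
    to the current ability estimate (for a fixed discrimination this
    maximizes Fisher information)."""
    return min(items, key=lambda item: abs(item["b"] - theta))

# Hypothetical three-item bank (a = discrimination, b = difficulty).
bank = [{"name": "easy", "a": 1.2, "b": -1.0},
        {"name": "medium", "a": 1.0, "b": 0.0},
        {"name": "hard", "a": 1.5, "b": 1.4}]

print(round(p_correct(0.0, 1.0, 0.0), 2))  # ability equal to difficulty -> 0.5
print(next_item(0.2, bank)["name"])
```

In an actual adaptive run, the ability estimate theta would be updated after each response before the next item is selected.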
Affiliation(s)
- Chloe MacGregor
- Department of Psychology, Goldsmiths, University of London, London, England
- Nicolas Ruth
- Institute for Cultural Management and Media, University of Music and Performing Arts Munich, München, Germany
27
Garnett EO, McAuley JD, Wieland EA, Chow HM, Zhu DC, Dilley LC, Chang SE. Auditory rhythm discrimination in adults who stutter: An fMRI study. Brain Lang 2023; 236:105219. [PMID: 36577315 DOI: 10.1016/j.bandl.2022.105219]
Abstract
Rhythm perception deficits have been linked to neurodevelopmental disorders affecting speech and language. Children who stutter have shown poorer rhythm discrimination and attenuated functional connectivity in rhythm-related brain areas, which may negatively impact timing control required for speech. It is unclear whether adults who stutter (AWS), who are likely to have acquired compensatory adaptations in response to rhythm processing/timing deficits, are similarly affected. We compared rhythm discrimination in AWS and controls (total n = 36) during fMRI in two matched conditions: simple rhythms that consistently reinforced a periodic beat, and complex rhythms that did not (requiring greater reliance on internal timing). Consistent with an internal beat deficit hypothesis, behavioral results showed poorer complex rhythm discrimination for AWS than controls. In AWS, greater stuttering severity was associated with poorer rhythm discrimination. AWS showed increased activity within beat-based timing regions and increased functional connectivity between putamen and cerebellum (supporting interval-based timing) for simple rhythms.
Affiliation(s)
- Emily O Garnett
- University of Michigan, Rachel Upjohn Building, 4250 Plymouth Rd., Ann Arbor, MI 48109, USA.
- J Devin McAuley
- Michigan State University, 619 Red Cedar Rd, East Lansing, MI 48864, USA
- Ho Ming Chow
- University of Michigan, Rachel Upjohn Building, 4250 Plymouth Rd., Ann Arbor, MI 48109, USA; University of Delaware, Tower at STAR, 100 Discovery Blvd, Newark, DE 19713, USA
- David C Zhu
- Michigan State University, Radiology Building, 846 Service Road, East Lansing, MI 48824, USA
- Laura C Dilley
- Michigan State University, 619 Red Cedar Rd, East Lansing, MI 48864, USA
- Soo-Eun Chang
- University of Michigan, Rachel Upjohn Building, 4250 Plymouth Rd., Ann Arbor, MI 48109, USA
28
Coetzee JP, Johnson MA, Lee Y, Wu AD, Iacoboni M, Monti MM. Dissociating Language and Thought in Human Reasoning. Brain Sci 2022; 13:67. [PMID: 36672048 PMCID: PMC9856203 DOI: 10.3390/brainsci13010067]
Abstract
What is the relationship between language and complex thought? In the context of deductive reasoning there are two main views. Under the first, which we label here the language-centric view, language is central to the syntax-like combinatorial operations of complex reasoning. Under the second, which we label here the language-independent view, these operations are dissociable from the mechanisms of natural language. We applied continuous theta burst stimulation (cTBS), a form of noninvasive neuromodulation, to healthy adult participants to transiently inhibit a subregion of Broca's area (left BA44) associated in prior work with parsing the syntactic relations of natural language. We similarly inhibited a subregion of dorsomedial frontal cortex (left medial BA8) that has been associated with core features of logical reasoning. There was a significant interaction between task and stimulation site. Post hoc tests revealed that performance on a linguistic reasoning task, but not a deductive reasoning task, was significantly impaired after inhibition of left BA44, whereas performance on a deductive reasoning task, but not a linguistic reasoning task, was decreased (though not significantly) after inhibition of left medial BA8. Subsequent linear contrasts supported this pattern. These novel results suggest that deductive reasoning may be dissociable from linguistic processes in the adult human brain, consistent with the language-independent view.
Affiliation(s)
- John P. Coetzee
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Road, Stanford, CA 94305, USA
- VA Palo Alto Health Care System, Polytrauma Division, 3801 Miranda Avenue, Palo Alto, CA 94304, USA
- Micah A. Johnson
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Youngzie Lee
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Allan D. Wu
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Brain Research Institute (BRI), University of California Los Angeles, Los Angeles, CA 90095, USA
- Marco Iacoboni
- Brain Research Institute (BRI), University of California Los Angeles, Los Angeles, CA 90095, USA
- Ahmanson-Lovelace Brain Mapping Center, University of California Los Angeles, Los Angeles, CA 90095, USA
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Martin M. Monti
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Brain Research Institute (BRI), University of California Los Angeles, Los Angeles, CA 90095, USA
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA 90095, USA
- Brain Injury Research Center (BIRC), Department of Neurosurgery, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA
- Correspondence: ; Tel.: +1-310-825-8546
29
Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. Neurobiol Lang (Camb) 2022; 3:615-664. [PMID: 36742012 PMCID: PMC9893227 DOI: 10.1162/nol_a_00079]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
30
Neves L, Correia AI, Castro SL, Martins D, Lima CF. Does music training enhance auditory and linguistic processing? A systematic review and meta-analysis of behavioral and brain evidence. Neurosci Biobehav Rev 2022; 140:104777. [PMID: 35843347 DOI: 10.1016/j.neubiorev.2022.104777]
Abstract
It is often claimed that music training improves auditory and linguistic skills. Results of individual studies are mixed, however, and most evidence is correlational, precluding inferences of causation. Here, we evaluated data from 62 longitudinal studies that examined whether music training programs affect behavioral and brain measures of auditory and linguistic processing (N = 3928). For the behavioral data, a multivariate meta-analysis revealed a small positive effect of music training on both auditory and linguistic measures, regardless of the type of assignment (random vs. non-random), training (instrumental vs. non-instrumental), and control group (active vs. passive). The trim-and-fill method provided suggestive evidence of publication bias, but meta-regression methods (PET-PEESE) did not. For the brain data, a narrative synthesis also documented benefits of music training, namely for measures of auditory processing and for measures of speech and prosody processing. Thus, the available literature provides evidence that music training produces small neurobehavioral enhancements in auditory and linguistic processing, although future studies are needed to confirm that such enhancements are not due to publication bias.
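The multivariate meta-analysis summarized above pools effect sizes across studies. As a generic, simplified sketch of the underlying arithmetic (fixed-effect inverse-variance pooling with made-up numbers, not data or code from the review):

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's
    effect size by 1/variance; return the pooled estimate and its
    standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized mean differences from three training studies.
est, se = pooled_effect([0.30, 0.10, 0.25], [0.04, 0.02, 0.05])
print(round(est, 3), round(se, 3))  # a small positive pooled effect
```

A random-effects model (as typically used when heterogeneity is expected) would additionally add a between-study variance component to each weight.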
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
- Daniel Martins
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK; NIHR Maudsley Biomedical Research Centre (BRC), South London and Maudsley NHS Foundation Trust, London, UK
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal.
31
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784 DOI: 10.1016/j.neuroimage.2022.119310]
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a left-hemisphere bias for poem processing and a right-hemisphere bias for song processing. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
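The melodic measure used above, the proportion of significant autocorrelations (PSA), can be approximated from a pitch track. A simplified sketch (assuming a rough white-noise significance bound; not the authors' exact procedure):

```python
def autocorr(x, lag):
    """Sample autocorrelation of sequence x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def psa(pitch, max_lag):
    """Proportion of lags 1..max_lag whose autocorrelation exceeds a
    rough white-noise significance bound of +/- 1.96 / sqrt(N)."""
    bound = 1.96 / len(pitch) ** 0.5
    sig = sum(1 for lag in range(1, max_lag + 1)
              if abs(autocorr(pitch, lag)) > bound)
    return sig / max_lag

# A strongly periodic pitch contour yields a high PSA.
import math
melody = [math.sin(2 * math.pi * i / 10) for i in range(200)]
print(psa(melody, 20))
```

The intuition matches the study's claim: sung settings, being more periodic in pitch, should show higher PSA than their spoken counterparts.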
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany.
- Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
32
Rupp K, Hect JL, Remick M, Ghuman A, Chandrasekaran B, Holt LL, Abel TJ. Neural responses in human superior temporal cortex support coding of voice representations. PLoS Biol 2022; 20:e3001675. [PMID: 35900975 PMCID: PMC9333263 DOI: 10.1371/journal.pbio.3001675]
Abstract
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction. Voice perception occurs via specialized networks in higher order auditory cortex, but how voice features are encoded remains a central unanswered question. Using human intracerebral recordings of auditory cortex, this study provides evidence for categorical encoding of voice.
Affiliation(s)
- Kyle Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Madison Remick
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Avniel Ghuman
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Lori L. Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
33
Criscuolo A, Pando-Naude V, Bonetti L, Vuust P, Brattico E. An ALE meta-analytic review of musical expertise. Sci Rep 2022; 12:11726. [PMID: 35821035 PMCID: PMC9276732 DOI: 10.1038/s41598-022-14959-4]
Abstract
Through long-term training, music experts acquire complex and specialized sensorimotor skills, which are paralleled by continuous neuro-anatomical and -functional adaptations. The underlying neuroplasticity mechanisms have been extensively explored in decades of research in music, cognitive, and translational neuroscience. However, the absence of a comprehensive review and quantitative meta-analysis has prevented this plethora of variegated findings from converging into a unified picture of the neuroanatomy of musical expertise. Here, we performed a comprehensive neuroimaging meta-analysis of publications investigating neuro-anatomical and -functional differences between musicians (M) and non-musicians (NM). Eighty-four studies were included in the qualitative synthesis. From these, 58 publications were included in coordinate-based meta-analyses using the anatomic/activation likelihood estimation (ALE) method. This comprehensive approach delivers a coherent cortico-subcortical network encompassing sensorimotor and limbic regions bilaterally. In particular, M exhibited higher volume/activity in auditory, sensorimotor, interoceptive, and limbic brain areas and lower volume/activity in parietal areas as opposed to NM. Notably, we reveal topographical (dis-)similarities between the identified functional and anatomical networks and characterize their link to various cognitive functions by means of meta-analytic connectivity modelling. Overall, we effectively synthesized decades of research in the field and provide a consistent and controversy-free picture of the neuroanatomy of musical expertise.
Affiliation(s)
- Antonio Criscuolo
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Victor Pando-Naude
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark.
- Leonardo Bonetti
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Center for Eudaimonia and Human Flourishing, Department of Psychiatry, University of Oxford, Oxford, UK
- Peter Vuust
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Elvira Brattico
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
34
Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, Schalk G, Kanwisher N. A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. [PMID: 35196507 PMCID: PMC9092957 DOI: 10.1016/j.cub.2022.01.069]
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Institute, Columbia University, New York, NY, USA; HHMI Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France; Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Jenelle Feather
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
- Dana Boebinger
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, NY, USA; National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
- Anthony Ritaccio
- Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurology, Mayo Clinic, Jacksonville, FL, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
| | - Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, NY, USA
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
35
Williams JA, Margulis EH, Nastase SA, Chen J, Hasson U, Norman KA, Baldassano C. High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music. J Cogn Neurosci 2022; 34:699-714. [PMID: 35015874 PMCID: PMC9169871 DOI: 10.1162/jocn_a_01815] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial prefrontal cortex, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
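The model-to-annotation comparison described above (matching HMM-derived boundaries against human annotations) can be sketched as a simple tolerance check. The boundary times and the 3-second tolerance below are illustrative assumptions, not the study's actual data, parameters, or code:

```python
import numpy as np

def boundary_match_score(model_bounds, human_bounds, tolerance=3.0):
    """Fraction of model-identified event boundaries that fall within
    `tolerance` seconds of at least one human-annotated boundary."""
    model_bounds = np.asarray(model_bounds, dtype=float)
    human_bounds = np.asarray(human_bounds, dtype=float)
    if model_bounds.size == 0:
        return 0.0
    # For each model boundary, distance to the nearest human annotation.
    dists = np.min(np.abs(model_bounds[:, None] - human_bounds[None, :]), axis=1)
    return float(np.mean(dists <= tolerance))

# Illustrative boundary times (seconds into an excerpt), not real data.
model = [12.0, 31.5, 48.0, 66.2]
human = [11.0, 30.0, 65.0]
score = boundary_match_score(model, human, tolerance=3.0)  # 0.75
```

In practice such a score would be compared against a null distribution (e.g., from shuffled or randomly placed boundaries) to establish significance, as the authors do with their permutation-based matching analysis.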
36
Nguyen DD, Chacon AM, Novakovic D, Hodges NJ, Carding PN, Madill C. Pitch Discrimination Testing in Patients with a Voice Disorder. J Clin Med 2022; 11:584. [PMID: 35160036 PMCID: PMC8836960 DOI: 10.3390/jcm11030584] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/17/2022] [Accepted: 01/18/2022] [Indexed: 02/01/2023] Open
Abstract
Auditory perception plays an important role in voice control. Pitch discrimination (PD) is a key index of auditory perception and is influenced by a variety of factors. Little is known about the potential effects of voice disorders on PD and whether PD testing can differentiate people with and without a voice disorder. We thus evaluated PD in a voice-disordered group (n = 71) and a non-voice-disordered control group (n = 80). The voice disorders included muscle tension dysphonia and neurological voice disorders, and all participants underwent PD testing as part of a comprehensive voice assessment. Percentage of accurate responses and PD threshold were compared across groups. PD percentage accuracy was significantly lower in the voice-disordered group than in the control group, irrespective of musical background. Participants with voice disorders also required a larger PD threshold to correctly discriminate pitch differences. The mean PD threshold significantly discriminated the voice-disordered groups from the control group. These results have implications for voice control and the pathogenesis of voice disorders. They support the inclusion of PD testing during comprehensive voice assessment and throughout the treatment process for patients with voice disorders.
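PD thresholds of the kind compared above are commonly estimated with an adaptive procedure. The sketch below implements a generic two-down/one-up staircase (which converges near 70.7% correct); it illustrates the general method only and is not the assessment protocol used in this study — the step size, floor, and starting difference are made-up values:

```python
def staircase_update(delta, correct, streak, step=0.5, min_delta=0.1):
    """Two-down/one-up staircase: shrink the pitch difference `delta`
    (e.g., in semitones) after two consecutive correct responses,
    enlarge it after any incorrect response."""
    if correct:
        streak += 1
        if streak == 2:                       # two in a row: harder trial
            delta = max(min_delta, delta - step)
            streak = 0
    else:                                      # miss: easier trial
        delta = delta + step
        streak = 0
    return delta, streak

# Illustrative sequence: two hits, then a miss.
delta, streak = 2.0, 0
delta, streak = staircase_update(delta, True, streak)   # unchanged: 2.0
delta, streak = staircase_update(delta, True, streak)   # drops to 1.5
delta, streak = staircase_update(delta, False, streak)  # back up to 2.0
```

The threshold estimate is then typically taken as the mean of the last several reversal points of `delta`.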
Affiliation(s)
- Duy Duong Nguyen
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- National Hospital of Otorhinolaryngology, Hanoi 11519, Vietnam
- Antonia M. Chacon
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- Daniel Novakovic
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
- The Canterbury Hospital, Campsie, NSW 2194, Australia
- Nicola J. Hodges
- School of Kinesiology, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Paul N. Carding
- Faculty of Health and Life Sciences, Oxford Institute of Nursing, Midwifery and Allied Health Research, Oxford OX3 0BP, UK
- Catherine Madill
- Voice Research Laboratory, Discipline of Speech Pathology, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia
37
Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. [PMID: 35058741 PMCID: PMC8763673 DOI: 10.3389/fnins.2021.764342] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/22/2021] [Indexed: 12/05/2022] Open
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech are intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs differing significantly from the normal distribution and falling around the ∼30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the subscales Musical Perception and Musical Training in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception doing so only in the HIGHs.
Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
Affiliation(s)
- Johanna M. Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Department of Psychology, New York University, New York, NY, United States
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
38
Pino O, Romano G. Engagement and Arousal effects in predicting the increase of cognitive functioning following a neuromodulation program. ACTA BIO-MEDICA : ATENEI PARMENSIS 2022; 93:e2022248. [PMID: 35775751 PMCID: PMC9335441 DOI: 10.23750/abm.v93i3.13145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 04/22/2022] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND AIM Research in the field of Brain-Computer Interfaces (BCIs) has increased exponentially over the past few years, demonstrating their effectiveness and application in several areas. The main purpose of the present paper was to explore the relevance of user engagement during interaction with a BCI prototype (Neuro-Upper, NU), which aimed at brainwave synchronization through audio-visual entrainment, in the improvement of cognitive performance. METHODS This paper presents findings on data collected from a sample of 18 subjects with clinical disorders who completed about 55 consecutive sessions of 30 min of audio-visual stimulation. The relationship between engagement and improvement of cognitive function (measured through the Intelligence Quotient, IQ) during NU neuromodulation was evaluated through the Index of Cognitive Engagement (ICE), measured by the Pope ratio [Beta / (Alpha + Theta)], and Arousal [(High Beta + Low Beta) / (High Alpha + Low Alpha)]. RESULTS A significant correlation between engagement and IQ improvement emerged but, as expected, no correlation between arousal and IQ improvement. CONCLUSIONS Future research aiming at clarifying the role of arousal in psychological disorders and related symptoms will be essential.
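Both indices quoted above are plain band-power ratios, so they are straightforward to compute once band powers have been extracted from the EEG. A minimal sketch, with made-up band-power values for illustration:

```python
def pope_engagement(beta, alpha, theta):
    """Index of Cognitive Engagement (Pope ratio): Beta / (Alpha + Theta)."""
    return beta / (alpha + theta)

def arousal_index(high_beta, low_beta, high_alpha, low_alpha):
    """Arousal: (High Beta + Low Beta) / (High Alpha + Low Alpha)."""
    return (high_beta + low_beta) / (high_alpha + low_alpha)

# Hypothetical band powers for one epoch (arbitrary units).
ice = pope_engagement(beta=12.0, alpha=10.0, theta=14.0)    # 0.5
ar = arousal_index(high_beta=7.0, low_beta=5.0,
                   high_alpha=6.0, low_alpha=4.0)           # 1.2
```

In a real pipeline these would be computed per epoch and channel from a spectral decomposition of the EEG, then averaged or tracked over sessions.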
Affiliation(s)
- Olimpia Pino
- University of Parma, Department of Medicine & Surgery, Neuroscience Unit.
39
The influence of memory on the speech-to-song illusion. Mem Cognit 2022; 50:1804-1815. [PMID: 35083717 PMCID: PMC9767999 DOI: 10.3758/s13421-021-01269-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/17/2021] [Indexed: 12/30/2022]
Abstract
In the speech-to-song illusion a spoken phrase is presented repeatedly and begins to sound as if it is being sung. Anecdotal reports suggest that subsequent presentations of a previously heard phrase enhance the illusion, even if several hours or days have elapsed between presentations. In Experiment 1, we examined in a controlled laboratory setting whether memory traces for a previously heard phrase would influence song-like ratings to a subsequent presentation of that phrase. The results showed that word lists that were played several times throughout the experimental session were rated as being more song-like at the end of the experiment than word lists that were played only once in the experimental session. In Experiment 2, we examined if the memory traces that influenced the speech-to-song illusion were abstract in nature or exemplar-based by playing some word lists several times during the experiment in the same voice and playing other word lists several times during the experiment but in different voices. The results showed that word lists played in the same voice were rated as more song-like at the end of the experiment than word lists played in different voices. Many previous studies have examined how various aspects of the stimulus itself influence the perception of the speech-to-song illusion. The results of the present experiments demonstrate that memory traces of the stimulus also influence the speech-to-song illusion.
40
Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. EVOLUTIONARY PSYCHOLOGY 2022. [DOI: 10.1007/978-3-030-76000-7_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
41
Yan J, Chen F, Gao X, Peng G. Auditory-Motor Mapping Training Facilitates Speech and Word Learning in Tone Language-Speaking Children With Autism: An Early Efficacy Study. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4664-4681. [PMID: 34705567 DOI: 10.1044/2021_jslhr-21-00029] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE It has been reported that tone language-speaking children with autism demonstrate speech-specific lexical tone processing difficulty, although they have intact or even better-than-normal processing of nonspeech/melodic pitch analogues. In this early efficacy study, we evaluated the therapeutic potential of Auditory-Motor Mapping Training (AMMT) in facilitating speech and word output for Mandarin-speaking nonverbal and low-verbal children with autism, in comparison with a matched non-AMMT-based control treatment. METHOD Fifteen Mandarin-speaking nonverbal and low-verbal children with autism spectrum disorder participated and completed all the AMMT-based treatment sessions by intoning (singing) and tapping the target words delivered via an app, whereas another 15 participants received the control treatment. Generalized linear mixed-effects models were created to evaluate speech production accuracy and word production intelligibility across different groups and conditions. RESULTS Results showed that the AMMT-based treatment provided a more effective training approach, accelerating the rate of speech (especially lexical tone) and word learning in the trained items. More importantly, the enhanced training efficacy on lexical tone acquisition remained at 2 weeks after therapy and generalized to untrained tones. Furthermore, the low-verbal participants showed greater improvement than the nonverbal participants. CONCLUSIONS These data provide the first empirical evidence for adopting AMMT-based training to facilitate speech and word learning in Mandarin-speaking nonverbal and low-verbal children with autism. This early efficacy study holds promise for improving lexical tone production in Mandarin-speaking children with autism but should be replicated in larger-scale randomized studies. Supplemental Material https://doi.org/10.23641/asha.16834627.
Affiliation(s)
- Jinting Yan
- College of Qiyue Communication & Cangzhou Research Centre for Child Language Rehabilitation, Cangzhou Normal University, Hebei, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Xiaotian Gao
- College of Qiyue Communication & Cangzhou Research Centre for Child Language Rehabilitation, Cangzhou Normal University, Hebei, China
- Gang Peng
- Research Centre for Language, Cognition, and Neuroscience & Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR
42
Yaghmour M, Sarada P, Roach S, Kadar I, Pesheva Z, Chaari A, Bendriss G. EEG Correlates of Middle Eastern Music Improvisations on the Ney Instrument. Front Psychol 2021; 12:701761. [PMID: 34671287 PMCID: PMC8520950 DOI: 10.3389/fpsyg.2021.701761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 09/14/2021] [Indexed: 11/27/2022] Open
Abstract
The cognitive sciences have witnessed a growing interest in the cognitive and neural basis of human creativity. Music improvisations constitute an ideal paradigm for studying creativity, but the underlying cognitive processes remain poorly understood. In addition, studies on music improvisations using scales other than the major and minor chords are scarce. Middle Eastern music is characterized by the additional use of microtones, resulting in a tonal–spatial system called Maqam. No EEG correlates have yet been proposed for the eight most commonly used maqams. The Ney, an end-blown flute that is popular and widely used in the Middle East, was used by a professional musician to perform 24 improvisations at low, medium, and high tempos. Brainwaves were recorded and quantified before and during the improvisations using the EMOTIV EPOC+, a 14-channel wireless EEG headset. Pairwise comparisons were calculated using IBM-SPSS, and a principal component analysis was used to evaluate the variability between the maqams. Significant increases in low-frequency theta and alpha power were observed over left frontal and left temporal areas, as well as significant increases in high-beta and gamma power over right temporal and left parietal areas. This study reveals the first EEG observations of the eight most commonly used maqams and proposes EEG signatures for the various maqams.
Affiliation(s)
- Sarah Roach
- Premedical Division, Weill Cornell Medicine Qatar, Doha, Qatar
- Ali Chaari
- Premedical Division, Weill Cornell Medicine Qatar, Doha, Qatar
43
White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251 DOI: 10.1016/j.actpsy.2021.103403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 01/29/2023] Open
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
44
The origins of music in (musi)language. Behav Brain Sci 2021; 44:e104. [PMID: 34590552 DOI: 10.1017/s0140525x20000813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The view of music as a byproduct of other cognitive functions has been deemed incomplete or incorrect. Revisiting the six lines of evidence that support this conclusion, it is argued that it is unclear how the hypothesis that music has its origins in (musi)language can be discarded. Two additional promising research lines that can support or discard the byproduct hypothesis are presented.
45
Kandylaki KD, Criscuolo A. Neural Tracking of Speech: Top-Down and Bottom-Up Influences in the Musician's Brain. J Neurosci 2021; 41:6579-6581. [PMID: 34348984 PMCID: PMC8336707 DOI: 10.1523/jneurosci.0756-21.2021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 05/16/2021] [Accepted: 05/24/2021] [Indexed: 11/21/2022] Open
Affiliation(s)
- Katerina D Kandylaki
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, The Netherlands
- Antonio Criscuolo
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, The Netherlands
46
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153 DOI: 10.1016/j.cognition.2021.104847] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 05/14/2021] [Accepted: 07/11/2021] [Indexed: 12/16/2022]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany.
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
47
Tervaniemi M, Putkinen V, Nie P, Wang C, Du B, Lu J, Li S, Cowley BU, Tammi T, Tao S. Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference? Cereb Cortex 2021; 32:63-75. [PMID: 34265850 PMCID: PMC8634570 DOI: 10.1093/cercor/bhab194] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 05/28/2021] [Accepted: 05/28/2021] [Indexed: 12/03/2022] Open
Abstract
In adults, music and speech share many neurocognitive functions, but how do they interact in a developing brain? We compared the effects of music and foreign language training on auditory neurocognition in Chinese children aged 8–11 years. We delivered group-based training programs in music and foreign language using a randomized controlled trial. A passive control group was also included. Before and after these year-long extracurricular programs, auditory event-related potentials were recorded (n = 123 and 85 before and after the program, respectively). Through these recordings, we probed early auditory predictive brain processes. To our surprise, the language program facilitated the children's early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant. When these processes were probed by a paradigm more focused on basic sound features, we found early predictive pitch encoding to be facilitated by music training. Thus, a foreign language program is able to foster auditory and music neurocognition, at least in tonal language speakers, in a manner comparable to that of a music program. Our results support the tight coupling of musical and linguistic brain functions also in the developing brain.
Affiliation(s)
- Mari Tervaniemi
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China
- Vesa Putkinen
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Turku PET Centre, University of Turku, Turku, Finland
- Peixin Nie
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Cuicui Wang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Bin Du
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shuting Li
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Benjamin Ultan Cowley
- Faculty of Educational Sciences, University of Helsinki, Finland
- Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
- Tuisku Tammi
- Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
- Sha Tao
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
48
Wang L, Pfordresher PQ, Jiang C, Liu F. Individuals with autism spectrum disorder are impaired in absolute but not relative pitch and duration matching in speech and song imitation. Autism Res 2021; 14:2355-2372. [PMID: 34214243 DOI: 10.1002/aur.2569] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 05/03/2021] [Accepted: 06/22/2021] [Indexed: 11/08/2022]
Abstract
Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls on absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and time errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. Children imitated spoken pitch more accurately than adults when it came to speech stimuli, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, which involves independent implementation of relative versus absolute features. LAY SUMMARY: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear.
By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitative deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
Affiliation(s)
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Peter Q Pfordresher
- Department of Psychology, University at Buffalo, State University of New York, Buffalo, New York, USA
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
49
Skipper JI, Lametti DR. Speech Perception under the Tent: A Domain-general Predictive Role for the Cerebellum. J Cogn Neurosci 2021; 33:1517-1534. [PMID: 34496370 DOI: 10.1162/jocn_a_01729] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we found all studies involving passive speech and sound perception (n = 72, 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum and regions of perception-production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we examined cortical activation between studies reporting cerebellar activation and those without cerebellar activation during speech perception. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.
Affiliation(s)
- Daniel R Lametti
- University College London
- Acadia University, Wolfville, Nova Scotia, Canada
50
Abstract
The aim of this paper is to review recent hypotheses on the evolutionary origins of music in Homo sapiens, taking into account the most influential traditional hypotheses. To date, theories derived from evolution have focused primarily on the importance that music carries in solving detailed adaptive problems. The three most influential theoretical concepts have described the evolution of human music in terms of 1) sexual selection, 2) the formation of social bonds, or treated it 3) as a byproduct. According to recent proposals, traditional hypotheses are flawed or insufficient in fully explaining the complexity of music in Homo sapiens. This paper will critically discuss three traditional hypotheses of music evolution (music as an effect of sexual selection, a mechanism of social bonding, and a byproduct), as well as two recent concepts of music evolution: music as a credible signal and the Music and Social Bonding (MSB) hypothesis.