1
Miranda ER. The advent of quantum computer music: mapping the field. Rep Prog Phys 2024; 87:086001. PMID: 38996413. DOI: 10.1088/1361-6633/ad627a.
Abstract
Quantum computing technology is developing at a fast pace, and its impact on the music industry is inevitable. This paper maps the emerging field of quantum computer music, which investigates and develops applications and methods for processing music with quantum computing technology. The paper begins by contextualising the field. It then discusses significant examples of the approaches developed to date to leverage quantum computing to learn, process, and generate music. The methods discussed range from rendering music using data from physical quantum mechanical systems and quantum mechanical simulations to computational quantum algorithms for generating music, including quantum AI. The ambition to develop techniques for encoding audio quantumly, for building sound synthesisers and audio signal processing systems, is also discussed.
Affiliation(s)
- Eduardo Reck Miranda
- Interdisciplinary Centre for Computer Music Research (ICCMR), Faculty of Arts, Design and Architecture, University of Plymouth, Plymouth PL4 8AA, United Kingdom
2
te Rietmolen N, Mercier MR, Trébuchon A, Morillon B, Schön D. Speech and music recruit frequency-specific distributed and overlapping cortical networks. eLife 2024; 13:RP94509. PMID: 39038076. PMCID: PMC11262799. DOI: 10.7554/eLife.94509.
Abstract
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined these recordings with a statistical approach that makes a clear operational distinction between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
Affiliation(s)
- Noémie te Rietmolen
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Manuel R Mercier
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Agnès Trébuchon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille, France
- Benjamin Morillon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Daniele Schön
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
3
Mohd Rashid MH, Ab Rani NS, Kannan M, Abdullah MW, Ab Ghani MA, Kamel N, Mustapha M. Emotion brain network topology in healthy subjects following passive listening to different auditory stimuli. PeerJ 2024; 12:e17721. PMID: 39040935. PMCID: PMC11262303. DOI: 10.7717/peerj.17721.
Abstract
A large body of research establishes the efficacy of musical intervention in many aspects of physical, cognitive, communication, social, and emotional rehabilitation. However, the underlying neural mechanisms of musical therapy remain elusive. This study aimed to investigate the potential neural correlates of musical therapy, focusing on changes in the topology of the emotion brain network. To this end, a Bayesian statistical approach and a cross-over experimental design were employed, with two resting-state magnetoencephalography (MEG) recordings serving as controls. MEG recordings of 30 healthy subjects were acquired while they listened to five auditory stimuli in random order. Two resting-state MEG recordings were obtained for each subject, one prior to the first stimulus (pre) and one after the final stimulus (post). Time series at the level of brain regions were estimated using the depth-weighted minimum norm estimation (wMNE) source reconstruction method, and the functional connectivity between these regions was computed. The resultant connectivity matrices were used to derive two topological network measures, transitivity and global efficiency, which gauge the functional segregation and integration of the brain network, respectively. The differences in these measures between pre- and post-stimulus resting MEG were set as the equivalence regions. We found that the network measures under all auditory stimuli were equivalent to the resting-state network measures in all frequency bands, indicating that the topology of the functional brain network associated with emotional regulation in healthy subjects remains unchanged following these auditory stimuli. This suggests that changes in emotion network topology may not be the underlying neural mechanism of musical therapy. Nonetheless, further studies are required to explore the neural mechanisms of musical interventions, especially in populations with neuropsychiatric disorders.
Affiliation(s)
- Muhammad Hakimi Mohd Rashid
- Department of Basic Medical Sciences, Kulliyyah of Pharmacy, International Islamic University, Kuantan, Pahang, Malaysia
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Kelantan, Malaysia
- Nur Syairah Ab Rani
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Kelantan, Malaysia
- Mohammed Kannan
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Kelantan, Malaysia
- Department of Anatomy, Faculty of Medicine, Al Neelain University, Khartoum, Khartoum, Sudan
- Mohd Waqiyuddin Abdullah
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Kelantan, Malaysia
- Muhammad Amiri Ab Ghani
- Jabatan Al-Quran & Hadis, Kolej Islam Antarabangsa Sultan Ismail Petra, Nilam Puri, Kota Bharu, Kelantan, Malaysia
- Nidal Kamel
- Centre for Intelligent Signal & Imaging Research (CISIR), Electrical & Electronic Engineering Department, Universiti Teknologi PETRONAS, Seri Iskandar, Perak, Malaysia
- Muzaimi Mustapha
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kota Bharu, Kelantan, Malaysia
4
Curzel F, Tillmann B, Ferreri L. Lights on music cognition: a systematic and critical review of fNIRS applications and future perspectives. Brain Cogn 2024; 180:106200. PMID: 38908228. DOI: 10.1016/j.bandc.2024.106200.
Abstract
Research investigating the neural processes related to music perception and production constitutes a well-established field within the cognitive neurosciences. While most neuroimaging tools have limitations in studying the complexity of musical experiences, functional Near-Infrared Spectroscopy (fNIRS) represents a promising, relatively new tool for studying music processes in both laboratory and ecological settings, which is also suitable for both typical and pathological populations across development. Here we systematically review fNIRS studies on music cognition, highlighting prospects and potentialities. We also include an overview of fNIRS basic theory, together with a brief comparison to characteristics of other neuroimaging tools. Fifty-nine studies meeting inclusion criteria (i.e., using fNIRS with music as the primary stimulus) are presented across five thematic sections. Critical discussion of methodology leads us to propose guidelines of good practices aiming for robust signal analyses and reproducibility. A continuously updated world map is proposed, including basic information from studies meeting the inclusion criteria. It provides an organized, accessible, and updatable reference database, which could serve as a catalyst for future collaborations within the community. In conclusion, fNIRS shows potential for investigating cognitive processes in music, particularly in ecological contexts and with special populations, aligning with current research priorities in music cognition.
Affiliation(s)
- Federico Curzel
- Laboratoire d'Étude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, Bron, Auvergne-Rhône-Alpes, 69500, France; Lyon Neuroscience Research Center (CRNL), INSERM, U1028, CNRS, UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, Bron, Auvergne-Rhône-Alpes, 69500, France
- Barbara Tillmann
- Lyon Neuroscience Research Center (CRNL), INSERM, U1028, CNRS, UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, Bron, Auvergne-Rhône-Alpes, 69500, France; LEAD CNRS UMR 5022, Université de Bourgogne-Franche-Comté, Dijon, Bourgogne-Franche-Comté, 21000, France
- Laura Ferreri
- Laboratoire d'Étude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, Bron, Auvergne-Rhône-Alpes, 69500, France; Department of Brain and Behavioural Sciences, Università di Pavia, Pavia, Lombardia, 27100, Italy
5
Mizokuchi K, Tanaka T, Sato TG, Shiraki Y. Alpha band modulation caused by selective attention to music enables EEG classification. Cogn Neurodyn 2024; 18:1005-1020. PMID: 38826648. PMCID: PMC11143110. DOI: 10.1007/s11571-023-09955-x.
Abstract
Humans are able to pay selective attention to music or speech in the presence of multiple sounds. It has been reported that, in the speech domain, selective attention enhances the cross-correlation between the envelope of speech and the electroencephalogram (EEG) while also affecting spatial modulation of the alpha band. However, when multiple music pieces are performed at the same time, it is unclear how selective attention affects neural entrainment and spatial modulation. In this paper, we hypothesized that entrainment to the attended music differs from that to the unattended music and that spatial modulation in the alpha band occurs in conjunction with attention. We conducted experiments in which we presented musical excerpts to 15 participants, each listening to two excerpts simultaneously but paying attention to one of the two. The results showed that the cross-correlation function between the EEG signal and the envelope of the unattended melody had a more prominent peak than that of the attended melody, contrary to the findings for speech. In addition, spatial modulation in the alpha band was found with a data-driven approach called the common spatial pattern method. Classification of the EEG signal with a support vector machine identified attended melodies and achieved an accuracy of 100% for 11 of the 15 participants. These results suggest that selective attention to music suppresses entrainment to the melody and that spatial modulation of the alpha band occurs in conjunction with attention. To the best of our knowledge, this is the first report to identify attended music consisting of several types of musical notes using EEG alone.
Affiliation(s)
- Kana Mizokuchi
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Toshihisa Tanaka
- Department of Electrical Engineering and Computer Science, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Takashi G. Sato
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
- Yoshifumi Shiraki
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
6
Ferrier CH, Ruis C, Zadelhoff D, Robe PAJT, van Zandvoort MJE. IDEAL monitoring of musical skills during awake craniotomy: from step 1 to step 2. J Neuropsychol 2024; 18 Suppl 1:48-60. PMID: 37916937. DOI: 10.1111/jnp.12347.
Abstract
The aim of awake brain surgery is, on the one hand, to perform a maximal resection and, on the other, to preserve cognitive functions, quality of life, and personal autonomy. Historically, language and sensorimotor functions were monitored most frequently. Over the years, other cognitive functions, including music, have entered the operating theatre. Case reports on monitoring musical abilities during awake brain surgery are emerging, and a systematic method for monitoring music would be the next step. Following the IDEAL framework for surgical innovations, our study presents recommendations based on a systematic literature search (PRISMA) combined with lessons learned from three case reports of professional musicians (n = 3) from our own clinical practice. We plead for structured procedures including individually tailored tasks. By embracing these recommendations, we can both improve clinical care and unravel music functions in the brain.
Affiliation(s)
- C H Ferrier
- Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- C Ruis
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Experimental Psychology/Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- D Zadelhoff
- Experimental Psychology/Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- P A J T Robe
- Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- M J E van Zandvoort
- Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Experimental Psychology/Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
7
Picazio S, Magnani B, Koch G, Oliveri M, Petrosini L. Frontal and cerebellar contributions to pitch and rhythm processing: a TMS study. Brain Struct Funct 2024. PMID: 38403781. DOI: 10.1007/s00429-024-02764-w.
Abstract
Music represents a salient stimulus for the brain, with two key features: pitch and rhythm. Few data are available on the cognitive analysis of music listening in musically naïve healthy participants. Beyond the auditory cortices, neuroimaging data have shown the involvement of the prefrontal cortex in pitch processing and of the cerebellum in rhythm processing. The present study investigated the role of prefrontal and cerebellar cortices in both pitch and rhythm processing. The performance of fifteen participants without musical expertise was assessed in a listening discrimination task, which required deciding whether two eight-element melodic sequences were equal or different according to pitch or rhythm characteristics. Before the task, we applied a protocol of continuous theta-burst transcranial magnetic stimulation interfering with the activity of the left cerebellar hemisphere (lCb), right inferior frontal gyrus (rIFG), or vertex (Cz, control site), in a within-subject cross-over design. Our results showed that participants were more accurate in the pitch than the rhythm task. Importantly, reaction times were slower following rIFG or lCb stimulation in both tasks. Notably, frontal and cerebellar stimulation did not induce any motor effect in either hand. The present findings point to the role of the fronto-cerebellar network in music processing, with a single mechanism for both pitch and rhythm patterns.
Affiliation(s)
- Barbara Magnani
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Parma, Italy
- Giacomo Koch
- Santa Lucia Foundation IRCCS, Rome, Italy
- Human Physiology Section, Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Massimiliano Oliveri
- Department of Psychology, Educational Sciences and Human Movement, University of Palermo, Palermo, Italy
- Neuroteam Life and Science, Palermo, Italy
8
Kim G, Kim DK, Jeong H. Spontaneous emergence of rudimentary music detectors in deep neural networks. Nat Commun 2024; 15:148. PMID: 38168097. PMCID: PMC10761941. DOI: 10.1038/s41467-023-44516-0.
Abstract
Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training. However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music. The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain. We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin. These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.
Affiliation(s)
- Gwangsu Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Dong-Kyum Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hawoong Jeong
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Center for Complex Systems, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
9
Dhakal K, Rosenthal ES, Kulpanowski AM, Dodelson JA, Wang Z, Cudemus-Deseda G, Villien M, Edlow BL, Presciutti AM, Januzzi JL, Ning M, Taylor Kimberly W, Amorim E, Brandon Westover M, Copen WA, Schaefer PW, Giacino JT, Greer DM, Wu O. Increased task-relevant fMRI responsiveness in comatose cardiac arrest patients is associated with improved neurologic outcomes. J Cereb Blood Flow Metab 2024; 44:50-65. PMID: 37728641. PMCID: PMC10905635. DOI: 10.1177/0271678X231197392.
Abstract
Early prediction of the recovery of consciousness in comatose cardiac arrest patients remains challenging. We prospectively studied task-relevant fMRI responses in 19 comatose cardiac arrest patients and five healthy controls to assess the utility of fMRI for neuroprognostication. Tasks involved instrumental music listening, forward and backward language listening, and motor imagery. Task-specific reference images were created from group-level fMRI responses from the healthy controls. Dice scores measured the overlap of individual subject-level fMRI responses with the reference images. The task-relevant responsiveness index (Rindex) was calculated as the maximum Dice score across the four tasks. Correlation analyses showed that increased Dice scores were significantly associated with arousal recovery (P < 0.05) and emergence from the minimally conscious state (EMCS) by one year (P < 0.001) for all tasks except motor imagery. Greater Rindex was significantly correlated with improved arousal recovery (P = 0.002) and consciousness (P = 0.001). For patients who survived to discharge (n = 6), the Rindex's sensitivity was 75% for predicting EMCS (n = 4). Task-based fMRI holds promise for detecting covert consciousness in comatose cardiac arrest patients, but further studies are needed to confirm these findings. Caution is necessary when interpreting the absence of task-relevant fMRI responses as a surrogate for inevitable poor neurological prognosis.
Affiliation(s)
- Kiran Dhakal
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Eric S Rosenthal
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Annelise M Kulpanowski
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Jacob A Dodelson
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Zihao Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Gaston Cudemus-Deseda
- Department of Cardiac Anesthesiology and Critical Care Medicine, Massachusetts General Hospital, Boston, MA, USA
- Marjorie Villien
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Alexander M Presciutti
- Department of Psychiatry, Center for Health Outcomes and Interdisciplinary Research, Massachusetts General Hospital, Boston, MA, USA
- James L Januzzi
- Department of Medicine, Cardiology Division, Massachusetts General Hospital and Baim Institute for Clinical Research, Boston, MA, USA
- MingMing Ning
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- W Taylor Kimberly
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Edilberto Amorim
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- William A Copen
- Department of Radiology, Neuroradiology Division, Massachusetts General Hospital, Boston, MA, USA
- Pamela W Schaefer
- Department of Radiology, Neuroradiology Division, Massachusetts General Hospital, Boston, MA, USA
- Joseph T Giacino
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Harvard Medical School, Charlestown, MA, USA
- David M Greer
- Department of Neurology, Boston University School of Medicine, Boston Medical Center, Boston, MA, USA
- Ona Wu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
10
Andrade PE, Müllensiefen D, Andrade OVCA, Dunstan J, Zuk J, Gaab N. Sequence processing in music predicts reading skills in young readers: a longitudinal study. J Learn Disabil 2024; 57:43-60. PMID: 36935627. DOI: 10.1177/00222194231157722.
Abstract
Musical abilities, both in the pitch and temporal dimension, have been shown to be positively associated with phonological awareness and reading abilities in both children and adults. There is increasing evidence that the relationship between music and language relies primarily on the temporal dimension, including both meter and rhythm. It remains unclear to what extent skill level in these temporal aspects of music may uniquely contribute to the prediction of reading outcomes. A longitudinal design was used to test a group-administered musical sequence transcription task (MSTT). This task was designed to preferentially engage sequence processing skills while controlling for fine-grained pitch discrimination and rhythm in terms of temporal grouping. Forty-five children, native speakers of Portuguese (mean age = 7.4 years), completed the MSTT and a cognitive-linguistic protocol that included visual and auditory working memory tasks, as well as phonological awareness and reading tasks in second grade. Participants then completed reading assessments in third and fifth grades. Longitudinal regression models showed that MSTT and phonological awareness had comparable power to predict reading. The MSTT showed an overall classification accuracy for identifying low-achievement readers in Grades 2, 3, and 5 that was analogous to a comprehensive model including core predictors of reading disability. In addition, MSTT was the variable with the highest loading and the most discriminatory indicator of a phonological factor. These findings carry implications for the role of temporal sequence processing in contributing to the relationship between music and language and the potential use of MSTT as a language-independent, time- and cost-effective tool for the early identification of children at risk of reading disability.
11
Papadaki E, Koustakas T, Werner A, Lindenberger U, Kühn S, Wenger E. Resting-state functional connectivity in an auditory network differs between aspiring professional and amateur musicians and correlates with performance. Brain Struct Funct 2023; 228:2147-2163. PMID: 37792073. PMCID: PMC10587189. DOI: 10.1007/s00429-023-02711-1.
Abstract
Auditory experience-dependent plasticity is often studied in the domain of musical expertise. Available evidence suggests that years of musical practice are associated with structural and functional changes in auditory cortex and related brain regions. Resting-state functional magnetic resonance imaging (fMRI) can be used to investigate neural correlates of musical training and expertise beyond specific task influences. Here, we compared two groups of musicians with varying expertise: 24 aspiring professional musicians preparing for their entrance exam at Universities of Arts versus 17 amateur musicians without any such aspirations but who also performed music on a regular basis. We used an interval recognition task to define task-relevant brain regions and computed functional connectivity and graph-theoretical measures in this network on separately acquired resting-state data. Aspiring professionals performed significantly better on all behavioral indicators including interval recognition and also showed significantly greater network strength and global efficiency than amateur musicians. Critically, both average network strength and global efficiency were correlated with interval recognition task performance assessed in the scanner, and with an additional measure of interval identification ability. These findings demonstrate that task-informed resting-state fMRI can capture connectivity differences that correspond to expertise-related differences in behavior.
Affiliation(s)
- Eleftheria Papadaki
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- International Max Planck Research School on the Life Course (LIFE), Berlin, Germany
- Theodoros Koustakas
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- André Werner
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK
- Simone Kühn
- Lise Meitner Group for Environmental Neuroscience, Max Planck Institute for Human Development, Berlin, Germany
- Neuronal Plasticity Working Group, Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Elisabeth Wenger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195, Berlin, Germany
12
Jünemann K, Engels A, Marie D, Worschech F, Scholz DS, Grouiller F, Kliegel M, Van De Ville D, Altenmüller E, Krüger THC, James CE, Sinke C. Increased functional connectivity in the right dorsal auditory stream after a full year of piano training in healthy older adults. Sci Rep 2023; 13:19993. PMID: 37968500. PMCID: PMC10652022. DOI: 10.1038/s41598-023-46513-1.
Abstract
Learning to play an instrument at an advanced age may help to counteract or slow down age-related cognitive decline. However, studies investigating the neural underpinnings of these effects are still scarce. One way to investigate such brain plasticity is through resting-state functional connectivity (FC). The current study compared the effects of learning to play the piano (PP) with those of participating in music listening/musical culture (MC) lessons on FC in 109 healthy older adults. Participants underwent resting-state functional magnetic resonance imaging at three time points: at baseline, and after 6 and 12 months of the interventions. Analyses revealed piano-training-specific FC changes after 12 months of training, including an FC increase between the right Heschl's gyrus (HG) and other right dorsal auditory stream regions. In addition, PP showed an increased anticorrelation between the right HG and the dorsal posterior cingulate cortex, and an FC increase between the right motor hand area and a bilateral network of predominantly motor-related brain regions, which correlated positively with improvements in fine motor dexterity. We suggest interpreting these results as increased network efficiency for auditory-motor integration. The fact that functional neuroplasticity can be induced by piano training in healthy older adults opens new pathways to counteract age-related decline.
Affiliation(s)
- Kristin Jünemann
- Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hannover, Germany
- Center for Systems Neuroscience, Hannover, Germany
- Anna Engels
- Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hannover, Germany
- Damien Marie
- Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland (HES-SO), Geneva, Switzerland
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- CIBM Center for Biomedical Imaging, MRI UNIGE, University of Geneva, Geneva, Switzerland
- Florian Worschech
- Center for Systems Neuroscience, Hannover, Germany
- Institute of Music Physiology and Musicians' Medicine, Hannover University of Music, Drama and Media, Hannover, Germany
- Daniel S Scholz
- Institute of Medical Psychology, University of Lübeck, Lübeck, Germany
- Department of Musicians' Health, University of Music Lübeck, Lübeck, Germany
- Frédéric Grouiller
- CIBM Center for Biomedical Imaging, MRI UNIGE, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Matthias Kliegel
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Center for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Geneva, Switzerland
- Dimitri Van De Ville
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland
- Eckart Altenmüller
- Center for Systems Neuroscience, Hannover, Germany
- Institute of Music Physiology and Musicians' Medicine, Hannover University of Music, Drama and Media, Hannover, Germany
- Tillmann H C Krüger
- Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hannover, Germany
- Center for Systems Neuroscience, Hannover, Germany
- Clara E James
- Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland (HES-SO), Geneva, Switzerland
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Christopher Sinke
- Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hannover, Germany
|
13
|
Hegde S, Keshavan MS. The brain on the beat: How music may heal schizophrenia. Schizophr Res 2023; 261:113-115. [PMID: 37717508] [PMCID: PMC7615983] [DOI: 10.1016/j.schres.2023.08.032]
Affiliation(s)
- Shantala Hegde
- Clinical Neuropsychology & Cognitive Neuroscience Centre, Music Cognition Laboratory, Department of Clinical Psychology, National Institute of Mental Health and Neuro Sciences, Bengaluru, India; Department of Psychiatry, BIDMC, Harvard Medical School, Boston, MA, United States of America
- Matcheri S Keshavan
- Department of Psychiatry, BIDMC, Harvard Medical School, Boston, MA, United States of America
|
14
|
Bellier L, Llorens A, Marciano D, Gunduz A, Schalk G, Brunner P, Knight RT. Music can be reconstructed from human auditory cortex activity using nonlinear decoding models. PLoS Biol 2023; 21:e3002176. [PMID: 37582062] [PMCID: PMC10427021] [DOI: 10.1371/journal.pbio.3002176]
Abstract
Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain-computer interface (BCI) applications.
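The stimulus-reconstruction approach described in this abstract can be illustrated with a minimal linear decoder on synthetic data. This is a hypothetical toy sketch, not the authors' pipeline (which used nonlinear models on real iEEG): a ridge-regularised linear map from multi-electrode activity to a stimulus spectrogram, evaluated by reconstruction correlation. All shapes and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1000 time points, 29 "electrodes", 16 spectrogram bands.
# (Shapes are illustrative only; the study used real iEEG recordings.)
T, n_elec, n_bands = 1000, 29, 16
true_w = rng.normal(size=(n_elec, n_bands))
neural = rng.normal(size=(T, n_elec))
spectrogram = neural @ true_w + 0.1 * rng.normal(size=(T, n_bands))

# Split into train/test and fit a ridge (L2-regularised) linear decoder.
neural_tr, neural_te = neural[:800], neural[800:]
spec_tr, spec_te = spectrogram[:800], spectrogram[800:]
lam = 1.0  # ridge penalty
w_hat = np.linalg.solve(neural_tr.T @ neural_tr + lam * np.eye(n_elec),
                        neural_tr.T @ spec_tr)

# Decoding accuracy: mean correlation between reconstructed and actual bands.
recon = neural_te @ w_hat
r = np.mean([np.corrcoef(recon[:, b], spec_te[:, b])[0, 1]
             for b in range(n_bands)])
print(f"mean reconstruction correlation: {r:.2f}")
```

In the actual study, the decoded spectrogram is inverted back to a waveform to yield a recognizable song; the toy example stops at the band-wise correlation metric.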
Affiliation(s)
- Ludovic Bellier
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Anaïs Llorens
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Déborah Marciano
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Aysegul Gunduz
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida, United States of America
- Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, New York, United States of America
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, New York, United States of America
- Department of Neurosurgery, Washington University School of Medicine, St. Louis, Missouri, United States of America
- National Center for Adaptive Neurotechnologies, Albany, New York, United States of America
- Robert T. Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America
|
15
|
Hou J, Chen C, Dong Q. Early musical training benefits to non-musical cognitive ability associated with the Gestalt principles. Front Psychol 2023; 14:1134116. [PMID: 37554141] [PMCID: PMC10405822] [DOI: 10.3389/fpsyg.2023.1134116]
Abstract
Musical training has been shown to facilitate music perception, in particular the perception of consistencies, boundaries, and segmentations in pieces of music that are associated with the Gestalt principles. The current study tested whether musical training also benefits non-musical cognitive abilities involving Gestalt principles. Three groups of Chinese participants (with early, late, and no musical training) were compared on the Motor-Free Visual Perception Test (MVPT). Participants with early musical training performed significantly better on the Gestalt-like Visual Closure subtest than those with late or no musical training, whereas no significant differences were found on the other, Gestalt-unlike subtests of the MVPT (Visual Memory, Visual Discrimination, Spatial Relationship, and Figure Ground). These results suggest a benefit of early musical training for non-musical cognitive abilities involving Gestalt principles.
Affiliation(s)
- Jiancheng Hou
- Research Center for Cross-Straits Cultural Development, Fujian Normal University, Fuzhou, Fujian, China
- State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- School of Public Health, Indiana University Bloomington, Bloomington, IN, United States
- Chuansheng Chen
- Department of Psychological Science, University of California, Irvine, CA, United States
- Qi Dong
- State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
|
16
|
Adamska I, Finc K. Effect of LSD and music on the time-varying brain dynamics. Psychopharmacology (Berl) 2023. [PMID: 37291360] [DOI: 10.1007/s00213-023-06394-8]
Abstract
RATIONALE Psychedelics are moving closer to widespread clinical use. Music is a key element of psychedelic-assisted therapy owing to its psychological effects, specifically on emotion, meaning-making, and sensory processing. However, how psychedelics influence brain activity in experimental settings involving music listening remains poorly understood. OBJECTIVES The main goal of our research was to investigate the effect of music, as part of the "setting," on brain-state dynamics after lysergic acid diethylamide (LSD) intake. METHODS We used an open dataset in which 15 participants underwent two functional MRI scanning sessions, one under LSD and one under placebo. Every scanning session contained three runs: two resting-state runs separated by one run of music listening. We applied K-Means clustering to identify repetitive patterns of brain activity, so-called brain states. For further analysis, we calculated each state's dwell time, fractional occupancy, and transition probabilities. RESULTS The interaction of music and psychedelics changed the time-varying brain activity of the task-positive state. LSD, regardless of music, affected the dynamics of a state of combined activity of the DMN, SOM, and VIS networks. Crucially, we observed that music itself could potentially have a long-term influence on the resting state, in particular on states involving task-positive networks. CONCLUSIONS This study indicates that music, as a crucial element of "setting," can potentially influence the subject's resting state during the psychedelic experience. Further studies should replicate these results with larger sample sizes.
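The brain-state metrics named in this abstract (dwell time, fractional occupancy, transition probability) follow mechanically from a K-Means state sequence. The sketch below is a schematic toy example on random data, not the study's code; the array shapes and k=4 are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy data: 300 fMRI volumes x 10 regional signals (illustrative shapes).
data = rng.normal(size=(300, 10))

# Cluster time points into k recurring activity patterns ("brain states").
k = 4
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)

# Fractional occupancy: share of volumes spent in each state.
occupancy = np.bincount(states, minlength=k) / len(states)

# Dwell time: mean length of consecutive runs of each state.
runs = np.split(states, np.where(np.diff(states) != 0)[0] + 1)
dwell = {s: np.mean([len(r) for r in runs if r[0] == s]) for s in range(k)}

# Transition matrix P[i, j] = P(next state j | current state i),
# self-transitions included.
P = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

print(occupancy, dwell)
```

Comparing these per-state summaries between conditions (LSD vs. placebo, music vs. rest) is the kind of contrast the study reports.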
Affiliation(s)
- Iga Adamska
- Faculty of Philosophy and Social Sciences, Nicolaus Copernicus University, Toruń, Poland
- Karolina Finc
- Centre for Modern Interdisciplinary Technologies, Nicolaus Copernicus University, Toruń, Poland
|
17
|
Marie D, Müller CA, Altenmüller E, Van De Ville D, Jünemann K, Scholz DS, Krüger TH, Worschech F, Kliegel M, Sinke C, James CE. Music interventions in 132 healthy older adults enhance cerebellar grey matter and auditory working memory, despite general brain atrophy. Neuroimage: Reports 2023. [DOI: 10.1016/j.ynirp.2023.100166]
|
18
|
Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nature Reviews Psychology 2023; 2:333-346. [PMID: 38143935] [PMCID: PMC10745197] [DOI: 10.1038/s44159-023-00182-z]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand
|
19
|
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. [PMID: 36562994] [PMCID: PMC10183742] [DOI: 10.1093/cercor/bhac501]
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. We have previously shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). The low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. We then generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from the behavioral musicality ratings, as well as with models corresponding to low-level feature processing and music perception. Within the overlapping regions, areas near primary auditory cortex correlated with the low-level ASA models, whereas the right IPS correlated with musicality ratings. These shared neural mechanisms, which correlate with behavior and underlie both ASA and music perception, suggest that low-level features of auditory stimuli play a role in auditory object perception.
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
|
20
|
Movalled K, Sani A, Nikniaz L, Ghojazadeh M. The impact of sound stimulations during pregnancy on fetal learning: a systematic review. BMC Pediatr 2023; 23:183. [PMID: 37081418] [PMCID: PMC10116668] [DOI: 10.1186/s12887-023-03990-7]
Abstract
BACKGROUND The developing nervous system in utero is exposed to various stimuli, with effects that may carry forward into the neonatal period. This study investigates the effects of sound stimulation (music and speech) on fetal memory and learning, as assessed later in the neonatal period. METHODS MEDLINE (PubMed), Scopus, EMBASE, and the Cochrane Library were searched. Two reviewers selected the studies and extracted the data independently. The quality of eligible studies was assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Randomized Controlled Trials (RCTs). RESULTS Overall, 3930 articles were retrieved and eight studies met the inclusion criteria. All included studies had good general quality; however, a high risk of selection and detection bias was detected in most of them. Fetal learning was examined through neonatal electrocardiography (ECG), electroencephalography (EEG), habituation tests, and behavioral responses. Seven studies showed that infants had learned the fetal sound stimulus, and one study indicated that prenatally stimulated infants performed significantly better on a neonatal behavior test. There was considerable diversity among studies in sound stimulation type, characteristics (intensity and frequency), and duration, as well as in outcome assessment methods. CONCLUSIONS Prenatal sound stimulation, including music and speech, can form stimulus-specific memory traces during the fetal period and affect the neonatal nervous system. Further studies with precisely designed methodologies that follow safety recommendations are needed.
Affiliation(s)
- Anis Sani
- Tabriz University of Medical Sciences, Tabriz, Iran
- Leila Nikniaz
- Tabriz Health Services Management Research Center, Health Management and Safety Promotion Research Institute, Tabriz University of Medical Sciences, Tabriz, Iran
- Morteza Ghojazadeh
- Professor of Physiology, Iranian Centre for Evidence-Based Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
|
21
|
Matziorinis AM, Flo BK, Skouras S, Dahle K, Henriksen A, Hausmann F, Sudmann TT, Gold C, Koelsch S. A 12-month randomised pilot trial of the Alzheimer's and music therapy study: a feasibility assessment of music therapy and physical activity in patients with mild-to-moderate Alzheimer's disease. Pilot Feasibility Stud 2023; 9:61. [PMID: 37076884] [PMCID: PMC10114372] [DOI: 10.1186/s40814-023-01287-1]
Abstract
BACKGROUND The Alzheimer's and Music Therapy (ALMUTH) study is the first randomised controlled trial (RCT) with 12 months of active non-pharmacological therapy (NPT), implementing music therapy (MT) and physical activity (PA) for participants with Alzheimer's disease (AD). The aim of the present article is to retrospectively examine the inclusion of mild-to-moderate AD patients in the main ALMUTH study protocol and to determine whether continued inclusion of AD patients is warranted. METHODS The randomised pilot trial was conducted as a parallel three-arm RCT, reflecting the experimental design of the ALMUTH study. The trial was conducted in Bergen, Norway, and randomisation (1:1:1) was performed by an external researcher. The study was open label, and the experimental design featured two active NPTs, MT and PA, and a passive control (no intervention, CON) in Norwegian-speaking patients with AD who still lived at home and could provide informed consent. Sessions were offered once per week (up to 90 min), up to 40 sessions over 12 months. Baseline and follow-up tests included a full neuropsychological test battery and three magnetic resonance imaging (MRI) measurements (structural, functional, and diffusion-weighted imaging). Feasibility outcomes were assessed against predefined target criteria. RESULTS Eighteen participants with a diagnosis of mild-to-moderate AD were screened, randomised, and tested once at baseline and once after 12 months. Participants were divided into three groups: MT (n = 6), PA (n = 6), and CON (n = 6). The results revealed that the ALMUTH protocol was not feasible in patients with AD: adherence to the study protocol was poor (50% of sessions attended), attrition and retention rates were 50%, recruitment was costly, and there were difficulties finding participants who met the inclusion criteria. Issues with study fidelity and problems raised by staff were taken into consideration for the updated study protocol. No adverse events were reported by the patients or their caregivers. CONCLUSIONS The pilot trial was not deemed feasible in patients with mild-to-moderate AD. To mitigate this, the ALMUTH study has expanded the recruitment criteria to include participants with milder forms of memory impairment (pre-AD) and has expanded the neuropsychological test battery. The ALMUTH study is currently ongoing through 2023. TRIAL REGISTRATION Funded by Norsk Forskningsråd (NFR). Regional Committees for Medical and Health Research Ethics (REC-WEST, reference number 2018/206). ClinicalTrials.gov: NCT03444181 (registered retrospectively 23 February 2018, https://clinicaltrials.gov/ct2/show/NCT03444181).
Affiliation(s)
- A M Matziorinis
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- B K Flo
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- S Skouras
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- K Dahle
- Kompetansesenter for Demens, Bergen Kommune, Norway
- A Henriksen
- Department of Sport, Food, and Natural Sciences, Faculty of Education, Arts, and Sports, Western Norway University of Applied Sciences, Bergen, Norway
- F Hausmann
- Department of Sport, Food, and Natural Sciences, Faculty of Education, Arts, and Sports, Western Norway University of Applied Sciences, Bergen, Norway
- T T Sudmann
- Department of Health and Function, Western Norway University of Applied Sciences, Bergen, Norway
- C Gold
- NORCE Norwegian Research Centre AS, Bergen, Norway
- Grieg Academy Department of Music, University of Bergen, Bergen, Norway
- Department of Clinical and Health Psychology, University of Vienna, Vienna, Austria
- S Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
|
22
|
Park JJ, Baek SC, Suh MW, Choi J, Kim SJ, Lim Y. The effect of topic familiarity and volatility of auditory scene on selective auditory attention. Hear Res 2023; 433:108770. [PMID: 37104990] [DOI: 10.1016/j.heares.2023.108770]
Abstract
Selective auditory attention has been shown to modulate the cortical representation of speech, an effect well documented in acoustically challenging environments. However, the influence of top-down factors, in particular topic familiarity, on this process remains unclear, despite evidence that semantic information can promote speech-in-noise perception. Moreover, beyond the individual features that form a static listening condition, dynamic and irregular changes of auditory scenes (volatile listening environments) have been little studied. To address these gaps, we explored the influence of topic familiarity and volatile listening on selective auditory attention during dichotic listening using electroencephalography. When stories with unfamiliar topics were presented, participants' comprehension was severely degraded; however, their cortical activity selectively tracked the speech of the target story well. This implies that topic familiarity hardly influences the neural index of speech tracking, at least when bottom-up information is sufficient. However, when the listening environment was volatile and listeners had to re-engage with new speech whenever the auditory scene changed, the neural correlates of the attended speech were degraded. In particular, the cortical response to the attended speech and the spatial asymmetry of the responses to left and right attention were significantly attenuated around 100-200 ms after speech onset. These findings suggest that volatile listening environments can adversely affect the modulation effect of selective attention, possibly by hampering proper attention due to increased perceptual load.
Affiliation(s)
- Jonghwa Jeonglok Park
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Seung-Cheol Baek
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, South Korea
- Jongsuk Choi
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of AI Robotics, KIST School, Korea University of Science and Technology, Seoul 02792, South Korea
- Sung June Kim
- Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Yoonseob Lim
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of HY-KIST Bio-convergence, Hanyang University, Seoul 04763, South Korea
|
23
|
Mellerio C, de Parcevaux AI, Charron S, Etevenon P, Oppenheim C. Functional MRI of a conductor in action. J Neuroradiol 2023; 50:278-279. [PMID: 36623585] [DOI: 10.1016/j.neurad.2023.01.002]
Affiliation(s)
- Charles Mellerio
- Neuroradiology Department, GHU Paris Psychiatrie et Neurosciences, Site Sainte-Anne, Paris, France; INSERM U1266, Université Paris Cité, Paris, France
- Anne Isabelle de Parcevaux
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019, Paris, France
- Catherine Oppenheim
- Neuroradiology Department, GHU Paris Psychiatrie et Neurosciences, Site Sainte-Anne, Paris, France; INSERM U1266, Université Paris Cité, Paris, France
|
24
|
Musso M, Altenmüller E, Reisert M, Hosp J, Schwarzwald R, Blank B, Horn J, Glauche V, Kaller C, Weiller C, Schumacher M. Speaking in gestures: Left dorsal and ventral frontotemporal brain systems underlie communication in conducting. Eur J Neurosci 2023; 57:324-350. [PMID: 36509461] [DOI: 10.1111/ejn.15883]
Abstract
Conducting constitutes a well-structured system of signs anticipating information concerning the rhythm and dynamics of a musical piece. Conductors communicate the musical tempo to the orchestra, unifying the individual instrumental voices to form an expressive musical Gestalt. In a functional magnetic resonance imaging (fMRI) experiment, 12 professional conductors and 16 instrumentalists conducted, in real time, novel pieces of diverse complexity in orchestration and rhythm. As control conditions, participants either listened to the stimuli or performed beat patterns, keeping the time of a metronome or of complex rhythms played by a drum. Activation of the left superior temporal gyrus (STG), supplementary and premotor cortex, and Broca's pars opercularis (F3op) was shared by both musician groups and separated conducting from the other conditions. Compared to instrumentalists, conductors activated Broca's pars triangularis (F3tri) and the STG, which differentiated conducting from time beating and reflected the increase in complexity during conducting. Compared to conductors, instrumentalists activated F3op and F3tri when distinguishing complex from simple rhythm processing. Fibre selection from a normative human connectome database, constructed using a global tractography approach, showed that F3op and the STG are connected via the arcuate fasciculus, whereas F3tri and the STG are connected via the extreme capsule. As in language, the anatomical framework characterising conducting gestures is located in the left dorsal system centred on F3op. This system reflected the sensorimotor mapping for structuring gestures to musical tempo. The ventral system centred on F3tri may reflect the conductor's art of setting this musical tempo to the individual orchestral voices in a global, holistic way.
Affiliation(s)
- Mariacristina Musso
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Eckart Altenmüller
- Institute of Music Physiology and Musicians' Medicine, Hannover University of Music, Drama and Media, Hannover, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jonas Hosp
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Ralf Schwarzwald
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Bettina Blank
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Julian Horn
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Volkmar Glauche
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christoph Kaller
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Cornelius Weiller
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Martin Schumacher
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
|
25
|
Zhang N. Research on the Difference between Environmental Music Perception and Innovation Ability Based on EEG Data. J Environ Public Health 2022; 2022:9441697. [PMID: 36438930] [PMCID: PMC9691327] [DOI: 10.1155/2022/9441697]
Abstract
Practising and exploring music creation is of great significance for training creative talents. Perception comprises sensation and apprehension; sensation is the reflection of individual attributes of objective things acting directly on the sensory organs. This paper investigates the difference between environmental music perception and innovation ability based on EEG data. First, EEG signals recorded from subjects with different levels of consciousness under musical stimulation were denoised and preprocessed to remove artifacts, and tensor decomposition was then applied to obtain EEG tensor components. The time-domain components were analyzed together with five musical features (fluctuation centroid, fluctuation entropy, pulse clarity, key clarity, and mode); the tensor components related to these musical features were identified; their power spectra and the distribution of responsive brain regions were analyzed; and, finally, differences in the processing of musical features between different levels of consciousness were explored.
Affiliation(s)
- Na Zhang
- School of Education Science, Shanxi Normal University, Taiyuan 041009, China
26
The rediscovered motor-related area 55b emerges as a core hub of music perception. Commun Biol 2022; 5:1104. [PMID: 36257973] [PMCID: PMC9579133] [DOI: 10.1038/s42003-022-04009-0]
Abstract
Passive listening to music, without sound production or evident movement, has long been known to activate motor control regions. Nevertheless, the exact neuroanatomical correlates of the auditory-motor association and its underlying neural mechanisms have not been fully determined. Here, based on a NeuroSynth meta-analysis and three original fMRI paradigms of music perception, we show that the long-ignored pre-motor region, area 55b, an anatomically unique and functionally intriguing region, is a core hub of music perception. Moreover, results of a brain-behavior correlation analysis implicate neural entrainment as the underlying mechanism of area 55b's contribution to music perception. In view of the current results and prior literature, area 55b is proposed as a keystone of sensorimotor integration, a fundamental brain machinery underlying simple to hierarchically complex behaviors. Refining the neuroanatomical and physiological understanding of sensorimotor integration is expected to have a major impact on various fields, from brain disorders to artificial general intelligence. Functional magnetic resonance imaging data acquired during passive listening to music suggest that pre-motor area 55b acts as a core hub of music processing in humans.
27
Nagy SI, Révész G, Séra L, Bandi SA, Stachó L. Final-note expectancy and humor: an empirical investigation. BMC Psychol 2022; 10:228. [PMID: 36180930] [PMCID: PMC9526306] [DOI: 10.1186/s40359-022-00936-z]
Abstract
Background: Melodic expectations were manipulated to investigate the nature of tonally incongruent melodic final notes that may elicit humor in listeners. To our knowledge, this is the first experiment aiming to study humor elicitation in music with empirical, quantitative methods. To this aim, we based the experiment on the incongruency/resolution theory of humor and on violations of expectation in music. Our goal was to determine the amount of change, that is, the degree of incongruency, required to elicit humor.
Methods: We composed two simple, 8-bar melodies and changed their final notes so that they could randomly finish on any semitone between an octave upwards and an octave downwards with respect to the original, tonic final note. This resulted in 25 versions of each melody, one for each semitone, including the original final note. Musician and non-musician participants rated each version of each melody on five 7-point bipolar scales according to goodness of fit, humor, beauty, playfulness, and pleasantness.
Results and conclusions: Our results showed that even a single change of the final note can elicit humor. No strong connection was found between humor elicitation and the level of incongruency (i.e., the amount of violation of expectation). Instead, changes to the major-mode melody were more likely to be found humorous than those to the minor-mode melody, implying that a so-called playful context is necessary for humor elicitation, as the major melody was labelled playful by the listeners. Furthermore, final notes below the original tonic end note were found to be less humorous and less fitting to the melodic context than those above it.
Supplementary information: The online version contains supplementary material available at 10.1186/s40359-022-00936-z.
Affiliation(s)
- Sándor Imre Nagy
- Institute of Psychology, University of Pécs, Pécs, Hungary; Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary; Faculty of Music and Visual Arts, University of Pécs, Pécs, Hungary
- György Révész
- Institute of Psychology, University of Pécs, Pécs, Hungary
- László Séra
- Institute of Psychology, University of Pécs, Pécs, Hungary
28
Kemmerer D. Revisiting the relation between syntax, action, and left BA44. Front Hum Neurosci 2022; 16:923022. [PMID: 36211129] [PMCID: PMC9537576] [DOI: 10.3389/fnhum.2022.923022]
Abstract
Among the many lines of research that have been exploring how embodiment contributes to cognition, one focuses on how the neural substrates of language may be shared, or at least closely coupled, with those of action. This paper revisits a particular proposal that has received considerable attention—namely, that the forms of hierarchical sequencing that characterize both linguistic syntax and goal-directed action are underpinned partly by common mechanisms in left Brodmann area (BA) 44, a cortical region that is not only classically regarded as part of Broca’s area, but is also a core component of the human Mirror Neuron System. First, a recent multi-participant, multi-round debate about this proposal is summarized together with some other relevant findings. This review reveals that while the proposal is supported by a variety of theoretical arguments and empirical results, it still faces several challenges. Next, a narrower application of the proposal is discussed, specifically involving the basic word order of subject (S), object (O), and verb (V) in simple transitive clauses. Most languages are either SOV or SVO, and, building on prior work, it is argued that these strong syntactic tendencies derive from how left BA44 represents the sequential-hierarchical structure of goal-directed actions. Finally, with the aim of clarifying what it might mean for syntax and action to have “common” neural mechanisms in left BA44, two different versions of the main proposal are distinguished. Hypothesis 1 states that the very same neural mechanisms in left BA44 subserve some aspects of hierarchical sequencing for syntax and action, whereas Hypothesis 2 states that anatomically distinct but functionally parallel neural mechanisms in left BA44 subserve some aspects of hierarchical sequencing for syntax and action. 
Although these two hypotheses make different predictions, at this point neither one has significantly more explanatory power than the other, and further research is needed to elaborate and test them.
Affiliation(s)
- David Kemmerer
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States
29
Wu T, Sun F, Guo Y, Zhai M, Yu S, Chu J, Yu C, Yang Y. Spatio-Temporal Dynamics of Entropy in EEGs during Music Stimulation of Alzheimer's Disease Patients with Different Degrees of Dementia. Entropy (Basel) 2022; 24:1137. [PMID: 36010801] [PMCID: PMC9407451] [DOI: 10.3390/e24081137]
Abstract
Music has become a common adjunctive treatment for Alzheimer's disease (AD) in recent years. Because AD can be classified into different degrees of dementia according to its severity (mild, moderate, severe), this study investigates whether brain responses to music stimulation differ between AD patients with different degrees of dementia. Seventeen patients with mild-to-moderate dementia, sixteen patients with severe dementia, and sixteen healthy elderly participants were selected as experimental subjects. Nonlinear characteristics were extracted from 64-channel electroencephalogram (EEG) signals acquired before, during, and after music stimulation. The results showed the following. (1) At the temporal level, both for the whole brain and for individual brain areas, the EEG responses of the mild-to-moderate patients differed statistically from those of the severe patients (p < 0.05). The nonlinear characteristics during music stimulation, including permutation entropy (PmEn), sample entropy (SampEn), and Lempel-Ziv complexity (LZC), were significantly higher in both mild-to-moderate patients and healthy controls compared to pre-stimulation, whereas they were significantly lower in severe patients. (2) At the spatial level, the EEG responses of the mild-to-moderate patients and the severe patients showed statistical differences (p < 0.05): as the degree of dementia progressed, fewer pairs of EEG characteristics showed significant differences among brain regions under music stimulation. In this paper, we found that AD patients with different degrees of dementia had different EEG responses to music stimulation. We provide a possible explanation for this discrepancy in terms of the pathological progression of AD and music cognitive hierarchy theory. Our study has adjunctive implications for clinical music therapy in AD, potentially allowing for more targeted treatment. Meanwhile, the variations in the brains of Alzheimer's patients in response to music stimulation might serve as a model for investigating the neural mechanisms of music perception.
30
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784] [DOI: 10.1016/j.neuroimage.2022.119310]
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a left-hemisphere bias for poem processing and a right-hemisphere bias for song processing. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany
- Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
31
Wang L, Ong JH, Ponsot E, Hou Q, Jiang C, Liu F. Mental representations of speech and musical pitch contours reveal a diversity of profiles in autism spectrum disorder. Autism 2022; 27:629-646. [PMID: 35848413] [PMCID: PMC10074762] [DOI: 10.1177/13623613221111207]
Abstract
Lay abstract: As a key auditory attribute of sounds, pitch is ubiquitous in our everyday listening experience involving language, music and environmental sounds. Given its critical role in auditory processing related to communication, numerous studies have investigated pitch processing in autism spectrum disorder. However, the findings have been mixed, reporting either enhanced, typical or impaired performance among autistic individuals. By investigating top-down comparisons of internal mental representations of pitch contours in speech and music, this study shows for the first time that, while autistic individuals exhibit diverse profiles of pitch processing compared to non-autistic individuals, their mental representations of pitch contours are typical across domains. These findings suggest that pitch-processing mechanisms are shared across domains in autism spectrum disorder and provide theoretical implications for using music to improve speech for those autistic individuals who have language problems.
Affiliation(s)
- Li Wang
- University of Reading, UK; The Chinese University of Hong Kong, Hong Kong
- Qingqi Hou
- Nanjing Normal University of Special Education, China
32
Cecchetti G, Herff SA, Rohrmeier MA. Musical Garden Paths: Evidence for Syntactic Revision Beyond the Linguistic Domain. Cogn Sci 2022; 46:e13165. [PMID: 35738498] [PMCID: PMC9286404] [DOI: 10.1111/cogs.13165]
Abstract
While theoretical and empirical insights suggest that the capacity to represent and process complex syntax is crucial in language as well as other domains, it is still unclear whether specific parsing mechanisms are also shared across domains. Focusing on the musical domain, we developed a novel behavioral paradigm to investigate whether a phenomenon of syntactic revision occurs in the processing of tonal melodies under analogous conditions as in language. We present the first proof-of-existence for syntactic revision in a set of tonally ambiguous melodies, supporting the relevance of syntactic representations and parsing with language-like characteristics in a nonlinguistic domain. Furthermore, we find no evidence for a modulatory effect of musical training, suggesting that a general cognitive capacity, rather than explicit knowledge and strategies, may underlie the observed phenomenon in music.
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
33
Morett LM, Feiler JB, Getz LM. Elucidating the influences of embodiment and conceptual metaphor on lexical and non-speech tone learning. Cognition 2022; 222:105014. [DOI: 10.1016/j.cognition.2022.105014]
34
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057] [DOI: 10.1038/s41583-022-00578-5]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
35
Sihvonen AJ, Soinila S, Särkämö T. Post-stroke enriched auditory environment induces structural connectome plasticity: secondary analysis from a randomized controlled trial. Brain Imaging Behav 2022; 16:1813-1822. [PMID: 35352235] [PMCID: PMC9279272] [DOI: 10.1007/s11682-022-00661-6]
Abstract
Post-stroke neuroplasticity and cognitive recovery can be enhanced by multimodal stimulation via environmental enrichment. In this vein, recent studies have shown that an enriched sound environment (i.e., listening to music) during the subacute post-stroke stage improves cognitive outcomes compared to standard care. The beneficial effects of post-stroke music listening are further pronounced when the music contains singing, which enhances language recovery coupled with structural and functional connectivity changes within the language network. However, outside the language network, virtually nothing is known about the effects of an enriched sound environment on the structural connectome of the recovering post-stroke brain. Here, we report secondary outcomes from a single-blind randomized controlled trial (NCT01749709) in patients with ischaemic or haemorrhagic stroke (N = 38) who were randomly assigned to listen to vocal music, instrumental music, or audiobooks during the first 3 post-stroke months. Utilizing the longitudinal diffusion-weighted MRI data of the trial, the present study aimed to determine whether the music listening interventions induce changes in the structural white matter connectome compared to the control audiobook intervention. Both the vocal and instrumental music groups showed longitudinal increases in quantitative anisotropy in multiple left dorsal and ventral tracts, in the corpus callosum, and in the right hemisphere compared to the audiobook group; the audiobook group showed no such increases compared to either music group. This study shows that listening to music, either vocal or instrumental, promotes widespread structural connectivity changes in the post-stroke brain, providing a fertile ground for functional restoration.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; School of Health and Rehabilitation Sciences, Queensland Aphasia Research Centre and UQ Centre for Clinical Research, The University of Queensland, Brisbane, Australia
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
36
Worschech F, Altenmüller E, Jünemann K, Sinke C, Krüger THC, Scholz DS, Müller CAH, Kliegel M, James CE, Marie D. Evidence of cortical thickness increases in bilateral auditory brain structures following piano learning in older adults. Ann N Y Acad Sci 2022; 1513:21-30. [PMID: 35292982] [DOI: 10.1111/nyas.14762]
Abstract
Morphological differences in the auditory brain of musicians compared to nonmusicians are often associated with life-long musical activity. Cross-sectional studies, however, do not allow for any causal inferences, and most experimental studies testing music-driven adaptations have investigated children. Although the age at which musical training begins is widely recognized to affect neuroplasticity, there have been few longitudinal studies examining music-related changes in the brains of older adults. Using magnetic resonance imaging, we measured cortical thickness (CT) in 12 auditory-related regions of interest before and after 6 months of musical instruction in 134 healthy, right-handed, normal-hearing, musically naive older adults (64-76 years old). Prior to the study, all participants were randomly assigned either to piano training or to a musical culture/music listening group. In five regions (left Heschl's gyrus, left planum polare, bilateral superior temporal sulcus, and right Heschl's sulcus) we found an increase in CT in the piano training group compared with the musical culture group. Furthermore, CT of the right Heschl's gyrus could be identified as a morphological substrate supporting speech-in-noise perception. The results support the conclusion that playing an instrument is an effective stimulator of cortical plasticity, even in older adults.
Affiliation(s)
- Florian Worschech
- Institute for Music Physiology and Musicians' Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
- Eckart Altenmüller
- Institute for Music Physiology and Musicians' Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
- Kristin Jünemann
- Center for Systems Neuroscience, Hanover, Germany; Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Christopher Sinke
- Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Tillmann H C Krüger
- Center for Systems Neuroscience, Hanover, Germany; Division of Clinical Psychology & Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Daniel S Scholz
- Institute for Music Physiology and Musicians' Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
- Cécile A H Müller
- Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland HES-SO, Geneva, Switzerland
- Matthias Kliegel
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Center for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Geneva, Switzerland
- Clara E James
- Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland HES-SO, Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Damien Marie
- Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland HES-SO, Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
37
Cohn N, Schilperoord J. Remarks on Multimodality: Grammatical Interactions in the Parallel Architecture. Front Artif Intell 2022; 4:778060. [PMID: 35059636] [PMCID: PMC8764459] [DOI: 10.3389/frai.2021.778060]
Abstract
Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each considered as indivisible components of multimodal messages. Here, we argue that multimodality should not be characterized by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences in time (e.g., speech with gesture) or space (e.g., writing with pictures) of the sensory signals, while multimodal meaning-making balances how modalities carry "semantic weight" for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
38
Williams JA, Margulis EH, Nastase SA, Chen J, Hasson U, Norman KA, Baldassano C. High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music. J Cogn Neurosci 2022; 34:699-714. [PMID: 35015874] [DOI: 10.1162/jocn_a_01815]
Abstract
Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
39
An Introduction to Musical Interactions. Multimodal Technologies and Interaction 2022. DOI: 10.3390/mti6010004.
Abstract
The article presents a contextual survey of eight contributions in the special issue Musical Interactions (Volume I) in Multimodal Technologies and Interaction. The presentation includes (1) a critical examination of what it means to be musical, to devise the concept of music proper to MTI as well as multicultural proximity, and (2) a conceptual framework for instrumentation, design, and assessment of musical interaction research through five enabling dimensions: Affordance; Design Alignment; Adaptive Learning; Second-Order Feedback; Temporal Integration. Each dimension is discussed and applied in the survey. The results demonstrate how the framework provides an interdisciplinary scope required for musical interaction, and how this approach may offer a coherent way to describe and assess approaches to research and design as well as implementations of interactive musical systems. Musical interaction stipulates musical liveness for experiencing both music and technologies. While music may be considered ontologically incomplete without a listener, musical interaction is defined as ontological completion of a state of music and listening through a listener’s active engagement with musical resources in multimodal information flow.
40
Bianco R, Novembre G, Ringer H, Kohler N, Keller PE, Villringer A, Sammler D. Lateral Prefrontal Cortex Is a Hub for Music Production from Structural Rules to Movements. Cereb Cortex 2021; 32:3878-3895. PMID: 34965579. PMCID: PMC9476625. DOI: 10.1093/cercor/bhab454.
Abstract
Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Affiliation(s)
- Roberta Bianco
- UCL Ear Institute, University College London, London WC1X 8EE, UK; Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Hanna Ringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Psychology, University of Leipzig, Leipzig 04109, Germany
- Natalie Kohler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Arno Villringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Daniela Sammler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
41
The effects of music listening on somatic symptoms and stress markers in the everyday life of women with somatic complaints and depression. Sci Rep 2021; 11:24062. PMID: 34911978. PMCID: PMC8674261. DOI: 10.1038/s41598-021-03374-w.
Abstract
Despite a growing body of literature documenting the health-beneficial effects of music, empirical research on the effects of music listening in individuals with psychosomatic disorders is scarce. Using an ambulatory assessment design, we tested whether music listening predicts changes in somatic symptoms, subjective, and biological stress levels, and examined potential mediating processes, in the everyday life of 58 women (M = 27.7 years) with somatic symptom disorder (SSD) and depressive disorders (DEP). Multilevel models revealed that music listening predicted lower subjective stress ratings (p ≤ 0.02) irrespective of mental health condition, which, in turn, predicted lower somatic symptoms (p ≤ 0.03). Moreover, specific music characteristics modulated somatic symptoms (p = 0.01) and autonomic activity (p = 0.03). These findings suggest that music listening might mitigate somatic symptoms predominantly via a reduction in subjective stress in women with SSD and DEP and further inform the development of targeted music interventions applicable in everyday life.
42
Zheng G, Li Y, Qi X, Zhang W, Yu Y. Mental Calculation Drives Reliable and Weak Distant Connectivity While Music Listening Induces Dense Local Connectivity. Phenomics 2021; 1:285-298. PMID: 36939768. PMCID: PMC9590531. DOI: 10.1007/s43657-021-00027-w.
Abstract
Mathematical calculation usually requires sustained attention to manipulate numbers in the mind, while listening to light music has a relaxing effect on the brain. The differences in the corresponding brain functional network topologies underlying these behaviors remain largely unknown. Here, we systematically examined the brain dynamics of four behaviors (resting with eyes closed and eyes open, tasks of music listening and mental calculation) using 64-channel electroencephalogram (EEG) recordings and graph theory analysis. We developed static and dynamic minimum spanning tree (MST) analysis methods and demonstrated that the brain network topology under mental calculation is a more line-like structure with less tree hierarchy and leaf fraction; however, the hub regions, which are mainly located in the frontal, temporal and parietal regions, grow more stable over time. In contrast, music listening drives the brain to exhibit a star-like network topology, and the hub regions are mainly located in the posterior regions. We then adopted the dynamic dissimilarity of different MSTs over time based on the graph Laplacian and revealed low dissimilarity during mental calculation. These results suggest that the human brain functional connectivity of individuals has unique dynamic diversity and flexibility under various behaviors. Supplementary Information: The online version contains supplementary material available at 10.1007/s43657-021-00027-w.
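One of the static MST measures named above, the leaf fraction, can be sketched in a few lines from a connectivity matrix. The conversion of connectivity to distance as 1 - connectivity is an illustrative assumption, not necessarily the paper's preprocessing:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_leaf_fraction(conn):
    """Leaf fraction of the MST built from a symmetric connectivity matrix.

    Stronger connections are treated as shorter distances
    (dist = 1 - conn, an illustrative choice); leaves are
    nodes of degree 1 in the resulting tree.
    """
    dist = 1.0 - np.asarray(conn, dtype=float)
    np.fill_diagonal(dist, 0.0)           # no self-edges
    mst = minimum_spanning_tree(dist).toarray()
    adjacency = (mst + mst.T) > 0         # symmetrise the tree
    degrees = adjacency.sum(axis=1)
    return float((degrees == 1).sum()) / len(dist)

# A hub strongly connected to every other node yields a star-like tree
# (high leaf fraction, as reported for music listening); a chain of
# strong links yields a line (low leaf fraction, as for calculation).
star = np.full((5, 5), 0.1)
star[0, :] = star[:, 0] = 0.9
leaf_frac = mst_leaf_fraction(star)  # 4 leaves of 5 nodes -> 0.8
```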
Affiliation(s)
- Gaoxing Zheng
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute and Research Institute of Intelligent and Complex Systems, Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai, 200433 China
- Department of Neurology, Zhongshan Hospital and Shanghai Medical College, Fudan University, Shanghai, 200032 China
- Yuzhu Li
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute and Research Institute of Intelligent and Complex Systems, Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai, 200433 China
- Xiaoying Qi
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute and Research Institute of Intelligent and Complex Systems, Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai, 200433 China
- Wei Zhang
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute and Research Institute of Intelligent and Complex Systems, Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai, 200433 China
- Yuguo Yu
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Human Phenome Institute and Research Institute of Intelligent and Complex Systems, Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai, 200433 China
43
Abstract
Increasing evidence has uncovered associations between the cognition of abstract schemas and spatial perception. Here we examine such associations for Western musical syntax, tonality. Spatial metaphors are ubiquitous when describing tonality: stable, closural tones are considered to be spatially central and, as gravitational foci, spatially lower. We investigated whether listeners, musicians and nonmusicians, indeed associate tonal relationships with visuospatial dimensions, including spatial height, centrality, laterality, and size, implicitly or explicitly, and whether such mappings are consistent with established metaphors. In the explicit paradigm, participants heard a tonality-establishing prime followed by a probe tone and coupled each probe with a subjectively appropriate location (Exp. 1) or size (Exp. 4). The implicit paradigm used a version of the Implicit Association Test to examine associations of tonal stability with vertical position (Exp. 2), lateral position (Exp. 3) and size (Exp. 5). Tonal stability was indeed associated with perceived physical space: the spatial distances between the locations associated with different scale-degrees significantly correlated with the tonal stability differences between these scale-degrees. However, inconsistently with musical discourse, stable tones were associated with leftward (instead of central) and higher (instead of lower) spatial positions. We speculate that these mappings are influenced by emotion, embodying the "good is up" metaphor, and by the spatial structure of music keyboards. Taken together, the results demonstrate a new type of cross-modal correspondence and a hitherto under-researched connotative function of musical structure. Importantly, the results suggest that the spatial mappings of an abstract domain may be independent of the spatial metaphors used to describe that domain.
44
Arabin B, Hellmeyer L, Maul J, Metz GAS. Awareness of maternal stress, consequences for the offspring and the need for early interventions to increase stress resilience. J Perinat Med 2021; 49:979-989. PMID: 34478615. DOI: 10.1515/jpm-2021-0323.
Abstract
Experimental and clinical studies suggest that prenatal experiences may influence health trajectories up to adulthood and high age. According to the developmental origins of health and disease hypothesis, exposure of pregnant women to stress, nutritional challenges, infection, violence, or war may "program" risks for diseases in later life. Stress and anxieties can exist or be provoked in parents after fertility treatment, or after information or diagnosis of fetal abnormalities, and demand simultaneous caring concepts to support the parents. In vulnerable groups, it is therefore important to increase stress resilience to avoid harmful consequences for the growing child. "Enriched environment" defines a key paradigm to decipher how interactions between genes and environment change the structure and function of the brain. The regulation of fetal hippocampal neurogenesis and morphology during pregnancy is one example of this complex interaction. Animal experiments have demonstrated that an enriched environment can revert consequences of stress in the offspring during critical periods of brain plasticity. Epigenetic markers of stress or wellbeing during pregnancy might even be diagnosed from fragments of placental DNA in the maternal circulation that show characteristic methylation patterns. The development of the fetal senses further illustrates how external stimulation may shape individual preferences. Here, we therefore not only discuss how maternal stress influences cognitive development and resilience, but also outline possibilities for non-invasive interventions for both mothers and children, summarized and evaluated in the light of their potential to improve the health of future generations.
Affiliation(s)
- Birgit Arabin
- Clara Angela Foundation, Berlin, Germany; Department of Obstetrics, Charité, Humboldt University Berlin, Berlin, Germany
- Lars Hellmeyer
- Clara Angela Foundation, Berlin, Germany; Vivantes Klinikum im Friedrichshain, Berlin, Germany
- Gerlinde A S Metz
- Clara Angela Foundation, Berlin, Germany; Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
45
Ecological and psychological factors in the cultural evolution of music. Behav Brain Sci 2021; 44:e110. PMID: 34588039. DOI: 10.1017/s0140525x20001181.
Abstract
The two target articles agree that processes of cultural evolution generate richness and diversity in music, but neither addresses this question in a focused way. We sketch one way to proceed, and hence suggest how the target articles differ not only in empirical claims, but also in their tacit, prior assumptions about the relationship between cognition and culture.
46
Di Liberto GM, Marion G, Shamma SA. Accurate Decoding of Imagined and Heard Melodies. Front Neurosci 2021; 15:673401. PMID: 34421512. PMCID: PMC8375770. DOI: 10.3389/fnins.2021.673401.
Abstract
Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG based on an experiment where professional musicians listened and imagined four Bach melodies multiple times. We demonstrate here that accurate decoding of melodies in single-subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and imagination. Furthermore, we find that greater decoding accuracies are measured for the maxCorr method than for an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals encode information beyond note timing, especially with respect to low-frequency cortical signals below 1 Hz, which are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
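The core of a maximum-correlation classifier like maxCorr is simple: correlate a held-out EEG segment with one template per melody and pick the best match. The template construction and variable names below are illustrative, not the authors' pipeline:

```python
import numpy as np

def maxcorr_decode(segment, templates):
    """Return the index of the template most correlated with `segment`.

    segment:   array of shape (channels, time)
    templates: sequence of arrays of the same shape, one per melody
    """
    flat = segment.ravel()
    corrs = [np.corrcoef(flat, t.ravel())[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Toy check: a noisy copy of template 1 should decode as melody 1.
rng = np.random.default_rng(0)
t0, t1 = rng.standard_normal((2, 4, 50))
noisy = t1 + 0.1 * rng.standard_normal((4, 50))
decoded = maxcorr_decode(noisy, [t0, t1])  # -> 1
```

In the paper the candidate segments come from repeated presentations within the same subject; the principle of choosing the maximally correlated candidate is the same.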
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Ecole Normale Supérieure, PSL University, Paris, France; Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity Centre for Biomedical Engineering, Trinity College, Trinity Institute of Neuroscience, The University of Dublin, Dublin, Ireland; Centre for Biomedical Engineering, School of Electrical and Electronic Engineering and UCD University College Dublin, Dublin, Ireland
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
47
Wang L, Beaman CP, Jiang C, Liu F. Perception and Production of Statement-Question Intonation in Autism Spectrum Disorder: A Developmental Investigation. J Autism Dev Disord 2021; 52:3456-3472. PMID: 34355295. PMCID: PMC9296411. DOI: 10.1007/s10803-021-05220-4.
Abstract
Prosody or “melody in speech” in autism spectrum disorder (ASD) is often perceived as atypical. This study examined perception and production of statements and questions in 84 children, adolescents and adults with and without ASD, as well as participants’ pitch direction discrimination thresholds. The results suggested that the abilities to discriminate (in both speech and music conditions), identify, and imitate statement-question intonation were intact in individuals with ASD across age cohorts. Sensitivity to pitch direction predicted performance on intonation processing in both groups, who also exhibited similar developmental changes. These findings provide evidence for shared mechanisms in pitch processing between speech and music, as well as associations between low- and high-level pitch processing and between perception and production of pitch.
Affiliation(s)
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- C Philip Beaman
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK.
48
Zhang X, Gong Q. Context-dependent Plasticity and Strength of Subcortical Encoding of Musical Sounds Independently Underlie Pitch Discrimination for Music Melodies. Neuroscience 2021; 472:68-89. PMID: 34358631. DOI: 10.1016/j.neuroscience.2021.07.032.
Abstract
Subcortical auditory nuclei contribute to pitch perception, but how subcortical sound encoding is related to pitch processing for music perception remains unclear. Conventionally, enhanced subcortical sound encoding is considered underlying superior pitch discrimination. However, associations between superior auditory perception and the context-dependent plasticity of subcortical sound encoding are also documented. Here, we explored the subcortical neural correlates to music pitch perception by analyzing frequency-following responses (FFRs) to musical sounds presented in a predictable context and a random context. We found that the FFR inter-trial phase-locking (ITPL) was negatively correlated with behavioral performances of discrimination of pitches in music melodies. It was also negatively correlated with the plasticity indices measuring the variability of FFRs to physically identical sounds between the two contexts. The plasticity indices were consistently positively correlated with pitch discrimination performances, suggesting the subcortical context-dependent plasticity underlying music pitch perception. Moreover, the raw FFR spectral strength was not significantly correlated with pitch discrimination performances. However, it was positively correlated with behavioral performances when the FFR ITPL was controlled by partial correlations, suggesting that the strength of subcortical sound encoding underlies music pitch perception. When the spectral strength was controlled by partial correlations, the negative ITPL-behavioral correlations were maintained. Furthermore, the FFR ITPL, the plasticity indices, and the FFR spectral strength were more correlated with pitch than with rhythm discrimination performances. These findings suggest that the context-dependent plasticity and the strength of subcortical encoding of musical sounds are independently and perhaps specifically associated with pitch perception for music melodies.
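The partial correlations used above (e.g. correlating spectral strength with behaviour while controlling for FFR ITPL) amount to correlating the residuals left after regressing the control variable out of both measures. This is a generic sketch of that computation, not the authors' code:

```python
import numpy as np

def partial_corr(x, y, control):
    """Pearson correlation of x and y after regressing `control`
    (plus an intercept) out of both variables."""
    z = np.column_stack([np.ones_like(control), control])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy check: x and y share variance through a common component e
# that survives after the control variable z is regressed out.
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
e = rng.standard_normal(200)
x = z + e
y = 2.0 * x + 1.0
r = partial_corr(x, y, z)  # close to 1: the shared residual e remains
```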
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; School of Medicine, Shanghai University, Shanghai, China.
49
Edalati M, Mahmoudzadeh M, Safaie J, Wallois F, Moghimi S. Violation of rhythmic expectancies can elicit late frontal gamma activity nested in theta oscillations. Psychophysiology 2021; 58:e13909. PMID: 34310719. PMCID: PMC9285090. DOI: 10.1111/psyp.13909.
Abstract
Rhythm processing involves building expectations according to the hierarchical temporal structure of auditory events. Although rhythm processing has been addressed in the context of predictive coding, the properties of the oscillatory response in different cortical areas are still not clear. We explored the oscillatory properties of the neural response to rhythmic incongruence and the cross-frequency coupling between multiple frequencies to further investigate the mechanisms underlying rhythm perception. We designed an experiment to investigate the neural response to rhythmic deviations in which the tone either arrived earlier than expected or the tone in the same metrical position was omitted. These two manipulations modulate the rhythmic structure differently, with the former creating a larger violation of the general structure of the musical stimulus than the latter. Both deviations resulted in an MMN response, whereas only the rhythmic deviant resulted in a subsequent P3a. Rhythmic deviants due to the early occurrence of a tone, but not omission deviants, seemed to elicit a late high gamma response (60-80 Hz) at the end of the P3a over the left frontal region, which, interestingly, correlated with the P3a amplitude over the same region and was also nested in theta oscillations. The timing of the elicited high-frequency gamma oscillations related to rhythmic deviation suggests that it might be related to the update of the predictive neural model, corresponding to the temporal structure of the events in higher-level cortical areas.
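Nesting of gamma amplitude in theta oscillations, as reported above, is commonly quantified with a phase-amplitude coupling measure such as the mean vector length (after Canolty and colleagues). The sketch below assumes already band-pass-filtered signals and is purely illustrative; the paper's exact coupling metric is not specified in the abstract:

```python
import numpy as np
from scipy.signal import hilbert

def mean_vector_length(theta_band, gamma_band):
    """Phase-amplitude coupling of gamma amplitude to theta phase,
    given two already band-pass-filtered signals."""
    phase = np.angle(hilbert(theta_band))   # instantaneous theta phase
    amp = np.abs(hilbert(gamma_band))       # gamma amplitude envelope
    return float(np.abs(np.mean(amp * np.exp(1j * phase))))

# Synthetic check: 70 Hz gamma whose envelope follows 6 Hz theta phase
# yields a clearly larger MVL than gamma with a flat envelope.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = (1 + theta) / 2 * np.sin(2 * np.pi * 70 * t)
uncoupled = np.sin(2 * np.pi * 70 * t)
mvl_c = mean_vector_length(theta, coupled)
mvl_u = mean_vector_length(theta, uncoupled)
```

In real EEG the raw MVL is usually compared against surrogate data (e.g. phase-shuffled signals) before interpretation.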
Affiliation(s)
- M Edalati
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- M Mahmoudzadeh
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
- J Safaie
- Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- F Wallois
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
- S Moghimi
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
50
Wang L, Pfordresher PQ, Jiang C, Liu F. Individuals with autism spectrum disorder are impaired in absolute but not relative pitch and duration matching in speech and song imitation. Autism Res 2021; 14:2355-2372. PMID: 34214243. DOI: 10.1002/aur.2569.
Abstract
Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls on absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and time errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. Children imitated spoken pitch more accurately than adults when it came to speech stimuli, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, which involve independent implementation of relative versus absolute features. LAY SUMMARY: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear. By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitation deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
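The absolute-versus-relative distinction above can be made concrete with a toy error metric: absolute error compares pitches directly, while relative error compares successive intervals, so a perfectly transposed imitation shows a large absolute error but zero relative error. The function, units (semitones), and values below are illustrative assumptions, not the paper's measures:

```python
import numpy as np

def pitch_matching_errors(target, imitation):
    """Mean absolute pitch error and mean interval (relative) error
    for two equal-length pitch sequences in semitones."""
    target = np.asarray(target, dtype=float)
    imitation = np.asarray(imitation, dtype=float)
    abs_err = float(np.mean(np.abs(imitation - target)))
    rel_err = float(np.mean(np.abs(np.diff(imitation) - np.diff(target))))
    return abs_err, rel_err

# Imitating the melody a minor third (3 semitones) too high:
# contour and intervals are preserved, so only absolute error suffers.
target = [60, 62, 64, 62]        # MIDI note numbers
imitation = [63, 65, 67, 65]     # same contour, transposed up
errors = pitch_matching_errors(target, imitation)  # (3.0, 0.0)
```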
Affiliation(s)
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Peter Q Pfordresher
- Department of Psychology, University at Buffalo, State University of New York, Buffalo, New York, USA
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK