1
Liu M, Teng X, Jiang J. Instrumental music training relates to intensity assessment but not emotional prosody recognition in Mandarin. PLoS One 2024; 19:e0309432. PMID: 39213300; PMCID: PMC11364251; DOI: 10.1371/journal.pone.0309432
Abstract
Building on research demonstrating the benefits of music training for emotional prosody recognition in nontonal languages, this study delves into its unexplored influence on tonal languages. In tonal languages, the acoustic similarity between lexical tones and music, along with the dual role of pitch in conveying lexical and affective meanings, creates a unique interplay. We evaluated 72 participants, half of whom had extensive instrumental music training, with the other half serving as demographically matched controls. All participants completed an online test consisting of 210 Chinese pseudosentences, each designed to express one of five emotions: happiness, sadness, fear, anger, or neutrality. Our statistical analyses, which included effect size estimates and Bayes factors, revealed that the music and nonmusic groups exhibited similar abilities in identifying the emotional prosody of the various emotions. However, the music group attributed higher intensity ratings to emotional prosodies of happiness, fear, and anger than the nonmusic group did. These findings suggest that while instrumental music training is not related to emotional prosody recognition, it does appear to be related to perceived emotional intensity. This dissociation between emotion recognition and intensity evaluation adds a new piece to the puzzle of the complex relationship between music training and emotion perception in tonal languages.
Affiliation(s)
- Mengting Liu: Department of Art, Harbin Conservatory of Music, Harbin, China
- Xiangbin Teng: Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Jun Jiang: Music College, Shanghai Normal University, Shanghai, China
2
Smith AR, Salley B, Hanson-Abromeit D, Paluch RA, Engel H, Piazza J, Kong KL. The impact of a community-based music program during infancy on the quality of parent-child language interactions. Child Dev 2024; 95:481-496. PMID: 37767574; DOI: 10.1111/cdev.14005
Abstract
The early language environment, especially high-quality, contingent parent-child language interactions, is crucial for a child's language development and later academic success. In this secondary analysis study, 89 parent-child dyads were randomly assigned to either Music Together® (music) or play date (control) classes. Children were 9 to 15 months old at baseline, primarily white (86.7%) and female (52%). Measures of conversational turns (CTs) and parental verbal quality were coded from parent-child free-play episodes at baseline, mid-intervention (month 6), and post-intervention (month 12). Results show that participants in the music group had a significantly greater increase in CT measures and in the quality of parent verbalizations post-intervention. Music enrichment programs may be a strategy to enhance parent-child language interactions during early childhood.
Affiliation(s)
- Amy R Smith: Baby Health Behavior Lab, Division of Health Services and Outcomes Research, Children's Mercy Research Institute, Children's Mercy Hospital, Kansas City, Missouri, USA; Center for Children's Healthy Lifestyles and Nutrition, University of Kansas Medical Center, Kansas City, Kansas, USA
- Brenda Salley: Department of Pediatrics, University of Kansas Medical Center, Kansas City, Kansas, USA; Department of Pediatrics, Children's Mercy Hospital, Kansas City, Missouri, USA
- Rocco A Paluch: Division of Behavioral Medicine, Department of Pediatrics, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, USA
- Hideko Engel: Baby Health Behavior Lab, Division of Health Services and Outcomes Research, Children's Mercy Research Institute, Children's Mercy Hospital, Kansas City, Missouri, USA
- Jacqueline Piazza: Division of Behavioral Medicine, Department of Pediatrics, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, New York, USA
- Kai Ling Kong: Baby Health Behavior Lab, Division of Health Services and Outcomes Research, Children's Mercy Research Institute, Children's Mercy Hospital, Kansas City, Missouri, USA; Center for Children's Healthy Lifestyles and Nutrition, University of Kansas Medical Center, Kansas City, Kansas, USA; Department of Pediatrics, University of Missouri, Kansas City, Missouri, USA
3
Pino MC, Giancola M, D'Amico S. The Association between Music and Language in Children: A State-of-the-Art Review. Children (Basel) 2023; 10:801. PMID: 37238349; DOI: 10.3390/children10050801
Abstract
Music and language are two complex systems that specifically characterize the human communication toolkit. There has been a heated debate in the literature on whether music was an evolutionary precursor to language or a byproduct of cognitive faculties that developed to support language. The present review of the existing literature on the relationship between music and language highlights that music plays a critical role in language development in early life. Our findings revealed that musical properties, such as rhythm and melody, can affect language acquisition in semantic processing and grammar, including syntactic aspects and phonological awareness. Overall, the results of the current review shed further light on the complex mechanisms underlying the music-language link, highlighting that music plays a central role in language development from the early stages of life.
Affiliation(s)
- Maria Chiara Pino: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
- Marco Giancola: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
- Simonetta D'Amico: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
4
Order of statistical learning depends on perceptive uncertainty. Curr Res Neurobiol 2023; 4:100080. PMID: 36926596; PMCID: PMC10011828; DOI: 10.1016/j.crneur.2023.100080
Abstract
Statistical learning (SL) is an innate mechanism by which the brain automatically encodes the n-th order transition probability (TP) of a sequence and grasps the uncertainty of the TP distribution. Through SL, the brain predicts a subsequent event (e(n+1)) based on the preceding n events (e(1), ..., e(n)). It is now known that uncertainty modulates prediction in top-down processing by the human predictive brain. However, the manner in which the human brain modulates the order of SL strategies based on the degree of uncertainty remains an open question. The present study examined how uncertainty modulates the neural effects of SL and whether differences in uncertainty alter the order of SL strategies, using auditory sequences in which the uncertainty of sequential information was manipulated via the conditional entropy. Three sequences with TP ratios of 90:10, 80:20, and 67:33 were prepared as low-, intermediate-, and high-uncertainty sequences, respectively (conditional entropy: 0.47, 0.72, and 0.92 bits, respectively). Neural responses were recorded while the participants listened to the three sequences. The results showed that stimuli with lower TPs elicited stronger neural responses than those with higher TPs, as demonstrated by a number of previous studies. Furthermore, we found that participants adopted higher-order SL strategies in the high-uncertainty sequence. These results may indicate that the human brain can flexibly alter the order of SL based on uncertainty, and that uncertainty may be an important factor determining the order of SL strategies. In particular, considering that a higher-order SL strategy mathematically allows the reduction of uncertainty in information, the brain may adopt higher-order SL strategies when encountering highly uncertain information in order to reduce that uncertainty. The present study may shed new light on understanding individual differences in SL performance across different uncertain situations.
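The conditional-entropy values quoted in this abstract follow directly from the binary Shannon entropy H(p) = -p log2(p) - (1-p) log2(1-p) applied to each transition-probability ratio. A quick check (treating the 67:33 ratio as exactly 2/3:1/3, which is what the reported 0.92 bits implies):

```python
import math

def conditional_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a binary transition distribution
    with probabilities p and 1 - p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for label, p in [("90:10", 0.90), ("80:20", 0.80), ("67:33", 2 / 3)]:
    # Reproduces the values reported in the abstract: 0.47, 0.72, 0.92 bits.
    print(f"{label} -> {conditional_entropy(p):.2f} bits")
```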
5
Franco F, Suttora C, Spinelli M, Kozar I, Fasolo M. Singing to infants matters: Early singing interactions affect musical preferences and facilitate vocabulary building. J Child Lang 2022; 49:552-577. PMID: 33908341; DOI: 10.1017/s0305000921000167
Abstract
This research revealed that the frequency of reported parent-infant singing interactions predicted 6-month-old infants' performance in laboratory music experiments and mediated their language development in the second year. At 6 months, infants (n = 36) were tested using a preferential listening procedure assessing their sustained attention to instrumental and sung versions of the same novel tunes, whilst the parents completed an ad hoc questionnaire assessing home musical interactions with their infants. Language development was assessed in a follow-up when the infants were 14 months old (n = 26). The main results showed that 6-month-olds preferred listening to sung rather than instrumental melodies, and that self-reported high levels of parental singing with their infants (i) were associated with a less pronounced preference for the sung over the instrumental version of the tunes at 6 months, and (ii) predicted significant advantages in the language outcomes in the second year. The results are interpreted in relation to conceptions of developmental plasticity.
Affiliation(s)
- Fabia Franco: Department of Psychology, Middlesex University, London, UK
- Chiara Suttora: Department of Psychology, University of Bologna, Bologna, Italy
- Maria Spinelli: Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Iryna Kozar: Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Mirco Fasolo: Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
6
Okano T, Daikoku T, Ugawa Y, Kanai K, Yumoto M. Perceptual uncertainty modulates auditory statistical learning: A magnetoencephalography study. Int J Psychophysiol 2021; 168:65-71. PMID: 34418465; DOI: 10.1016/j.ijpsycho.2021.08.002
Abstract
Statistical learning allows comprehension of structured information, such as that in language and music. The brain computes a sequence's transition probability and predicts future states to minimise sensory reaction and derive entropy (uncertainty) from sequential information. Neurophysiological studies have revealed that early event-related neural responses (P1 and N1) reflect statistical learning - when the brain encodes transition probability in stimulus sequences, it predicts an upcoming stimulus with a high transition probability and suppresses the early event-related responses to a stimulus with a high transition probability. This amplitude difference between high and low transition probabilities reflects statistical learning effects. However, how a sequence's transition probability ratio affects neural responses contributing to statistical learning effects remains unknown. This study investigated how transition-probability ratios or conditional entropy (uncertainty) in auditory sequences modulate the early event-related neuromagnetic responses of P1m and N1m. Sequence uncertainties were manipulated using three different transition-probability ratios: 90:10%, 80:20%, and 67:33% (conditional entropy: 0.47, 0.72, and 0.92 bits, respectively). Neuromagnetic responses were recorded when participants listened to sequential sounds with these three transition probabilities. Amplitude differences between lower and higher probabilities were larger in sequences with transition-probability ratios of 90:10% and smaller in sequences with those of 67:33%, compared to sequences with those of 80:20%. This suggests that the transition-probability ratio finely tunes P1m and N1m. Our study also showed larger amplitude differences between frequent- and rare-transition stimuli in P1m than in N1m. This indicates that information about transition-probability differences may be calculated in earlier cognitive processes.
Affiliation(s)
- Tomoko Okano: Department of Neurology, Fukushima Medical University, Fukushima, Japan; Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tatsuya Daikoku: Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Japan
- Yoshikazu Ugawa: Department of Human Neurophysiology, Fukushima Medical University, Fukushima, Japan
- Kazuaki Kanai: Department of Neurology, Fukushima Medical University, Fukushima, Japan
- Masato Yumoto: Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Advanced Medical Science Research Center, Gunma Paz University, Gunma, Japan
7
Hidalgo C, Pesnot-Lerousseau J, Marquis P, Roman S, Schön D. Rhythmic Training Improves Temporal Anticipation and Adaptation Abilities in Children With Hearing Loss During Verbal Interaction. J Speech Lang Hear Res 2019; 62:3234-3247. PMID: 31433722; DOI: 10.1044/2019_jslhr-s-18-0349
Abstract
Purpose: In this study, we investigate temporal adaptation capacities of children with normal hearing and children with cochlear implants and/or hearing aids during verbal exchange. We also address the question of the efficiency of rhythmic training on temporal adaptation during speech interaction in children with hearing loss.
Method: We recorded electroencephalogram data in children while they named pictures delivered on a screen, in alternation with a virtual partner. We manipulated the virtual partner's speech rate (fast vs. slow) and the regularity of alternation (regular vs. irregular). The group of children with normal hearing was tested once, and the group of children with hearing loss was tested twice: once after 30 min of auditory training and once after 30 min of rhythmic training.
Results: Both groups of children adjusted their speech rate to that of the virtual partner and were sensitive to the regularity of alternation, with less accurate performance following irregular turns. Moreover, irregular turns elicited a negative event-related potential in both groups, showing detection of temporal deviancy. Notably, the amplitude of this negative component positively correlated with accuracy in the alternation task. In children with hearing loss, the effect was more pronounced and longer lasting following rhythmic training compared with auditory training.
Conclusion: These results are discussed in terms of temporal adaptation abilities in speech interaction and suggest the use of rhythmic training to improve these skills in children with hearing loss.
Affiliation(s)
- Céline Hidalgo: Laboratoire Parole et Langage, CNRS, Aix-Marseille University, Aix-en-Provence, France; Institut de Neurosciences des Systèmes, Inserm, Aix-Marseille University, Marseille, France
- Patrick Marquis: Institut de Neurosciences des Systèmes, Inserm, Aix-Marseille University, Marseille, France
- Stéphane Roman: Institut de Neurosciences des Systèmes, Inserm, Aix-Marseille University, Marseille, France; Pediatric Otolaryngology Department, La Timone Children's Hospital (AP-HM), Marseille, France
- Daniele Schön: Institut de Neurosciences des Systèmes, Inserm, Aix-Marseille University, Marseille, France
8
Politimou N, Dalla Bella S, Farrugia N, Franco F. Born to Speak and Sing: Musical Predictors of Language Development in Pre-schoolers. Front Psychol 2019; 10:948. PMID: 31231260; PMCID: PMC6558368; DOI: 10.3389/fpsyg.2019.00948
Abstract
The relationship between musical and linguistic skills has received particular attention in infants and school-aged children. However, very little is known about pre-schoolers, which leaves a gap in our understanding of how these skills develop concurrently. Moreover, attention has focused on the effects of formal musical training, while neglecting the influence of informal musical activities at home. To address these gaps, in Study 1, 3- and 4-year-old children (n = 40) performed novel musical tasks (perception and production) adapted for young children in order to examine the link between musical skills and the development of key language capacities, namely grammar and phonological awareness. In Study 2, we investigated the influence of informal musical experience at home on the musical and linguistic skills of young pre-schoolers, using the same evaluation tools. We found systematic associations between distinct musical and linguistic skills. Rhythm perception and production were the best predictors of phonological awareness, while melody perception was the best predictor of grammar acquisition, a novel association not previously observed in developmental research. These associations could not be explained by variability in general cognitive functioning, such as verbal memory and non-verbal abilities. Thus, selective music-related auditory and motor skills are likely to underpin different aspects of language development and can be dissociated in pre-schoolers. We also found that informal musical experience at home contributes to the development of grammar, and that the effect of musical skills on both phonological awareness and grammar is mediated by home musical experience. These findings pave the way for the development of dedicated musical activities for pre-schoolers to support specific areas of language development.
Affiliation(s)
- Nina Politimou: Department of Psychology, Middlesex University, London, United Kingdom
- Simone Dalla Bella: International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Cognitive Psychology, University of Economics and Human Sciences in Warsaw, Warsaw, Poland
- Nicolas Farrugia: Lab-STICC, Department of Electronics, IMT Atlantique, Brest, France
- Fabia Franco: Department of Psychology, Middlesex University, London, United Kingdom
9
Tsogli V, Jentschke S, Daikoku T, Koelsch S. When the statistical MMN meets the physical MMN. Sci Rep 2019; 9:5563. PMID: 30944387; PMCID: PMC6447621; DOI: 10.1038/s41598-019-42066-4
Abstract
How do listeners respond to prediction errors within patterned sequences of sounds? To answer this question, we carried out a statistical learning study using electroencephalography (EEG). In a continuous auditory stream of sound triplets, the deviations were either (a) statistical, in terms of transitional probability, (b) physical, due to a change in sound location (left or right speaker), or (c) double deviants, i.e., a combination of the two. Statistical and physical deviants elicited a statistical mismatch negativity (MMN) and a physical MMN, respectively. Most importantly, we found that the effects of statistical and physical deviants interacted: the statistical MMN was smaller when co-occurring with a physical deviant. The results show, for the first time, that processing of prediction errors due to statistical learning is affected by prediction errors due to physical deviance, implying that prediction-error processing due to physical sound attributes suppresses processing of learned statistical properties of sounds.
Affiliation(s)
- Vera Tsogli: Department for Biological and Medical Psychology, University of Bergen, Postboks 7807, 5020 Bergen, Norway
- Sebastian Jentschke: Department for Biological and Medical Psychology, University of Bergen, Postboks 7807, 5020 Bergen, Norway; Department of Psychosocial Science, University of Bergen, Postboks 7807, 5020 Bergen, Norway
- Tatsuya Daikoku: Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
- Stefan Koelsch: Department for Biological and Medical Psychology, University of Bergen, Postboks 7807, 5020 Bergen, Norway; Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
10
Auditory midbrain coding of statistical learning that results from discontinuous sensory stimulation. PLoS Biol 2018; 16:e2005114. PMID: 30048446; PMCID: PMC6065201; DOI: 10.1371/journal.pbio.2005114
Abstract
Detecting regular patterns in the environment, a process known as statistical learning, is essential for survival. Neuronal adaptation is a key mechanism in the detection of patterns that are continuously repeated across short (seconds to minutes) temporal windows. Here, we found in mice that a subcortical structure in the auditory midbrain was sensitive to patterns that were repeated discontinuously, in a temporally sparse manner, across windows of minutes to hours. Using a combination of behavioral, electrophysiological, and molecular approaches, we found changes in neuronal response gain that varied in mechanism with the degree of sound predictability and resulted in changes in frequency coding. Analysis of population activity (structural tuning) revealed an increase in frequency classification accuracy in the context of increased overlap in responses across frequencies. The increase in accuracy and overlap was paralleled at the behavioral level in an increase in generalization in the absence of diminished discrimination. Gain modulation was accompanied by changes in gene and protein expression, indicative of long-term plasticity. Physiological changes were largely independent of corticofugal feedback, and no changes were seen in upstream cochlear nucleus responses, suggesting a key role of the auditory midbrain in sensory gating. Subsequent behavior demonstrated learning of predictable and random patterns and their importance in auditory conditioning. Using longer timescales than previously explored, the combined data show that the auditory midbrain codes statistical learning of temporally sparse patterns, a process that is critical for the detection of relevant stimuli in the constant soundscape that the animal navigates through.

Some things are learned simply because they are there and not because they are relevant at that moment in time. This is particularly true of surrounding sounds, which we process automatically and continuously, detecting their repetitive patterns or singularities. Learning about rewards and punishment is typically attributed to cortical structures in the brain and known to occur over long time windows. Learning of surrounding regularities, on the other hand, is attributed to subcortical structures and has been shown to occur in seconds. The brain can, however, also detect the regularity in sounds that are discontinuously repeated across intervals of minutes and hours. For example, we learn to identify people by the sound of their steps through an unconscious process involving repeated but isolated exposures to the coappearance of sound and person. Here, we show that a subcortical structure, the auditory midbrain, can code such temporally spread regularities. Neurons in the auditory midbrain changed their response pattern in mice that heard a fixed tone whenever they went into one room in the environment they lived in. Learning of temporally spread sound patterns can, therefore, occur in subcortical structures.
11
Daikoku T. Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty. Brain Sci 2018; 8:E114. PMID: 29921829; PMCID: PMC6025354; DOI: 10.3390/brainsci8060114
Abstract
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies has suggested that SL can be reflected in neurophysiological responses within the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates of SL in music and language, as well as work indicating impaired SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) in the SL strategies of the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for application to therapy and pedagogy from the perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
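To make the n-th order transitional probabilities discussed in this review concrete, they can be estimated from any symbol sequence by counting contexts of length n. A minimal sketch (the helper name and toy tone sequence are illustrative, not from the paper):

```python
from collections import Counter

def transition_probabilities(seq, n=1):
    """Estimate n-th order transition probabilities
    P(next symbol | preceding n symbols) from a sequence."""
    context_counts = Counter()  # how often each length-n context occurs
    pair_counts = Counter()     # how often each (context, next) pair occurs
    for i in range(len(seq) - n):
        ctx, nxt = tuple(seq[i:i + n]), seq[i + n]
        context_counts[ctx] += 1
        pair_counts[(ctx, nxt)] += 1
    return {(ctx, nxt): c / context_counts[ctx]
            for (ctx, nxt), c in pair_counts.items()}

# Toy sequence: after 'A', 'B' follows 3 times out of 4 and 'C' once.
probs = transition_probabilities("ABABABAC", n=1)
```

Higher-order statistics follow by raising `n`, which is exactly the "hierarchy of local statistics" the review contrasts with global entropy.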
Affiliation(s)
- Tatsuya Daikoku: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
12
Paraskevopoulos E, Chalas N, Bamidis P. Functional connectivity of the cortical network supporting statistical learning in musicians and non-musicians: an MEG study. Sci Rep 2017; 7:16268. PMID: 29176557; PMCID: PMC5701139; DOI: 10.1038/s41598-017-16592-y
Abstract
Statistical learning is a cognitive process of great importance for the detection and representation of environmental regularities. Complex cognitive processes such as statistical learning usually emerge through the activation of widespread cortical areas functioning in dynamic networks. The present study investigated the cortical large-scale network supporting statistical learning of tone sequences in humans. The reorganization of this network related to musical expertise was assessed via a cross-sectional comparison of a group of musicians to a group of non-musicians. Cortical responses to a statistical learning paradigm incorporating an oddball approach were measured via magnetoencephalographic (MEG) recordings. Large-scale connectivity of the cortical activity was calculated via a statistical comparison of the transfer entropy estimated in the sources' activity. Results revealed the functional architecture of the network supporting the processing of statistical learning, highlighting the prominent role of information-processing pathways that bilaterally connect superior temporal and intraparietal sources with the left inferior frontal gyrus (IFG). Musical expertise is related to extensive reorganization of this network: the group of musicians showed a network comprising more widespread and distributed cortical areas, as well as enhanced global efficiency and an increased contribution of additional temporal and frontal sources to the information-processing pathway.
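The transfer entropy used here to estimate directed connectivity can be illustrated on discrete sequences. Below is a simplified plug-in estimator with history length 1, not the authors' MEG pipeline; the toy coupling (Y copies X with a one-step lag) and all names are illustrative:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y) in bits with
    history length 1: sum over p(y1, y0, x0) * log2(p(y1|y0,x0) / p(y1|y0))."""
    triples = Counter()   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter()  # (y_t, x_t)
    pairs_yy = Counter()  # (y_{t+1}, y_t)
    singles = Counter()   # y_t
    n = len(y) - 1
    for t in range(n):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles[y[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]  # Y copies X with a one-step lag, so X strongly drives Y
```

With this coupling, TE(X -> Y) approaches 1 bit while TE(Y -> X) stays near zero, capturing the directionality that makes transfer entropy suitable for connectivity analysis.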
Affiliation(s)
- Evangelos Paraskevopoulos: School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Institute for Biomagnetism and Biosignalanalysis, University of Münster, D-48149 Münster, Germany
- Nikolas Chalas: School of Biology, Faculty of Science, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Panagiotis Bamidis: School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
13
Ong JH, Burnham D, Escudero P, Stevens CJ. Effect of Linguistic and Musical Experience on Distributional Learning of Nonnative Lexical Tones. J Speech Lang Hear Res 2017; 60:2769-2780. PMID: 28975194; DOI: 10.1044/2016_jslhr-s-16-0080
Abstract
Purpose: Evidence suggests that extensive experience with lexical tones or musical training provides an advantage in perceiving nonnative lexical tones. This investigation concerns whether such an advantage is evident in learning nonnative lexical tones based on the distributional structure of the input.
Method: Using an established protocol, distributional learning of lexical tones was investigated with tone-language (Mandarin) listeners with no musical training (Experiment 1) and nontone-language (Australian English) listeners with musical training (Experiment 2). Within each experiment, participants were trained on a bimodal (2-peak) or a unimodal (single-peak) distribution along a continuum spanning a Thai lexical tone minimal pair. Discrimination performance on the target minimal pair was assessed before and after training.
Results: Mandarin nonmusicians exhibited clear distributional learning (listeners in the bimodal, but not the unimodal, condition improved significantly as a function of training), whereas Australian English musicians did not (listeners in both the bimodal and unimodal conditions improved as a function of training).
Conclusions: Our findings suggest that veridical perception of lexical tones is not sufficient for distributional learning of nonnative lexical tones to occur. Rather, distributional learning appears to be modulated by domain-specific pitch experience and is possibly constrained by top-down interference.
Affiliation(s)
- Jia Hoong Ong
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales, Australia
- Denis Burnham
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales, Australia
- Paola Escudero
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales, Australia
- Catherine J Stevens
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales, Australia
14
Mandikal Vasuki PR, Sharma M, Ibrahim R, Arciuli J. Statistical learning and auditory processing in children with music training: An ERP study. Clin Neurophysiol 2017; 128:1270-1281. [DOI: 10.1016/j.clinph.2017.04.010]
15
Abstract
Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates the neural response, as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing, compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
Affiliation(s)
- Simone Falk
- Aix-Marseille Univ, LPL, UMR 7309, CNRS, Aix-en-Provence, France
- Université Sorbonne Nouvelle Paris-3, LPP, UMR 7018, CNRS, Paris, France
- Ludwig-Maximilians-University, Munich, Germany
16
Jantzen MG. Toward a More Conclusive Understanding of the Relationship between Musical Training and Reading. Front Psychol 2017; 8:263. [PMID: 28289396] [PMCID: PMC5326772] [DOI: 10.3389/fpsyg.2017.00263]
17
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning. Hear Res 2016; 342:112-123. [DOI: 10.1016/j.heares.2016.10.008]
18
Zioga I, Di Bernardi Luft C, Bhattacharya J. Musical training shapes neural responses to melodic and prosodic expectation. Brain Res 2016; 1650:267-282. [PMID: 27622645] [PMCID: PMC5069926] [DOI: 10.1016/j.brainres.2016.09.015]
Abstract
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect of general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.
Highlights:
- Melodic expectancy influences the processing of prosodic expectancy.
- Musical expertise modulates pitch processing in music and language.
- Musicians have a more refined response to pitch.
- Musicians' neural responses are proportional to their level of musical expertise.
- Possible association between the P200 neural component and behavioural facilitation.
Affiliation(s)
- Ioanna Zioga
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom.
- Caroline Di Bernardi Luft
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
- School of Biological and Chemical Sciences, Queen Mary, University of London, Mile End Rd, London E1 4NS, United Kingdom
- Joydeep Bhattacharya
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
19
Patel AD, Morgan E. Exploring Cognitive Relations Between Prediction in Language and Music. Cogn Sci 2016; 41 Suppl 2:303-320. [DOI: 10.1111/cogs.12411]
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University
- Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto
20
Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning. Brain Topogr 2016; 30:220-232. [DOI: 10.1007/s10548-016-0518-y]
21
Daikoku T, Yatomi Y, Yumoto M. Pitch-class distribution modulates the statistical learning of atonal chord sequences. Brain Cogn 2016; 108:1-10. [PMID: 27429093] [DOI: 10.1016/j.bandc.2016.06.008]
Abstract
The present study investigated whether neural responses could demonstrate the statistical learning of chord sequences and how the perception underlying a pitch class can affect the statistical learning of chord sequences. Neuromagnetic responses to two chord sequences of augmented triads that were presented every 0.5s were recorded from fourteen right-handed participants. One sequence was a series of 360 chord triplets, each of which consisted of three chords in the same pitch class (clustered pitch-classes sequences). The other sequence was a series of 360 chord triplets, each of which consisted of three chords in different pitch classes (dispersed pitch-classes sequences). The order of the triplets was constrained by a first-order Markov stochastic model such that a forthcoming triplet was statistically defined by the most recent triplet (80% for one; 20% for the other two). We performed a repeated-measures ANOVA with the peak amplitude and latency of the P1m, N1m and P2m. In the clustered pitch-classes sequences, the P1m responses to the triplets that appeared with higher transitional probability were significantly reduced compared with those with lower transitional probability, whereas no significant result was detected in the dispersed pitch-classes sequences. Neuromagnetic significance was concordant with the results of familiarity interviews conducted after each learning session. The P1m response is a useful index for the statistical learning of chord sequences. Domain-specific perception based on the pitch class may facilitate the domain-general statistical learning of chord sequences.
Affiliation(s)
- Tatsuya Daikoku
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yutaka Yatomi
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Masato Yumoto
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
22
Deschamps I, Hasson U, Tremblay P. The Structural Correlates of Statistical Information Processing during Speech Perception. PLoS One 2016; 11:e0149375. [PMID: 26919234] [PMCID: PMC4771024] [DOI: 10.1371/journal.pone.0149375]
Abstract
The processing of continuous and complex auditory signals such as speech relies on the ability to use statistical cues (e.g. transitional probabilities). In this study, participants heard short auditory sequences composed either of Italian syllables or bird songs and completed a regularity-rating task. Behaviorally, participants were better at differentiating between levels of regularity in the syllable sequences than in the bird song sequences. Inter-individual differences in sensitivity to regularity for speech stimuli were correlated with variations in surface-based cortical thickness (CT). These correlations were found in several cortical areas including regions previously associated with statistical structure processing (e.g. bilateral superior temporal sulcus, left precentral sulcus and inferior frontal gyrus), as well as other regions (e.g. left insula, bilateral superior frontal gyrus/sulcus and supramarginal gyrus). In all regions, this correlation was positive, suggesting that thicker cortex is related to higher sensitivity to variations in the statistical structure of auditory sequences. Overall, these results suggest that inter-individual differences in CT within a distributed network of cortical regions involved in statistical structure processing, attention and memory are predictive of the ability to detect statistical structure in auditory speech sequences.
Affiliation(s)
- Isabelle Deschamps
- Département de Réadaptation, Université Laval, Québec City, QC, Canada
- Centre de Recherche de l’Institut Universitaire en santé mentale de Québec, Québec City, QC, Canada
- Uri Hasson
- Center for Mind & Brain Sciences (CIMeC), University of Trento, Mattarello (TN), Italy
- Pascale Tremblay
- Département de Réadaptation, Université Laval, Québec City, QC, Canada
- Centre de Recherche de l’Institut Universitaire en santé mentale de Québec, Québec City, QC, Canada
23
Koelsch S, Busch T, Jentschke S, Rohrmeier M. Under the hood of statistical learning: A statistical MMN reflects the magnitude of transitional probabilities in auditory sequences. Sci Rep 2016; 6:19741. [PMID: 26830652] [PMCID: PMC4735647] [DOI: 10.1038/srep19741]
Abstract
Within the framework of statistical learning, many behavioural studies investigated the processing of unpredicted events. However, surprisingly few neurophysiological studies are available on this topic, and no statistical learning experiment has investigated electroencephalographic (EEG) correlates of processing events with different transition probabilities. We carried out an EEG study with a novel variant of the established statistical learning paradigm. Timbres were presented in isochronous sequences of triplets. The first two sounds of all triplets were equiprobable, while the third sound occurred with either low (10%), intermediate (30%), or high (60%) probability. Thus, the occurrence probability of the third item of each triplet (given the first two items) was varied. Compared to high-probability triplet endings, endings with low and intermediate probability elicited an early anterior negativity that had an onset around 100 ms and was maximal at around 180 ms. This effect was larger for events with low than for events with intermediate probability. Our results reveal that, when predictions are based on statistical learning, events that do not match a prediction evoke an early anterior negativity, with the amplitude of this mismatch response being inversely related to the probability of such events. Thus, we report a statistical mismatch negativity (sMMN) that reflects statistical learning of transitional probability distributions that go beyond auditory sensory memory capabilities.
Affiliation(s)
- Stefan Koelsch
- University of Bergen, Department for Biological and Medical Psychology, Bergen, 5009, Norway
- Tobias Busch
- Freie Universität Berlin, Department for Educational Sciences and Psychology, Berlin, 14195, Germany
- Sebastian Jentschke
- University of Bergen, Department for Biological and Medical Psychology, Bergen, 5009, Norway
- Martin Rohrmeier
- Technische Universität Dresden, Institut für Kunst- und Musikwissenschaft, Dresden, 01219, Germany
24
Kotchoubey B, Pavlov YG, Kleber B. Music in Research and Rehabilitation of Disorders of Consciousness: Psychological and Neurophysiological Foundations. Front Psychol 2015; 6:1763. [PMID: 26640445] [PMCID: PMC4661237] [DOI: 10.3389/fpsyg.2015.01763]
Abstract
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli with their corresponding features in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor speaking in favor of musical stimulation in DoC is the low efficiency of visual stimulation due to prevalent impairments of vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows us to use auditory stimulation at various levels of complexity. The current paper reviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may even exist when all higher-level cognitions are lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward-system and activity in the motor system respectively, thus serving as a starting point for rehabilitation.
Affiliation(s)
- Boris Kotchoubey
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
- Yuri G. Pavlov
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
- Department of Psychology, Ural Federal University, Yekaterinburg, Russia
- Boris Kleber
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
25
Ravignani A, Westphal-Fitch G, Aust U, Schlumpp MM, Fitch WT. More than one way to see it: Individual heuristics in avian visual computation. Cognition 2015; 143:13-24. [PMID: 26113444] [PMCID: PMC4710635] [DOI: 10.1016/j.cognition.2015.05.021]
Abstract
Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species' ability to process pattern classes or different species' performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds' choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.
Affiliation(s)
- Andrea Ravignani
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Language Evolution and Computation Research Unit, University of Edinburgh, EH8 9AD Edinburgh, UK
- Gesche Westphal-Fitch
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Ulrike Aust
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Martin M Schlumpp
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Haidlhof Research Station, University of Vienna/University of Veterinary Medicine Vienna/Messerli Research Institute, 2540 Bad Vöslau, Austria
- W Tecumseh Fitch
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Haidlhof Research Station, University of Vienna/University of Veterinary Medicine Vienna/Messerli Research Institute, 2540 Bad Vöslau, Austria
26
François C, Grau-Sánchez J, Duarte E, Rodriguez-Fornells A. Musical training as an alternative and effective method for neuro-education and neuro-rehabilitation. Front Psychol 2015; 6:475. [PMID: 25972820] [PMCID: PMC4411999] [DOI: 10.3389/fpsyg.2015.00475]
Abstract
In the last decade, important advances in cognitive science, psychology, and neuroscience have contributed greatly to improving our knowledge of brain functioning. More recently, a line of research has developed that aims to use musical training and practice as alternative tools for boosting specific perceptual, motor, cognitive, and emotional skills both in healthy populations and in neurologic patients. These findings offer great hope for better treatment of language-based learning disorders and of motor impairment in chronic non-communicable diseases. In the first part of this review, we highlight several studies showing that learning to play a musical instrument can induce substantial neuroplastic changes in cortical and subcortical regions of motor, auditory, and speech processing networks in healthy populations. In the second part, we provide an overview of the evidence that musical training can be an alternative, low-cost, and effective method for treating populations with language-based learning impairments. We then report results of the few studies showing that training with musical instruments can have positive effects on the motor, emotional, and cognitive deficits observed in patients with non-communicable diseases such as stroke or Parkinson disease. Despite inherent differences between musical training in educational and rehabilitation contexts, these results favor the idea that the structural, multimodal, and emotional properties of musical training can play an important role in developing new, creative, and cost-effective intervention programs for education and rehabilitation in the near future.
Affiliation(s)
- Clément François
- Department of Basic Psychology, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain
- Jennifer Grau-Sánchez
- Department of Basic Psychology, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain
- Esther Duarte
- Department of Physical Medicine and Rehabilitation, Parc de Salut Mar, Hospitals del Mar i de l’Esperança, Barcelona, Spain
- Antoni Rodriguez-Fornells
- Department of Basic Psychology, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain
- Catalan Institution for Research and Advanced Studies, Barcelona, Spain
27
Plante E, Almryde K, Patterson DK, Vance CJ, Asbjørnsen AE. Language lateralization shifts with learning by adults. Laterality 2014; 20:306-325. [PMID: 25285756] [DOI: 10.1080/1357650x.2014.963597]
Abstract
For the majority of the population, language is a left-hemisphere lateralized function. During childhood, a pattern of increasing left lateralization for language has been described in brain imaging studies, suggesting that this trait develops. This development could reflect change due to brain maturation or change due to skill acquisition, given that children acquire and refine language skills as they mature. We test the possibility that skill acquisition, independent of age-associated maturation, can result in shifts in language lateralization in classic language cortex. We imaged adults exposed to an unfamiliar language during three successive fMRI scans. Participants were then asked to identify specific words embedded in Norwegian sentences. Exposure to these sentences, relative to complex tones, resulted in consistent activation in the left and right superior temporal gyrus. Activation in this region became increasingly left-lateralized with repeated exposure to the unfamiliar language. These results demonstrate that shifts in lateralization can be produced in the short term within a learning context, independent of maturation.
Affiliation(s)
- Elena Plante
- Department of Speech, Language, & Hearing Sciences, University of Arizona, Tucson, AZ, USA
28
François C, Jaillet F, Takerkart S, Schön D. Faster sound stream segmentation in musicians than in nonmusicians. PLoS One 2014; 9:e101340. [PMID: 25014068] [PMCID: PMC4094420] [DOI: 10.1371/journal.pone.0101340]
Abstract
The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se, but rather only its outcome. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning takes place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of Event-Related Potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training effect from music to sound stream segmentation in general.
Affiliation(s)
- Clément François
- Cognition and Brain Plasticity Unit, Institute of Biomedicine Research of Bellvitge, Barcelona, Spain
- Department of Basic Psychology, University of Barcelona, Barcelona, Spain
- Florent Jaillet
- Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Aix-Marseille Université, Centre National de la Recherche Scientifique, Marseille, France
- Sylvain Takerkart
- Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Aix-Marseille Université, Centre National de la Recherche Scientifique, Marseille, France
- Daniele Schön
- Institut de Neurosciences des Systèmes Unité 1106, Aix-Marseille Université, Institut National de la Santé Et de la Recherche Médicale, Marseille, France
29
Marozeau J, Innes-Brown H, Blamey PJ. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant. Front Psychol 2013; 4:790. [PMID: 24223563] [PMCID: PMC3818467] [DOI: 10.3389/fpsyg.2013.00790]
Abstract
Our ability to listen selectively to single sound sources in complex auditory environments is termed "auditory stream segregation." This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference along the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
Affiliation(s)
- Jeremy Marozeau
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Bionics Institute, Melbourne, VIC, Australia
30
Thalamocortical mechanisms for integrating musical tone and rhythm. Hear Res 2013; 308:50-59. [PMID: 24103509] [DOI: 10.1016/j.heares.2013.09.017]
Abstract
Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called "Core" and "Matrix" thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate acoustic content within rhythm context in auditory cortex using classical types of "driving" and "modulatory" mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms.