1. Hartston M, Lulav-Bash T, Goldstein-Marcusohn Y, Avidan G, Hadad BS. Perceptual narrowing continues throughout childhood: Evidence from specialization of face processing. J Exp Child Psychol 2024; 245:105964. PMID: 38823356. DOI: 10.1016/j.jecp.2024.105964.
Abstract
Face recognition shows a long trajectory of development and is known to be closely associated with the development of social skills. However, it is still debated whether this long trajectory is perceptually based and what role experience-based refinements of face representations play throughout development. We examined the effects of short- and long-term experienced stimulus history on face processing, using regression biases of face representations towards the experienced mean. Children and adults performed same-different judgments in a serial discrimination task in which two consecutive faces were drawn from a distribution of morphed faces. The results show that face recognition continues to improve after 9 years of age, with more pronounced improvements for own-race faces. This increased narrowing with age is also indicated by children's similar use of stimulus statistics for own-race and other-race faces, in contrast to adults' differential use of the overall stimulus history for these two face types. Increased face proficiency in adulthood renders the perceptual system less tuned to other-race face statistics. Altogether, the results demonstrate associations between levels of specialization and the extent to which perceptual representations become narrowly tuned with age.
Affiliation(s)
- Marissa Hartston: Department of Special Education, Faculty of Education, University of Haifa, Haifa 3498838, Israel
- Tal Lulav-Bash: Department of Special Education, Faculty of Education, University of Haifa, Haifa 3498838, Israel; Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Yael Goldstein-Marcusohn: Department of Special Education, Faculty of Education, University of Haifa, Haifa 3498838, Israel
- Galia Avidan: Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Bat-Sheva Hadad: Department of Special Education, Faculty of Education, University of Haifa, Haifa 3498838, Israel; Edmond J. Safra Brain Research Center, University of Haifa, Haifa 3498838, Israel
2. Skelton AE, Franklin A, Bosten JM. Colour vision is aligned with natural scene statistics at 4 months of age. Dev Sci 2023; 26:e13402. PMID: 37138516. DOI: 10.1111/desc.13402.
Abstract
Visual perception in adult humans is thought to be tuned to represent the statistical regularities of natural scenes. For example, in adults, visual sensitivity to different hues shows an asymmetry that coincides with the statistical regularities of colour in the natural world. Infants are sensitive to statistical regularities in social and linguistic stimuli, but whether infants' visual systems are tuned to natural scene statistics is currently unclear. We measured colour discrimination in infants to investigate whether the visual system can represent chromatic scene statistics in very early life. Our results reveal the earliest association between vision and natural scene statistics yet found: even at 4 months of age, colour vision is aligned with the distributions of colours in natural scenes.
Research highlights:
- Infants' colour sensitivity is aligned with the distribution of colours in the natural world, as it is in adults.
- At just 4 months, infants' visual systems are tailored to extract and represent the statistical regularities of the natural world.
- This points to a drive for the human brain to represent statistical regularities even at a young age.
Affiliation(s)
- Alice E Skelton: The Sussex Colour Group & Sussex Baby Lab, University of Sussex, Brighton, UK
- Anna Franklin: The Sussex Colour Group & Sussex Baby Lab, University of Sussex, Brighton, UK
- Jenny M Bosten: The Sussex Vision Lab, University of Sussex, Brighton, UK
3. Nguyen T, Flaten E, Trainor LJ, Novembre G. Early social communication through music: State of the art and future perspectives. Dev Cogn Neurosci 2023; 63:101279. PMID: 37515832. PMCID: PMC10407289. DOI: 10.1016/j.dcn.2023.101279. Open access.
Abstract
A growing body of research shows that the universal capacity for music perception and production emerges early in development. Possibly building on this predisposition, caregivers around the world often communicate with infants using songs or speech entailing song-like characteristics. This suggests that music might be one of the earliest developing and most accessible forms of interpersonal communication, providing a platform for studying early communicative behavior. However, little research has examined music in truly communicative contexts. The current work aims to facilitate the development of experimental approaches that rely on dynamic and naturalistic social interactions. We first review two longstanding lines of research that examine musical interactions by focusing either on the caregiver or the infant. These include defining the acoustic and non-acoustic features that characterize infant-directed (ID) music, as well as behavioral and neurophysiological research examining infants' processing of musical timing and pitch. Next, we review recent studies looking at early musical interactions holistically. This research focuses on how caregivers and infants interact using music to achieve co-regulation, mutual engagement, and increase affiliation and prosocial behavior. We conclude by discussing methodological, technological, and analytical advances that might empower a comprehensive study of musical communication in early childhood.
Affiliation(s)
- Trinh Nguyen: Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
- Erica Flaten: Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada
- Laurel J Trainor: Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, Canada
- Giacomo Novembre: Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
4. Merseal HM, Beaty RE, Kenett YN, Lloyd-Cox J, de Manzano Ö, Norgaard M. Representing melodic relationships using network science. Cognition 2023; 233:105362. PMID: 36628852. DOI: 10.1016/j.cognition.2022.105362.
Abstract
Music is a complex system consisting of many dimensions and hierarchically organized information, the organization of which we do not yet fully understand. Network science provides a powerful approach to representing such complex systems, from the social networks of people to modelling the underlying network structures of different cognitive mechanisms. In the present research, we explored whether network science methodology can be extended to model the melodic patterns underlying expert improvised music. Using a large corpus of transcribed improvisations, we constructed a network model in which 5-pitch sequences were linked depending on consecutive occurrences, constituting 116,403 nodes (sequences) and 157,429 edges connecting them. We then investigated whether mathematical graph modelling relates to musical characteristics in real-world listening situations via a behavioral experiment paralleling those used to examine language. We found that as melodic distance within the network increased, participants judged melodic sequences as less related. Moreover, the relationship between distance and reaction time (RT) judgments was quadratic: participants slowed in RT up to distance four, then accelerated, paralleling findings from research on language networks. This study offers insights into the hidden network structure of improvised tonal music and suggests that humans are sensitive to the property of melodic distance in this network. More generally, our work demonstrates the similarity between music and language as complex systems, and how network science methods can be used to quantify different aspects of their complexity.
Affiliation(s)
- Hannah M Merseal: Department of Psychology, Pennsylvania State University, United States
- Roger E Beaty: Department of Psychology, Pennsylvania State University, United States
- Yoed N Kenett: Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Israel
- James Lloyd-Cox: Department of Cognitive Neuroscience, Goldsmiths, University of London, United Kingdom
- Örjan de Manzano: Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Germany
- Martin Norgaard: Department of Music Education, Georgia State University, United States
5. Welch D, Reybrouck M, Podlipniak P. Meaning in Music Is Intentional, but in Soundscape It Is Not - A Naturalistic Approach to the Qualia of Sounds. Int J Environ Res Public Health 2022; 20:269. PMID: 36612591. PMCID: PMC9819651. DOI: 10.3390/ijerph20010269.
Abstract
The sound environment and music intersect in several ways and the same holds true for the soundscape and our internal response to listening to music. Music may be part of a sound environment or take on some aspects of environmental sound, and therefore some of the soundscape response may be experienced alongside the response to the music. At a deeper level, coping with music, spoken language, and the sound environment may all have influenced our evolution, and the cognitive-emotional structures and responses evoked by all three sources of acoustic information may be, to some extent, the same. This paper distinguishes and defines the extent of our understanding about the interplay of external sound and our internal response to it in both musical and real-world environments. It takes a naturalistic approach to music/sound and music-listening/soundscapes to describe in objective terms some mechanisms of sense-making and interactions with the sounds. It starts from a definition of sound as vibrational and transferable energy that impinges on our body and our senses, with a dynamic tension between lower-level coping mechanisms and higher-level affective and cognitive functioning. In this way, we establish both commonalities and differences between musical responses and soundscapes. Future research will allow this understanding to grow and be refined further.
Affiliation(s)
- David Welch: Audiology Section, School of Population Health, University of Auckland, Auckland 2011, New Zealand
- Mark Reybrouck: Faculty of Arts, University of Leuven, 3000 Leuven, Belgium; Department of Art History, Musicology and Theater Studies, IPEM Institute for Psychoacoustics and Electronic Music, 9000 Ghent, Belgium
- Piotr Podlipniak: Institute of Musicology, Adam Mickiewicz University in Poznań, 61-712 Poznań, Poland
6. Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception: A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. PMID: 36372030. DOI: 10.1016/j.plrev.2022.10.004.
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical, or neurobiological frameworks, without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). In illustrating the findings related to each hypothesis, we highlight their major conceptual, methodological, and terminological shortcomings. To provide a unitary framework for understanding C/D, we bring together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and therefore elicits defensive behavioral reactions and neural responses indicating aversion. We therefore stress the primacy of vocality and roughness as key factors in explaining the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research on C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano: Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy
- Peter Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark
- Elvira Brattico: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
7. Leongómez JD, Havlíček J, Roberts SC. Musicality in human vocal communication: an evolutionary perspective. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200391. PMID: 34775823. PMCID: PMC8591388. DOI: 10.1098/rstb.2020.0391. Open access.
Abstract
Studies show that specific vocal modulations, akin to those of infant-directed speech (IDS) and perhaps music, play a role in communicating intentions and mental states during human social interaction. Based on this, we propose a model for the evolution of musicality, the capacity to process musical information, in relation to human vocal communication. We suggest that a complex social environment, with strong social bonds, promoted the appearance of musicality-related abilities. These social bonds were not limited to those between offspring and mothers or other carers, although these may have been especially influential in view of the altriciality of human infants. The model can be further tested in other species by comparing levels of sociality and complexity of vocal communication. By integrating several theories, our model presents a radically different view of musicality, not limited to specifically musical scenarios, but one in which this capacity originally evolved to aid parent-infant communication and bonding, and even today plays a role not only in music but also in IDS, as well as in some adult-directed speech contexts. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Affiliation(s)
- Juan David Leongómez: Human Behaviour Lab, Faculty of Psychology, Universidad El Bosque, Bogota, Colombia
- Jan Havlíček: Department of Zoology, Charles University, Prague, Czech Republic
- S. Craig Roberts: Faculty of Natural Sciences, University of Stirling, Stirling, UK
8. Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. Evolutionary Psychology 2022. DOI: 10.1007/978-3-030-76000-7_12. Open access.
9. Buren V, Müllensiefen D, Roeske TC, Degé F. What Makes Babies Musical? Conceptions of Musicality in Infants and Toddlers. Front Psychol 2021; 12:736833. PMID: 35095640. PMCID: PMC8797144. DOI: 10.3389/fpsyg.2021.736833. Open access.
Abstract
Despite major advances in research on musical ability in infants, relatively little attention has been paid to individual differences in general musicality in infants. A fundamental problem has been the lack of a clear definition of what constitutes "general musicality" or "musical ability" in infants and toddlers, resulting in a wide range of test procedures that rely on different models of musicality. However, musicality can be seen as a social construct that can take on different meanings across cultures, sub-groups, and individuals, and may be subject to change over time. Therefore, one way to get a clearer picture of infant musicality is to assess conceptions of musicality in the general population. Using this approach, we surveyed 174 German adults about their views and conceptions of the behaviors that characterize a musical child under 3 years of age. Based on previous studies of adult and child musicality, we designed a survey containing 41 statements describing musical behaviors in children. Participants were asked to rate how indicative these behaviors were of musicality in infants and toddlers. Principal component analysis (PCA) revealed four components of musical abilities and behaviors in under-3-year-olds: Musical Communication, Enthusiasm and Motivation, Adaptive Expressiveness, and Musical Abilities as traditionally defined. The professional background and musical expertise of the respondents did not significantly influence their conceptions. Our results suggest that, in order to capture musicality in young children, a wider range of skills and observable behaviors should be taken into account than those assessed by traditional musical ability tests for young children.
Affiliation(s)
- Verena Buren: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Daniel Müllensiefen: Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Tina C. Roeske: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Franziska Degé: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
10. Meng X, Kato M, Itakura S. Development of synchrony-dominant expectations in observers. Soc Dev 2021. DOI: 10.1111/sode.12556.
Affiliation(s)
- Xianwei Meng: Graduate School of Human Sciences, Osaka University, Suita, Japan
- Masaharu Kato: Center for Baby Science, Doshisha University, Kyoto, Japan
- Shoji Itakura: Center for Baby Science, Doshisha University, Kyoto, Japan
11. Mendoza JK, Fausey CM. Everyday music in infancy. Dev Sci 2021; 24:e13122. PMID: 34170059. PMCID: PMC8596421. DOI: 10.1111/desc.13122.
Abstract
Infants enculturate to their soundscape over the first year of life, yet theories of how they do so rarely make contact with details about the sounds available in everyday life. Here, we report on properties of a ubiquitous early ecology in which foundational skills get built: music. We captured daylong recordings from 35 infants ages 6-12 months at home and fully double-coded 467 h of everyday sounds for music and its features, tunes, and voices. Analyses of this first-of-its-kind corpus revealed two distributional properties of infants' everyday musical ecology. First, infants encountered vocal music in over half, and instrumental in over three-quarters, of everyday music. Live sources generated one-third, and recorded sources three-quarters, of everyday music. Second, infants did not encounter each individual tune and voice in their day equally often. Instead, the most available identity cumulated to many more seconds of the day than would be expected under a uniform distribution. These properties of everyday music in human infancy are different from what is discoverable in environments highly constrained by context (e.g., laboratories) and time (e.g., minutes rather than hours). Together with recent insights about the everyday motor, language, and visual ecologies of infancy, these findings reinforce an emerging priority to build theories of development that address the opportunities and challenges of real input encountered by real learners.
Affiliation(s)
- Caitlin M Fausey: Department of Psychology, University of Oregon, Eugene, Oregon, USA
12. Developmental differences in the hemodynamic response to changes in lyrics and melodies by 4- and 12-month-old infants. Cognition 2021; 213:104711. PMID: 33858670. DOI: 10.1016/j.cognition.2021.104711.
Abstract
Songs and speech play central roles in early caretaker-infant communicative interactions, which are crucial for infants' cognitive, social, and emotional development. Compared to speech development, however, much less is known about how infants process songs or how songs affect their development. Lyrics and melody are two key components of songs, and much of the research on song processing has examined how these two components are processed. The current study focused on the roles of lyrics and melody in song perception by examining developmental patterns and the ways in which lyrics and melody are processed in infants' brains using near-infrared spectroscopy (NIRS). The results revealed that developmental changes in infants' processing of lyrics and melody occur on a timeline similar to that of perceptual reorganization, that is, between 4.5 and 12 months of age. We found that 4.5-month-olds showed a right-hemispheric advantage in the processing of songs that underwent a change in either lyrics or melody. Conversely, 12-month-olds showed significantly higher bilateral activation when lyrics and melody changed at the same time. These results suggest that 4.5-month-olds processed songs in the same manner as music without lyrics, whereas 12-month-olds processed lyrics and melody in an interactive manner, a sign of more mature processing. These findings highlight the importance of investigating the independent development of music and language, while also considering the relationships between speech and song, between lyrics and melody in song, and between speech and music more broadly.
13. Li Y, Tang C, Lu J, Wu J, Chang EF. Human cortical encoding of pitch in tonal and non-tonal languages. Nat Commun 2021; 12:1161. PMID: 33608548. PMCID: PMC7896081. DOI: 10.1038/s41467-021-21430-x. Open access.
Abstract
Languages can use a common repertoire of vocal sounds to signify distinct meanings. In tonal languages, such as Mandarin Chinese, pitch contours of syllables distinguish one word from another, whereas in non-tonal languages, such as English, pitch is used to convey intonation. The neural computations underlying language specialization in speech perception are unknown. Here, we use a cross-linguistic approach to address this. Native Mandarin- and English- speaking participants each listened to both Mandarin and English speech, while neural activity was directly recorded from the non-primary auditory cortex. Both groups show language-general coding of speaker-invariant pitch at the single electrode level. At the electrode population level, we find language-specific distribution of cortical tuning parameters in Mandarin speakers only, with enhanced sensitivity to Mandarin tone categories. Our results show that speech perception relies upon a shared cortical auditory feature processing mechanism, which may be tuned to the statistics of a given language. Different languages rely on different vocal sounds to convey meaning. Here the authors show that language-general coding of pitch occurs in the non-primary auditory cortex for both tonal (Mandarin Chinese) and non-tonal (English) languages, with some language specificity on the population level.
Affiliation(s)
- Yuanning Li: Department of Neurological Surgery, University of California, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
- Claire Tang: Department of Neurological Surgery, University of California, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
- Junfeng Lu: Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
- Jinsong Wu: Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China; Neurologic Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Institute of Brain-Intelligence Technology, Zhangjiang Lab, Shanghai, China
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, CA, USA; Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
14. Nikolsky A. The Pastoral Origin of Semiotically Functional Tonal Organization of Music. Front Psychol 2020; 11:1358. PMID: 32848961. PMCID: PMC7396614. DOI: 10.3389/fpsyg.2020.01358. Open access.
Abstract
This paper presents a new line of inquiry into when and how music as a semiotic system was born. Each of the eleven principal expressive aspects of music contains specific structural patterns whose configuration signifies a certain affective state. This distinguishes the tonal organization of music from the phonetic and prosodic organization of natural languages and animal communication. The question of music's origin can therefore be answered by establishing the point in human history at which all eleven expressive aspects might have been abstracted from instinct-driven primate calls and used to express human psycho-emotional states. Etic analysis of acoustic parameters is the prime means of cross-examining the typical patterns of expression of the basic emotions in human music versus animal vocal communication. A new method of such analysis is proposed here. The formation of such expressive aspects as meter, tempo, melodic intervals, and articulation can be explained by the influence of bipedal locomotion, the breathing cycle, and the heartbeat, long before Homo sapiens. However, two aspects, rhythm and melodic contour, most crucial for music as we know it, lack proxies in the Paleolithic lifestyle. The available ethnographic and developmental data lead one to believe that rhythmic and directional patterns of melody became involved in conveying emotion-related information in the process of frequent switching from one call-type to another within a limited repertory of calls. Such calls are usually adopted for the ongoing caretaking of human youngsters and domestic animals. The efficacy of rhythm and pitch contour in affective communication must have been spontaneously discovered in new important cultural activities.
The most likely scenario for music to have become fully semiotically functional and to have spread wide enough to avoid extinctions is the formation of cross-specific communication between humans and domesticated animals during the Neolithic demographic explosion and the subsequent cultural revolution. Changes in distance during such communication must have promoted the integration between different expressive aspects and generated the basic musical grammar. The model of such communication can be found in the surviving tradition of Scandinavian pastoral music - kulning. This article discusses the most likely ways in which such music evolved.
15. Hahn LE, Benders T, Snijders TM, Fikkert P. Six-month-old infants recognize phrases in song and speech. Infancy 2020; 25:699-718. PMID: 32794372. DOI: 10.1111/infa.12357.
Abstract
Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well attested and is a cornerstone of the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six-month-old Dutch infants (n = 80) were tested in the song or speech modality in the head-turn preference procedure. First, infants were familiarized with two versions of the same word sequence: one version represented a well-formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented with two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well-formed sequence, but only in a more fine-grained analysis. The preference for well-formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between the stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues, while also providing a possible explanation for differences in effect sizes.
Affiliation(s)
- Laura E Hahn: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
- Titia Benders: Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Tineke M Snijders: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Paula Fikkert: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
16
Aimé C, Le Covec M, Bovet D, Esseily R. La musicalité est-elle un héritage de notre histoire biologique ? [Is musicality a legacy of our biological history?] Enfance 2020. [DOI: 10.3917/enf2.201.0041]
17
A Set of 200 Musical Stimuli Varying in Balance, Contour, Symmetry, and Complexity: Behavioral and Computational Assessments. Behav Res Methods 2020; 52:1491-1509. [DOI: 10.3758/s13428-019-01329-8]
18
Suppanen E, Huotilainen M, Ylinen S. Rhythmic structure facilitates learning from auditory input in newborn infants. Infant Behav Dev 2019; 57:101346. [DOI: 10.1016/j.infbeh.2019.101346]
19
Karmonik C, Brandt A, Elias S, Townsend J, Silverman E, Shi Z, Frazier JT. Similarity of individual functional brain connectivity patterns formed by music listening quantified with a data-driven approach. Int J Comput Assist Radiol Surg 2019; 15:703-713. [PMID: 31655968] [DOI: 10.1007/s11548-019-02077-y]
Abstract
INTRODUCTION: This study aims to explore the similarities in functional connectivity (FC) patterns in individuals listening to different music genres and, for comparison, to the spoken word, using a novel data-driven approach. Our model and findings can potentially be utilized for evaluating the neurological effects of therapeutic music interventions.
MATERIALS AND METHODS: Twelve healthy volunteers listened to seven different sound tracks while undergoing functional magnetic resonance imaging (fMRI) scans: music of the volunteer's choice with positive emotional attachment, two selections of unfamiliar classical music, one classical piece repeated with visual guidance, and three spoken-language tracks. FC network graphs were created, and selected graph properties were evaluated for their commonalities across sound tracks. For comparison, FC patterns represented by the graph adjacency matrices were compared directly for high and low BOLD activation during listening.
RESULTS: Graph properties averaged across subjects showed more similar values for the same sound track than for different sound tracks (p < 0.003). For high BOLD activation, involving most areas of the auditory cortex, FC patterns for the same sound track correlated highly (0.74 ± 0.11), whereas FC patterns for different sound tracks did not (0.09 ± 0.07; p < 6e-5). For low BOLD activation, involving additional brain regions, the correlation of FC patterns for the same sound track was still higher (0.43 ± 0.07) than for different sound tracks (0.09 ± 0.05; p < 8e-6).
CONCLUSION: Listening to the same sound track, whether music or the spoken word, creates similar functional activation and connectivity patterns across the brains of healthy individuals. Direct comparison of FC patterns yielded higher correlations than indirect comparison of graph properties derived from the corresponding FC networks.
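The direct-comparison step this abstract describes can be sketched in a few lines (an illustrative sketch on simulated time series, not the study's data or code): functional connectivity is taken as the Pearson correlation matrix of regional BOLD signals, and two FC patterns are compared by correlating the vectorized upper triangles of their adjacency matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_matrix(bold):
    # Functional connectivity as pairwise Pearson correlation
    # between regional time series (regions x timepoints).
    return np.corrcoef(bold)

def fc_similarity(a, b):
    # Compare two FC patterns by correlating the vectorized
    # upper triangles of their adjacency matrices.
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Toy data standing in for fMRI recordings (hypothetical): two scans
# of the "same sound track" share a latent signal; a third is independent.
n_regions, n_time = 30, 200
latent = rng.standard_normal((n_regions, n_time))
scan_a = latent + 0.5 * rng.standard_normal((n_regions, n_time))
scan_b = latent + 0.5 * rng.standard_normal((n_regions, n_time))
scan_c = rng.standard_normal((n_regions, n_time))

same = fc_similarity(fc_matrix(scan_a), fc_matrix(scan_b))
diff = fc_similarity(fc_matrix(scan_a), fc_matrix(scan_c))
print(f"same-track similarity: {same:.2f}, different-track: {diff:.2f}")
```

Scans sharing latent structure yield the higher FC-pattern correlation, mirroring the same-track versus different-track contrast reported in the results.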
Affiliation(s)
- Christof Karmonik: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA; Translational Imaging Center, Houston Methodist Research Institute, Houston, TX, USA; Department of Radiology, Weill Cornell Medical College, New York, NY, USA
- Anthony Brandt: Shepherd School of Music, Rice University, Houston, TX, USA
- Saba Elias: Translational Imaging Center, Houston Methodist Research Institute, Houston, TX, USA
- Jennifer Townsend: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA
- Elliott Silverman: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA
- Zhaoyue Shi: Translational Imaging Center, Houston Methodist Research Institute, Houston, TX, USA
- J Todd Frazier: Center for Performing Arts Medicine, Houston Methodist Hospital, Houston, TX, USA
20
Fuller C, Başkent D, Free R. Early Deafened, Late Implanted Cochlear Implant Users Appreciate Music More Than and Identify Music as Well as Postlingual Users. Front Neurosci 2019; 13:1050. [PMID: 31680802] [PMCID: PMC6798179] [DOI: 10.3389/fnins.2019.01050]
Abstract
Introduction: Typical cochlear implant (CI) users, namely those postlingually deafened and implanted, report that they do not enjoy listening to music and find it difficult to perceive. Another group of CI users, the early-deafened (during language acquisition) and late-implanted (after a long period of auditory deprivation; EDLI), report higher music appreciation, but is this related to better music perception? Materials and Methods: Sixteen EDLI and fifteen postlingually deafened (control group) CI users participated in the study. The inclusion criteria for EDLI were: onset of severe or profound hearing loss before the age of 6 years, implantation after the age of 16 years, and more than 1 year of CI experience. Subjectively, music perception and appreciation were evaluated using the Dutch Musical Background Questionnaire. Behaviorally, music perception was measured with melodic contour identification (MCI), using two instruments (piano and organ), each tested with and without a masking contour. The semitone distance between successive tones of the target varied from 1 to 3 semitones. Results: Subjectively, the EDLI group reported appreciating music more than the postlingually deafened CI users. Behaviorally, while the clinical phoneme recognition score was on average lower in the EDLI group, melodic contour identification did not differ significantly between the two groups. There was, however, an effect of instrument and masker for both groups: the piano was the best-recognized instrument, and for both instruments the masker with non-overlapping pitch was best recognized. Discussion: The EDLI group reported higher appreciation of music than the postlingual control group, even though behaviorally measured music perception did not differ significantly between the two groups. Both findings are surprising, since EDLI CI users would be expected to have poorer outcomes given their early deafness onset, long duration of auditory deprivation, and on average lower clinical speech scores. Perhaps the music perception difficulty stems from electric hearing limitations common to both groups. The higher subjective appreciation in EDLI might be due to the lack of a musical memory, with no ability to compare music heard via the CI to acoustic music perception. Overall, our findings support a benefit of implantation for a positive music experience in EDLI CI users.
Affiliation(s)
- Christina Fuller: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Department of Otorhinolaryngology, Treant Zorggroep, Emmen, Netherlands
- Deniz Başkent: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Rolien Free: Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
21

22
Regular Music Exposure in Juvenile Rats Facilitates Conditioned Fear Extinction and Reduces Anxiety after Foot Shock in Adulthood. Biomed Res Int 2019; 2019:8740674. [PMID: 31380440] [PMCID: PMC6662454] [DOI: 10.1155/2019/8740674]
Abstract
Music exposure is known to play a positive role in learning and memory and can be a complementary treatment for anxiety and fear. However, whether juvenile music exposure affects adult behavior is not known. Two-week-old Sprague-Dawley rats were exposed to music for 2 hours daily or to background noise (controls) for a period of 3 weeks. At 60 days of age, rats were subjected to auditory fear conditioning, fear extinction training, and anxiety-like behavior assessments or to anterior cingulate cortex (ACC) brain-derived neurotrophic factor (BDNF) assays. We found that the music-exposed rats showed significantly less freezing behaviors during fear extinction training and spent more time in the open arm of the elevated plus maze after fear conditioning when compared with the control rats. Moreover, the BDNF levels in the ACC in the music group were significantly higher than those of the controls with the fear conditioning session. This result suggests that music exposure in juvenile rats decreases anxiety-like behaviors, facilitates fear extinction, and increases BDNF levels in the ACC in adulthood after a stressful event.
23
Ehrlich SK, Agres KR, Guan C, Cheng G. A closed-loop, music-based brain-computer interface for emotion mediation. PLoS One 2019; 14:e0213516. [PMID: 30883569] [PMCID: PMC6422328] [DOI: 10.1371/journal.pone.0213516]
Abstract
Emotions play a critical role in rational and intelligent behavior; a better fundamental knowledge of them is indispensable for understanding higher-order brain function. We propose a non-invasive brain-computer interface (BCI) system to feed back a person's affective state such that a closed-loop interaction between the participant's brain responses and the musical stimuli is established. We realized this concept technically in a functional prototype of an algorithm that generates continuous and controllable patterns of synthesized affective music in real time, embedded within a BCI architecture. We evaluated our concept in two separate studies. In the first study, we tested the efficacy of our music algorithm by measuring subjective affective responses from 11 participants. In a second pilot study, the algorithm was embedded in a real-time BCI architecture to investigate affective closed-loop interactions in 5 participants. Preliminary results suggested that participants were able to intentionally modulate the musical feedback by self-inducing emotions (e.g., by recalling memories), suggesting that the system was able not only to capture the listener's current affective state in real time, but also potentially to provide a tool for listeners to mediate their own emotions by interacting with music. The proposed concept offers a tool to study emotions in the loop, promising to cast a complementary light on emotion-related brain research, particularly in terms of clarifying the interactive, spatio-temporal dynamics underlying affective processing in the brain.
Affiliation(s)
- Stefan K. Ehrlich: Chair for Cognitive Systems, Department of Electrical and Computer Engineering, Technische Universität München (TUM), Munich, Germany
- Kat R. Agres: Institute of High Performance Computing, Social and Cognitive Computing Department, Agency for Science, Technology and Research (A*STAR), Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore (NUS), Singapore, Singapore
- Cuntai Guan: School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore, Singapore
- Gordon Cheng: Chair for Cognitive Systems, Department of Electrical and Computer Engineering, Technische Universität München (TUM), Munich, Germany
24
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292] [PMCID: PMC6468545] [DOI: 10.3390/brainsci9030053]
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, at the level of both production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music, and the question can be raised as to the components shared between the interpretation of sound in the domains of speech and music. To answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck: Musicology Research Group, KU Leuven-University of Leuven, 3000 Leuven, Belgium; IPEM-Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak: Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
25
Ozernov-Palchik O, Wolf M, Patel AD. Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners. J Exp Child Psychol 2019; 167:354-368. [PMID: 29227852] [DOI: 10.1016/j.jecp.2017.11.009]
Abstract
A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns that can or cannot be subdivided into equal temporal intervals (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical of speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children.
Affiliation(s)
- Ola Ozernov-Palchik: Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, MA 02155, USA
- Maryanne Wolf: Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, MA 02155, USA
- Aniruddh D Patel: Department of Psychology, Tufts University, Medford, MA 02155, USA; Azrieli Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
Collapse
|
26
|
Abstract
Music is at the centre of what it means to be human: it is the sounds of human bodies and minds moving in creative, story-making ways. We argue that music comes from the way in which knowing bodies (Merleau-Ponty) prospectively explore the environment using habitual 'patterns of action,' which we have identified as our innate 'communicative musicality.' To support our argument, we present short case studies of infant interactions, using micro-analyses of video and audio recordings to show the timings and shapes of intersubjective vocalizations and body movements of adult and child as they improvise shared narratives of meaning. Following a survey of the history of discoveries of infant abilities, we propose that the gestural narrative structures of voice and body, seen as infants communicate with loving caregivers, are the building blocks of what become particular cultural instances of the art of music, and of dance, theatre and other temporal arts. Children enter into a musical culture where their innate communicative musicality can be encouraged and strengthened through sensitive, respectful, playful, culturally informed teaching in companionship. The central importance of our abilities for music as part of what sustains our well-being is supported by evidence that communicative musicality strengthens emotions of social resilience, aiding recovery from mental stress and illness. Drawing on the experience of the first author as a counsellor, we argue that the strength of one person's communicative musicality can support the vitality of another's through the application of skilful techniques that encourage an intimate, supportive, therapeutic, spirited companionship. Turning to brain science, we focus on hemispheric differences and the affective neuroscience of Jaak Panksepp. We emphasize that the psychobiological purpose of our innate musicality grows from the integrated rhythms of energy in the brain for prospective, sensation-seeking affective guidance of vitality of movement. We conclude with a Coda that recalls the philosophy of the Scottish Enlightenment, which built on the work of Heraclitus and Spinoza. This view places the shared experience of sensations of living, our communicative musicality, as inspiration for rules of logic formulated in symbols of language.
Affiliation(s)
- Stephen Malloch: Westmead Psychotherapy Program, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia; The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, NSW, Australia
- Colwyn Trevarthen: Department of Psychology, School of Philosophy, Psychology and Language Sciences, The University of Edinburgh, Edinburgh, United Kingdom
27
Chen A, Peter V, Wijnen F, Schnack H, Burnham D. Are lexical tones musical? Native language's influence on neural response to pitch in different domains. Brain Lang 2018; 180-182:31-41. [PMID: 29689493] [DOI: 10.1016/j.bandl.2018.04.006]
Abstract
Language experience shapes musical and speech pitch processing. We investigated whether speaking a lexical tone language natively modulates neural processing of pitch in language and music, as well as their correlation. We tested tone language (Mandarin Chinese) and non-tone language (Dutch) listeners in a passive oddball paradigm measuring mismatch negativity (MMN) for (i) Chinese lexical tones and (ii) three-note musical melodies with similar pitch contours. For lexical tones, Chinese listeners showed a later MMN peak than the non-tone language listeners, whereas for MMN amplitude there were no significant differences between groups. The Dutch participants also showed a late discriminative negativity (LDN). In the music condition, two MMNs, corresponding to the two notes that differed between the standard and the deviant, were found for both groups, and an LDN was found for both the Dutch and the Chinese listeners. The music MMNs were significantly right-lateralized. Importantly, significant correlations were found between the lexical tone and music MMNs for the Dutch but not the Chinese participants. The results suggest that speaking a tone language natively does not necessarily enhance neural responses to pitch in either language or music, but that it does change the nature of neural pitch processing: non-tone language speakers appear to perceive lexical tones as musical, whereas for tone language speakers, lexical tones and music may activate different neural networks. Neural resources seem to be assigned differently for lexical tones and for musical melodies, presumably depending on the presence or absence of long-term phonological memory traces.
Affiliation(s)
- Ao Chen: Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, The Netherlands; MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Sydney, Australia; School of Communication Science, Beijing Language and Culture University, Beijing, China
- Varghese Peter: MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Sydney, Australia; Department of Linguistics, Macquarie University, North Ryde, NSW 2109, Australia
- Frank Wijnen: Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, The Netherlands
- Hugo Schnack: Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, The Netherlands; Department of Psychiatry, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Denis Burnham: MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Sydney, Australia
28
van der Schyff D, Schiavio A. Evolutionary Musicology Meets Embodied Cognition: Biocultural Coevolution and the Enactive Origins of Human Musicality. Front Neurosci 2017; 11:519. [PMID: 29033780] [PMCID: PMC5626875] [DOI: 10.3389/fnins.2017.00519]
Abstract
Despite evolutionary musicology's interdisciplinary nature, and the diverse methods it employs, the field has nevertheless tended to divide into two main positions. Some argue that music should be understood as a naturally selected adaptation, while others claim that music is a product of culture with little or no relevance for the survival of the species. We review these arguments, suggesting that while interesting and well-reasoned positions have been offered on both sides of the debate, the nature-or-culture (or adaptation vs. non-adaptation) assumptions that have traditionally driven the discussion have resulted in a problematic either/or dichotomy. We then consider an alternative "biocultural" proposal that appears to offer a way forward. As we discuss, this approach draws on a range of research in theoretical biology, archeology, neuroscience, embodied and ecological cognition, and dynamical systems theory (DST), positing a more integrated model that sees biological and cultural dimensions as aspects of the same evolving system. Following this, we outline the enactive approach to cognition, discussing the ways it aligns with the biocultural perspective. Put simply, the enactive approach posits a deep continuity between mind and life, where cognitive processes are explored in terms of how self-organizing living systems enact relationships with the environment that are relevant to their survival and well-being. It highlights the embodied and ecologically situated nature of living agents, as well as the active role they play in their own developmental processes. Importantly, the enactive approach sees cognitive and evolutionary processes as driven by a range of interacting factors, including the socio-cultural forms of activity that characterize the lives of more complex creatures such as ourselves. We offer some suggestions for how this approach might enhance and extend the biocultural model. To conclude, we briefly consider the implications of this approach for practical areas such as music education.
Affiliation(s)
- Dylan van der Schyff: Faculty of Education, Simon Fraser University, Burnaby, BC, Canada; Faculty of Music, University of Oxford, Oxford, United Kingdom
- Andrea Schiavio: Institute for Music Education, University of Music and Performing Arts, Graz, Austria; Department of Music, The University of Sheffield, Sheffield, United Kingdom; Centre for Systematic Musicology, University of Graz, Graz, Austria
29
McFee B, Nieto O, Farbood MM, Bello JP. Evaluating Hierarchical Structure in Music Annotations. Front Psychol 2017; 8:1337. [PMID: 28824514] [PMCID: PMC5541043] [DOI: 10.3389/fpsyg.2017.01337]
Abstract
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
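The idea of comparing hierarchical annotations holistically can be given a toy illustration (a deliberate simplification for intuition, not the metric derived in the paper): encode each annotation as per-frame labels at successive depths, build a "meet" matrix recording the deepest level at which two frames share a segment, and compare two annotations by correlating their meet matrices. The annotations and label values below are hypothetical.

```python
import numpy as np

def meet_matrix(levels):
    # levels: list of per-frame label arrays, ordered coarse to fine.
    # M[i, j] = deepest level at which frames i and j share a segment.
    n = len(levels[0])
    M = np.zeros((n, n), dtype=int)
    for depth, labels in enumerate(levels, start=1):
        same = np.equal.outer(labels, labels)
        M[same] = depth  # deeper (more specific) shared levels overwrite
    return M

def agreement(a, b):
    # Correlate the upper triangles of two meet matrices.
    iu = np.triu_indices(len(a), k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Two hypothetical two-level annotations of the same 8-frame piece:
# identical coarse structure, slightly different fine segmentation.
ann1 = [np.array([0, 0, 0, 0, 1, 1, 1, 1]), np.array([0, 0, 1, 1, 2, 2, 3, 3])]
ann2 = [np.array([0, 0, 0, 0, 1, 1, 1, 1]), np.array([0, 1, 1, 1, 2, 2, 2, 3])]

score = agreement(meet_matrix(ann1), meet_matrix(ann2))
print(f"hierarchical agreement (toy): {score:.2f}")
```

Because both annotations agree at the coarse level but diverge at the fine level, the toy score falls strictly between 0 and 1, capturing partial multi-level agreement that a single flat comparison would miss.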
Affiliation(s)
- Brian McFee: Center for Data Science, New York University, New York, NY, United States; Music and Audio Research Laboratory, Department of Music and Performing Arts Professions, New York University, New York, NY, United States
- Oriol Nieto
- Morwaread M Farbood: Music and Audio Research Laboratory, Department of Music and Performing Arts Professions, New York University, New York, NY, United States
- Juan Pablo Bello: Music and Audio Research Laboratory, Department of Music and Performing Arts Professions, New York University, New York, NY, United States
30
Chen A, Stevens CJ, Kager R. Pitch Perception in the First Year of Life, a Comparison of Lexical Tones and Musical Pitch. Front Psychol 2017; 8:297. [PMID: 28337157] [PMCID: PMC5343020] [DOI: 10.3389/fpsyg.2017.00297]
Abstract
Pitch variation is pervasive in speech, regardless of the language to which infants are exposed. Lexical tone perception is influenced by general sensitivity to pitch. We examined whether lexical tone perception develops in parallel with pitch perception in another cognitive domain, namely music. Using a visual fixation paradigm, one hundred and one 4- and 12-month-old Dutch infants were tested on their discrimination of Chinese rising and dipping lexical tones as well as comparable three-note musical pitch contours. The 4-month-old infants failed to show a discrimination effect in either condition, whereas the 12-month-old infants succeeded in both conditions. These results suggest that lexical tone perception may reflect and relate to general pitch perception abilities, which may serve as a basis for developing more complex language and musical skills.
Collapse
Affiliation(s)
- Ao Chen
- Utrecht Institute of Linguistics, Utrecht University, Utrecht, Netherlands; Communication Science School, Beijing Language and Culture University, Beijing, China
| | - Catherine J Stevens
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney NSW, Australia
| | - René Kager
- Utrecht Institute of Linguistics, Utrecht University, Utrecht, Netherlands
| |
Collapse
|
31
|
Liu L, Kager R. Enhanced music sensitivity in 9-month-old bilingual infants. Cogn Process 2017; 18:55-65. [PMID: 27817073 PMCID: PMC5306126 DOI: 10.1007/s10339-016-0780-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2016] [Accepted: 10/01/2016] [Indexed: 11/25/2022]
Abstract
This study explores the influence of bilingualism on the cognitive processing of language and music. Specifically, we investigate how infants learning a non-tone language perceive linguistic and musical pitch and how bilingualism affects cross-domain pitch perception. Dutch monolingual and bilingual infants of 8-9 months participated in the study. All infants had Dutch as one of their first languages. The other first languages, varying among bilingual families, were not tone or pitch accent languages. In two experiments, infants were tested on the discrimination of a lexical (N = 42) or a violin (N = 48) pitch contrast via a visual habituation paradigm. The two contrasts shared identical pitch contours but differed in timbre. Non-tone language learning infants did not discriminate the lexical contrast regardless of their ambient language environment. When perceiving the violin contrast, bilingual but not monolingual infants demonstrated robust discrimination. We attribute bilingual infants' heightened sensitivity in the musical domain to the enhanced acoustic sensitivity stemming from a bilingual environment. The distinct perceptual patterns between language and music and the influence of acoustic salience on perception suggest processing diversion and association in the first year of life. Results indicate that the perception of music may entail both neural networks shared with language processing and networks distinct from other cognitive functions.
Collapse
Affiliation(s)
- Liquan Liu
- School of Social Sciences and Psychology, Western Sydney University, Sydney, Australia.
- Utrecht Institute of Linguistics OTS, Utrecht University, Utrecht, The Netherlands.
| | - René Kager
- School of Social Sciences and Psychology, Western Sydney University, Sydney, Australia
| |
Collapse
|
32
|
Gratton I, Brandimonte MA, Bruno N. Absolute Memory for Tempo in Musicians and Non-Musicians. PLoS One 2016; 11:e0163558. [PMID: 27760198 PMCID: PMC5070877 DOI: 10.1371/journal.pone.0163558] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2015] [Accepted: 09/11/2016] [Indexed: 11/18/2022] Open
Abstract
The ability to remember tempo (the perceived frequency of musical pulse) without external references may be defined, by analogy with the notion of absolute pitch, as absolute tempo (AT). Anecdotal reports and sparse empirical evidence suggest that at least some individuals possess AT. However, to our knowledge, no systematic assessments of AT have been performed using laboratory tasks comparable to those assessing absolute pitch. In the present study, we operationalize AT as the ability to identify and reproduce tempo in the absence of rhythmic or melodic frames of reference and assess these abilities in musically trained and untrained participants. We asked 15 musicians and 15 non-musicians to listen to a seven-step `tempo scale' of metronome beats, each associated with a numerical label, and then to perform two memory tasks. In the first task, participants heard one of the tempi and attempted to report the correct label (identification task); in the second, they saw one label and attempted to tap the correct tempo (production task). A musical and visual excerpt was presented between successive trials as a distractor to prevent participants from using previous tempi as anchors. Thus, participants needed to encode tempo information with the corresponding label, store the information, and recall it to give the response. We found that more than half of the participants were able to perform above chance in at least one of the tasks, and that musical training differentiated between participants in identification, but not in production. These results suggest that AT is relatively widespread, relatively independent of musical training in tempo production, but further refined by training in tempo identification. We propose that, at least in production, the underlying motor representations are related to tactus, a basic internal rhythmic period that may provide a body-based reference for encoding tempo.
Collapse
Affiliation(s)
- Irene Gratton
- Conservatorio di musica Giuseppe Tartini, Trieste, Italy
| | | | | |
Collapse
|
33
|
Chen A, Kager R. Discrimination of Lexical Tones in the First Year of Life. INFANT AND CHILD DEVELOPMENT 2016. [DOI: 10.1002/icd.1944] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Ao Chen
- Utrecht Institute of Linguistics, Utrecht, The Netherlands
| | - René Kager
- Utrecht Institute of Linguistics, Utrecht, The Netherlands
| |
Collapse
|
34
|
Perszyk DR, Waxman SR. Listening to the calls of the wild: The role of experience in linking language and cognition in young infants. Cognition 2016; 153:175-81. [PMID: 27209387 PMCID: PMC5134735 DOI: 10.1016/j.cognition.2016.05.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2015] [Revised: 05/06/2016] [Accepted: 05/09/2016] [Indexed: 10/21/2022]
Abstract
Well before they understand their first words, infants have begun to link language and cognition. This link is initially broad: At 3 months, listening to both human and nonhuman primate vocalizations supports infants' object categorization, a building block of cognition. But by 6 months, the link has narrowed: Only human vocalizations support categorization. What mechanisms underlie this rapid tuning process? Here, we document the crucial role of infants' experience as infants tune this link to cognition. Merely exposing infants to nonhuman primate vocalizations permits them to preserve, rather than sever, the link between these signals and categorization. Exposing infants to backward speech, a signal that fails to support categorization in the first year of life, does not have this advantage. This new evidence illuminates the central role of early experience as infants specify which signals, from an initially broad set, they will continue to link to core cognitive capacities.
Collapse
Affiliation(s)
- Danielle R Perszyk
- Department of Psychology, Northwestern University, Evanston, IL 60208, United States.
| | - Sandra R Waxman
- Department of Psychology, Northwestern University, Evanston, IL 60208, United States
| |
Collapse
|
35
|
Daikoku T, Yatomi Y, Yumoto M. Pitch-class distribution modulates the statistical learning of atonal chord sequences. Brain Cogn 2016; 108:1-10. [PMID: 27429093 DOI: 10.1016/j.bandc.2016.06.008] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Revised: 03/22/2016] [Accepted: 06/28/2016] [Indexed: 10/21/2022]
Abstract
The present study investigated whether neural responses could demonstrate the statistical learning of chord sequences and how the perception underlying a pitch class can affect the statistical learning of chord sequences. Neuromagnetic responses to two chord sequences of augmented triads that were presented every 0.5s were recorded from fourteen right-handed participants. One sequence was a series of 360 chord triplets, each of which consisted of three chords in the same pitch class (clustered pitch-classes sequences). The other sequence was a series of 360 chord triplets, each of which consisted of three chords in different pitch classes (dispersed pitch-classes sequences). The order of the triplets was constrained by a first-order Markov stochastic model such that a forthcoming triplet was statistically defined by the most recent triplet (80% for one; 20% for the other two). We performed a repeated-measures ANOVA with the peak amplitude and latency of the P1m, N1m and P2m. In the clustered pitch-classes sequences, the P1m responses to the triplets that appeared with higher transitional probability were significantly reduced compared with those with lower transitional probability, whereas no significant result was detected in the dispersed pitch-classes sequences. Neuromagnetic significance was concordant with the results of familiarity interviews conducted after each learning session. The P1m response is a useful index for the statistical learning of chord sequences. Domain-specific perception based on the pitch class may facilitate the domain-general statistical learning of chord sequences.
Collapse
Affiliation(s)
- Tatsuya Daikoku
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Yutaka Yatomi
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Masato Yumoto
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
| |
Collapse
|
36
|
Gervain J, Werker JF, Black A, Geffen MN. The neural correlates of processing scale-invariant environmental sounds at birth. Neuroimage 2016; 133:144-150. [DOI: 10.1016/j.neuroimage.2016.03.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2015] [Revised: 02/24/2016] [Accepted: 03/01/2016] [Indexed: 12/20/2022] Open
|
37
|
Leong V, Goswami U. Acoustic-Emergent Phonology in the Amplitude Envelope of Child-Directed Speech. PLoS One 2015; 10:e0144411. [PMID: 26641472 PMCID: PMC4671555 DOI: 10.1371/journal.pone.0144411] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Accepted: 11/18/2015] [Indexed: 02/02/2023] Open
Abstract
When acquiring language, young children may use acoustic spectro-temporal patterns in speech to derive phonological units in spoken language (e.g., prosodic stress patterns, syllables, phonemes). Children appear to learn acoustic-phonological mappings rapidly, without direct instruction, yet the underlying developmental mechanisms remain unclear. Across different languages, a relationship between amplitude envelope sensitivity and phonological development has been found, suggesting that children may make use of amplitude modulation (AM) patterns within the envelope to develop a phonological system. Here we present the Spectral Amplitude Modulation Phase Hierarchy (S-AMPH) model, a set of algorithms for deriving the dominant AM patterns in child-directed speech (CDS). Using Principal Components Analysis, we show that rhythmic CDS contains an AM hierarchy comprising 3 core modulation timescales. These timescales correspond to key phonological units: prosodic stress (Stress AM, ~2 Hz), syllables (Syllable AM, ~5 Hz) and onset-rime units (Phoneme AM, ~20 Hz). We argue that these AM patterns could in principle be used by naïve listeners to compute acoustic-phonological mappings without lexical knowledge. We then demonstrate that the modulation statistics within this AM hierarchy indeed parse the speech signal into a primitive hierarchically-organised phonological system comprising stress feet (proto-words), syllables and onset-rime units. We apply the S-AMPH model to two other CDS corpora, one spontaneous and one deliberately-timed. The model accurately identified 72-82% (freely-read CDS) and 90-98% (rhythmically-regular CDS) stress patterns, syllables and onset-rime units. This in-principle demonstration that primitive phonology can be extracted from speech AMs is termed Acoustic-Emergent Phonology (AEP) theory. 
AEP theory provides a set of methods for examining how early phonological development is shaped by the temporal modulation structure of speech across languages. The S-AMPH model reveals a crucial developmental role for stress feet (AMs ~2 Hz). Stress feet underpin different linguistic rhythm typologies, and speech rhythm underpins language acquisition by infants in all languages.
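The core S-AMPH step described above can be sketched as band-limiting an amplitude envelope into the three modulation timescales the model identifies. This is a simplified illustration, not the authors' implementation: the band edges, filter order, and synthetic envelope below are assumptions, and the real model derives its bands via Principal Components Analysis rather than fixing them a priori.

```python
# Split an amplitude envelope into three AM bands roughly matching the
# S-AMPH timescales: Stress (~2 Hz), Syllable (~5 Hz), Phoneme (~20 Hz).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # envelope sample rate in Hz (illustrative)
BANDS = {"stress": (0.9, 2.5), "syllable": (2.5, 12.0), "phoneme": (12.0, 40.0)}

def am_hierarchy(envelope, fs=FS):
    """Return the envelope band-pass filtered into the three AM timescales."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, envelope)  # zero-phase filtering
    return out

# Synthetic envelope: a strong 2 Hz "stress" modulation plus a weaker
# 20 Hz "phoneme-rate" ripple, on a constant carrier level.
t = np.arange(0, 4, 1 / FS)
env = 1 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
bands = am_hierarchy(env)
```

On this toy input, most of the modulation energy lands in the stress band, mirroring how the model separates prosodic stress from syllable- and phoneme-rate structure in real child-directed speech.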
Collapse
Affiliation(s)
- Victoria Leong
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
38
|
Tunçgenç B, Cohen E, Fawcett C. Rock With Me: The Role of Movement Synchrony in Infants' Social and Nonsocial Choices. Child Dev 2015; 86:976-84. [DOI: 10.1111/cdev.12354] [Citation(s) in RCA: 82] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
39
|
Berland A, Gaillard P, Guidetti M, Barone P. Perception of everyday sounds: a developmental study of a free sorting task. PLoS One 2015; 10:e0115557. [PMID: 25643286 PMCID: PMC4313934 DOI: 10.1371/journal.pone.0115557] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2014] [Accepted: 11/24/2014] [Indexed: 11/18/2022] Open
Abstract
OBJECTIVES The analysis of categorization of everyday sounds is a crucial aspect of the perception of our surrounding world. However, it constitutes a poorly explored domain in developmental studies. The aim of our study was to understand the nature and the logic of the construction of auditory cognitive categories for natural sounds during development. We have developed an original approach based on a free sorting task (FST). Indeed, categorization is fundamental for structuring the world and for related cognitive skills, and it does not require the use of language. Our project explored the ability of children to structure their acoustic world and investigated how such structuring matures during normal development. We hypothesized that age affects the listening strategy and the category decision, as well as the number and the content of individual categories. DESIGN Eighty-two French children (6-9 years), 20 teenagers (12-13 years), and 24 young adults participated in the study. Perception and categorization of everyday sounds were assessed with an FST composed of 18 different sounds belonging to three a priori categories: non-linguistic human vocalizations, environmental sounds, and musical instruments. RESULTS Children listened to the sounds more times than older participants, built significantly more classes than adults, and used a different strategy of classification. We can thus conclude that there is an age effect on how the participants accomplished the task. Analysis of the auditory categorization performed by 6-year-old children showed that this age constitutes a pivotal stage, in agreement with the progressive change from non-logical reasoning based mainly on perceptual representations to the logical reasoning used by older children. In conclusion, our results suggest that the processing of auditory object categorization develops through different stages, while the intrinsic basis of the classification of sounds is already present in childhood.
Collapse
Affiliation(s)
- Aurore Berland
- Unité de Recherche Interdisciplinaire Octogone, EA4156, Laboratoire Cognition, Communication et Développement, Université de Toulouse Jean-Jaurès, Toulouse, France
- Centre de Recherche Cerveau et Cognition, Université de Toulouse UPS, CNRS-UMR 5549, Toulouse, France
| | - Pascal Gaillard
- Unité de Recherche Interdisciplinaire Octogone, EA4156, Laboratoire Cognition, Communication et Développement, Université de Toulouse Jean-Jaurès, Toulouse, France
| | - Michèle Guidetti
- Unité de Recherche Interdisciplinaire Octogone, EA4156, Laboratoire Cognition, Communication et Développement, Université de Toulouse Jean-Jaurès, Toulouse, France
| | - Pascal Barone
- Centre de Recherche Cerveau et Cognition, Université de Toulouse UPS, CNRS-UMR 5549, Toulouse, France
| |
Collapse
|
40
|
Daikoku T, Yatomi Y, Yumoto M. Statistical learning of music- and language-like sequences and tolerance for spectral shifts. Neurobiol Learn Mem 2014; 118:8-19. [PMID: 25451311 DOI: 10.1016/j.nlm.2014.11.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2014] [Revised: 09/25/2014] [Accepted: 11/02/2014] [Indexed: 11/18/2022]
Abstract
In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response could be a marker for the statistical learning process of pitch sequence, in which each tone was ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in the N1m responses, based on the assumption that language and music share domain generality. By using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-ordered transitional rules were embedded according to a Markov stochastic model by controlling fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, the N1m responses to the tones that appeared with higher transitional probability were significantly decreased compared with the responses to the tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, the amplitude difference was retained even within the last one-third of the sequence, after the spectral shifts. However, in the language-like sequence without pitch change, no significant difference could be detected. Pitch change may facilitate statistical learning in language and music. Statistically acquired knowledge may be applied to process altered auditory sequences with spectral shifts. The relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans.
Collapse
Affiliation(s)
- Tatsuya Daikoku
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Yutaka Yatomi
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Masato Yumoto
- Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
| |
Collapse
|
41
|
Implicit and explicit statistical learning of tone sequences across spectral shifts. Neuropsychologia 2014; 63:194-204. [PMID: 25192632 DOI: 10.1016/j.neuropsychologia.2014.08.028] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2014] [Revised: 08/21/2014] [Accepted: 08/22/2014] [Indexed: 11/22/2022]
Abstract
We investigated how the statistical learning of auditory sequences is reflected in neuromagnetic responses in implicit and explicit learning conditions. Complex tones with fundamental frequencies (F0s) in a five-tone equal temperament were generated by a formant synthesizer. The tones were subsequently ordered with the constraint that the probability of the forthcoming tone was statistically defined (80% for one tone; 5% for the other four) by the latest two successive tones (second-order Markov chains). The tone sequence consisted of 500 tones and 250 successive tones with a relative shift of F0s based on the same Markov transitional matrix. In explicit and implicit learning conditions, neuromagnetic responses to the tone sequence were recorded from fourteen right-handed participants. The temporal profiles of the N1m responses to the tones with higher and lower transitional probabilities were compared. In the explicit learning condition, the N1m responses to tones with higher transitional probability were significantly decreased compared with responses to tones with lower transitional probability in the latter half of the 500-tone sequence. Furthermore, this difference was retained even after the F0s were relatively shifted. In the implicit learning condition, N1m responses to tones with higher transitional probability were significantly decreased only for the 250 tones following the relative shift of F0s. The delayed detection of learning effects across the sound-spectral shift in the implicit condition may imply that learning may progress earlier in explicit learning conditions than in implicit learning conditions. The finding that the learning effects were retained across spectral shifts regardless of the learning modality indicates that relative pitch processing may be an essential ability for humans.
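The second-order Markov constraint described above (the forthcoming tone is defined by the latest two tones: 80% for one successor, 5% each for the other four) can be sketched as follows. This is an illustrative construction, not the authors' stimulus code; in particular the `successor` rule standing in for their (unstated) transition matrix is a hypothetical choice.

```python
# Generate a tone sequence over a five-tone alphabet where the next tone
# depends on the previous two: the favored successor occurs with p = 0.8,
# each of the other four tones with p = 0.05.
import random

TONES = list(range(5))  # five-tone equal temperament, indexed 0-4

def successor(a, b):
    # Hypothetical deterministic rule standing in for the paper's
    # transition matrix: the favored tone is a function of the last two.
    return (a + b) % 5

def next_tone(a, b, rng):
    fav = successor(a, b)
    if rng.random() < 0.8:
        return fav
    # remaining 0.2 split evenly: 0.05 per non-favored tone
    return rng.choice([t for t in TONES if t != fav])

def make_sequence(length, seed=1):
    rng = random.Random(seed)
    seq = [rng.choice(TONES), rng.choice(TONES)]
    while len(seq) < length:
        seq.append(next_tone(seq[-2], seq[-1], rng))
    return seq

seq = make_sequence(5000)
# Empirical fraction of transitions that followed the high-probability rule:
hits = sum(successor(a, b) == c for a, b, c in zip(seq, seq[1:], seq[2:]))
rate = hits / (len(seq) - 2)
```

A spectral shift as in the experiment would simply transpose every tone index by a constant while leaving this transition structure, and hence the learnable statistics, intact.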
Collapse
|
42
|
Affiliation(s)
- Jaana Oikkonen
- Department of Medical Genetics, University of Helsinki, Helsinki, Finland
| | - Irma Järvelä
- Department of Medical Genetics, University of Helsinki, Helsinki, Finland
| |
Collapse
|
43
|
Silva S, Barbosa F, Marques-Teixeira J, Petersson KM, Castro SL. You know when: Event-related potentials and theta/beta power indicate boundary prediction in music. J Integr Neurosci 2014; 13:19-34. [DOI: 10.1142/s0219635214500022] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
44
|
Miendlarzewska EA, Trost WJ. How musical training affects cognitive development: rhythm, reward and other modulating variables. Front Neurosci 2014; 7:279. [PMID: 24672420 PMCID: PMC3957486 DOI: 10.3389/fnins.2013.00279] [Citation(s) in RCA: 84] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2013] [Accepted: 12/31/2013] [Indexed: 01/08/2023] Open
Abstract
Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.
Collapse
Affiliation(s)
- Ewa A Miendlarzewska
- Department of Fundamental Neurosciences (CMU), University of Geneva, Geneva, Switzerland; Swiss Centre of Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Wiebke J Trost
- Swiss Centre of Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
45
|
Friendly RH, Rendall D, Trainor LJ. Plasticity after perceptual narrowing for voice perception: reinstating the ability to discriminate monkeys by their voices at 12 months of age. Front Psychol 2013; 4:718. [PMID: 24130540 PMCID: PMC3793506 DOI: 10.3389/fpsyg.2013.00718] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Accepted: 09/18/2013] [Indexed: 11/14/2022] Open
Abstract
Differentiating individuals by their voice is an important social skill for infants to acquire. In a previous study, we demonstrated that the ability to discriminate individuals by voice follows a pattern of perceptual narrowing (Friendly et al., 2013). Specifically, we found that the ability to discriminate between two foreign-species (rhesus monkey) voices decreased significantly between 6 and 12 months of age. Also during this period, there was a trend for the ability to discriminate human voices to increase. Here we investigate the extent to which plasticity remains at 12 months, after perceptual narrowing has occurred. We found that 12-month-olds who received 2 weeks of monkey-voice training were significantly better at discriminating between rhesus monkey voices than untrained 12-month-olds. Furthermore, discrimination was reinstated to a level slightly better than that of untrained 6-month-olds, suggesting that voice-processing abilities remain considerably plastic at the end of the first year.
Collapse
Affiliation(s)
- Rayna H. Friendly
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
| | - Drew Rendall
- Department of Psychology, University of Lethbridge, Lethbridge, AB, Canada
| | - Laurel J. Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
| |
Collapse
|
46
|
Peretz I, Gosselin N, Nan Y, Caron-Caplette E, Trehub SE, Béland R. A novel tool for evaluating children's musical abilities across age and culture. Front Syst Neurosci 2013; 7:30. [PMID: 23847479 PMCID: PMC3707384 DOI: 10.3389/fnsys.2013.00030] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2012] [Accepted: 06/15/2013] [Indexed: 11/16/2022] Open
Abstract
The present study introduces a novel tool for assessing musical abilities in children: The Montreal Battery of Evaluation of Musical Abilities (MBEMA). The battery, which comprises tests of memory, scale, contour, interval, and rhythm, was administered to 245 children in Montreal and 91 in Beijing (Experiment 1), and an abbreviated version was administered to an additional 85 children in Montreal (in less than 20 min; Experiment 2). All children were 6–8 years of age. Their performance indicated that both versions of the MBEMA are sensitive to individual differences and to musical training. The sensitivity of the tests extends to Mandarin-speaking children despite the fact that they show enhanced performance relative to French-speaking children. Because this Chinese advantage is not limited to musical pitch but extends to rhythm and memory, it is unlikely that it results from early exposure to a tonal language. In both cultures and versions of the tests, amount of musical practice predicts performance. Thus, the MBEMA can serve as an objective, short and up-to-date test of musical abilities in a variety of situations, from the identification of children with musical difficulties to the assessment of the effects of musical training in typically developing children of different cultures.
Collapse
Affiliation(s)
- Isabelle Peretz
- Department of Psychology, International Laboratory of Brain, Music, and Sound Research, University of Montreal, Montreal, QC, Canada
| | | | | | | | | | | |
Collapse
|
47
|
Kirschner S, Ilari B. Joint Drumming in Brazilian and German Preschool Children. JOURNAL OF CROSS-CULTURAL PSYCHOLOGY 2013. [DOI: 10.1177/0022022113493139] [Citation(s) in RCA: 70] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
As a core feature of musical rituals around the world, humans synchronize their movements to the pulse of a shared acoustic pattern—a behavior called rhythmic entrainment. The purpose of the present study was (a) to examine the development of rhythmic entrainment with a focus on the role of experience and (b) to follow one line of evidence concerning its adaptive function. We hypothesized (a) that children learn how to synchronize movements to sound during social interactions, where they experience this behavior as a convention of the surrounding culture’s practice, and (b) that rhythmic entrainment has an adaptive value by allowing several people to coordinate their actions, thereby creating group cohesion and ultimately promoting cooperativeness. We compared the spontaneous synchronization behavior of Brazilian and German preschool children during joint drumming with an experimenter, either vis-à-vis or separated by a curtain, versus drumming along a playback beat. Afterward, we measured the children’s prosocial tendencies toward the experimenter. We found that Brazilian children were more likely than German children to spontaneously synchronize their drumming in a social setting, even if the codrummer was hidden from view. In line with our hypothesis, the variation in individual synchronization accuracy between and within the two samples could be partly explained by differences in individual experience with active musical practice, as revealed by parental interviews. However, we found no differences in children’s prosocial tendencies depending on whether they had just drummed alone or together with the experimenter.
Collapse
|
48
|
Bidelman GM. The role of the auditory brainstem in processing musically relevant pitch. Front Psychol 2013; 4:264. [PMID: 23717294 PMCID: PMC3651994 DOI: 10.3389/fpsyg.2013.00264] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2013] [Accepted: 04/23/2013] [Indexed: 11/13/2022] Open
Abstract
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance are well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis Memphis, TN, USA ; School of Communication Sciences and Disorders, University of Memphis Memphis, TN, USA
49
Istók E, Friberg A, Huotilainen M, Tervaniemi M. Expressive timing facilitates the neural processing of phrase boundaries in music: evidence from event-related potentials. PLoS One 2013; 8:e55150. PMID: 23383088; PMCID: PMC3559386; DOI: 10.1371/journal.pone.0055150.
Abstract
The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. Using the event-related potential (ERP) technique, we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies’ two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased, especially for laymen, as reflected by reduced P3a amplitude. In line with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection than in laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.
Affiliation(s)
- Eva Istók
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland.
50
Huang J, Gamble D, Sarnlertsophon K, Wang X, Hsiao S. Integration of auditory and tactile inputs in musical meter perception. Adv Exp Med Biol 2013; 787:453-61. PMID: 23716252; PMCID: PMC4324720; DOI: 10.1007/978-1-4614-1590-9_50.
Abstract
Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept, with tactile inputs underlying the sense of "feeling" music.
Affiliation(s)
- Juan Huang
- The Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, MD 21205, USA.