1. Yu CY, Cabildo A, Grahn JA, Vanden Bosch der Nederlanden CM. Perceived rhythmic regularity is greater for song than speech: examining acoustic correlates of rhythmic regularity in speech and song. Front Psychol 2023; 14:1167003. [PMID: 37303916; PMCID: PMC10250601; DOI: 10.3389/fpsyg.2023.1167003]
Abstract
Rhythm is a key feature of music and language, but the way rhythm unfolds within each domain differs. Music induces perception of a beat, a regular repeating pulse spaced by roughly equal durations, whereas speech does not have the same isochronous framework. Although rhythmic regularity is a defining feature of music and language, it is difficult to derive acoustic indices of the differences in rhythmic regularity between domains. The current study examined whether participants could provide subjective ratings of rhythmic regularity for acoustically matched (syllable-, tempo-, and contour-matched) and acoustically unmatched (varying in tempo, syllable number, semantics, and contour) exemplars of speech and song. We used subjective ratings to index the presence or absence of an underlying beat and correlated ratings with stimulus features to identify acoustic metrics of regularity. Experiment 1 highlighted that ratings based on the term "rhythmic regularity" did not result in consistent definitions of regularity across participants, with opposite ratings for participants who adopted a beat-based definition (song greater than speech), a normal-prosody definition (speech greater than song), or an unclear definition (no difference). Experiment 2 defined rhythmic regularity as how easy it would be to tap or clap to the utterances. Participants rated song as easier to clap or tap to than speech for both acoustically matched and unmatched datasets. Subjective regularity ratings from Experiment 2 illustrated that stimuli with longer syllable durations and with less spectral flux were rated as more rhythmically regular across domains. Our findings demonstrate that rhythmic regularity distinguishes speech from song, and that several key acoustic features can be used to predict listeners' perception of rhythmic regularity both within and across domains.
Affiliation(s)
- Chu Yi Yu
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Anne Cabildo
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
- Jessica A. Grahn
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Christina M. Vanden Bosch der Nederlanden
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
2. Hoffman C, Cheng J, Ji D, Dabaghian Y. Pattern dynamics and stochasticity of the brain rhythms. Proc Natl Acad Sci U S A 2023; 120:e2218245120. [PMID: 36976768; PMCID: PMC10083604; DOI: 10.1073/pnas.2218245120]
Abstract
Our current understanding of brain rhythms is based on quantifying their instantaneous or time-averaged characteristics. What remains unexplored is the actual structure of the waves: their shapes and patterns over finite timescales. Here, we study brain wave patterning in different physiological contexts using two independent approaches: the first is based on quantifying stochasticity relative to the underlying mean behavior, and the second assesses the "orderliness" of the waves' features. The corresponding measures capture the waves' characteristics and abnormal behaviors, such as atypical periodicity or excessive clustering, and demonstrate coupling between the patterns' dynamics and the animal's location, speed, and acceleration. Specifically, we studied patterns of θ, γ, and ripple waves recorded in mouse hippocampi and observed speed-modulated changes in the waves' cadence, an antiphase relationship between orderliness and acceleration, and spatial selectivity of patterns. Taken together, our results offer a complementary, mesoscale perspective on brain wave structure, dynamics, and functionality.
Affiliation(s)
- Clarissa Hoffman
- Department of Neurology, McGovern Medical School, The University of Texas, Houston, TX 77030
- Jingheng Cheng
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- Daoyun Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- Department of Molecular and Cell Biology, Baylor College of Medicine, Houston, TX 77030
- Yuri Dabaghian
- Department of Neurology, McGovern Medical School, The University of Texas, Houston, TX 77030
3. Hawkins S, Farrant C. Influence of Turn-Taking in Musical and Spoken Activities on Empathy and Self-Esteem of Socially Vulnerable Young Teenagers. Front Psychol 2022; 12:801574. [PMID: 35197885; PMCID: PMC8859432; DOI: 10.3389/fpsyg.2021.801574]
Abstract
This study describes a preliminary test of the hypothesis that, when people engage in musical and linguistic activities designed to enhance the interactive, turn-taking properties of typical conversation, they benefit in ways that enhance empathy and self-esteem, relative to people who experience similar activities that emphasize synchronous action with no interactional turn-taking. Twenty-two 12- to 14-year-olds identified as socially vulnerable (e.g., for anxiety) received six enjoyable 1-h sessions of musical improvisation, language games that developed sensitivity to linguistic rhythm and melody, and cross-over activities such as rap. The Turn-taking group (n = 11) practiced characteristics of conversation in language games, and these were also introduced into musical activities; this involved much turn-taking and predicting what others would do. A matched control group, the Synchrony group, did similar activities but in synchrony, with less prediction and no turn-taking. Task complexity increased over the six sessions. Psychometric testing before and after the series showed that the Turn-taking group increased in empathy on self-report (Toronto Empathy Questionnaire) and behavioral ('Reading the Mind in the Eyes') measures, and in the General subtest of the Culture-Free Self-Esteem Inventory. While more work is needed to confirm the conclusions for relevant demographic groups, the current results point to the social value of musical and linguistic activities that mimic the entrained, tightly coordinated parameters of everyday conversational interaction, in which, at any one time, individuals act as equal participants who have different roles.
Affiliation(s)
- Sarah Hawkins
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
4. Fiveash A, Bedoin N, Gordon RL, Tillmann B. Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology 2021; 35:771-791. [PMID: 34435803; PMCID: PMC8595576; DOI: 10.1037/neu0000766]
Abstract
OBJECTIVE: Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated.
METHOD: In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders.
RESULTS: We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech.
CONCLUSION: The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits that can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- Nathalie Bedoin
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- University of Lyon 2, CNRS, UMR5596, F-69000, Lyon, France
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Barbara Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
5. Linguistic syncopation: Meter-syntax alignment affects sentence comprehension and sensorimotor synchronization. Cognition 2021; 217:104880. [PMID: 34419725; DOI: 10.1016/j.cognition.2021.104880]
Abstract
The hierarchical organization of speech rhythm into meter putatively confers cognitive affordances for perception, memory, and motor coordination. Meter also aligns with phrasal structure in systematic ways. In this paper, we show that this alignment affects the robustness of syntactic comprehension and discuss possible underlying mechanisms. In two experiments, we manipulated meter-syntax alignment while sentences with relative clause structures were either read as text (experiment 1, n = 40) or listened to as speech (experiment 2, n = 40). In experiment 2, we also measured the stability with which participants could tap in time with the metrical accents in the sentences they were comprehending. When syntactic cues clashed with the metrical context, participants made more comprehension errors and their sensorimotor synchronization was disrupted. We suggest that this reflects a tight coordination of top-down linguistic knowledge with the sensorimotor system to optimize comprehension.
6. Ten Oever S, Martin AE. An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions. eLife 2021; 10:e68066. [PMID: 34338196; PMCID: PMC8328513; DOI: 10.7554/eLife.68066]
Abstract
Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we propose that oscillations can track pseudo-rhythmic speech when considering that speech time is dependent on content-based predictions flowing from internal language models. We show that temporal dynamics of speech are dependent on the predictability of words in a sentence. A computational model including oscillations, feedback, and inhibition is able to track pseudo-rhythmic speech input. As the model processes speech, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time. The model is optimally sensitive to the natural temporal speech dynamics and can explain empirical data on temporal speech illusions. Our results suggest that speech tracking does not have to rely only on the acoustics but could also exploit ongoing interactions between oscillations and constraints flowing from internal language models.
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Andrea E Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands
7. Musical improvisation enhances interpersonal coordination in subsequent conversation: Motor and speech evidence. PLoS One 2021; 16:e0250166. [PMID: 33857238; PMCID: PMC8049323; DOI: 10.1371/journal.pone.0250166]
Abstract
This study explored the effects of musical improvisation between dyads of same-sex strangers on subsequent behavioural alignment. Participants, all non-musicians, conversed before and after either improvising music together (the Musical Improvisation, MI, group) or doing a motoric, non-rhythmic cooperative task of building a tower together using wooden blocks (the Hands-Busy, HB, group). Conversations were free, but initially guided by an adaptation of the Fast Friends Questionnaire for inducing talk among students who are strangers and meeting for the first time. Throughout, participants' motion was recorded with an optical motion-capture system (Mocap) and analysed in terms of speed cross-correlations. Their conversations were also recorded on separate channels using headset microphones and were analysed in terms of the periodicity displayed by rhythmic peaks in the turn transitions across question-and-answer (Q+A) pairs. Compared with their first conversations, the MI group in the second conversations showed: (a) very rapid, partially simultaneous anatomical coordination between 0 and 0.4 s; (b) delayed mirror motoric coordination between 0.8 and 1.5 s; and (c) a higher proportion of periodic Q+A pairs. In contrast, the HB group's motoric coordination changed slightly in timing but not in degree of coordination between the first and second conversations, and there was no significant change in the proportion of periodic Q+A pairs they produced. These results show a convergent effect of prior musical interaction on joint body movement and on the use of shared periodicity across speech turn transitions in conversation, suggesting that interaction in music and speech may be mediated by common processes.
8. Polyanskaya L, Samuel AG, Ordin M. Regularity in speech rhythm as a social coalition signal. Ann N Y Acad Sci 2019; 1453:153-165. [PMID: 31373001; DOI: 10.1111/nyas.14193]
Abstract
Regular rhythm facilitates audiomotor entrainment and synchronization in motor behavior and vocalizations between individuals. As rhythm entrainment between interacting agents is correlated with higher levels of cooperation and prosocial affiliative behavior, humans can potentially map regular speech rhythm onto higher cooperation and friendliness between interacting individuals. We tested this hypothesis at two rhythmic levels: pulse (recurrent acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked the listeners to make judgments of the hostile or collaborative attitude of two interacting agents who exhibit either regular or irregular pulse (Experiment 1) or meter (Experiment 2). The results confirmed a link between the perception of social affiliation and rhythmicity: evenly distributed pulses (vowel onsets) and consistent grouping of pulses into recurrent hierarchical patterns are more likely to be perceived as cooperation signals. People are more sensitive to regularity at the level of pulse than at the level of meter, and they are more confident when they associate cooperation with isochrony in pulse. The evolutionary origin of this faculty is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We discuss the implications of these findings for the emergence of speech in humans.
Affiliation(s)
- Leona Polyanskaya
- BCBL - Basque Centre on Cognition, Brain and Language, Donostia, Spain
- Arthur G Samuel
- BCBL - Basque Centre on Cognition, Brain and Language, Donostia, Spain
- IKERBASQUE - Basque Foundation for Science, Bilbao, Spain
- Department of Psychology, Stony Brook University, Stony Brook, New York
- Mikhail Ordin
- BCBL - Basque Centre on Cognition, Brain and Language, Donostia, Spain
- IKERBASQUE - Basque Foundation for Science, Bilbao, Spain
9. Oesch N. Music and Language in Social Interaction: Synchrony, Antiphony, and Functional Origins. Front Psychol 2019; 10:1514. [PMID: 31312163; PMCID: PMC6614337; DOI: 10.3389/fpsyg.2019.01514]
Abstract
Music and language are universal human abilities with many apparent similarities relating to their acoustics, structure, and frequent use in social situations. We might therefore expect them to be understood and processed similarly, and indeed an emerging body of research suggests that this is the case. But the focus has historically been on the individual, looking at the passive listener or the isolated speaker or performer, even though social interaction is the primary site of use for both domains. Nonetheless, an important goal of emerging research is to compare music and language in terms of acoustics and structure, social interaction, and functional origins to develop parallel accounts across the two domains. Indeed, a central aim of both evolutionary musicology and language evolution research is to understand the adaptive significance or functional origin of human music and language. An influential proposal to emerge in recent years has been referred to as the social bonding hypothesis. Here, within a comparative approach to animal communication systems, I review empirical studies in support of the social bonding hypothesis in humans, non-human primates, songbirds, and various other mammals. In support of this hypothesis, I review six research fields: (i) the functional origins of music; (ii) the functional origins of language; (iii) mechanisms of social synchrony for human social bonding; (iv) language and social bonding in humans; (v) music and social bonding in humans; and (vi) pitch, tone, and emotional expression in human speech and music. I conclude that the comparative study of complex vocalizations and behaviors in various extant species can provide important insights into the adaptive function(s) of these traits in these species, as well as offer evidence-based speculations for the existence of "musilanguage" in our primate ancestors, and thus inform our understanding of the biology and evolution of human music and language.
Affiliation(s)
- Nathan Oesch
- Music and Neuroscience Lab, Department of Psychology, The Brain and Mind Institute, Western University, London, ON, Canada
- Cognitive Neuroscience of Communication and Hearing (CoNCH) Lab, Department of Psychology, The Brain and Mind Institute, Western University, London, ON, Canada
10. Kotz S, Ravignani A, Fitch W. The Evolution of Rhythm Processing. Trends Cogn Sci 2018; 22:896-910. [DOI: 10.1016/j.tics.2018.08.002]
11. Wieland EA, McAuley JD, Dilley LC, Chang SE. Evidence for a rhythm perception deficit in children who stutter. Brain Lang 2015; 144:26-34. [PMID: 25880903; PMCID: PMC5382013; DOI: 10.1016/j.bandl.2015.03.008]
Abstract
Stuttering is a neurodevelopmental disorder that affects the timing and rhythmic flow of speech production. When speech is synchronized with an external rhythmic pacing signal (e.g., a metronome), even severe stuttering can be markedly alleviated, suggesting that people who stutter may have difficulty generating an internal rhythm to pace their speech. To investigate this possibility, children who stutter and typically developing children (n = 17 per group, aged 6-11 years) were compared on their ability to discriminate simple and complex auditory rhythms. Children who stutter showed worse rhythm discrimination than typically developing children. These findings provide the first evidence of impaired rhythm perception in children who stutter, supporting the conclusion that developmental stuttering may be associated with a deficit in rhythm processing.
Affiliation(s)
- Elizabeth A Wieland
- Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Rd, East Lansing, MI 48824, USA
- J Devin McAuley
- Department of Psychology and Neuroscience Program, Michigan State University, 316 Physics Rd, East Lansing, MI 48824, USA
- Laura C Dilley
- Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Rd, East Lansing, MI 48824, USA
- Soo-Eun Chang
- Department of Psychiatry, University of Michigan, Rachel Upjohn Building, 4250 Plymouth Rd, Ann Arbor, MI 48109, USA
12. Smith R, Rathcke T, Cummins F, Overy K, Scott S. Communicative rhythms in brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130389. [PMID: 25385770; DOI: 10.1098/rstb.2013.0389]
Affiliation(s)
- Rachel Smith
- Glasgow University Laboratory of Phonetics (School of Critical Studies), University of Glasgow, Glasgow, UK
- Tamara Rathcke
- English Language and Linguistics (School of European Culture and Languages), University of Kent, Canterbury CT2 7NF, UK
- Fred Cummins
- University College Dublin, Dublin 4, Republic of Ireland
- Katie Overy
- Don Wright Faculty of Music, University of Western Ontario, London, Ontario, Canada N6A 3K7
- IMHSD, Reid School of Music, Edinburgh College of Art, University of Edinburgh, Edinburgh EH8 9DF, UK
- Sophie Scott
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, UK