1. Vonthron F, Yuen A, Pellerin H, Cohen D, Grossard C. A Serious Game to Train Rhythmic Abilities in Children With Dyslexia: Feasibility and Usability Study. JMIR Serious Games 2024;12:e42733. PMID: 37830510; PMCID: PMC10811594; DOI: 10.2196/42733.
Abstract
BACKGROUND: Rhythm perception and production are related to phonological awareness and reading performance, and rhythmic deficits have been reported in dyslexia. In addition, rhythm-based interventions can improve cognitive function, and there is consistent evidence that they are an efficient tool for training reading skills in dyslexia. OBJECTIVE: This paper describes a rhythmic training protocol for children with dyslexia provided through a serious game (SG) called Mila-Learn and the methodology used to test its usability. METHODS: We developed Mila-Learn, an SG built with Unity (Unity Technologies) that makes training remotely accessible and consistently reproducible and follows an educative agenda. The SG's development was informed by 2 studies conducted during the French COVID-19 lockdowns. Study 1 was a feasibility study evaluating the autonomous use of Mila-Learn by 2500 children with reading deficits. Data were analyzed from a subsample of 525 children who spontaneously played at least 15 (median 42) games. Study 2, following the same real-life setting as study 1, evaluated the usability of an enhanced version of Mila-Learn over 6 months in a sample of 3337 children. The analysis was carried out in 98 children with available diagnoses. RESULTS: Benefiting from study 1 feedback, we improved Mila-Learn to enhance motivation and learning by adding specific features, including customization, storylines, humor, and increasing difficulty. Linear mixed models showed that performance improved over time. Scores were better for older children (P<.001), children with attention-deficit/hyperactivity disorder (P<.001), and children with dyslexia (P<.001). Performance improved significantly faster in children with attention-deficit/hyperactivity disorder (β=.06; t(3754)=3.91; P<.001) and more slowly in children with dyslexia (β=-.06; t(3816)=-5.08; P<.001). CONCLUSIONS: Given these encouraging results, future work will focus on the clinical evaluation of Mila-Learn through a large double-blind randomized controlled trial comparing Mila-Learn and a placebo game.
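The growth analysis summarized in the results can be pictured with a hedged Python sketch of a linear mixed model (statsmodels): in-game score as a function of session, age, and diagnosis-by-time interactions, with a random intercept and slope per child. The synthetic data and the column names (score, session, adhd, dyslexia, age, child_id) are assumptions for illustration, not the study's actual variables or code.

```python
# Minimal sketch of a growth-curve linear mixed model, assuming invented data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for child in range(60):                      # hypothetical children
    adhd, dyslexia = rng.integers(0, 2, size=2)
    age = rng.uniform(7, 12)
    for session in range(10):                # hypothetical play sessions
        score = 50 + 2 * session + 3 * adhd + 2 * dyslexia + age + rng.normal(0, 5)
        rows.append(dict(child_id=child, session=session, adhd=adhd,
                         dyslexia=dyslexia, age=age, score=score))
df = pd.DataFrame(rows)

# Fixed effects: session, age, diagnoses, and diagnosis-by-session interactions;
# random intercept and slope for session within each child.
model = smf.mixedlm("score ~ session * adhd + session * dyslexia + age",
                    data=df, groups=df["child_id"], re_formula="~session")
print(model.fit(reml=True).summary())        # interaction betas mirror the reported slopes
```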
Affiliation(s)
- Hugues Pellerin
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- David Cohen
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- Institut des Systèmes Intelligents et Robotiques (ISIR, CNRS UMR7222), Sorbonne Université, Paris, France
- Charline Grossard
- Service de Psychiatrie de l'Enfant et de l'Adolescent, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- Institut des Systèmes Intelligents et Robotiques (ISIR, CNRS UMR7222), Sorbonne Université, Paris, France
2. Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023;47:e13389. PMID: 38038624; DOI: 10.1111/cogs.13389.
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events, and, computationally, such musical interpretation represents an analogous combinatorial task to syntactic processing in language. While this perspective has been primarily addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from the dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while listening to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was found to be a significant predictor of the observed displacement between the reported and the objective location of the flashes. Overall, this study presents a theoretical approach and a first empirical proof-of-concept for modeling the cognitive process resulting in such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. Results from the present small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.
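As a rough illustration of the dependency-locality-style bookkeeping the framework builds on, the following Python sketch computes integration and storage costs over a hand-coded dependency analysis of a short rhythmic sequence. The event indices, arcs, and cost definitions are simplified assumptions in the spirit of Gibson (2000), not the authors' grammar or parser.

```python
# Toy dependency-locality cost bookkeeping over a hand-coded rhythmic analysis.
from typing import List, Tuple

def dlt_costs(n_events: int, arcs: List[Tuple[int, int]]):
    """Return (integration_cost, storage_cost) per event position.

    integration_cost: summed span of dependencies resolved at this position
                      (number of intervening events between dependent and head).
    storage_cost:     number of dependencies still open after this position.
    """
    integration = [0] * n_events
    storage = [0] * n_events
    for pos in range(n_events):
        for a, b in arcs:
            left, right = min(a, b), max(a, b)
            if right == pos:                       # dependency closes here
                integration[pos] += right - left - 1
        storage[pos] = sum(1 for a, b in arcs if min(a, b) <= pos < max(a, b))
    return list(zip(integration, storage))

# Example: 6 rhythmic events with a short "preparation" arc (0 -> 2) and a
# longer syncopation-like arc (1 -> 5); purely illustrative.
print(dlt_costs(6, [(0, 2), (1, 5)]))
```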
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Cédric A Tomasini
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
3. Xing J, Sainburg T, Taylor H, Gentner TQ. Syntactic modulation of rhythm in Australian pied butcherbird song. R Soc Open Sci 2022;9:220704. PMID: 36177196; PMCID: PMC9515642; DOI: 10.1098/rsos.220704.
Abstract
The acoustic structure of birdsong is spectrally and temporally complex. Temporal complexity is often investigated in a syntactic framework focusing on the statistical features of symbolic song sequences. Alternatively, temporal patterns can be investigated in a rhythmic framework that focuses on the relative timing between song elements. Here, we investigate the merits of combining both frameworks by integrating syntactic and rhythmic analyses of Australian pied butcherbird (Cracticus nigrogularis) songs, which exhibit organized syntax and diverse rhythms. We show that rhythms of the pied butcherbird song bouts in our sample are categorically organized and predictable by the song's first-order sequential syntax. These song rhythms remain categorically distributed and strongly associated with the first-order sequential syntax even after controlling for variance in note length, suggesting that the silent intervals between notes induce a rhythmic structure on note sequences. We discuss the implication of syntactic-rhythmic relations as a relevant feature of song complexity with respect to signals such as human speech and music, and advocate for a broader conception of song complexity that takes into account syntax, rhythm, and their interaction with other acoustic and perceptual features.
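A minimal Python sketch of the two descriptive layers discussed here, assuming invented note labels and onset times: inter-onset intervals and their ratios stand in for rhythm, and first-order label transitions stand in for sequential syntax. This is an illustration of the general approach, not the authors' analysis code.

```python
# Rhythm (inter-onset intervals) and first-order syntax (label transitions)
# computed from a made-up song bout.
import numpy as np
from collections import Counter

labels = ["A", "B", "B", "C", "A", "B", "C"]                 # hypothetical note types
onsets = np.array([0.00, 0.42, 0.84, 1.05, 1.89, 2.31, 2.52])  # hypothetical onsets (s)

# Rhythm: inter-onset intervals and ratios of successive intervals,
# a common way to characterise rhythmic categories.
iois = np.diff(onsets)
ratios = iois[1:] / iois[:-1]

# First-order syntax: transition counts between successive note labels.
transitions = Counter(zip(labels[:-1], labels[1:]))

# Pair each IOI ratio with the label bigram it ends on: the raw material for
# asking whether sequential syntax predicts rhythm category.
for bigram, ratio in zip(zip(labels[1:-1], labels[2:]), ratios):
    print(bigram, round(float(ratio), 2))
print(transitions)
```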
Affiliation(s)
- Jeffrey Xing
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Tim Sainburg
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Hollis Taylor
- Sydney Conservatorium of Music, University of Sydney, Sydney, New South Wales, Australia
- Timothy Q. Gentner
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Neurobiology Section, Division of Biological Sciences, University of California San Diego, La Jolla, CA, USA
- Kavli Institute for Brain and Mind, University of California San Diego, La Jolla, CA, USA
4. Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021;216:104847. PMID: 34311153; DOI: 10.1016/j.cognition.2021.104847.
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferentially engages the abstract rule-based control circuit, musical syntax instead employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany.
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
5. Eccles R, van der Linde J, le Roux M, Holloway J, MacCutcheon D, Ljung R, Swanepoel DW. Is Phonological Awareness Related to Pitch, Rhythm, and Speech-in-Noise Discrimination in Young Children? Lang Speech Hear Serv Sch 2020;52:383-395. PMID: 33464981; DOI: 10.1044/2020_lshss-20-00032.
Abstract
Purpose: Phonological awareness (PA) requires the complex integration of language, speech, and auditory processing abilities. Enhanced pitch and rhythm discrimination have been shown to improve PA and speech-in-noise (SiN) discrimination. If pitch and rhythm discrimination are nonlinguistic correlates of these abilities, screening them could contribute to screening procedures prior to diagnostic assessment. This research aimed to determine the association of PA abilities with pitch, rhythm, and SiN discrimination in children aged 5-7 years. Method: Forty-one participants' pitch, rhythm, and SiN discrimination and PA abilities were evaluated. To control for confounding factors, including biological and environmental risk exposure and gender differences, typically developing male children from high socioeconomic status backgrounds were selected. Pearson correlation was used to identify associations between variables, and stepwise regression analysis was used to identify possible predictors of PA. Results: Correlations of medium strength were identified between PA and pitch, rhythm, and SiN discrimination. Pitch and diotic digit-in-noise discrimination formed the strongest regression model (adjusted R² = .4213, r = .649) for phoneme-grapheme correspondence. Conclusions: The study demonstrates predictive relationships between the complex auditory discrimination skills of pitch, rhythm, and diotic digit-in-noise recognition and foundational phonemic awareness and phonic skills in young males from high socioeconomic status backgrounds. Pitch, rhythm, and digit-in-noise discrimination measures hold potential as screening measures for delays in phonemic awareness and phonic difficulties and as components of stimulation programs.
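The analysis pipeline described in the Method can be sketched in Python with standard tools: Pearson correlations per predictor, then a simple forward stepwise ordinary-least-squares selection by adjusted R². The synthetic data and column names (pitch, rhythm, digits_in_noise, phoneme_grapheme) are assumptions for illustration, not the study's dataset.

```python
# Pearson correlations plus greedy forward stepwise regression by adjusted R^2.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 41  # matches the reported sample size, data values are invented
df = pd.DataFrame({"pitch": rng.normal(size=n),
                   "rhythm": rng.normal(size=n),
                   "digits_in_noise": rng.normal(size=n)})
df["phoneme_grapheme"] = (0.5 * df["pitch"] + 0.3 * df["digits_in_noise"]
                          + rng.normal(scale=0.8, size=n))

predictors = ["pitch", "rhythm", "digits_in_noise"]
outcome = "phoneme_grapheme"

for p in predictors:
    r, pval = pearsonr(df[p], df[outcome])
    print(f"{p}: r = {r:.3f}, p = {pval:.3f}")

def forward_stepwise(data, y, candidates):
    """Greedy forward selection of predictors maximising adjusted R^2."""
    selected, best = [], -np.inf
    while candidates:
        scores = {}
        for c in candidates:
            X = sm.add_constant(data[selected + [c]])
            scores[c] = sm.OLS(data[y], X).fit().rsquared_adj
        top = max(scores, key=scores.get)
        if scores[top] <= best:          # no candidate improves the fit
            break
        selected.append(top)
        best = scores[top]
        candidates = [c for c in candidates if c != top]
    return selected, best

print(forward_stepwise(df, outcome, predictors))
```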
Affiliation(s)
- Renata Eccles
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Jeannie van der Linde
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Mia le Roux
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Jenny Holloway
- Data Science Research Group, Operational Intelligence, Council for Scientific and Industrial Research Next Generation Enterprises and Institutions, Pretoria, South Africa
- Douglas MacCutcheon
- Department of Building, Energy and Environmental Engineering, Högskolan i Gävle, Sweden
- Robert Ljung
- Department of Building, Energy and Environmental Engineering, Högskolan i Gävle, Sweden
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
6. Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia 2019;137:107284. PMID: 31783081; DOI: 10.1016/j.neuropsychologia.2019.107284.
Abstract
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged in a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis) operations. Rhythm engaged a bilateral sensorimotor network throughout the brain consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, intraparietal lobule, and putamen. By contrast, syntax mostly recruited the left sensorimotor network, including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula; these are neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing detailed anatomical overlap of sensorimotor regions recruited for musical rhythm and linguistic syntax.
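For readers unfamiliar with ALE, the following toy Python sketch illustrates the core idea on an invented voxel grid: each reported focus is blurred with a Gaussian kernel, foci are combined into a modelled-activation map per experiment, and those maps are combined across experiments. Grid size, kernel width, and coordinates are made up; this is not the implementation used in such meta-analyses.

```python
# Toy activation likelihood estimation on a small invented voxel grid.
import numpy as np

GRID = (20, 20, 20)   # toy voxel grid
SIGMA = 2.0           # toy kernel width in voxels

def gaussian_map(focus, grid=GRID, sigma=SIGMA):
    """Probability-like map for a single focus (peak normalised to 1)."""
    ii, jj, kk = np.indices(grid)
    d2 = (ii - focus[0])**2 + (jj - focus[1])**2 + (kk - focus[2])**2
    return np.exp(-d2 / (2 * sigma**2))

def ma_map(foci):
    """Per-experiment modelled activation: probabilistic union of its focus maps."""
    out = np.zeros(GRID)
    for f in foci:
        out = 1 - (1 - out) * (1 - gaussian_map(f))
    return out

def ale_map(experiments):
    """Union of per-experiment modelled-activation maps across the corpus."""
    ale = np.zeros(GRID)
    for foci in experiments:
        ale = 1 - (1 - ale) * (1 - ma_map(foci))
    return ale

# Two made-up "rhythm" experiments with a few foci each.
rhythm = ale_map([[(5, 5, 5), (12, 10, 8)], [(6, 5, 5), (14, 14, 14)]])
print(rhythm.max(), np.unravel_index(rhythm.argmax(), GRID))
```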
7. Zioga I, Di Bernardi Luft C, Bhattacharya J. Musical training shapes neural responses to melodic and prosodic expectation. Brain Res 2016;1650:267-282. PMID: 27622645; PMCID: PMC5069926; DOI: 10.1016/j.brainres.2016.09.015.
Abstract
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances that were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians showed a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.
Highlights:
- Melodic expectancy influences the processing of prosodic expectancy.
- Musical expertise modulates pitch processing in music and language.
- Musicians have a more refined response to pitch.
- Musicians' neural responses are proportional to their level of musical expertise.
- Possible association between the P200 neural component and behavioural facilitation.
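A hedged sketch of the kind of late-positivity analysis reported here: mean ERP amplitude in a post-onset window (a nominal 600-800 ms "P800" window), averaged per participant, then correlated with years of musical training. Array shapes, sampling rate, channel indices, and the window itself are illustrative assumptions, and the data below are random placeholders rather than the study's recordings.

```python
# Mean ERP amplitude in a fixed window, correlated with years of training.
import numpy as np
from scipy.stats import pearsonr

SFREQ = 250   # Hz, assumed sampling rate
TMIN = -0.2   # epoch start relative to word/note onset (s)

def mean_amplitude(epochs, t_start, t_end, channels):
    """epochs: (n_trials, n_channels, n_times), baseline-corrected microvolts."""
    i0 = int((t_start - TMIN) * SFREQ)
    i1 = int((t_end - TMIN) * SFREQ)
    return epochs[:, channels, i0:i1].mean()   # average over trials, channels, time

# Hypothetical data: one epochs array per participant for the
# "question on a low-probability note" condition, plus years of training.
rng = np.random.default_rng(0)
participants = [rng.normal(size=(40, 32, int(1.2 * SFREQ))) for _ in range(22)]
training_years = rng.uniform(2, 20, size=22)

centro_parietal = [10, 11, 12]   # assumed channel indices
p800 = np.array([mean_amplitude(e, 0.6, 0.8, centro_parietal) for e in participants])

r, p = pearsonr(p800, training_years)
print(f"P800 amplitude vs training: r = {r:.2f}, p = {p:.3f}")
```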
Affiliation(s)
- Ioanna Zioga
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom.
- Caroline Di Bernardi Luft
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom; School of Biological and Chemical Sciences, Queen Mary, University of London, Mile End Rd, London E1 4NS, United Kingdom
- Joydeep Bhattacharya
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom