1. Cancer A, Antonietti A. Deepening temporal cues in reading manipulations for dyslexia: A commentary on Horowitz-Kraus et al. (2023). Cortex 2024; 174:238-240. PMID: 38242754. DOI: 10.1016/j.cortex.2023.12.007.
Affiliation(s)
- Alice Cancer
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
2. Kovács P, Tóth B, Honbolygó F, Szalárdy O, Kohári A, Mády K, Magyari L, Winkler I. Speech prosody supports speaker selection and auditory stream segregation in a multi-talker situation. Brain Res 2023; 1805:148246. PMID: 36657631. DOI: 10.1016/j.brainres.2023.148246.
Abstract
To process speech in a multi-talker environment, listeners need to segregate the mixture of incoming speech streams and focus their attention on one of them. Potentially, speech prosody could aid the segregation of different speakers, the selection of the desired speech stream, and the detection of targets within the attended stream. To test these possibilities, we recorded behavioral responses and extracted event-related potentials and functional brain networks from electroencephalographic signals recorded while participants listened to two concurrent speech streams, performing a lexical detection and a recognition memory task in parallel. Prosody manipulation was applied to the attended speech stream in one group of participants and to the ignored speech stream in another group. Naturally recorded speech stimuli were either intact, synthetically F0-flattened, or prosodically suppressed by the speaker. Results show that prosody, especially the parsing cues mediated by speech rate, facilitates stream selection, while playing a smaller role in auditory stream segregation and target detection.
Affiliation(s)
- Petra Kovács
- Department of Cognitive Science, Budapest University of Technology and Economics, Hungary
- Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary
- Ferenc Honbolygó
- Brain Imaging Center, Research Center for Natural Sciences, Hungary
- Orsolya Szalárdy
- Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary; Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Anna Kohári
- Research Group of Phonetics, Institute for General and Hungarian Linguistics, Hungarian Research Centre for Linguistics, Hungary
- Katalin Mády
- Research Group of Phonetics, Institute for General and Hungarian Linguistics, Hungarian Research Centre for Linguistics, Hungary
- Lilla Magyari
- Department of Social Studies, Faculty of Social Sciences, University of Stavanger, Stavanger, Norway; Norwegian Centre for Reading Education and Research, Faculty of Arts and Education, University of Stavanger, Stavanger, Norway
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungary
3. Coopmans CW, Struiksma ME, Coopmans PHA, Chen A. Processing of Grammatical Agreement in the Face of Variation in Lexical Stress: A Mismatch Negativity Study. Language and Speech 2023; 66:202-213. PMID: 35652369. PMCID: PMC9976639. DOI: 10.1177/00238309221098116.
Abstract
Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject-verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject-verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic conditions, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in the number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.
Affiliation(s)
- Cas W. Coopmans
- Max Planck Institute for Psycholinguistics, The Netherlands; Centre for Language Studies, Radboud University, The Netherlands
- Aoju Chen
- Utrecht Institute of Linguistics OTS, Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands
4. The Role of Auditory and Visual Components in Reading Training: No Additional Effect of Synchronized Visual Cue in a Rhythm-Based Intervention for Dyslexia. Applied Sciences (Basel) 2022. DOI: 10.3390/app12073360.
Abstract
Based on the transfer effects of music training on the phonological and reading abilities of children with dyslexia, a computerized rhythmic intervention—the Rhythmic Reading Training (RRT)—was developed, in which reading exercises are combined with a rhythmic synchronization task. This rehabilitation program was previously tested in multiple controlled clinical trials, which confirmed its effectiveness in improving the reading skills of children and adolescents with dyslexia. In order to assess the specific contribution of the visual component of the training, namely, the presence of a visual cue supporting rhythmic synchronization, a controlled experimental study was conducted. Fifty-eight students with dyslexia aged 8 to 13 years were assigned to three conditions: (a) RRT auditory and visual condition, in which a visual cue was synchronized with the rhythmic stimulation; (b) RRT auditory-only condition, in which the visual cue was excluded; (c) no intervention. Comparisons of the participants’ performance before, after, and 3 months after the end of the intervention period revealed the significant immediate and long-term effect of both RRT conditions on reading, rapid naming, phonological, rhythmic, and attentional abilities. No significant differences were found between visual and auditory conditions, therefore showing no additional contribution of the visual component to the improvements induced by the RRT. Clinical Trial ID: NCT04995991.
5. Henrich K, Scharinger M. Predictive Processing in Poetic Language: Event-Related Potentials Data on Rhythmic Omissions in Metered Speech. Front Psychol 2022; 12:782765. PMID: 35069363. PMCID: PMC8769205. DOI: 10.3389/fpsyg.2021.782765.
Abstract
Predictions during language comprehension are currently discussed from many points of view. One area where predictive processing may play a particular role concerns poetic language that is regularized by meter and rhyme, thus allowing strong predictions regarding the timing and stress of individual syllables. While there is growing evidence that these prosodic regularities influence language processing, less is known about the potential influence of prosodic preferences (binary, strong-weak patterns) on neurophysiological processes. To this end, the present electroencephalogram (EEG) study examined whether the predictability of strong and weak syllables within metered speech would differ as a function of meter (trochee vs. iamb). Strong (i.e., accented) positions within a foot should be more predictable than weak (i.e., unaccented) positions. Our focus was on disyllabic pseudowords that solely differed between trochaic and iambic structure, with trochees providing the preferred foot in German. Methodologically, we focused on the omission Mismatch Negativity (oMMN) that is elicited when an anticipated auditory stimulus is omitted. The resulting electrophysiological brain response is particularly interesting because its elicitation does not depend on a physical stimulus. Omissions in deviant position of a passive oddball paradigm occurred at either first- or second-syllable position of the aforementioned pseudowords, resulting in a 2-by-2 design with the factors foot type and omission position. Analyses focused on the mean oMMN amplitude and latency differences across the four conditions. The result pattern was characterized by an interaction of the effects of foot type and omission position for both amplitudes and latencies. In first position, omissions resulted in larger and earlier oMMNs for trochees than for iambs. In second position, omissions resulted in larger oMMNs for iambs than for trochees, but the oMMN latency did not differ.
The results suggest that omissions, particularly in initial position, are modulated by a trochaic preference in German. The preferred strong-weak pattern may have strengthened the prosodic prediction, especially for matching, trochaic stimuli, such that the violation of this prediction led to an earlier and stronger prediction error. Altogether, predictive processing seems to play a particular role in metered speech, especially if the meter is based on the preferred foot type.
Affiliation(s)
- Karen Henrich
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Research Group Phonetics, Philipps-University of Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior, Universities of Marburg and Giessen, Marburg, Germany
6. Aichert I, Lehner K, Falk S, Späth M, Franke M, Ziegler W. In Time with the Beat: Entrainment in Patients with Phonological Impairment, Apraxia of Speech, and Parkinson's Disease. Brain Sci 2021; 11(11):1524. PMID: 34827523. PMCID: PMC8615970. DOI: 10.3390/brainsci11111524.
Abstract
In the present study, we investigated whether individuals with neurogenic speech sound impairments of three types, Parkinson's dysarthria, apraxia of speech, and aphasic phonological impairment, accommodate their speech to the natural speech rhythm of an auditory model, and if so, whether the effect is stronger after hearing metrically regular sentences as compared to those with an irregular pattern. This question builds on theories of rhythmic entrainment, assuming that sensorimotor predictions of upcoming events allow humans to synchronize their actions with an external rhythm. To investigate entrainment effects, we conducted a sentence completion task relating participants' response latencies to the spoken rhythm of the prime heard immediately before. A further research question was whether the perceived rhythm interacts with the rhythm of the participants' own productions, i.e., the trochaic or iambic stress pattern of disyllabic target words. For a control group of healthy speakers, our study revealed evidence for entrainment when trochaic target words were preceded by regularly stressed prime sentences. Persons with Parkinson's dysarthria showed a pattern similar to that of the healthy individuals. For the patient groups with apraxia of speech and with phonological impairment, considerably longer response latencies with differing patterns were observed. Trochaic target words were initiated with significantly shorter latencies, whereas the metrical regularity of prime sentences had no consistent impact on response latencies and did not interact with the stress pattern of the target words to be produced. The absence of entrainment in these patients may be explained by their more severe difficulties with initiating speech at all. We discuss the results in terms of clinical implications for diagnostics and therapy in neurogenic speech disorders.
Affiliation(s)
- Ingrid Aichert
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-Universität München, 80799 Munich, Germany
- Katharina Lehner
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-Universität München, 80799 Munich, Germany
- Simone Falk
- International Laboratory for Brain, Music and Sound Research (BRAMS), Département de Linguistique et de Traduction, Université de Montréal, Montréal, QC H3C 3J7, Canada
- Mona Späth
- Neolexon, Limedix GmbH, 80538 Munich, Germany
- Mona Franke
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-Universität München, 80799 Munich, Germany
- Wolfram Ziegler
- Clinical Neuropsychology Research Group, Institute of Phonetics and Speech Processing, Ludwig-Maximilians-Universität München, 80799 Munich, Germany
7. Linguistic syncopation: Meter-syntax alignment affects sentence comprehension and sensorimotor synchronization. Cognition 2021; 217:104880. PMID: 34419725. DOI: 10.1016/j.cognition.2021.104880.
Abstract
The hierarchical organization of speech rhythm into meter putatively confers cognitive affordances for perception, memory, and motor coordination. Meter also aligns with phrasal structure in systematic ways. In this paper, we show that this alignment affects the robustness of syntactic comprehension and discuss possible underlying mechanisms. In two experiments, we manipulated meter-syntax alignment while sentences with relative clause structures were either read as text (experiment 1, n = 40) or listened to as speech (experiment 2, n = 40). In experiment 2, we also measured the stability with which participants could tap in time with the metrical accents in the sentences they were comprehending. When syntactic cues clashed with the metrical context, participants not only made more comprehension errors but also showed disrupted sensorimotor synchronization. We suggest that this reflects a tight coordination of top-down linguistic knowledge with the sensorimotor system to optimize comprehension.
8. Kasedo R, Iijima A, Nakahara K, Adachi Y, Hasegawa I. Development of a Self-paced Sequential Letterstring Reading Task to Capture the Temporal Dynamics of Reading a Natural Language. Advanced Biomedical Engineering 2021. DOI: 10.14326/abe.10.26.
Affiliation(s)
- Ryutaro Kasedo
- Department of Bio-cybernetics, Graduate School of Science and Technology, Niigata University
- Department of Physiology, School of Medicine, Niigata University
- Atsuhiko Iijima
- Department of Bio-cybernetics, Graduate School of Science and Technology, Niigata University
- School of Health Sciences, Faculty of Medicine, Niigata University
- Yusuke Adachi
- Department of Physiology, School of Medicine, Niigata University
- Isao Hasegawa
- Department of Physiology, School of Medicine, Niigata University
9. LaCroix AN, Blumenstein N, Tully M, Baxter LC, Rogalsky C. Effects of prosody on the cognitive and neural resources supporting sentence comprehension: A behavioral and lesion-symptom mapping study. Brain and Language 2020; 203:104756. PMID: 32032865. PMCID: PMC7064294. DOI: 10.1016/j.bandl.2020.104756.
Abstract
Non-canonical sentence comprehension impairments are well-documented in aphasia. Studies of neurotypical controls indicate that prosody can aid comprehension by facilitating attention towards critical pitch inflections and phrase boundaries. However, no studies have examined how prosody may engage specific cognitive and neural resources during non-canonical sentence comprehension in persons with left hemisphere damage. Experiment 1 examines the relationship between comprehension of non-canonical sentences spoken with typical and atypical prosody and several cognitive measures in 25 persons with chronic left hemisphere stroke and 20 matched controls. Experiment 2 explores the neural resources critical for non-canonical sentence comprehension with each prosody type using region-of-interest-based multiple regressions. Lower orienting attention abilities and greater inferior frontal and parietal damage predicted lower comprehension, but only for sentences with typical prosody. Our results suggest that typical sentence prosody may engage attention resources to support non-canonical sentence comprehension, and this relationship may be disrupted following left hemisphere stroke.
Affiliation(s)
- Arianna N LaCroix
- College of Health Solutions, Arizona State University, Tempe, AZ, USA; College of Health Sciences, Midwestern University, Glendale, AZ, USA
- McKayla Tully
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
10. Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia 2019; 137:107284. PMID: 31783081. DOI: 10.1016/j.neuropsychologia.2019.107284.
Abstract
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimate (ALE) to localize the shared neural structures engaged in a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis) operations. Rhythm engaged a bilateral sensorimotor network throughout the brain consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, intraparietal lobule, and putamen. By contrast, syntax mostly recruited the left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula: neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing detailed anatomical overlap of sensorimotor regions recruited for musical rhythm and linguistic syntax.
11. Ravignani A, Dalla Bella S, Falk S, Kello CT, Noriega F, Kotz SA. Rhythm in speech and animal vocalizations: a cross-species perspective. Ann N Y Acad Sci 2019; 1453:79-98. PMID: 31237365. PMCID: PMC6851814. DOI: 10.1111/nyas.14166.
Abstract
Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross-species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross-species perspective of speech rhythm, our review puts some pieces of the puzzle together.
Affiliation(s)
- Andrea Ravignani
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Brussels, Belgium
- Institute for Advanced Study, University of Amsterdam, Amsterdam, the Netherlands
- Simone Dalla Bella
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Department of Psychology, University of Montreal, Montréal, Quebec, Canada
- Department of Cognitive Psychology, Warsaw, Poland
- Simone Falk
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS/Université Sorbonne Nouvelle Paris-3, Institut de Linguistique et Phonétique générales et appliquées, Paris, France
- Florencia Noriega
- Chair for Network Dynamics, Center for Advancing Electronics Dresden (CFAED), TU Dresden, Dresden, Germany
- CODE University of Applied Sciences, Berlin, Germany
- Sonja A. Kotz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Basic and Applied NeuroDynamics Laboratory, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
12. Hesling I, Labache L, Joliot M, Tzourio-Mazoyer N. Large-scale plurimodal networks common to listening to, producing and reading word lists: an fMRI study combining task-induced activation and intrinsic connectivity in 144 right-handers. Brain Struct Funct 2019; 224:3075-3094. PMID: 31494717. PMCID: PMC6875148. DOI: 10.1007/s00429-019-01951-4.
Abstract
We aimed at identifying plurimodal large-scale networks for producing, listening to and reading word lists based on the combined analyses of task-induced activation and resting-state intrinsic connectivity in 144 healthy right-handers. In the first step, we identified the regions in each hemisphere showing joint activation and joint asymmetry during the three tasks. In the left hemisphere, 14 homotopic regions of interest (hROIs) located in the left Rolandic sulcus, precentral gyrus, cingulate gyrus, cuneus and inferior supramarginal gyrus (SMG) met this criterion; in the right hemisphere, 7 hROIs located in the preSMA, medial superior frontal gyrus, precuneus and superior temporal sulcus (STS) did so. In a second step, we calculated the BOLD temporal correlations across these 21 hROIs at rest and conducted a hierarchical clustering analysis to unravel their network organization. Two networks were identified, including the WORD-LIST_CORE network that aggregated 14 motor, premotor and phonemic areas in the left hemisphere plus the right STS that corresponded to the posterior human voice area (pHVA). The present results revealed that word-list processing is based on left articulatory and storage areas supporting the action-perception cycle common not only to production and listening but also to reading. The inclusion of the right pHVA acting as a prosodic integrative area highlights the importance of prosody in the three modalities and reveals an intertwining across hemispheres between prosodic (pHVA) and phonemic (left SMG) processing. These results are consistent with the motor theory of speech postulating that articulatory gestures are the central motor units on which word perception, production, and reading develop and act together.
Affiliation(s)
- Isabelle Hesling
- University of Bordeaux, IMN, UMR 5293, 33000 Bordeaux, France; CNRS, IMN, UMR 5293, 33000 Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000 Bordeaux, France; IMN Institut des Maladies Neurodégénératives UMR 5293, Team 5: GIN Groupe d'imagerie Neurofonctionnelle, CEA-CNRS, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine, 3ème étage, 146 rue Léo-Saignat, CS 61292, Case 28, 33076 Bordeaux CEDEX, France
- L Labache
- University of Bordeaux, IMN, UMR 5293, 33000 Bordeaux, France; CNRS, IMN, UMR 5293, 33000 Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000 Bordeaux, France; University of Bordeaux, IMB, UMR 5251, 33405 Talence, France; INRIA Bordeaux Sud-Ouest, CQFD, INRIA, UMR 5251, 33405 Talence, France
- M Joliot
- University of Bordeaux, IMN, UMR 5293, 33000 Bordeaux, France; CNRS, IMN, UMR 5293, 33000 Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000 Bordeaux, France
- N Tzourio-Mazoyer
- University of Bordeaux, IMN, UMR 5293, 33000 Bordeaux, France; CNRS, IMN, UMR 5293, 33000 Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000 Bordeaux, France
13. LaCroix AN, Blumenstein N, Houlihan C, Rogalsky C. The effects of prosody on sentence comprehension: evidence from a neurotypical control group and seven cases of chronic stroke. Neurocase 2019; 25:106-117. PMID: 31241420. PMCID: PMC6662577. DOI: 10.1080/13554794.2019.1630447.
Abstract
Both prosody and sentence structure (e.g., canonical versus non-canonical) affect sentence comprehension. However, few previous studies have examined a possible interaction between prosody and sentence structure. In adult controls we found a significant interaction: typical sentence prosody, versus list prosody, facilitated comprehension of only some sentence structures. In seven stroke patients, impaired attentional control was related to impaired comprehension with sentence prosody but not list prosody; impaired working memory was related to impaired comprehension with list prosody, but not sentence prosody. Thus, non-canonical sentence comprehension impairments in stroke patients may be modulated by prosody, based on a patient's cognitive abilities.
Affiliation(s)
- Arianna N LaCroix
- College of Health Solutions, Arizona State University, Tempe, AZ, USA; College of Health Sciences, Midwestern University, Glendale, AZ, USA
- Nicole Blumenstein
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Chloe Houlihan
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
14. Kotz S, Ravignani A, Fitch W. The Evolution of Rhythm Processing. Trends Cogn Sci 2018; 22:896-910. DOI: 10.1016/j.tics.2018.08.002.
15. Sharpe V, Fogerty D, den Ouden DB. The Role of Fundamental Frequency and Temporal Envelope in Processing Sentences with Temporary Syntactic Ambiguities. Language and Speech 2017; 60:399-426. PMID: 28915784. DOI: 10.1177/0023830916652649.
Abstract
Previous experiments have demonstrated the impact of speech prosody on syntactic processing. The present study was designed to examine how listeners use specific acoustic properties of prosody for grammatical interpretation. We investigated the independent contributions of two acoustic properties associated with the pitch and rhythmic properties of speech: the fundamental frequency and the temporal envelope, respectively. The effect of degrading these prosodic components was examined by testing listeners' ability to parse early-closure garden-path sentences. A second aim was to investigate how effects of prosody interact with semantic effects of sentence plausibility. Using a task that required both a comprehension and a production response, we were able to determine that degradation of the speech envelope more consistently affects syntactic processing than degradation of the fundamental frequency. These effects are exacerbated in sentences with plausible misinterpretations, showing that prosodic degradation interacts with contextual cues to sentence interpretation.
Affiliation(s)
- Victoria Sharpe
- Department of Communication Sciences and Disorders, University of South Carolina, USA
- Daniel Fogerty
- Department of Communication Sciences and Disorders, University of South Carolina, USA
- Dirk-Bart den Ouden
- Department of Communication Sciences and Disorders, University of South Carolina, USA
16. Falk S, Volpi-Moncorger C, Dalla Bella S. Auditory-Motor Rhythms and Speech Processing in French and German Listeners. Front Psychol 2017; 8:395. PMID: 28443036. PMCID: PMC5387104. DOI: 10.3389/fpsyg.2017.00395.
Abstract
Moving to a speech rhythm can enhance verbal processing in the listener by increasing temporal expectancies (Falk and Dalla Bella, 2016). Here we tested whether this hypothesis holds for prosodically diverse languages such as German (a lexical stress language) and French (a non-stress language). Moreover, we examined the relation between motor performance and the benefits for verbal processing as a function of language. Sixty-four participants, 32 German and 32 French native speakers, detected subtle word changes in accented positions in metrically structured sentences to which they had previously tapped with their index finger. Before each sentence, they were cued by a metronome to tap either congruently (i.e., to accented syllables) or incongruently (i.e., to non-accented parts) with the following speech stimulus. Both French and German speakers detected words better when cued to tap congruently than when cued to tap incongruently. Detection performance was predicted by participants' motor performance in the non-verbal cueing phase. Moreover, tapping rate while participants tapped to speech predicted detection differently for the two language groups, in particular in the incongruent tapping condition. We discuss our findings in light of the rhythmic differences between the two languages and with respect to recent theories of expectancy-driven and multisensory speech processing.
Collapse
Affiliation(s)
- Simone Falk
- Institut für Deutsche Philologie, Ludwig-Maximilians-University, Munich, Germany; Laboratoire Parole et Langage, UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille University, Aix-en-Provence, France; Laboratoire Phonétique et Phonologie, UMR 7018, CNRS, Université Sorbonne Nouvelle Paris-3, Paris, France
| | - Chloé Volpi-Moncorger
- Laboratoire Parole et Langage, UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille University, Aix-en-Provence, France
| | - Simone Dalla Bella
- EuroMov, University of Montpellier, Montpellier, France; Institut Universitaire de France, Paris, France; International Laboratory for Brain, Music, and Sound Research, Montreal, QC, Canada; Department of Cognitive Psychology, Wyższa Szkoła Finansów i Zarządzania w Warszawie (WSFiZ), Warsaw, Poland
| |
Collapse
|
17
|
Abstract
Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that, compared with the irregular condition, the presence of a regular cue modulates the neural response during speech processing, as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
Collapse
Affiliation(s)
- Simone Falk
- Aix-Marseille Univ, LPL, UMR 7309, CNRS, Aix-en-Provence, France; Université Sorbonne Nouvelle Paris-3, LPP, UMR 7018, CNRS, Paris, France; Ludwig-Maximilians-University, Munich, Germany
| | | | | |
Collapse
|
18
|
Aesthetic appreciation of poetry correlates with ease of processing in event-related potentials. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2016; 16:362-73. [PMID: 26697879 DOI: 10.3758/s13415-015-0396-x] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Rhetorical theory suggests that rhythmic and metrical features of language substantially contribute to persuading, moving, and pleasing an audience. A potential explanation of these effects is offered by "cognitive fluency theory," which stipulates that recurring patterns (e.g., meter) enhance perceptual fluency and can lead to greater aesthetic appreciation. In this article, we explore these two assertions by investigating the effects of meter and rhyme in the reception of poetry by means of event-related brain potentials (ERPs). Participants listened to four versions of lyrical stanzas that varied in terms of meter and rhyme, and rated the stanzas for rhythmicity and aesthetic liking. The behavioral and ERP results were in accord with enhanced liking and rhythmicity ratings for metered and rhyming stanzas. The metered and rhyming stanzas elicited smaller N400/P600 ERP responses than their nonmetered, nonrhyming, or nonmetered and nonrhyming counterparts. In addition, the N400 and P600 effects for the lyrical stanzas correlated with aesthetic liking effects (metered-nonmetered), implying that modulation of the N400 and P600 has a direct bearing on the aesthetic appreciation of lyrical stanzas. We suggest that these effects are indicative of perceptual-fluency-enhanced aesthetic liking, as postulated by cognitive fluency theory.
Collapse
|
19
|
Kriukova O, Mani N. Processing Metrical Information in Silent Reading: An ERP Study. Front Psychol 2016; 7:1432. [PMID: 27713718 PMCID: PMC5031776 DOI: 10.3389/fpsyg.2016.01432] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2016] [Accepted: 09/07/2016] [Indexed: 12/02/2022] Open
Abstract
Listeners are sensitive to the metric structure of words, i.e., an alternating pattern of stressed and unstressed syllables, in auditory speech processing: event-related potentials recorded as participants listen to a sequence of words with a consistent metrical pattern, e.g., a series of trochaic words, suggest that participants register words metrically incongruent with the preceding sequence. Here we examine whether the processing of individual words in silent reading is similarly impacted by rhythmic properties of the surrounding context. We recorded participants' EEG as they read lists of three disyllabic words, either trochaic or iambic, followed by a target word that was either congruent or incongruent with the preceding metric pattern. Event-related potentials (ERPs) to targets were modulated by an interaction between metrical structure (iambic vs. trochaic) and congruence: for iambs, more positive ERPs were observed in the incongruent than the congruent condition 250–400 ms and 400–600 ms post-stimulus, whereas no reliable impact of congruence was found for trochees. We suggest that when iambs are in an incongruent context, i.e., preceded by trochees, the context contains the metrical structure that is more typical in participants' native language, which facilitates processing relative to when they are presented in a congruent context containing the less typical, i.e., iambic, metrical structure. The results provide evidence that comprehenders are sensitive to the prosodic properties of the context even in silent reading, such that this sensitivity impacts lexico-semantic processing of individual words.
Collapse
Affiliation(s)
- Olga Kriukova
- Psychology of Language Research Group, Department of Psychology, Georg-Elias-Müller Institute of Psychology, Georg-August-Universität Göttingen, Göttingen, Germany
| | - Nivedita Mani
- Psychology of Language Research Group, Department of Psychology, Georg-Elias-Müller Institute of Psychology, Georg-August-Universität Göttingen, Göttingen, Germany
| |
Collapse
|
20
|
Falk S, Maslow E, Thum G, Hoole P. Temporal variability in sung productions of adolescents who stutter. JOURNAL OF COMMUNICATION DISORDERS 2016; 62:101-114. [PMID: 27323225 DOI: 10.1016/j.jcomdis.2016.05.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Revised: 05/09/2016] [Accepted: 05/24/2016] [Indexed: 06/06/2023]
Abstract
Singing has long been used as a technique to enhance and reeducate temporal aspects of articulation in speech disorders. In the present study, differences in the temporal structure of sung versus spoken speech were investigated in stuttering. In particular, we examined whether singing helps to reduce the VOT variability of voiceless plosives, which would indicate enhanced temporal coordination of oral and laryngeal processes. Eight German adolescents who stutter and eight typically fluent peers repeatedly spoke and sang a simple German congratulation formula in which a disyllabic target word (e.g., /'ki:ta/) was repeated five times. In every trial, the first syllable of the word was varied, starting equally often with one of the three voiceless German stops /p/, /t/, /k/. Acoustic analyses showed that mean VOT and stop gap duration were reduced during singing compared to speaking, while mean vowel and utterance durations were prolonged in singing in both groups. Importantly, adolescents who stutter significantly reduced VOT variability (measured as the coefficient of variation) in word-initial stressed positions during sung productions compared to speaking, while the control group showed a slight increase in VOT variability. In unstressed syllables, however, VOT variability increased from speech to song in adolescents who do and who do not stutter. In addition, vowel and utterance durational variability decreased in both groups; yet adolescents who stutter remained more variable in utterance duration independent of the form of vocalization. These findings shed new light on how singing alters temporal structure and, in particular, the coordination of laryngeal-oral timing in stuttering. Future perspectives for investigating how rhythmic aspects could aid the management of fluent speech in stuttering are discussed.
LEARNING OUTCOMES: Readers will be able to describe (1) current perspectives on singing and its effects on articulation and fluency in stuttering and (2) acoustic parameters, such as VOT variability, which indicate the efficiency of control and coordination of laryngeal-oral movements. They will understand and be able to discuss (3) how singing reduces temporal variability in the productions of adolescents who do and do not stutter and (4) how this is linked to altered articulatory patterns in singing as well as to its rhythmic structure.
Collapse
Affiliation(s)
- Simone Falk
- Institute of German Philology, Ludwig-Maximilians-University, Schellingstr. 3, 80799 Munich, Germany; Laboratoire Parole et Langage, UMR 7309, Aix-Marseille University, CNRS, Aix-en-Provence, France.
| | - Elena Maslow
- Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
| | - Georg Thum
- Counselling Service for Stuttering, Institute of Clinical Speech Therapy and Education (Sprachheilpädagogik), Ludwig-Maximilians-University, Munich, Germany
| | - Philip Hoole
- Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, Munich, Germany
| |
Collapse
|
21
|
Roncaglia-Denissen MP, Roor DA, Chen A, Sadakata M. The Enhanced Musical Rhythmic Perception in Second Language Learners. Front Hum Neurosci 2016; 10:288. [PMID: 27375469 PMCID: PMC4901070 DOI: 10.3389/fnhum.2016.00288] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Accepted: 05/27/2016] [Indexed: 11/13/2022] Open
Abstract
Previous research suggests that mastering languages with distinct rather than similar rhythmic properties enhances musical rhythmic perception. This study investigates whether learning a second language (L2) contributes to enhanced musical rhythmic perception in general, regardless of the rhythmic properties of the first and second languages. Additionally, we investigated whether this perceptual enhancement could instead be explained by exposure to musical rhythmic complexity, such as the use of compound meter in Turkish music. Finally, we investigated whether an enhancement of musical rhythmic perception can be observed among L2 learners whose first language relies heavily on pitch information, as is the case with tonal languages. We therefore tested Turkish, Dutch, and Mandarin L2 learners of English, as well as Turkish monolinguals, on their musical rhythmic perception. Participants' phonological and working memory capacities, melodic aptitude, years of formal musical training, and daily exposure to music were assessed to account for cultural and individual differences that could affect their rhythmic ability. Our results suggest that mastering an L2, rather than exposure to musical rhythmic complexity, explains individuals' enhanced musical rhythmic perception. An even stronger enhancement was observed for L2 learners whose first and second languages differ in their rhythmic properties, as the enhanced performance of Turkish compared with Dutch L2 learners of English seems to suggest. Such a stronger enhancement appears even among L2 learners whose first language relies heavily on pitch information, as the performance of Mandarin L2 learners of English indicates. Our findings provide further support for a cognitive transfer between the language and music domains.
Collapse
Affiliation(s)
- M. Paula Roncaglia-Denissen
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
| | - Drikus A. Roor
- Musicology Department, University of Amsterdam, Amsterdam, Netherlands
| | - Ao Chen
- Utrecht Institute of Linguistics (Uil OTS), Utrecht University, Utrecht, Netherlands
| | - Makiko Sadakata
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Musicology Department, University of Amsterdam, Amsterdam, Netherlands
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
| |
Collapse
|
22
|
Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha. J Neurosci 2015; 35:14691-701. [PMID: 26538641 DOI: 10.1523/jneurosci.2243-15.2015] [Citation(s) in RCA: 76] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflects functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT: The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms.
Collapse
|
23
|
Magne C, Jordan DK, Gordon RL. Speech rhythm sensitivity and musical aptitude: ERPs and individual differences. BRAIN AND LANGUAGE 2016; 153-154:13-19. [PMID: 26828758 DOI: 10.1016/j.bandl.2016.01.001] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2015] [Revised: 12/13/2015] [Accepted: 01/21/2016] [Indexed: 06/05/2023]
Abstract
This study investigated the electrophysiological markers of rhythmic expectancy during speech perception. In addition, given the large literature showing overlaps between cognitive and neural resources recruited for language and music, we considered a relation between musical aptitude and individual differences in speech rhythm sensitivity. Twenty adults were administered a standardized assessment of musical aptitude, and EEG was recorded as participants listened to sequences of four bisyllabic words for which the stress pattern of the final word either matched or mismatched the stress pattern of the preceding words. Words with unexpected stress patterns elicited an increased fronto-central mid-latency negativity. In addition, rhythm aptitude significantly correlated with the size of the negative effect elicited by unexpected iambic words, the least common type of stress pattern in English. The present results suggest shared neurocognitive resources for speech rhythm and musical rhythm.
Collapse
Affiliation(s)
- Cyrille Magne
- Psychology Department, Middle Tennessee State University, United States.
| | - Deanna K Jordan
- Psychology Department, Middle Tennessee State University, United States
| | - Reyna L Gordon
- Department of Otolaryngology, Vanderbilt University Medical Center, United States; Vanderbilt Kennedy Center, United States
| |
Collapse
|
24
|
Planchou C, Clément S, Béland R, Cason N, Motte J, Samson S. Word Detection in Sung and Spoken Sentences in Children With Typical Language Development or With Specific Language Impairment. Adv Cogn Psychol 2016; 11:118-35. [PMID: 26767070 PMCID: PMC4710888 DOI: 10.5709/acp-0177-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2015] [Accepted: 06/17/2015] [Indexed: 11/23/2022] Open
Abstract
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined the ease of word detection in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both the TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI and are discussed in light of the role of durational prosodic cues in word detection.
Collapse
Affiliation(s)
- Clément Planchou
- Pediatric Neurology Unit, American Memorial Hospital, University Hospital of Reims, France
| | - Sylvain Clément
- Neuropsychology: Audition, Cognition, Action, PSITEC Laboratory (EA 4072), Department of Psychology, University of Lille, France
| | - Renée Béland
- Department of Speech Therapy and Audiology, University of Montreal, Canada
| | - Nia Cason
- Neuropsychology: Audition, Cognition, Action, PSITEC Laboratory (EA 4072), Department of Psychology, University of Lille, France
| | - Jacques Motte
- Pediatric Neurology Unit, American Memorial Hospital, University Hospital of Reims, France
| | - Séverine Samson
- Neuropsychology: Audition, Cognition, Action, PSITEC Laboratory (EA 4072), Department of Psychology, University of Lille, France
| |
Collapse
|
25
|
Abstract
Every day we communicate using complex linguistic and musical systems, yet these modern systems are the product of a much more ancient relationship with sound. When we speak, we communicate not only with the words we choose, but also with the patterns of sound we create and the movements that create them. From the natural rhythms of speech to the precise timing characteristics of a consonant, these patterns guide our daily communication. By examining the principles of information processing that are common to speech and music, we peel back the layers to reveal the biological foundations of human communication through sound. Further, we consider how the brain's response to sound is shaped by experience, such as musical expertise, and the implications for the treatment of communication disorders.
Collapse
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory, Departments of Communication Sciences, Neurobiology and Physiology, and Otolaryngology, Northwestern University, Evanston, Illinois 60208
| | - Jessica Slater
- Auditory Neuroscience Laboratory, Department of Communication Sciences
| |
Collapse
|
26
|
Slater J, Kraus N. The role of rhythm in perceiving speech in noise: a comparison of percussionists, vocalists and non-musicians. Cogn Process 2015; 17:79-87. [PMID: 26445880 DOI: 10.1007/s10339-015-0740-7] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2015] [Accepted: 09/17/2015] [Indexed: 01/11/2023]
Abstract
The natural rhythms of speech help a listener follow what is being said, especially in noisy conditions. There is increasing evidence for links between rhythm abilities and language skills; however, the role of rhythm-related expertise in perceiving speech in noise is unknown. The present study assesses musical competence (rhythmic and melodic discrimination), speech-in-noise perception and auditory working memory in young adult percussionists, vocalists and non-musicians. Outcomes reveal that better ability to discriminate rhythms is associated with better sentence-in-noise (but not words-in-noise) perception across all participants. These outcomes suggest that sensitivity to rhythm helps a listener understand unfolding speech patterns in degraded listening conditions, and that observations of a "musician advantage" for speech-in-noise perception may be mediated in part by superior rhythm skills.
Collapse
Affiliation(s)
- Jessica Slater
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA
| | - Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Department of Communication Sciences, Northwestern University, Evanston, IL, USA; Institute for Neuroscience, Northwestern University, Evanston, IL, USA; Department of Neurobiology and Physiology, Northwestern University, Evanston, IL, USA; Department of Otolaryngology, Northwestern University, 2240 Campus Drive, Evanston, IL, 60208, USA
| |
Collapse
|
27
|
Abstract
Sensitivity to speech rhythm, especially the pattern of stressed and unstressed syllables, is an important aspect of language acquisition and comprehension from infancy through adulthood. In English, a strong correlation exists between speech rhythm and grammatical class. This property is well illustrated by a particular group of noun/verb homographs that are spelled the same but are pronounced with a lexical stress on the first syllable when used as a noun or on the second syllable when used as a verb. The purpose of this study was to further examine the neural markers of speech rhythm and its role in word recognition. To this end, event-related brain potentials were recorded while participants listened to spoken sentences containing a stress homograph either in a noun or a verb position. The rhythmic structure of the stress homographs was manipulated so that they were pronounced with a stress pattern that either matched or mismatched their grammatical class. Results of cluster-based permutation tests on the event-related brain potentials revealed larger negativities over the centrofrontal scalp regions when the stress homographs were mispronounced, in line with previous studies on lexical ambiguity resolution. In addition, differences between rhythmically unexpected nouns and verbs could be seen as early as 200 ms, suggesting that listeners are sensitive to statistical properties of their language rhythm. Together, these results support the hypothesis that information about speech rhythm is rapidly integrated during speech perception and contributes to lexical retrieval.
Collapse
|
28
|
Gordon RL, Jacobs MS, Schuele CM, McAuley JD. Perspectives on the rhythm-grammar link and its implications for typical and atypical language development. Ann N Y Acad Sci 2015; 1337:16-25. [PMID: 25773612 DOI: 10.1111/nyas.12683] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
This paper reviews the mounting evidence for shared cognitive mechanisms and neural resources for rhythm and grammar. Evidence for a role of rhythm skills in language development and language comprehension is reviewed here in three lines of research: (1) behavioral and brain data from adults and children, showing that prosody and other aspects of timing of sentences influence online morpho-syntactic processing; (2) comorbidity of impaired rhythm with grammatical deficits in children with language impairment; and (3) our recent work showing a strong positive association between rhythm perception skills and expressive grammatical skills in young school-age children with typical development. Our preliminary follow-up study presented here revealed that musical rhythm perception predicted variance in 6-year-old children's production of complex syntax, as well as online reorganization of grammatical information (transformation); these data provide an additional perspective on the hierarchical relations potentially shared by rhythm and grammar. A theoretical framework for shared cognitive resources for the role of rhythm in perceiving and learning grammatical structure is elaborated on in light of potential implications for using rhythm-emphasized musical training to improve language skills in children.
Collapse
Affiliation(s)
- Reyna L Gordon
- Department of Otolaryngology, Vanderbilt University School of Medicine, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University, Nashville, Tennessee
| | | | | | | |
Collapse
|
29
|
Cason N, Astésano C, Schön D. Bridging music and speech rhythm: rhythmic priming and audio-motor training affect speech perception. Acta Psychol (Amst) 2015; 155:43-50. [PMID: 25553343 DOI: 10.1016/j.actpsy.2014.12.002] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2014] [Revised: 12/01/2014] [Accepted: 12/03/2014] [Indexed: 11/16/2022] Open
Abstract
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing - the building blocks of speech - and whether audio-motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one that received audio-motor training and one that did not. We hypothesised that (1) phonological processing would be enhanced in matching conditions, and (2) audio-motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, thus revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio-motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.
Collapse
Affiliation(s)
- Nia Cason
- Aix-Marseille Université, Institut de Neurosciences des Systèmes, Marseille, France; INSERM, U1106, Marseille, France.
| | - Corine Astésano
- UMR 7309, Laboratoire Parole et Langage, CNRS & Aix-Marseille University, 5 avenue Pasteur, 13006 Aix-en-Provence, France; EA 4156, U.R.I. Octogone-Lordat, 5 allées Antonio Machado, 31058 Toulouse Cedex 09, France.
| | - Daniele Schön
- Aix-Marseille Université, Institut de Neurosciences des Systèmes, Marseille, France; INSERM, U1106, Marseille, France.
| |
Collapse
|
30
|
Kraus N, Slater J. Music and language. THE HUMAN AUDITORY SYSTEM - FUNDAMENTAL ORGANIZATION AND CLINICAL DISORDERS 2015; 129:207-22. [DOI: 10.1016/b978-0-444-62630-1.00012-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
|
31
|
Slater J, Strait DL, Skoe E, O'Connell S, Thompson E, Kraus N. Longitudinal effects of group music instruction on literacy skills in low-income children. PLoS One 2014; 9:e113383. [PMID: 25409300 PMCID: PMC4237413 DOI: 10.1371/journal.pone.0113383] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2014] [Accepted: 10/24/2014] [Indexed: 11/18/2022] Open
Abstract
Children from low-socioeconomic backgrounds tend to fall progressively further behind their higher-income peers over the course of their academic careers. Music training has been associated with enhanced language and learning skills, suggesting that music programs could play a role in helping low-income children to stay on track academically. Using a controlled, longitudinal design, the impact of group music instruction on English reading ability was assessed in 42 low-income Spanish-English bilingual children aged 6-9 years in Los Angeles. After one year, children who received music training retained their age-normed level of reading performance while a matched control group's performance deteriorated, consistent with expected declines in this population. While the extent of change is modest, outcomes nonetheless provide evidence that music programs may have value in helping to counteract the negative effects of low-socioeconomic status on child literacy development.
Affiliation(s)
- Jessica Slater
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Dana L. Strait
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Institute for Neuroscience, Northwestern University, Evanston, Illinois, United States of America
- Erika Skoe
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Samantha O'Connell
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Elaine Thompson
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Institute for Neuroscience, Northwestern University, Evanston, Illinois, United States of America
- Department of Neurobiology and Physiology, Northwestern University, Evanston, Illinois, United States of America
- Department of Otolaryngology, Northwestern University, Evanston, Illinois, United States of America
|
32
|
Gordon RL, Shivers CM, Wieland EA, Kotz SA, Yoder PJ, Devin McAuley J. Musical rhythm discrimination explains individual differences in grammar skills in children. Dev Sci 2014; 18:635-44. [DOI: 10.1111/desc.12230]
Affiliation(s)
- Elizabeth A. Wieland
- Department of Communicative Sciences and Disorders, Michigan State University, USA
- Sonja A. Kotz
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- School of Psychological Sciences, The University of Manchester, UK
- Paul J. Yoder
- Vanderbilt Kennedy Center, Vanderbilt University, USA
- J. Devin McAuley
- Department of Psychology and Neuroscience Program, Michigan State University, USA
|