1
Berthault E, Chen S, Falk S, Morillon B, Schön D. Auditory and motor priming of metric structure improves understanding of degraded speech. Cognition 2024; 248:105793. PMID: 38636164. DOI: 10.1016/j.cognition.2024.105793.
Abstract
Speech comprehension is enhanced when preceded (or accompanied) by a congruent rhythmic prime reflecting the metrical sentence structure. Although these phenomena have been described for auditory and motor primes separately, their respective and synergistic contributions have not been addressed. In this experiment, participants performed a speech comprehension task on degraded speech signals that were preceded by a rhythmic prime that could be auditory, motor, or audiomotor. Both auditory and audiomotor rhythmic primes facilitated speech comprehension speed. While the presence of a purely motor prime (unpaced tapping) did not globally benefit speech comprehension, comprehension accuracy scaled with the regularity of motor tapping. To investigate inter-individual variability, participants also performed a Spontaneous Speech Synchronization test. The strength of the estimated perception-production coupling correlated positively with overall speech comprehension scores. These findings are discussed in the framework of the dynamic attending and active sensing theories.
Affiliation(s)
- Emma Berthault
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France.
- Sophie Chen
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France.
- Simone Falk
- Department of Linguistics and Translation, University of Montreal, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, Canada.
- Benjamin Morillon
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France.
- Daniele Schön
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France.
2
Reyes-Pérez P, Hernández-Ledesma AL, Román-López TV, García-Vilchis B, Ramírez-González D, Lázaro-Figueroa A, Martinez D, Flores-Ocampo V, Espinosa-Méndez IM, Tinajero-Nieto L, Peña-Ayala A, Morelos-Figaredo E, Guerra-Galicia CM, Torres-Valdez E, Gordillo-Huerta MV, Gandarilla-Martínez NA, Salinas-Barboza K, Félix-Rodríguez G, Frontana-Vázquez G, Matuk-Pérez Y, Estrada-Bellmann I, Alpizar-Rodríguez D, Rodríguez-Violante M, Rentería ME, Ruíz-Contreras AE, Alcauter S, Medina-Rivera A. Building national patient registries in Mexico: insights from the MexOMICS Consortium. Front Digit Health 2024; 6:1344103. PMID: 38895515. PMCID: PMC11183280. DOI: 10.3389/fdgth.2024.1344103.
Abstract
Objective: To introduce MexOMICS, a Mexican consortium focused on establishing electronic databases to collect, cross-reference, and share health-related and omics data on the Mexican population.
Methods: Since 2019, the MexOMICS Consortium has established three electronic-based registries: the Mexican Twin Registry (TwinsMX), the Mexican Lupus Registry (LupusRGMX), and the Mexican Parkinson's Research Network (MEX-PD), designed and implemented using the Research Electronic Data Capture web-based application. Participants were enrolled through voluntary participation and on-site engagement with medical specialists. We also acquired DNA samples and magnetic resonance imaging (MRI) scans in subsets of participants.
Results: The registries have successfully enrolled a large number of participants from a variety of regions within Mexico: TwinsMX (n = 2,915), LupusRGMX (n = 1,761), and MEX-PD (n = 750). In addition to sociodemographic, psychosocial, and clinical data, MexOMICS has collected DNA samples to study genetic biomarkers across the three registries. Cognitive function has been assessed with the Montreal Cognitive Assessment in a subset of 376 MEX-PD participants. Furthermore, a subset of 267 twins have participated in cognitive evaluations with the Creyos platform and in MRI sessions acquiring structural, functional, and spectroscopy brain imaging; comparable evaluations are planned for LupusRGMX and MEX-PD.
Conclusions: The MexOMICS registries offer a valuable repository of information concerning the potential interplay of genetic and environmental factors in health conditions among the Mexican population.
Affiliation(s)
- Paula Reyes-Pérez
- Laboratorio Internacional de Investigación Sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Ana Laura Hernández-Ledesma
- Laboratorio Internacional de Investigación Sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Talía V. Román-López
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Brisa García-Vilchis
- Laboratorio de Neurogenómica Cognitiva, Unidad de Investigación de Psicobiología y Neurociencias, Coordinación de Psicobiología y Neurociencias, Facultad de Psicología, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Diego Ramírez-González
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Alejandra Lázaro-Figueroa
- Laboratorio de Neurogenómica Cognitiva, Unidad de Investigación de Psicobiología y Neurociencias, Coordinación de Psicobiología y Neurociencias, Facultad de Psicología, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Domingo Martinez
- Laboratorio Internacional de Investigación Sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Unidad de Genómica Avanzada, Langebio, Centro de Investigación y Estudios Avanzados del Instituto Politécnico Nacional, Irapuato, Mexico
- Escuela Nacional de Estudios Superiores, Unidad Juriquilla, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Victor Flores-Ocampo
- Laboratorio Internacional de Investigación Sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Ian M. Espinosa-Méndez
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Lizbet Tinajero-Nieto
- Hospital General Regional No. 1, Instituto Mexicano del Seguro Social, Querétaro, Santiago de Querétaro, Mexico
- Angélica Peña-Ayala
- Hospital General Regional No. 1, Instituto Mexicano del Seguro Social, Querétaro, Santiago de Querétaro, Mexico
- Instituto Nacional de Rehabilitación “Luis Guillermo Ibarra Ibarra”, Ciudad de México, Mexico
- Eugenia Morelos-Figaredo
- Hospital Regional, Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado, Morelia, Mexico
- María Vanessa Gordillo-Huerta
- Hospital General Querétaro, Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado, Santiago de Querétaro, Mexico
- Yamil Matuk-Pérez
- Facultad de Medicina, Universidad Autónoma de Querétaro; Unidad de Neurociencias, Hospital Angeles Centro Sur, Santiago de Querétaro, Mexico
- Ingrid Estrada-Bellmann
- Movement Disorders Clinic, Neurology Division, Internal Medicine Department, University Hospital “Dr. José E. González”, Universidad Autónoma de Nuevo León, Monterrey, Mexico
- Mayela Rodríguez-Violante
- Laboratorio Clínico de Enfermedades Neurodegenerativas, Instituto Nacional de Neurología y Neurocirugía Manuel Velasco Suárez, Mexico City, Mexico
- Miguel E. Rentería
- Mental Health and Neuroscience Program, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia
- School of Biomedical Sciences, Faculty of Medicine, The University of Queensland, Brisbane, QLD, Australia
- Alejandra E. Ruíz-Contreras
- Laboratorio de Neurogenómica Cognitiva, Unidad de Investigación de Psicobiología y Neurociencias, Coordinación de Psicobiología y Neurociencias, Facultad de Psicología, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Sarael Alcauter
- Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
- Alejandra Medina-Rivera
- Laboratorio Internacional de Investigación Sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
3
Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024; 1535:121-136. PMID: 38566486. DOI: 10.1111/nyas.15131.
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
Affiliation(s)
- Ellie Bean Abrams
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richa Namballa
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richard He
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- David Poeppel
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
4
Gómez Varela I, Orpella J, Poeppel D, Ripolles P, Assaneo MF. Syllabic rhythm and prior linguistic knowledge interact with individual differences to modulate phonological statistical learning. Cognition 2024; 245:105737. PMID: 38342068. DOI: 10.1016/j.cognition.2024.105737.
Abstract
Phonological statistical learning - our ability to extract meaningful regularities from spoken language - is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable but only when prior knowledge can also be used. This suggests an additional mechanism for learning based on predictions not only about when but also about what upcoming speech will be.
Affiliation(s)
- Ireri Gómez Varela
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
- Joan Orpella
- Department of Psychology, New York University, New York, NY, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany; Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Pablo Ripolles
- Department of Psychology, New York University, New York, NY, USA; Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA; Music and Audio Research Lab (MARL), New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico.
5
Mares C, Echavarría Solana R, Assaneo MF. Auditory-motor synchronization varies among individuals and is critically shaped by acoustic features. Commun Biol 2023; 6:658. PMID: 37344562. DOI: 10.1038/s42003-023-04976-y.
Abstract
The ability to synchronize body movements with quasi-regular auditory stimuli represents a fundamental trait in humans at the core of speech and music. Despite the long trajectory of the study of such ability, little attention has been paid to how acoustic features of the stimuli and individual differences can modulate auditory-motor synchrony. Here, by exploring auditory-motor synchronization abilities across different effectors and types of stimuli, we revealed that this capability is more restricted than previously assumed. While the general population can synchronize to sequences composed of the repetitions of the same acoustic unit, the synchrony in a subgroup of participants is impaired when the unit's identity varies across the sequence. In addition, synchronization in this group can be temporarily restored by being primed by a facilitator stimulus. Auditory-motor integration is stable across effectors, supporting the hypothesis of a central clock mechanism subserving the different articulators but critically shaped by the acoustic features of the stimulus and individual abilities.
Affiliation(s)
- Cecilia Mares
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico.
6
Fiveash A, Ferreri L, Bouwer FL, Kösem A, Moghimi S, Ravignani A, Keller PE, Tillmann B. Can rhythm-mediated reward boost learning, memory, and social connection? Perspectives for future research. Neurosci Biobehav Rev 2023; 149:105153. PMID: 37019245. DOI: 10.1016/j.neubiorev.2023.105153.
Abstract
Studies of rhythm processing and of reward have progressed separately, with little connection between the two. However, consistent links between rhythm and reward are beginning to surface, with research suggesting that synchronization to rhythm is rewarding, and that this rewarding element may in turn also boost this synchronization. The current mini review shows that the combined study of rhythm and reward can be beneficial to better understand their independent and combined roles across two central aspects of cognition: 1) learning and memory, and 2) social connection and interpersonal synchronization, which have so far been studied largely independently. From this basis, it is discussed how connections between rhythm and reward can be applied to learning and memory and social connection across different populations, taking into account individual differences, clinical populations, human development, and animal research. Future research will need to consider the rewarding nature of rhythm, and that rhythm can in turn boost reward, potentially enhancing other cognitive and social processes.
Affiliation(s)
- A Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia.
- L Ferreri
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy; Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Lyon, France
- F L Bouwer
- Department of Psychology, Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
- A Kösem
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France
- S Moghimi
- Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, INSERM U1105, Amiens, France
- A Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- P E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- B Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université de Bourgogne, Dijon, France
7
Lubinus C, Keitel A, Obleser J, Poeppel D, Rimmele JM. Explaining flexible continuous speech comprehension from individual motor rhythms. Proc Biol Sci 2023; 290:20222410. PMID: 36855868. PMCID: PMC9975658. DOI: 10.1098/rspb.2022.2410.
Abstract
When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience (in Cooperation with Max Planck Society), Frankfurt am Main, Germany
- Johanna M. Rimmele
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
8
Luo L, Lu L. Studying rhythm processing in speech through the lens of auditory-motor synchronization. Front Neurosci 2023; 17:1146298. PMID: 36937684. PMCID: PMC10017839. DOI: 10.3389/fnins.2023.1146298.
Abstract
Continuous speech is organized into a hierarchy of rhythms. Accurate processing of this rhythmic hierarchy through the interactions of auditory and motor systems is fundamental to speech perception and production. In this mini-review, we aim to evaluate the implementation of behavioral auditory-motor synchronization paradigms when studying rhythm processing in speech. First, we present an overview of the classic finger-tapping paradigm and its application in revealing differences in auditory-motor synchronization between the typical and clinical populations. Next, we highlight key findings on rhythm hierarchy processing in speech and non-speech stimuli from finger-tapping studies. Following this, we discuss the potential caveats of the finger-tapping paradigm and propose the speech-speech synchronization (SSS) task as a promising tool for future studies. Overall, we seek to raise interest in developing new methods to shed light on the neural mechanisms of speech processing.
Affiliation(s)
- Lu Luo
- School of Psychology, Beijing Sport University, Beijing, China
- Laboratory of Sports Stress and Adaptation of General Administration of Sport, Beijing, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, China
- Correspondence: Lingxi Lu
9
Orpella J, Assaneo MF, Ripollés P, Noejovich L, López-Barroso D, de Diego-Balaguer R, Poeppel D. Differential activation of a frontoparietal network explains population-level differences in statistical learning from speech. PLoS Biol 2022; 20:e3001712. PMID: 35793349. PMCID: PMC9292101. DOI: 10.1371/journal.pbio.3001712.
Abstract
People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory–motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory–motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights on SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.

In the context of speech, statistical learning is thought to be an important mechanism for language acquisition. This study shows that language statistical learning is boosted by the recruitment of a frontoparietal brain network related to auditory-motor synchronization and its interplay with a mandatory auditory-motor learning system.
Affiliation(s)
- Joan Orpella
- Department of Psychology, New York University, New York, New York, United States of America
- M. Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
- Center for Language, Music and Emotion (CLaME), New York University, New York, New York, United States of America
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Laura Noejovich
- Department of Psychology, New York University, New York, New York, United States of America
- Diana López-Barroso
- Cognitive Neurology and Aphasia Unit, Centro de Investigaciones Médico-Sanitarias, Instituto de Investigación Biomédica de Málaga–IBIMA and University of Málaga, Málaga, Spain
- Department of Psychobiology and Methodology of Behavioral Sciences, Faculty of Psychology and Speech Therapy, University of Málaga, Málaga, Spain
- Ruth de Diego-Balaguer
- ICREA, Barcelona, Spain
- Cognition and Brain Plasticity Unit, IDIBELL, L’Hospitalet de Llobregat, Barcelona, Spain
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Institute of Neuroscience, University of Barcelona, Barcelona, Spain
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Language, Music and Emotion (CLaME), New York University, New York, New York, United States of America
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Ernst Struengmann Institute for Neuroscience, Frankfurt, Germany
10
Gnanateja GN, Devaraju DS, Heyne M, Quique YM, Sitek KR, Tardif MC, Tessmer R, Dial HR. On the Role of Neural Oscillations Across Timescales in Speech and Music Processing. Front Comput Neurosci 2022; 16:872093. PMID: 35814348. PMCID: PMC9260496. DOI: 10.3389/fncom.2022.872093.
Abstract
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Dhatri S Devaraju
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Matthias Heyne
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Yina M Quique
- Center for Education in Health Sciences, Northwestern University, Chicago, IL, United States
- Kevin R Sitek
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Monique C Tardif
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Rachel Tessmer
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Heather R Dial
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Department of Communication Sciences and Disorders, University of Houston, Houston, TX, United States