1. Zhang M, Riecke L, Bonte M. Cortical tracking of language structures: Modality-dependent and independent responses. Clin Neurophysiol 2024;166:56-65. PMID: 39111244. DOI: 10.1016/j.clinph.2024.07.012.
Abstract
Objectives: The mental parsing of linguistic hierarchy is crucial for language comprehension, and while there is growing interest in the cortical tracking of auditory speech, the neurophysiological substrates for tracking written language remain unclear.
Methods: We recorded electroencephalographic (EEG) responses from participants exposed to auditory and visual streams of either random syllables or tri-syllabic real words. Using a frequency-tagging approach, we analyzed the neural representations of physically presented (i.e., syllables) and mentally constructed (i.e., words) linguistic units and compared them between the two sensory modalities.
Results: Tracking of syllables was partially modality-dependent, with anterior and posterior scalp regions more involved in tracking spoken and written syllables, respectively. In contrast, the cortical tracking of spoken and written words involved a shared anterior region to a similar degree, suggesting a modality-independent process for word tracking.
Conclusions: Our study suggests that basic linguistic features are represented in a sensory modality-specific manner, while more abstract ones are modality-unspecific during the online processing of continuous language input.
Significance: The current methodology may be used in future research to examine the development of reading skills, especially deficiencies in fluent reading in dyslexia.
Affiliation(s)
- Manli Zhang: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lars Riecke: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
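As an aside for readers new to the method: the frequency-tagging logic used in the study above can be sketched in a few lines of Python. If a stream is presented at a fixed syllable rate (here 4 Hz, so that tri-syllabic words recur at about 1.33 Hz), cortical tracking of each linguistic level shows up as a spectral peak at the corresponding rate. All rates, amplitudes, and noise levels below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def frequency_tagged_peaks(fs=250, duration=60, syll_rate=4.0, word_rate=4.0/3):
    """Simulate a response tracking syllables (4 Hz) and words (~1.33 Hz),
    then recover both rates from the amplitude spectrum (frequency tagging)."""
    t = np.arange(0, duration, 1 / fs)
    rng = np.random.default_rng(0)
    # Simulated 'EEG': oscillations at both tagged rates plus broadband noise
    signal = (1.0 * np.sin(2 * np.pi * syll_rate * t)
              + 0.6 * np.sin(2 * np.pi * word_rate * t)
              + 0.5 * rng.standard_normal(t.size))
    spectrum = np.abs(np.fft.rfft(signal)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    # A tagged frequency counts as 'tracked' if its bin stands out against
    # neighbouring bins (a crude version of the usual signal-to-noise test)
    def peak_snr(f0, half_width=5):
        i = np.argmin(np.abs(freqs - f0))
        neighbours = np.r_[spectrum[i - half_width:i],
                           spectrum[i + 1:i + 1 + half_width]]
        return spectrum[i] / neighbours.mean()

    return peak_snr(syll_rate), peak_snr(word_rate)

syll_snr, word_snr = frequency_tagged_peaks()
```

With a 60-s recording the frequency resolution (1/60 Hz) places both tagged rates on exact FFT bins, which is the usual design choice in frequency-tagging studies.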
2. Bonte M, Brem S. Unraveling individual differences in learning potential: A dynamic framework for the case of reading development. Dev Cogn Neurosci 2024;66:101362. PMID: 38447471. PMCID: PMC10925938. DOI: 10.1016/j.dcn.2024.101362.
Abstract
Children show an enormous capacity to learn during development, but with large individual differences in the time course and trajectory of learning and the skill level ultimately achieved. Recent progress in the developmental sciences has shown that a multitude of factors, including genetic variation, brain plasticity, socio-cultural context, and learning experiences, contribute to individual development. These factors interact in a complex manner, producing children's idiosyncratic and heterogeneous learning paths. Despite increasing recognition of these intricate dynamics, current research on the development of culturally acquired skills such as reading still typically focuses on snapshots of children's performance at discrete points in time. Here we argue that this 'static' approach is often insufficient and limits advances in the prediction and mechanistic understanding of individual differences in learning capacity. Using reading as an example, we present a dynamic framework that highlights the importance of capturing short-term trajectories during learning, across multiple stages and processes, as a proxy for long-term development. This framework helps explain relevant variability in children's learning paths and outcomes and fosters new perspectives and approaches for studying how children develop and learn.
Affiliation(s)
- Milene Bonte: Department of Cognitive Neuroscience and Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Silvia Brem: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
3. Xie X, Jaeger TF, Kurumada C. What we do (not) know about the mechanisms underlying adaptive speech perception: A computational framework and review. Cortex 2023;166:377-424. PMID: 37506665. DOI: 10.1016/j.cortex.2023.05.003.
Abstract
Speech from unfamiliar talkers can be difficult to comprehend initially. These difficulties tend to dissipate with exposure, sometimes within minutes or less. Adaptivity in response to unfamiliar input is now considered a fundamental property of speech perception, and research over the past two decades has made substantial progress in identifying its characteristics. The mechanisms underlying adaptive speech perception, however, remain unknown. Past work has attributed facilitatory effects of exposure to any one of three qualitatively different hypothesized mechanisms: (1) low-level, pre-linguistic signal normalization, (2) changes in, or selection of, linguistic representations, or (3) changes in post-perceptual decision-making. Direct comparisons of these hypotheses, or combinations thereof, have been lacking. We describe a general computational framework for adaptive speech perception (ASP) that, for the first time, implements all three mechanisms. We demonstrate how the framework can be used to derive predictions for perception experiments from the acoustic properties of the stimuli. Using this approach, we find that, at the level of data analysis presently employed by most studies in the field, the signature results of influential experimental paradigms do not distinguish between the three mechanisms. This highlights the need for a change in research practices so that future experiments provide more informative results. We recommend specific changes to experimental paradigms and data analysis. All data and code for this study are shared via OSF, including the R markdown document from which this article is generated and an R library that implements the models we present.
Affiliation(s)
- Xin Xie: Language Science, University of California, Irvine, USA
- T Florian Jaeger: Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Computer Science, University of Rochester, Rochester, NY, USA
- Chigusa Kurumada: Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
4. Di Pietro SV, Karipidis II, Pleisch G, Brem S. Neurodevelopmental trajectories of letter and speech sound processing from preschool to the end of elementary school. Dev Cogn Neurosci 2023;61:101255. PMID: 37196374. DOI: 10.1016/j.dcn.2023.101255.
Abstract
Learning to read alphabetic languages starts with learning letter-speech-sound associations. How this process changes brain function during development is still largely unknown. We followed 102 children with varying reading skills in a mixed longitudinal/cross-sectional design from the prereading stage to the end of elementary school, over five time points (n = 46 with two or more time points, of which n = 16 fully longitudinal), to investigate the neural trajectories of letter and speech sound processing using fMRI. Children were presented with letters and speech sounds visually, auditorily, and audiovisually in kindergarten (age 6.7 y), at the middle (7.3 y) and end (7.6 y) of first grade, and in second (8.4 y) and fifth (11.5 y) grades. Activation of the ventral occipitotemporal cortex for visual and audiovisual processing followed a complex trajectory, with two peaks in first and fifth grades. The superior temporal gyrus (STG) showed an inverted-U-shaped trajectory for audiovisual letter processing, a development that in poor readers was attenuated in the middle STG and absent in the posterior STG. Finally, the trajectories for letter-speech-sound integration were modulated by reading skills and showed differing directionality of the congruency effect depending on the time point. This study captures the development of letter processing across elementary school and its neural trajectories in children with varying reading skills.
Affiliation(s)
- S V Di Pietro: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
- I I Karipidis: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland
- G Pleisch: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland
- S Brem: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
5. Di Pietro SV, Willinger D, Frei N, Lutz C, Coraj S, Schneider C, Stämpfli P, Brem S. Disentangling influences of dyslexia, development, and reading experience on effective brain connectivity in children. Neuroimage 2023;268:119869. PMID: 36639004. DOI: 10.1016/j.neuroimage.2023.119869.
Abstract
Altered brain connectivity between regions of the reading network has been associated with reading difficulties. However, it remains unclear whether connectivity differences between children with dyslexia (DYS) and those with typical reading skills (TR) are specific to reading impairments or to reading experience. In this functional MRI study, 132 children (M = 10.06 y, SD = 1.46) performed a phonological lexical decision task. We aimed to disentangle (1) disorder-specific from (2) experience-related differences in effective connectivity and to (3) characterize the development of DYS and TR. We applied dynamic causal modeling to age-matched (n_DYS = 25, n_TR = 35) and reading-level-matched (n_DYS = 25, n_TR = 22) groups. Developmental effects were assessed in beginning and advanced readers (TR: n_beg = 48, n_adv = 35; DYS: n_beg = 24, n_adv = 25). We show that altered feedback connectivity between the inferior parietal lobule and the visual word form area (VWFA) during print processing can be specifically attributed to reading impairments, because these alterations were found in DYS compared to both the age-matched and the reading-level-matched TR groups. In contrast, feedforward connectivity from the VWFA to parietal and frontal regions characterized experience in TR and increased with age and reading skill. These directed connectivity findings pinpoint disorder-specific and experience-dependent alterations in the brain's reading network.
Affiliation(s)
- Sarah V Di Pietro: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
- David Willinger: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; Department of Psychology and Psychodynamics, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Nada Frei: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland
- Christina Lutz: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland
- Seline Coraj: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland
- Chiara Schneider: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland
- Philipp Stämpfli: MR-Center of the Department of Psychiatry, Psychotherapy and Psychosomatics and the Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric Hospital, University of Zurich, Zurich, Switzerland
- Silvia Brem: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland; MR-Center of the Department of Psychiatry, Psychotherapy and Psychosomatics and the Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric Hospital, University of Zurich, Zurich, Switzerland
6. Pourhashemi F, Baart M, van Laarhoven T, Vroomen J. Want to quickly adapt to distorted speech and become a better listener? Read lips, not text. PLoS One 2022;17:e0278986. PMID: 36580461. PMCID: PMC9799298. DOI: 10.1371/journal.pone.0278986.
Abstract
When listening to distorted speech, does one become a better listener by looking at the face of the speaker or by reading subtitles that are presented along with the speech signal? We examined this question in two experiments in which we presented participants with spectrally distorted speech (4-channel noise-vocoded speech). During short training sessions, listeners received auditorily distorted words or pseudowords that were partially disambiguated by concurrently presented lipread information or text. After each training session, listeners were tested with new degraded auditory words. Learning effects (based on proportions of correctly identified words) were stronger if listeners had trained with words rather than with pseudowords (a lexical boost), and adding lipread information during training was more effective than adding text (a lipread boost). Moreover, the advantage of lipread speech over text training was also found when participants were tested more than a month later. The current results thus suggest that lipread speech may have surprisingly long-lasting effects on adaptation to distorted speech.
Affiliation(s)
- Faezeh Pourhashemi: Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Martijn Baart: Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands; BCBL, Basque Center on Cognition, Brain, and Language, Donostia, Spain
- Thijs van Laarhoven: Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jean Vroomen: Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
7. Pei C, Qiu Y, Li F, Huang X, Si Y, Li Y, Zhang X, Chen C, Liu Q, Cao Z, Ding N, Gao S, Alho K, Yao D, Xu P. The different brain areas occupied for integrating information of hierarchical linguistic units: a study based on EEG and TMS. Cereb Cortex 2022;33:4740-4751. PMID: 36178127. DOI: 10.1093/cercor/bhac376.
Abstract
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address the issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency tagging approach, we analyzed the neural representations of basic linguistic units (i.e. characters/monosyllabic words) and higher-level linguistic structures (i.e. phrases and sentences) across the 3 modalities separately. We found that audio-visual integration occurs in all linguistic units, and the brain areas involved in the integration varied across different linguistic levels. In particular, the integration of sentences activated the local left prefrontal area. Therefore, we used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest the advantage of bimodal language comprehension at hierarchical stages in language-related information processing and provide evidence for the causal role of the left prefrontal regions in processing information of audio-visual sentences.
Affiliation(s)
- Changfu Pei: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yuan Qiu: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Fali Li: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Xunan Huang: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Yajing Si: School of Psychology, Xinxiang Medical University, Xinxiang, 453003, China
- Yuqin Li: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xiabing Zhang: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Chunli Chen: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qiang Liu: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, Sichuan, 610066, China
- Zehong Cao: STEM, Mawson Lakes Campus, University of South Australia, Adelaide, SA 5095, Australia
- Nai Ding: College of Biomedical Engineering and Instrument Sciences, Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007, China
- Shan Gao: School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Kimmo Alho: Department of Psychology and Logopedics, University of Helsinki, Helsinki, FI 00014, Finland
- Dezhong Yao: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Peng Xu: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China; Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China
8. Bosker HR. Evidence for selective adaptation and recalibration in the perception of lexical stress. Lang Speech 2022;65:472-490. PMID: 34227417. PMCID: PMC9014674. DOI: 10.1177/00238309211030307.
Abstract
Individuals vary in how they produce speech. This variability affects both the segments (vowels and consonants) and the suprasegmental properties of their speech (prosody). Previous literature has demonstrated that listeners can adapt to variability in how different talkers pronounce the segments of speech. This study shows that listeners can also adapt to variability in how talkers produce lexical stress. Experiment 1 demonstrates a selective adaptation effect in lexical stress perception: repeatedly hearing Dutch trochaic words biased perception of a subsequent lexical stress continuum towards more iamb responses. Experiment 2 demonstrates a recalibration effect in lexical stress perception: when ambiguous suprasegmental cues to lexical stress were disambiguated by lexical orthographic context as signaling a trochaic word in an exposure phase, Dutch participants categorized a subsequent test continuum as more trochee-like. Moreover, the selective adaptation and recalibration effects generalized to novel words, not encountered during exposure. Together, the experiments demonstrate that listeners also flexibly adapt to variability in the suprasegmental properties of speech, thus expanding our understanding of the utility of listener adaptation in speech perception. Moreover, the combined outcomes speak for an architecture of spoken word recognition involving abstract prosodic representations at a prelexical level of analysis.
Affiliation(s)
- Hans Rutger Bosker: Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands
9. Zhang M, Riecke L, Fraga-González G, Bonte M. Altered brain network topology during speech tracking in developmental dyslexia. Neuroimage 2022;254:119142. PMID: 35342007. DOI: 10.1016/j.neuroimage.2022.119142.
Abstract
Developmental dyslexia is often accompanied by altered phonological processing of speech. Underlying neural changes have typically been characterized in terms of stimulus- and/or task-related responses within individual brain regions or their functional connectivity. Less is known about potential changes in the more global functional organization of brain networks. Here we recorded electroencephalography (EEG) in typical and dyslexic readers while they listened to (a) a random sequence of syllables and (b) a series of tri-syllabic real words. The network topology of the phase synchronization of evoked cortical oscillations was investigated in four frequency bands (delta, theta, alpha and beta) using minimum spanning tree graphs. We found that, compared to syllable tracking, word tracking triggered a shift toward a more integrated network topology in the theta band in both groups. Importantly, this change was significantly stronger in the dyslexic readers, who also showed increased reliance on a right frontal cluster of electrodes for word tracking. The current findings point towards an altered effect of word-level processing on the functional brain network organization that may be associated with less efficient phonological and reading skills in dyslexia.
Affiliation(s)
- Manli Zhang: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lars Riecke: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Gorka Fraga-González: Department of Child and Adolescent Psychiatry, Faculty of Medicine, University of Zurich, Switzerland
- Milene Bonte: Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
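For readers unfamiliar with the graph-analysis step in the study above, the idea of summarizing a synchronization network by its minimum spanning tree (MST) can be sketched as follows: stronger coupling is mapped to shorter edges, the MST is extracted, and simple tree metrics such as the leaf fraction index how "integrated" (star-like) versus "line-like" the topology is. The two toy connectivity matrices below are illustrative assumptions, not data from the study.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_leaf_fraction(connectivity):
    """Leaf fraction of the MST of a synchronization matrix.
    MSTs minimize total edge weight, so stronger coupling = shorter edge."""
    dist = 1.0 - connectivity              # map synchrony in [0, 1] to distance
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()
    adjacency = (mst + mst.T) > 0          # undirected tree edges
    degrees = adjacency.sum(axis=0)
    leaves = np.sum(degrees == 1)          # nodes with exactly one edge
    return leaves / len(degrees)

n = 6
# Star-like network: node 0 strongly synchronized with all others (integrated)
star = np.full((n, n), 0.1)
star[0, :] = star[:, 0] = 0.9
# Chain-like network: only neighbouring nodes synchronized (line topology)
chain = np.full((n, n), 0.1)
for i in range(n - 1):
    chain[i, i + 1] = chain[i + 1, i] = 0.9

lf_star = mst_leaf_fraction(star)    # high leaf fraction: integrated topology
lf_chain = mst_leaf_fraction(chain)  # low leaf fraction: line topology
```

A shift toward a more integrated topology, as reported for word tracking in the theta band, would correspond to an increase in measures like this leaf fraction.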
10. The role of the interaction between the inferior parietal lobule and superior temporal gyrus in the multisensory Go/No-go task. Neuroimage 2022;254:119140. PMID: 35342002. DOI: 10.1016/j.neuroimage.2022.119140.
Abstract
Information from multiple sensory modalities interacts. Using functional magnetic resonance imaging (fMRI), we aimed to identify the neural structures correlated with how a co-occurring sound modulates visual motor response execution. The reaction time (RT) to audiovisual stimuli was significantly faster than the RT to visual stimuli. Signal detection analyses showed no significant difference in perceptual sensitivity (d') between audiovisual and visual stimuli, whereas the response criterion (β or c) for audiovisual stimuli was decreased relative to visual stimuli. Functional connectivity between the left inferior parietal lobule (IPL) and bilateral superior temporal gyrus (STG) was enhanced in Go processing compared with No-go processing of audiovisual stimuli. Furthermore, the left precentral gyrus (PreCG) showed enhanced functional connectivity with the bilateral STG and other areas of the ventral stream in Go processing compared with No-go processing of audiovisual stimuli. These results reveal the neuronal network correlated with modulations of motor response execution when visual stimuli are presented together with a co-occurring sound in a multisensory Go/No-go task, including the left IPL, left PreCG, bilateral STG, and areas of the ventral stream. The role of the interaction between the IPL and STG in transforming audiovisual information into motor behavior is discussed. The current study provides a new perspective for exploring the brain mechanisms underlying how humans execute appropriate behaviors on the basis of multisensory information.
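The signal-detection quantities reported above, sensitivity d' and criterion c, are standard and easy to compute from hit and false-alarm rates. The rates below are made-up numbers chosen to mirror the qualitative pattern in the abstract (similar d', more liberal criterion for audiovisual stimuli), not values from the paper.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: audiovisual (AV) vs. visual-only (V) conditions
d_av, c_av = dprime_and_criterion(hit_rate=0.95, fa_rate=0.20)
d_v, c_v = dprime_and_criterion(hit_rate=0.90, fa_rate=0.12)
# d' is nearly identical, while c is more negative (liberal) for AV
```

Equal sensitivity with a lowered criterion is the classic signature of a response-level rather than perceptual-level effect of the added sound.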
11. Romanovska L, Janssen R, Bonte M. Longitudinal changes in cortical responses to letter-speech sound stimuli in 8-11 year-old children. NPJ Sci Learn 2022;7:2. PMID: 35079026. PMCID: PMC8789908. DOI: 10.1038/s41539-021-00118-3.
Abstract
While children are able to name letters fairly quickly, the automatisation of letter-speech sound mappings continues over the first years of reading development. In the current longitudinal fMRI study, we explored developmental changes in cortical responses to letters and speech sounds across three yearly measurements in a sample of eighteen 8-11-year-old children. We employed a text-based recalibration paradigm in which combined exposure to text and ambiguous speech sounds shifts participants' later perception of the ambiguous sounds towards the text. Our results showed that activity of the left superior temporal and lateral inferior precentral gyri followed a non-linear developmental pattern across the measurement sessions. This pattern is reminiscent of previously reported inverted-U-shaped developmental trajectories in children's visual cortical responses to text. Our findings suggest that the processing of letters and speech sounds involves non-linear changes in the brain's spoken language network, possibly related to the progressive automatisation of reading skills.
Affiliation(s)
- Linda Romanovska
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Roef Janssen
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
12
Romanovska L, Bonte M. How Learning to Read Changes the Listening Brain. Front Psychol 2021; 12:726882. [PMID: 34987442] [PMCID: PMC8721231] [DOI: 10.3389/fpsyg.2021.726882]
Abstract
Reading acquisition reorganizes existing brain networks for speech and visual processing to form novel audio-visual language representations. This requires substantial cortical plasticity that is reflected in changes in brain activation and functional as well as structural connectivity between brain areas. The extent to which a child's brain can accommodate these changes may underlie the high variability in reading outcome in both typical and dyslexic readers. In this review, we focus on reading-induced functional changes of the dorsal speech network in particular and discuss how its reciprocal interactions with the ventral reading network contribute to reading outcome. We discuss how the dynamic and intertwined development of both reading networks may be best captured by approaching reading from a skill-learning perspective, using audio-visual learning paradigms and longitudinal designs to follow neuro-behavioral changes as children's reading skills unfold.
Affiliation(s)
- Milene Bonte
- Correspondence: Linda Romanovska; Milene Bonte
13
Beach SD, Ozernov-Palchik O, May SC, Centanni TM, Gabrieli JDE, Pantazis D. Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks. Neurobiol Lang 2021; 2:254-279. [PMID: 34396148] [PMCID: PMC8360503] [DOI: 10.1162/nol_a_00034]
Abstract
Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information.
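The trial-by-trial linear decoding described in this abstract can be sketched with a regularized least-squares classifier. The example below is a minimal illustration on synthetic data standing in for MEG sensor patterns (hypothetical trial counts, sensor counts, and effect size; not the study's recordings or analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors = 200, 64

# Synthetic single-time-point "sensor patterns": /ba/ and /da/ trials
# differ by a small additive spatial topography plus isotropic noise.
labels = rng.integers(0, 2, n_trials)             # 0 = /ba/, 1 = /da/
topography = rng.normal(0.0, 1.0, n_sensors)      # class-specific spatial pattern
X = rng.normal(0.0, 1.0, (n_trials, n_sensors)) + 0.5 * np.outer(labels, topography)

def decode_accuracy(X, y, n_folds=5, lam=1.0):
    """Cross-validated linear decoding with ridge-regularized least squares."""
    Xb = np.hstack([X, np.ones((len(y), 1))])     # append an intercept column
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        A = Xb[train]
        # w = (A'A + lam*I)^-1 A' t, with targets t in {-1, +1}
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                            A.T @ (2 * y[train] - 1))
        accs.append(np.mean((Xb[test] @ w > 0) == (y[test] == 1)))
    return float(np.mean(accs))

acc = decode_accuracy(X, labels)  # well above the 0.5 chance level for this effect size
```

Generalizing such a classifier from unambiguous endpoint tokens to ambiguous mid-continuum tokens, as the study does, amounts to training on the clear cases and applying the learned weights to held-out ambiguous trials.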
Affiliation(s)
- Sara D. Beach
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Ola Ozernov-Palchik
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sidney C. May
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Lynch School of Education and Human Development, Boston College, Chestnut Hill, MA, USA
- Tracy M. Centanni
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Psychology, Texas Christian University, Fort Worth, TX, USA
- John D. E. Gabrieli
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
14
Van Hirtum T, Ghesquière P, Wouters J. A Bridge over Troubled Listening: Improving Speech-in-Noise Perception by Children with Dyslexia. J Assoc Res Otolaryngol 2021; 22:465-480. [PMID: 33861393] [DOI: 10.1007/s10162-021-00793-4]
Abstract
Developmental dyslexia is most commonly associated with phonological processing difficulties. However, children with dyslexia may experience poor speech-in-noise perception as well. Although there is an ongoing debate whether a speech perception deficit is inherent to dyslexia or acts as an aggravating risk factor that indirectly compromises learning to read, improving speech perception might boost reading-related skills and reading acquisition. In the current study, we evaluated advanced speech technology as applied in auditory prostheses, namely envelope enhancement (EE), to promote and eventually normalize speech perception of school-aged children with dyslexia. The EE strategy automatically detects and emphasizes onset cues and consequently reinforces the temporal structure of the speech envelope. Our results confirmed speech-in-noise perception difficulties in children with dyslexia. However, we found that exaggerating temporal "landmarks" of the speech envelope (i.e., amplitude rise time and modulations) by using EE passively and instantaneously improved speech perception in noise for children with dyslexia. Moreover, the benefit derived from EE was large enough to completely bridge the initial gap between children with dyslexia and their typically reading peers. Taken together, the beneficial outcome of EE suggests an important contribution of the temporal structure of the envelope to speech-in-noise perception difficulties in dyslexia, providing an interesting foundation for future intervention studies based on auditory and speech rhythm training.
Affiliation(s)
- Tilde Van Hirtum
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven University of Leuven, Leuven, Belgium; Faculty of Psychology and Educational Sciences, Parenting and Special Education Research Unit, KU Leuven University of Leuven, Leuven, Belgium.
- Pol Ghesquière
- Faculty of Psychology and Educational Sciences, Parenting and Special Education Research Unit, KU Leuven University of Leuven, Leuven, Belgium
- Jan Wouters
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven University of Leuven, Leuven, Belgium
15
Luthra S, Magnuson JS, Myers EB. Boosting lexical support does not enhance lexically guided perceptual learning. J Exp Psychol Learn Mem Cogn 2021; 47:685-704. [PMID: 33983786] [PMCID: PMC8287971] [DOI: 10.1037/xlm0000945]
Abstract
A challenge for listeners is to learn the appropriate mapping between acoustics and phonetic categories for an individual talker. Lexically guided perceptual learning (LGPL) studies have shown that listeners can leverage lexical knowledge to guide this process. For instance, listeners learn to interpret ambiguous /s/-/∫/ blends as /s/ if they have previously encountered them in /s/-biased contexts like epi?ode. Here, we examined whether the degree of preceding lexical support might modulate the extent of perceptual learning. In Experiment 1, we first demonstrated that perceptual learning could be obtained in a modified LGPL paradigm where listeners were first biased to interpret ambiguous tokens as one phoneme (e.g., /s/) and then later as another (e.g., /∫/). In subsequent experiments, we tested whether the extent of learning differed depending on whether targets encountered predictive contexts or neutral contexts prior to the auditory target (e.g., epi?ode). Experiment 2 used auditory sentence contexts (e.g., "I love The Walking Dead and eagerly await every new . . ."), whereas Experiment 3 used written sentence contexts. In Experiment 4, participants did not receive sentence contexts but rather saw the written form of the target word (episode) or filler text (########) prior to hearing the critical auditory token. While we consistently observed effects of context on in-the-moment processing of critical words, the size of the learning effect was not modulated by the type of context. We hypothesize that boosting lexical support through preceding context may not strongly influence perceptual learning when ambiguous speech sounds can be identified solely from lexical information. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
16
Romanovska L, Janssen R, Bonte M. Cortical responses to letters and ambiguous speech vary with reading skills in dyslexic and typically reading children. Neuroimage Clin 2021; 30:102588. [PMID: 33618236] [PMCID: PMC7907898] [DOI: 10.1016/j.nicl.2021.102588]
Abstract
Highlights
- Text recalibrates ambiguous speech perception in children with and without dyslexia.
- Dyslexia and poorer reading skills are linked to reduced left fusiform activation.
- Poorer letter-speech sound matching is linked to higher superior temporal activation.
One of the proposed issues underlying reading difficulties in dyslexia is insufficiently automatized letter-speech sound associations. In the current fMRI experiment, we employ text-based recalibration to investigate letter-speech sound mappings in 8–10 year-old children with and without dyslexia. Here an ambiguous speech sound /a?a/ midway between /aba/ and /ada/ is combined with disambiguating “aba” or “ada” text causing a perceptual shift of the ambiguous /a?a/ sound towards the text (recalibration). This perceptual shift has been found to be reduced in adults but not in children with dyslexia compared to typical readers. Our fMRI results show significantly reduced activation in the left fusiform in dyslexic compared to typical readers, despite comparable behavioural performance. Furthermore, enhanced audio-visual activation within this region was linked to better reading and phonological skills. In contrast, higher activation in bilateral superior temporal cortex was associated with lower letter-speech sound identification fluency. These findings reflect individual differences during the early stages of reading development with reduced recruitment of the left fusiform in dyslexic readers together with an increased involvement of the superior temporal cortex in children with less automatized letter-speech sound associations.
Affiliation(s)
- Linda Romanovska
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
- Roef Janssen
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
17
Luthra S, Correia JM, Kleinschmidt DF, Mesite L, Myers EB. Lexical Information Guides Retuning of Neural Patterns in Perceptual Learning for Speech. J Cogn Neurosci 2020; 32:2001-2012. [PMID: 32662731] [PMCID: PMC8048099] [DOI: 10.1162/jocn_a_01612]
Abstract
A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
Affiliation(s)
- João M Correia
- University of Algarve
- Basque Center on Cognition, Brain and Language
- Laura Mesite
- MGH Institute of Health Professions
- Harvard Graduate School of Education
18
Ullas S, Hausfeld L, Cutler A, Eisner F, Formisano E. Neural Correlates of Phonetic Adaptation as Induced by Lexical and Audiovisual Context. J Cogn Neurosci 2020; 32:2145-2158. [PMID: 32662723] [DOI: 10.1162/jocn_a_01608]
Abstract
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
Affiliation(s)
- Shruti Ullas
- Maastricht University; Maastricht Brain Imaging Centre
- Lars Hausfeld
- Maastricht University; Maastricht Brain Imaging Centre
- Elia Formisano
- Maastricht University; Maastricht Brain Imaging Centre; Maastricht Centre for Systems Biology
19
Conant LL, Liebenthal E, Desai A, Seidenberg MS, Binder JR. Differential activation of the visual word form area during auditory phoneme perception in youth with dyslexia. Neuropsychologia 2020; 146:107543. [PMID: 32598966] [DOI: 10.1016/j.neuropsychologia.2020.107543]
Abstract
Developmental dyslexia is a learning disorder characterized by difficulties reading words accurately and/or fluently. Several behavioral studies have suggested the presence of anomalies at an early stage of phoneme processing, when the complex spectrotemporal patterns in the speech signal are analyzed and assigned to phonemic categories. In this study, fMRI was used to compare brain responses associated with categorical discrimination of speech syllables (P) and acoustically matched nonphonemic stimuli (N) in children and adolescents with dyslexia and in typically developing (TD) controls, aged 8-17 years. The TD group showed significantly greater activation during the P condition relative to N in an area of the left ventral occipitotemporal cortex that corresponds well with the region referred to as the "visual word form area" (VWFA). Regression analyses using reading performance as a continuous variable across the full group of participants yielded similar results. Overall, the findings are consistent with those of previous neuroimaging studies using print stimuli in individuals with dyslexia, which found reduced activation in left occipitotemporal regions. However, the current study shows that the activation differences seen during reading are already apparent during auditory phoneme discrimination in youth with dyslexia, suggesting that the primary deficit in at least a subset of children may lie early in the speech processing stream and that categorical perception may be an important target of early intervention in children at risk for dyslexia.
Affiliation(s)
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA, USA
- Anjali Desai
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Mark S Seidenberg
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
20
Xu W, Kolozsvari OB, Oostenveld R, Hämäläinen JA. Rapid changes in brain activity during learning of grapheme-phoneme associations in adults. Neuroimage 2020; 220:117058. [PMID: 32561476] [DOI: 10.1016/j.neuroimage.2020.117058]
Abstract
Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~50 min; second day ~25 min), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training: the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked on a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior-temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues.
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.
Affiliation(s)
- Weiyong Xu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
- Orsolya Beatrix Kolozsvari
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
- Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Jarmo Arvid Hämäläinen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland.
21
Correia JM, Caballero-Gaudes C, Guediche S, Carreiras M. Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses. Sci Rep 2020; 10:4529. [PMID: 32161310] [PMCID: PMC7066132] [DOI: 10.1038/s41598-020-61435-y]
Abstract
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), including cortical and sub-cortical regions, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, cerebellum and basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control, and holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders.
Affiliation(s)
- Joao M Correia
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Centre for Biomedical Research (CBMR)/Department of Psychology, University of Algarve, Faro, Portugal.
- Sara Guediche
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain
- Manuel Carreiras
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; University of the Basque Country, UPV/EHU, Bilbao, Spain
22
Dale CL, Brown EG, Herman AB, Hinkley LBN, Subramaniam K, Fisher M, Vinogradov S, Nagarajan SS. Intervention-specific patterns of cortical function plasticity during auditory encoding in people with schizophrenia. Schizophr Res 2020; 215:241-249. [PMID: 31648842] [PMCID: PMC7035971] [DOI: 10.1016/j.schres.2019.10.022]
Abstract
Schizophrenia is a neurocognitive illness characterized by behavioral and neural impairments in both early auditory processing and higher order verbal working memory. Previously we have shown intervention-specific cognitive performance improvements with computerized, targeted training of auditory processing (AT) when compared to a computer games (CG) control intervention that emphasized visual processing. To investigate spatiotemporal changes in patterns of neural activity specific to the AT intervention, the current study used magnetoencephalography (MEG) imaging to derive induced high gamma band oscillations (HGO) during auditory encoding, before and after 50 h (∼10 weeks) of exposure to either the AT or CG intervention. During stimulus encoding, AT intervention-specific changes in high gamma activity occurred in left middle frontal and left middle-superior temporal cortices. In contrast, CG intervention-specific changes were observed in right medial frontal and supramarginal gyri during stimulus encoding, and in bilateral temporal cortices during response preparation. These data reveal that, in schizophrenia, intensive exposure to either training of auditory processing or exposure to visuospatial activities produces significant but complementary patterns of cortical function plasticity within a distributed fronto-temporal network. These results underscore the importance of delineating the specific neuroplastic effects of targeted behavioral interventions to ensure desired neurophysiological changes and avoid unintended consequences on neural system functioning.
Affiliation(s)
- Corby L Dale
- Department of Radiology and Biomedical Imaging, University of California San Francisco, United States; San Francisco Veterans' Affairs Medical Center, United States.
- Ethan G Brown
- Weill Cornell Medical College, New York, United States
- Alexander B Herman
- Department of Radiology and Biomedical Imaging, University of California San Francisco, United States; UCB-UCSF Graduate Program in Bioengineering, University of California, Berkeley, United States; Medical Science Training Program, University of California, San Francisco, United States
- Leighton B N Hinkley
- Department of Radiology and Biomedical Imaging, University of California San Francisco, United States
- Karuna Subramaniam
- Department of Radiology and Biomedical Imaging, University of California San Francisco, United States
- Melissa Fisher
- San Francisco Veterans' Affairs Medical Center, United States; Department of Psychiatry, University of California, San Francisco, United States
- Sophia Vinogradov
- San Francisco Veterans' Affairs Medical Center, United States; Department of Psychiatry, University of California, San Francisco, United States
- Srikantan S Nagarajan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, United States; UCB-UCSF Graduate Program in Bioengineering, University of California, Berkeley, United States
23
Vandermosten M, Correia J, Vanderauwera J, Wouters J, Ghesquière P, Bonte M. Brain activity patterns of phonemic representations are atypical in beginning readers with family risk for dyslexia. Dev Sci 2019; 23:e12857. [PMID: 31090993] [DOI: 10.1111/desc.12857]
Abstract
There is an ongoing debate whether phonological deficits in dyslexia should be attributed to (a) less specified representations of speech sounds, as suggested by studies in young children with a familial risk for dyslexia, or (b) impaired access to these phonemic representations, as suggested by studies in adults with dyslexia. These conflicting findings are rooted in between-study differences in sample characteristics and/or testing techniques. The current study uses the same multivariate functional MRI (fMRI) approach as previously used in adults with dyslexia to investigate phonemic representations in 30 beginning readers with a familial risk and 24 beginning readers without a familial risk of dyslexia, of whom 20 were later retrospectively classified as dyslexic. Based on fMRI response patterns evoked by listening to different utterances of /bA/ and /dA/ sounds, multivoxel analyses indicate that the underlying activation patterns of the two phonemes were distinct in children with a low family risk but not in children with a high family risk. However, no group differences were observed between children who were later classified as typical versus dyslexic readers, regardless of their family risk status, indicating that poor phonemic representations constitute a risk for dyslexia but are not sufficient to result in reading problems. We hypothesize that poor phonemic representations are trait (family risk) rather than state (dyslexia) dependent, and that representational deficits only lead to reading difficulties when they occur in conjunction with other neuroanatomical or neurofunctional deficits.
Affiliation(s)
- Maaike Vandermosten
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium; Department of Cognitive Neuroscience and Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Joao Correia
- Department of Cognitive Neuroscience and Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Jolijn Vanderauwera
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Jan Wouters
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Milene Bonte
- Department of Cognitive Neuroscience and Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands

24
Hämäläinen JA, Parviainen T, Hsu YF, Salmelin R. Dynamics of brain activation during learning of syllable-symbol paired associations. Neuropsychologia 2019; 129:93-103. [PMID: 30930303 DOI: 10.1016/j.neuropsychologia.2019.03.016]
Abstract
Initial stages of reading acquisition require the learning of letter and speech sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about the short-term learning effects at the brain level. Here we examined the cortical dynamics of short-term learning using magnetoencephalography (MEG) and electroencephalography (EEG) in two experiments that respectively addressed active and passive learning of the association between shown symbols and heard syllables. In experiment 1, learning was based on feedback provided after each trial. The learning of the audio-visual associations was contrasted with items for which the feedback was meaningless. In experiment 2, learning was based on statistical learning through passive exposure to audio-visual stimuli that were consistently presented with each other and contrasted with audio-visual stimuli that were randomly paired with each other. After 5-10 min of training and exposure, learning-related changes emerged in neural activation around 200 and 350 ms in the two experiments. The MEG results showed activity changes at 350 ms in caudal middle frontal cortex and posterior superior temporal sulcus, and at 500 ms in temporo-occipital cortex. Changes in brain activity coincided with a decrease in reaction times and an increase in accuracy scores. Changes in EEG activity were observed starting at the auditory P2 response followed by later changes after 300 ms. The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
Affiliation(s)
- Jarmo A Hämäläinen
- Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014, University of Jyväskylä, Finland
- Tiina Parviainen
- Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014, University of Jyväskylä, Finland
- Yi-Fang Hsu
- Department of Educational Psychology and Counseling, National Taiwan Normal University, 10610, Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, 00076, Aalto University, Finland; Aalto NeuroImaging, 00076, Aalto University, Finland

25
Romanovska L, Janssen R, Bonte M. Reading-Induced Shifts in Speech Perception in Dyslexic and Typically Reading Children. Front Psychol 2019; 10:221. [PMID: 30792685 PMCID: PMC6374624 DOI: 10.3389/fpsyg.2019.00221]
Abstract
One of the proposed mechanisms underlying reading difficulties observed in developmental dyslexia is impaired mapping of visual to auditory speech representations. We investigate these mappings in 20 typically reading children and 20 children with dyslexia aged 8–10 years using text-based recalibration. In this paradigm, the pairing of visual text and ambiguous speech sounds shifts (recalibrates) the participant’s perception of the ambiguous speech in subsequent auditory-only post-test trials. Recent research in adults demonstrated this text-induced perceptual shift in typical, but not in dyslexic readers. Our current results instead show significant text-induced recalibration in both typically reading children and children with dyslexia. The strength of this effect was significantly linked to the strength of perceptual adaptation effects in children with dyslexia but not in typically reading children. Furthermore, additional analyses in a sample of typically reading children of various reading levels revealed a significant link between recalibration and phoneme categorization. Taken together, our study highlights the importance of considering dynamic developmental changes in reading, letter-speech sound coupling and speech perception when investigating group differences between typical and dyslexic readers.
Affiliation(s)
- Linda Romanovska
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Roef Janssen
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands

26
Neural Prediction Errors Distinguish Perception and Misperception of Speech. J Neurosci 2018; 38:6076-6089. [PMID: 29891730 DOI: 10.1523/jneurosci.3258-17.2018]
Abstract
Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, then correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (and thus partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more susceptible to hearing "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song Blowin' in the Wind). Here, we contrast two mechanisms by which prior expectations may lead to misperception of degraded speech. First, clear representations of the common sounds in the prior and input (i.e., expected sounds) may lead to incorrect confirmation of the prior. Second, insufficient representations of sounds that deviate between prior and input (i.e., prediction errors) could lead to deception. We used crossmodal predictions from written words that partially match degraded speech to compare neural responses when male and female human listeners were deceived into accepting the prior or correctly rejected it. Combined behavioral and multivariate representational similarity analysis of fMRI data shows that veridical perception of degraded speech is signaled by representations of prediction error in the left superior temporal sulcus. Rather than top-down processes simply supporting perception of expected sensory input, our findings suggest that the strength of neural prediction error representations distinguishes correct perception from misperception.
SIGNIFICANCE STATEMENT Misperceiving spoken words is an everyday experience, with outcomes that range from shared amusement to serious miscommunication. For hearing-impaired individuals, frequent misperception can lead to social withdrawal and isolation, with severe consequences for wellbeing. In this work, we specify the neural mechanisms by which prior expectations, which are so often helpful for perception, can lead to misperception of degraded sensory signals. Most descriptive theories of illusory perception explain misperception as arising from a clear sensory representation of features or sounds that are in common between prior expectations and sensory input. Our work instead provides support for a complementary proposal: that misperception occurs when there is an insufficient sensory representation of the deviation between expectations and sensory signals.
27
Keetels M, Bonte M, Vroomen J. A Selective Deficit in Phonetic Recalibration by Text in Developmental Dyslexia. Front Psychol 2018; 9:710. [PMID: 29867675 PMCID: PMC5962785 DOI: 10.3389/fpsyg.2018.00710]
Abstract
Upon hearing an ambiguous speech sound, listeners may adjust their perceptual interpretation of the speech input in accordance with contextual information, like accompanying text or lipread speech (i.e., phonetic recalibration; Bertelson et al., 2003). As developmental dyslexia (DD) has been associated with reduced integration of text and speech sounds, we investigated whether this deficit becomes manifest when text is used to induce this type of audiovisual learning. Adults with DD and normal readers were exposed to ambiguous consonants halfway between /aba/ and /ada/ together with text or lipread speech. After this audiovisual exposure phase, they categorized auditory-only ambiguous test sounds. Results showed that individuals with DD, unlike normal readers, did not use text to recalibrate their phoneme categories, whereas their recalibration by lipread speech was spared. Individuals with DD demonstrated similar deficits when ambiguous vowels (halfway between /wIt/ and /wet/) were recalibrated by text. These findings indicate that DD is related to a specific letter-speech sound association deficit that extends over phoneme classes (vowels and consonants), but – as lipreading was spared – does not extend to a more general audio–visual integration deficit. In particular, these results highlight diminished reading-related audiovisual learning in addition to the commonly reported phonological problems in developmental dyslexia.
Affiliation(s)
- Mirjam Keetels
- Cognitive Neuropsychology Laboratory, Department of Cognitive Neuropsychology, Tilburg University, Tilburg, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Jean Vroomen
- Cognitive Neuropsychology Laboratory, Department of Cognitive Neuropsychology, Tilburg University, Tilburg, Netherlands

28
Baart M, Vroomen J. Recalibration of vocal affect by a dynamic face. Exp Brain Res 2018; 236:1911-1918. [PMID: 29696314 PMCID: PMC6010487 DOI: 10.1007/s00221-018-5270-y]
Abstract
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a ‘happy’ or ‘fearful’ face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more ‘happy’ when the exposure phase contained ‘happy’ instead of ‘fearful’ faces. This auditory shift likely reflects recalibration induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.
Affiliation(s)
- Martijn Baart
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands; BCBL, Basque Center on Cognition, Brain and Language, Donostia, Spain
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands

29
Stekelenburg JJ, Keetels M, Vroomen J. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity. Eur J Neurosci 2018. [PMID: 29537657 PMCID: PMC5969231 DOI: 10.1111/ejn.13908]
Abstract
Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech.
Affiliation(s)
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, PO Box 90153, 5000 LE, Tilburg, the Netherlands
- Mirjam Keetels
- Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, PO Box 90153, 5000 LE, Tilburg, the Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, PO Box 90153, 5000 LE, Tilburg, the Netherlands