1. Taran N, Gatenyo R, Hadjadj E, Farah R, Horowitz-Kraus T. Distinct connectivity patterns between perception and attention-related brain networks characterize dyslexia: Machine learning applied to resting-state fMRI. Cortex 2024; 181:216-232. PMID: 39566125; PMCID: PMC11614717; DOI: 10.1016/j.cortex.2024.08.012.
Abstract
Diagnosis of dyslexia often occurs in late schooling years, leading to academic and psychological challenges. Furthermore, diagnosis is time-consuming, costly, and reliant on arbitrary cutoffs. On the other hand, automated algorithms hold great potential in medical and psychological diagnostics. The aim of the present study was to develop a machine learning tool for the detection of dyslexia in children based on the intrinsic connectivity patterns of different brain networks underlying perception and attention. Here, 117 children (8-12 years old; 58 females), comprising 52 typical readers (TR) and 65 children with dyslexia, completed cognitive and reading assessments and underwent 10 min of resting-state fMRI. Functional connectivity coefficients between 264 brain regions were used as features for machine learning. Different supervised algorithms were employed for classification of children with and without dyslexia. A classifier trained on dorsal attention network features exhibited the highest performance (accuracy = .79, sensitivity = .92, specificity = .64). Auditory, visual, and fronto-parietal network-based classifiers showed intermediate accuracy levels (70-75%). These results highlight significant neurobiological differences in brain networks associated with visual attention between TR and children with dyslexia. Distinct neural integration patterns can differentiate dyslexia from typical development, which may be utilized in the future as a biomarker for the presence and/or severity of dyslexia.
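For orientation, the sketch below shows what such a connectivity-based classification pipeline can look like in Python with scikit-learn; it is not the authors' code, and the synthetic data, the linear SVM, and the 5-fold cross-validation are illustrative assumptions only.

# Minimal sketch, not the study's implementation: classify children from
# resting-state functional connectivity. Assumes `timeseries` holds one
# (n_timepoints x 264) ROI time-series array per child; here it is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def connectivity_features(ts):
    """Upper triangle of the ROI-by-ROI Pearson correlation matrix."""
    corr = np.corrcoef(ts.T)              # (264, 264) connectivity matrix
    iu = np.triu_indices_from(corr, k=1)  # drop diagonal and duplicate entries
    return corr[iu]

rng = np.random.default_rng(0)
timeseries = [rng.standard_normal((300, 264)) for _ in range(117)]  # hypothetical scans
labels = np.array([1] * 65 + [0] * 52)    # 1 = dyslexia, 0 = typical reader

X = np.array([connectivity_features(ts) for ts in timeseries])
pred = cross_val_predict(SVC(kernel="linear"), X, labels, cv=5)

tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
print("accuracy:", (tp + tn) / len(labels))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))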
Affiliation(s)
- Nikolay Taran
- Educational Neuroimaging Group, Faculty of Education in Science and Technology, Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel
- Rotem Gatenyo
- Educational Neuroimaging Group, Faculty of Education in Science and Technology, Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel
- Emmanuelle Hadjadj
- Educational Neuroimaging Group, Faculty of Education in Science and Technology, Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel
- Rola Farah
- Educational Neuroimaging Group, Faculty of Education in Science and Technology, Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel
- Tzipi Horowitz-Kraus
- Educational Neuroimaging Group, Faculty of Education in Science and Technology, Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa 3200003, Israel; Kennedy Krieger Institute, Baltimore, MD 21205, USA; Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA.
2. Zhang M, Riecke L, Bonte M. Cortical tracking of language structures: Modality-dependent and independent responses. Clin Neurophysiol 2024; 166:56-65. PMID: 39111244; DOI: 10.1016/j.clinph.2024.07.012.
Abstract
OBJECTIVES: The mental parsing of linguistic hierarchy is crucial for language comprehension, and while there is growing interest in the cortical tracking of auditory speech, the neurophysiological substrates for tracking written language are still unclear.
METHODS: We recorded electroencephalographic (EEG) responses from participants exposed to auditory and visual streams of either random syllables or tri-syllabic real words. Using a frequency-tagging approach, we analyzed the neural representations of physically presented (i.e., syllables) and mentally constructed (i.e., words) linguistic units and compared them between the two sensory modalities.
RESULTS: We found that tracking syllables is partially modality dependent, with anterior and posterior scalp regions more involved in the tracking of spoken and written syllables, respectively. The cortical tracking of spoken and written words instead was found to involve a shared anterior region to a similar degree, suggesting a modality-independent process for word tracking.
CONCLUSION: Our study suggests that basic linguistic features are represented in a sensory modality-specific manner, while more abstract ones are modality-unspecific during the online processing of continuous language input.
SIGNIFICANCE: The current methodology may be utilized in future research to examine the development of reading skills, especially the deficiencies in fluent reading among those with dyslexia.
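As background on the frequency-tagging approach mentioned above, the sketch below illustrates the core idea on synthetic data: stimuli delivered at a fixed syllable rate (and a slower tri-syllabic word rate) should produce spectral peaks at exactly those frequencies in the EEG. The 4 Hz and 1.33 Hz rates, the sampling rate, and the simple neighbour-based SNR are assumptions for illustration, not the study's parameters.

# Minimal sketch of frequency tagging on a synthetic single-channel EEG trace.
import numpy as np

fs = 250.0                        # sampling rate in Hz (assumed)
dur = 60.0                        # recording length in seconds
t = np.arange(0, dur, 1 / fs)

syll_rate = 4.0                   # syllables per second (assumed)
word_rate = syll_rate / 3         # tri-syllabic words -> one third of the syllable rate
eeg = (0.5 * np.sin(2 * np.pi * syll_rate * t)               # syllable-rate response
       + 0.3 * np.sin(2 * np.pi * word_rate * t)             # word-rate response
       + np.random.default_rng(1).standard_normal(t.size))   # noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def snr_at(f, half_width=10, exclude=1):
    """Amplitude at frequency f relative to neighbouring frequency bins."""
    i = int(np.argmin(np.abs(freqs - f)))
    neighbours = np.r_[spectrum[i - half_width:i - exclude],
                       spectrum[i + 1 + exclude:i + 1 + half_width]]
    return spectrum[i] / neighbours.mean()

print("syllable-rate SNR:", snr_at(syll_rate))
print("word-rate SNR:", snr_at(word_rate))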
Affiliation(s)
- Manli Zhang
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
- Lars Riecke
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
3. Bonte M, Brem S. Unraveling individual differences in learning potential: A dynamic framework for the case of reading development. Dev Cogn Neurosci 2024; 66:101362. PMID: 38447471; PMCID: PMC10925938; DOI: 10.1016/j.dcn.2024.101362.
Abstract
Children show an enormous capacity to learn during development, but with large individual differences in the time course and trajectory of learning and the achieved skill level. Recent progress in developmental sciences has shown the contribution of a multitude of factors including genetic variation, brain plasticity, socio-cultural context and learning experiences to individual development. These factors interact in a complex manner, producing children's idiosyncratic and heterogeneous learning paths. Despite an increasing recognition of these intricate dynamics, current research on the development of culturally acquired skills such as reading still has a typical focus on snapshots of children's performance at discrete points in time. Here we argue that this 'static' approach is often insufficient and limits advancements in the prediction and mechanistic understanding of individual differences in learning capacity. We present a dynamic framework which highlights the importance of capturing short-term trajectories during learning across multiple stages and processes as a proxy for long-term development, using reading as an example. This framework will help explain relevant variability in children's learning paths and outcomes and foster new perspectives and approaches to study how children develop and learn.
Affiliation(s)
- Milene Bonte
- Department of Cognitive Neuroscience and Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
- Silvia Brem
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Switzerland; URPP Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
4. Guerra G, Tijms J, Tierney A, Vaessen A, Dick F, Bonte M. Auditory attention influences trajectories of symbol-speech sound learning in children with and without dyslexia. J Exp Child Psychol 2024; 237:105761. PMID: 37666181; DOI: 10.1016/j.jecp.2023.105761.
Abstract
The acquisition of letter-speech sound correspondences is a fundamental process underlying reading development, one that could be influenced by several linguistic and domain-general cognitive factors. In the current study, we mimicked the first steps of this process by examining behavioral trajectories of audiovisual associative learning in 110 7- to 12-year-old children with and without dyslexia. Children were asked to learn the associations between eight novel symbols and native speech sounds in a brief training session and subsequently read words and pseudowords written in the artificial orthography. We then investigated the influence of auditory attention as one of the putative domain-general factors influencing associative learning. To this aim, we assessed children with experimental measures of auditory sustained selective attention and interference control. Our results showed shallower learning trajectories in children with dyslexia, especially during the later phases of the training blocks. Despite this, children with dyslexia performed similarly to typical readers on the post-training reading tests using the artificial orthography. Better auditory sustained selective attention and interference control skills predicted greater response accuracy during training. Sustained selective attention was also associated with the ability to apply these novel correspondences in the reading tests. Although this result has the limitations of a correlational design, it suggests that poor attentional skills may constitute a risk during the early stages of reading acquisition, when children start to learn letter-speech sound associations. Our findings underscore the importance of examining the dynamics of learning in reading acquisition as well as individual differences in more domain-general attentional factors.
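Purely as an illustration of how learning trajectories and their link to attention can be quantified (this is synthetic data and an assumed analysis, not the study's), each child's trajectory is summarised below as the slope of accuracy over training blocks and then correlated with an attention score.

# Minimal sketch on synthetic data: per-child learning slopes related to attention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_children, n_blocks = 110, 6
attention = rng.normal(0, 1, n_children)     # hypothetical attention scores (z-scored)
# Accuracy rises over blocks, with steeper gains for higher attention scores.
accuracy = (0.5 + 0.05 * np.arange(n_blocks) * (1 + 0.5 * attention[:, None])
            + rng.normal(0, 0.05, (n_children, n_blocks)))

blocks = np.arange(n_blocks)
slopes = np.array([stats.linregress(blocks, acc).slope for acc in accuracy])
print(stats.pearsonr(slopes, attention))      # association between slope and attention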
Affiliation(s)
- Giada Guerra
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London WC1E 7HX, UK; Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER Maastricht, The Netherlands.
- Jurgen Tijms
- RID Institute, Nieuwe Achtergracht 129, 1018 WS Amsterdam, The Netherlands; Rudolf Berlin Center, Department of Psychology, University of Amsterdam, 1018 WT Amsterdam, The Netherlands
- Adam Tierney
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London WC1E 7HX, UK
- Anniek Vaessen
- RID Institute, Nieuwe Achtergracht 129, 1018 WS Amsterdam, The Netherlands
- Frederic Dick
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London WC1E 7HX, UK; Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Milene Bonte
- Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER Maastricht, The Netherlands
5. Beck J, Dzięgiel-Fivet G, Jednoróg K. Similarities and differences in the neural correlates of letter and speech sound integration in blind and sighted readers. Neuroimage 2023; 278:120296. PMID: 37495199; DOI: 10.1016/j.neuroimage.2023.120296.
Abstract
Learning letter and speech sound (LS) associations is a major step in reading acquisition common for all alphabetic scripts, including Braille used by blind readers. The left superior temporal cortex (STC) plays an important role in audiovisual LS integration in sighted people, but it is still unknown what neural mechanisms are responsible for audiotactile LS integration in blind individuals. Here, we investigated the similarities and differences between LS integration in blind Braille (N = 42, age range: 9-60 y.o.) and sighted print (N = 47, age range: 9-60 y.o.) readers who acquired reading using different sensory modalities. In both groups, the STC responded to both isolated letters and isolated speech sounds, showed enhanced activation when they were presented together, and distinguished between congruent and incongruent letter and speech sound pairs. However, the direction of the congruency effect was different between the groups. Sighted subjects showed higher activity for incongruent LS pairs in the bilateral STC, similarly to previously studied typical readers of transparent orthographies. In the blind, congruent pairs resulted in an increased response in the right STC. These differences may be related to more sequential processing of Braille as compared to print reading. At the same time, behavioral efficiency in LS discrimination decisions and the congruency effect were found to be related to age and reading skill only in sighted participants, suggesting potential differences in the developmental trajectories of LS integration between blind and sighted readers.
Affiliation(s)
- Joanna Beck
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland.
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland.
6. Haugg A, Frei N, Menghini M, Stutz F, Steinegger S, Röthlisberger M, Brem S. Self-regulation of visual word form area activation with real-time fMRI neurofeedback. Sci Rep 2023; 13:9195. PMID: 37280217; DOI: 10.1038/s41598-023-35932-9.
Abstract
The Visual Word Form Area (VWFA) is a key region of the brain's reading network, and its activation has been shown to be strongly associated with reading skills. Here, for the first time, we investigated whether voluntary regulation of VWFA activation is feasible using real-time fMRI neurofeedback. Forty adults with typical reading skills were instructed to either upregulate (UP group, N = 20) or downregulate (DOWN group, N = 20) their own VWFA activation during six neurofeedback training runs. The VWFA target region was individually defined based on a functional localizer task. Before and after training, regulation runs without feedback ("no-feedback runs") were also performed. When comparing the two groups, we found stronger activation across the reading network for the UP than the DOWN group. Further, activation in the VWFA was significantly stronger in the UP group than the DOWN group. Crucially, we observed a significant interaction of group and time (pre, post) for the no-feedback runs: the two groups did not differ significantly in their VWFA activation before neurofeedback training, but the UP group showed significantly stronger activation than the DOWN group after neurofeedback training. Our results indicate that upregulation of VWFA activation is feasible and that, once learned, successful upregulation can even be performed in the absence of feedback. These results are a crucial first step toward the development of a potential therapeutic support to improve reading skills in individuals with reading impairments.
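A minimal sketch of how the reported group-by-time interaction for the no-feedback runs could be tested on per-participant VWFA activation estimates; the data are synthetic and the simple t-tests on change scores are an assumed stand-in for the study's actual statistical model.

# Minimal sketch on synthetic data: UP vs. DOWN group by pre/post time interaction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
up_pre, up_post = rng.normal(0.0, 1.0, 20), rng.normal(0.8, 1.0, 20)       # UP group
down_pre, down_post = rng.normal(0.0, 1.0, 20), rng.normal(-0.2, 1.0, 20)  # DOWN group

print(stats.ttest_ind(up_pre, down_pre))      # no group difference expected pre-training
print(stats.ttest_ind(up_post, down_post))    # group difference expected post-training
print(stats.ttest_ind(up_post - up_pre,       # interaction as difference of change scores
                      down_post - down_pre))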
Affiliation(s)
- Amelie Haugg
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland.
- Nada Frei
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Milena Menghini
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Felizia Stutz
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Sara Steinegger
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Martina Röthlisberger
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Silvia Brem
- Department of Child and Adolescent Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
7. Verwimp C, Snellings P, Wiers RW, Tijms J. Goal-directedness enhances letter-speech sound learning and consolidation in an unknown orthography. Child Dev 2023. PMID: 36734297; DOI: 10.1111/cdev.13901.
Abstract
This study examined how top-down control influenced letter-speech sound (L-SS) learning, the initial phase of learning to read. In 2020, 107 Dutch children (53 boys; mean age = 106.845 months) learned eight L-SS correspondences, preceded by either goal-directed or implicit instructions. Symbol knowledge and artificial word-reading ability were assessed immediately after learning and on the subsequent day to examine the effect of sleep. Goal-directed children were faster and more efficient in learning a new script and had better learning outcomes compared to children who were not instructed about the goal of the task. This study demonstrates that directing children toward the goal can promote L-SS learning and consolidation, giving insights into how top-down control influences the initial phase of reading acquisition.
Affiliation(s)
- Cara Verwimp
- Department of Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands; Rudolf Berlin Center, Amsterdam, The Netherlands; RID, Amsterdam, The Netherlands
- Patrick Snellings
- Department of Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands; Rudolf Berlin Center, Amsterdam, The Netherlands
- Reinout W Wiers
- Department of Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Jurgen Tijms
- Department of Developmental Psychology, University of Amsterdam, Amsterdam, The Netherlands; Rudolf Berlin Center, Amsterdam, The Netherlands; RID, Amsterdam, The Netherlands
8. Pei C, Qiu Y, Li F, Huang X, Si Y, Li Y, Zhang X, Chen C, Liu Q, Cao Z, Ding N, Gao S, Alho K, Yao D, Xu P. The different brain areas occupied for integrating information of hierarchical linguistic units: a study based on EEG and TMS. Cereb Cortex 2022; 33:4740-4751. PMID: 36178127; DOI: 10.1093/cercor/bhac376.
Abstract
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from auditory and visual modalities) to access meaning. However, it is unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) provided simultaneously in auditory and visual modalities. To address the issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. With a frequency tagging approach, we analyzed the neural representations of basic linguistic units (i.e. characters/monosyllabic words) and higher-level linguistic structures (i.e. phrases and sentences) across the 3 modalities separately. We found that audio-visual integration occurs in all linguistic units, and the brain areas involved in the integration varied across different linguistic levels. In particular, the integration of sentences activated the local left prefrontal area. Therefore, we used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest the advantage of bimodal language comprehension at hierarchical stages in language-related information processing and provide evidence for the causal role of the left prefrontal regions in processing information of audio-visual sentences.
Affiliation(s)
- Changfu Pei
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yuan Qiu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Xunan Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Yajing Si
- School of Psychology, Xinxiang Medical University, Xinxiang, 453003, China
- Yuqin Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Xiabing Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Chunli Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, Sichuan, 610066, China
- Zehong Cao
- STEM, Mawson Lakes Campus, University of South Australia, Adelaide, SA 5095, Australia
- Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007, China
- Shan Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, FI 00014, Finland
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 611731, China; Research Unit of Neuroscience, Chinese Academy of Medical Science, 2019RU035, Chengdu, China; Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China