1
Bonandrini R, Gornetti E, Paulesu E. A meta-analytical account of the functional lateralization of the reading network. Cortex 2024; 177:363-384. [PMID: 38936265 DOI: 10.1016/j.cortex.2024.05.015]
Abstract
The observation that the neural correlates of reading are left-lateralized is ubiquitous in the cognitive neuroscience and neuropsychological literature. Still, reading is served by a constellation of neural units, and the extent to which these units are consistently left-lateralized is unclear. In this regard, the functional lateralization of the fusiform gyrus is of particular interest, by virtue of its hypothesized role as a "visual word form area". A quantitative Activation Likelihood Estimation meta-analysis was conducted on activation foci from 35 experiments investigating silent reading, and both a whole-brain and a Bayesian ROI-based approach were used to assess the lateralization of the data submitted to meta-analysis. Perirolandic areas showed the highest level of left-lateralization, the fusiform and parietal cortices exhibited only a moderate pattern of left-lateralization, while the occipital and insular cortices and the cerebellum showed the lowest lateralization observed. The relatively limited functional lateralization of the fusiform gyrus was further explored in a regression analysis on the lateralization profile of each study. The functional lateralization of the fusiform gyrus during reading was positively associated with the lateralization of the precentral and inferior occipital gyri and negatively associated with the lateralization of the triangular portion of the inferior frontal gyrus and of the temporal pole. Overall, the present data highlight how lateralization patterns differ within the reading network, and how the functional lateralization of the fusiform gyrus during reading is related to the degree of functional lateralization of other language brain areas.
Affiliation(s)
- Edoardo Gornetti
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; The International Max Planck Research School for Language Sciences, Nijmegen, the Netherlands
- Eraldo Paulesu
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; fMRI Unit, IRCCS Orthopedic Institute Galeazzi, Milan, Italy
2
Chauhan VS, McCook KC, White AL. Reading Reshapes Stimulus Selectivity in the Visual Word Form Area. eNeuro 2024; 11:ENEURO.0228-24.2024. [PMID: 38997142 DOI: 10.1523/eneuro.0228-24.2024]
Abstract
Reading depends on a brain region known as the "visual word form area" (VWFA) in the left ventral occipitotemporal cortex. This region's function is debated because its stimulus selectivity is not absolute, it is modulated by a variety of task demands, and it is inconsistently localized. We used fMRI to characterize the combination of sensory and cognitive factors that activate word-responsive regions that we precisely localized in 16 adult humans (4 male). We then presented three types of character strings: English words, pseudowords, and unfamiliar characters with matched visual features. Participants performed three different tasks while viewing those stimuli: detecting real words, detecting color in the characters, and detecting color in the fixation mark. There were three primary findings about the VWFA's response: (1) It preferred letter strings over unfamiliar characters even when the stimuli were ignored during the fixation task. (2) Compared with those baseline responses, engaging in the word reading task enhanced the response to words but suppressed the response to unfamiliar characters. (3) Attending to the stimuli to judge their color had little effect on the response magnitudes. Thus, the VWFA is uniquely modulated by a cognitive signal that is specific to voluntary linguistic processing and is not additive. Functional connectivity analyses revealed that communication between the VWFA and a left frontal language area increased when the participant engaged in the linguistic task. We conclude that the VWFA is inherently selective for familiar orthography, but it falls under control of the language network when the task demands it.
Affiliation(s)
- Vassiki S Chauhan
- Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
- Krystal C McCook
- Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
- Alex L White
- Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
3
Chauhan VS, McCook KC, White AL. Reading reshapes stimulus selectivity in the visual word form area. bioRxiv 2024:2023.10.04.560764. [PMID: 38948708 PMCID: PMC11212929 DOI: 10.1101/2023.10.04.560764]
Abstract
Reading depends on a brain region known as the "visual word form area" (VWFA) in left ventral occipito-temporal cortex. This region's function is debated because its stimulus selectivity is not absolute, it is modulated by a variety of task demands, and it is inconsistently localized. We used fMRI to characterize the combination of sensory and cognitive factors that activate word-responsive regions that we precisely localized in 16 adult humans (4 male). We then presented three types of character strings: English words, pseudowords, and unfamiliar characters with matched visual features. Participants performed three different tasks while viewing those stimuli: detecting real words, detecting color in the characters, and detecting color in the fixation mark. There were three primary findings about the VWFA's response: (1) It preferred letter strings over unfamiliar characters even when the stimuli were ignored during the fixation task. (2) Compared to those baseline responses, engaging in the word reading task enhanced the response to words but suppressed the response to unfamiliar characters. (3) Attending to the stimuli to judge their font color had little effect on the response magnitudes. Thus, the VWFA is uniquely modulated by a cognitive signal that is specific to voluntary linguistic processing and is not additive. Functional connectivity analyses revealed that communication between the VWFA and a left frontal language area increased when the participant engaged in the linguistic task. We conclude that the VWFA is inherently selective for familiar orthography, but it falls under control of the language network when the task demands it.
Affiliation(s)
- Vassiki S. Chauhan
- Department of Neuroscience & Behavior Barnard College, Columbia University 76 Claremont Ave New York, NY 10027 USA
- Krystal C McCook
- Department of Neuroscience & Behavior Barnard College, Columbia University 76 Claremont Ave New York, NY 10027 USA
- Alex L. White
- Department of Neuroscience & Behavior Barnard College, Columbia University 76 Claremont Ave New York, NY 10027 USA
4
Norman LJ, Hartley T, Thaler L. Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training. Cereb Cortex 2024; 34:bhae239. [PMID: 38897817 PMCID: PMC11186672 DOI: 10.1093/cercor/bhae239]
Abstract
Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
Affiliation(s)
- Liam J Norman
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
- Tom Hartley
- Department of Psychology and York Biomedical Research Institute, University of York, Heslington, YO10 5DD, UK
- Lore Thaler
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
5
D'Angiulli A, Wymark D, Temi S, Bahrami S, Telfer A. Reconsidering Luria's speech mediation: Verbalization and haptic picture identification in children with congenital total blindness. Cortex 2024; 173:263-282. [PMID: 38432177 DOI: 10.1016/j.cortex.2024.01.010]
Abstract
Current accounts of behavioral and neurocognitive correlates of plasticity in blindness are just beginning to incorporate the role of speech and verbal production. We assessed Vygotsky/Luria's speech mediation hypothesis, according to which speech activity can become a mediating tool for perception of complex stimuli, specifically, for encoding tactual/haptic spatial patterns which convey pictorial information (haptic pictures). We compared verbalization in congenitally totally blind (CTB) and age-matched sighted but visually impaired (VI) children during a haptic picture naming task which included two repeated (test-retest) identifications. The children were instructed to explore 10 haptic schematic pictures of objects (e.g., cup) and body parts (e.g., face) and provide (without experimenter's feedback) their typical name. Children's explorations and verbalizations were video-recorded and transcribed into audio segments. Using the Computerized Analysis of Language (CLAN) program, we extracted several measurements from the observed verbalizations, including number of utterances and words, utterance/word duration, and exploration time. Using the Word2Vec natural language processing technique we operationalized semantic content from the relative distances between the names provided. Furthermore, we conducted an observational content analysis in which three judges categorized verbalizations according to a rating scale assessing verbalization content. Results consistently indicated across all measures that the CTB children were faster and semantically more precise than their VI counterparts in the first identification test; however, the VI children reached the same level of precision and speed as the CTB children at retest. Overall, the task was harder for the VI group.
Consistent with current neuroscience literature, the prominent role of speech in CTB and VI children's data suggests that an underlying cross-modal involvement of integrated brain networks, notably associated with Broca's network, likely also influenced by Braille, could play a key role in compensatory plasticity via the mediational mechanism postulated by Luria.
Affiliation(s)
- Amedeo D'Angiulli
- Carleton University, Department of Neuroscience, Canada; Children's Hospital of Eastern Ontario Research Institute, Neurodevelopmental Health, Canada.
- Dana Wymark
- Carleton University, Department of Neuroscience, Canada
- Santa Temi
- Carleton University, Department of Neuroscience, Canada
- Sahar Bahrami
- Carleton University, Department of Neuroscience, Canada
- Andre Telfer
- Carleton University, Department of Neuroscience, Canada
6
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. [PMID: 38394708 PMCID: PMC10899073 DOI: 10.1016/j.dcn.2024.101360]
Abstract
How rigidly does innate architecture constrain function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent: the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
7
Yablonski M, Karipidis II, Kubota E, Yeatman JD. The transition from vision to language: Distinct patterns of functional connectivity for subregions of the visual word form area. Hum Brain Mapp 2024; 45:e26655. [PMID: 38488471 PMCID: PMC10941549 DOI: 10.1002/hbm.26655]
Abstract
Reading entails transforming visual symbols to sound and meaning. This process depends on specialized circuitry in the visual cortex, the visual word form area (VWFA). Recent findings suggest that this text-selective cortex comprises at least two distinct subregions: the more posterior VWFA-1 is sensitive to visual features, while the more anterior VWFA-2 processes higher level language information. Here, we explore whether these two subregions also exhibit different patterns of functional connectivity. To this end, we capitalize on two complementary datasets: Using the Natural Scenes Dataset (NSD), we identify text-selective responses in high-quality 7T adult data (N = 8), and investigate functional connectivity patterns of VWFA-1 and VWFA-2 at the individual level. We then turn to the Healthy Brain Network (HBN) database to assess whether these patterns replicate in a large developmental sample (N = 224; age 6-20 years), and whether they relate to reading development. In both datasets, we find that VWFA-1 is primarily correlated with bilateral visual regions. In contrast, VWFA-2 is more strongly correlated with language regions in the frontal and lateral parietal lobes, particularly the bilateral inferior frontal gyrus. Critically, these patterns do not generalize to adjacent face-selective regions, suggesting a specific relationship between VWFA-2 and the frontal language network. No correlations were observed between functional connectivity and reading ability. Together, our findings support the distinction between subregions of the VWFA, and suggest that functional connectivity patterns in the ventral temporal cortex are consistent over a wide range of reading skills.
Affiliation(s)
- Maya Yablonski
- Division of Developmental-Behavioral Pediatrics, Department of Pediatrics, Stanford University School of Medicine, Stanford, California, USA
- Stanford University Graduate School of Education, Stanford, California, USA
- Iliana I. Karipidis
- Department of Psychiatry and Behavioral Sciences, Stanford School of Medicine, Stanford, California, USA
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry Zurich, University of Zurich, Zürich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emily Kubota
- Psychology Department, Stanford University, Stanford, California, USA
- Jason D. Yeatman
- Division of Developmental-Behavioral Pediatrics, Department of Pediatrics, Stanford University School of Medicine, Stanford, California, USA
- Stanford University Graduate School of Education, Stanford, California, USA
- Psychology Department, Stanford University, Stanford, California, USA
8
Luo X, Li M, Zeng J, Dai Z, Cui Z, Zhu M, Tian M, Wu J, Han Z. Mechanisms underlying category learning in the human ventral occipito-temporal cortex. Neuroimage 2024; 287:120520. [PMID: 38242489 DOI: 10.1016/j.neuroimage.2024.120520]
Abstract
The human ventral occipito-temporal cortex (VOTC) has evolved into specialized regions that process specific categories, such as words, tools, and animals. The formation of these areas is driven by bottom-up visual and top-down nonvisual experiences. However, the specific mechanisms through which top-down nonvisual experiences modulate category-specific regions in the VOTC are still unknown. To address this question, we conducted a study in which participants were trained for approximately 13 h to associate three sets of novel meaningless figures with different top-down nonvisual features: the wordlike category with word features, the non-wordlike category with nonword features, and the visual familiarity condition with no nonvisual features. Pre- and post-training functional MRI (fMRI) experiments were used to measure brain activity during stimulus presentation. Our results revealed that training induced a categorical preference for the two training categories within the VOTC. Moreover, the locations of two training category-specific regions exhibited a notable overlap. Remarkably, within the overlapping category-specific region, training resulted in a dissociation in activation intensity and pattern between the two training categories. These findings provide important insights into how different nonvisual categorical information is encoded in the human VOTC.
Affiliation(s)
- Xiangqi Luo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Mingyang Li
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, PR China
- Jiahong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Zhiyun Dai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Zhenjiang Cui
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Minhong Zhu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Mengxin Tian
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Jiahao Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
9
Dalski A, Kular H, Jorgensen JG, Grill-Spector K, Grotheer M. Both mOTS-words and pOTS-words prefer emoji stimuli over text stimuli during a reading task. bioRxiv 2024:2023.11.07.565794. [PMID: 37986766 PMCID: PMC10659328 DOI: 10.1101/2023.11.07.565794]
Abstract
The visual word form area in the occipitotemporal sulcus (OTS), here referred to as OTS-words, responds more strongly to text than other visual stimuli and is crucial for reading. We hypothesized that this text preference may be driven by a preference for reading tasks, as in most prior fMRI studies only the text stimuli were readable. Hence, we performed three fMRI experiments (N=15) and systematically varied the participants' task and the stimulus, investigating mOTS-words and pOTS-words subregions. In experiment 1, we contrasted text stimuli with non-readable visual stimuli (faces, limbs, houses, objects). Experiment 2 utilized an fMRI adaptation paradigm, presenting compound words in text or emoji formats. In experiment 3, participants performed a reading or a color task on compound words in text or emoji format. Using experiment 1 data, we identified mOTS-words and pOTS-words by contrasting texts with non-readable stimuli. In experiment 2, pOTS-words, but not mOTS-words, showed fMRI adaptation for compound words in both text and emoji formats. In experiment 3, surprisingly, both subregions showed higher responses to compound words in emoji than text format. Moreover, mOTS-words showed higher responses during the reading than the color task and a task-stimulus interaction. Multivariate analyses revealed that distributed responses in pOTS-words encode the visual stimulus, while responses in mOTS-words encode both stimulus and task. Together, our findings suggest that the function of the OTS-words subregions goes beyond the specific visual processing of text and that these regions are flexibly recruited whenever semantic meaning needs to be assigned to visual input.
Affiliation(s)
- Alexia Dalski
- Department of Psychology, Philipps-Universität Marburg, Marburg 35039, Germany
- Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Marburg 35032, Germany
- Holly Kular
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, CA 94305, USA
- Mareike Grotheer
- Department of Psychology, Philipps-Universität Marburg, Marburg 35039, Germany
- Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Marburg 35032, Germany
10
Dziȩgiel-Fivet G, Beck J, Jednoróg K. The role of the left ventral occipitotemporal cortex in speech processing-The influence of visual deprivation. Front Hum Neurosci 2023; 17:1228808. [PMID: 38125712 PMCID: PMC10730934 DOI: 10.3389/fnhum.2023.1228808]
Abstract
The role of the left ventral occipitotemporal cortex (vOT) in reading is well-established in both sighted and blind readers. Its role in speech processing remains only partially understood. Here, we test the involvement of the left vOT in phonological processing of spoken language in the blind (N = 50, age: 6.76-60.32) and in the sighted (N = 54, age: 6.79-59.83) by means of whole-brain and region-of-interest (including individually identified) fMRI analyses. We confirm that the left vOT is sensitive to phonological processing (greater involvement in rhyming compared to a control spoken language task) in both blind and sighted participants. However, in the sighted, the activation was observed only during the rhyming task and in the speech-specific region of the left vOT, pointing to task and modality specificity. In contrast, in the blind group, the left vOT was active during speech processing irrespective of task and in both speech- and reading-specific vOT regions. Only in the blind did the left vOT present a higher degree of sensitivity to phonological processing than other language nodes in the left inferior frontal and superior temporal cortex. Our results suggest a changed development of the left vOT sensitivity to spoken language, resulting from visual deprivation.
Affiliation(s)
- Gabriela Dziȩgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
11
Arbel R, Heimler B, Amedi A. Rapid plasticity in the ventral visual stream elicited by a newly learnt auditory script in congenitally blind adults. Neuropsychologia 2023; 190:108685. [PMID: 37741551 DOI: 10.1016/j.neuropsychologia.2023.108685]
Abstract
Accumulating evidence in the last decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. Yet, how rapidly non-typical sensory input modulates activity in typically visual regions is yet to be explored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory substitution device (SSD), to transform visually presented letters, optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (i.e., ∼12 h) we show that OVAL reading recruits the left ventral visual stream including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while after 2 h of SSD training we can already observe the recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Pediatrics, Hadassah Mount Scopus Hospital, Jerusalem, Israel.
- Benedetta Heimler
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Tel Hashomer, Israel
- Amir Amedi
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel
12
Liu YF, Rapp B, Bedny M. Reading Braille by Touch Recruits Posterior Parietal Cortex. J Cogn Neurosci 2023; 35:1593-1616. [PMID: 37584592 PMCID: PMC10877400 DOI: 10.1162/jocn_a_02041]
Abstract
Blind readers use a tactile reading system consisting of raised dot arrays: braille/⠃⠗⠇. How do human brains implement reading by touch? The current study looked for signatures of reading-specific orthographic processes in braille, separate from low-level somatosensory responses and semantic processes. Of specific interest were responses in posterior parietal cortices (PPCs), because of their role in high-level tactile perception. Congenitally blind, proficient braille readers read real words and pseudowords by touch while undergoing fMRI. We leveraged the system of contractions in English braille, where one braille cell can represent multiple English print letters (e.g., "ing" ⠬, "one" ⠐⠕), making it possible to separate physical and orthographic word length. All words in the study consisted of four braille cells, but their corresponding Roman letter spellings varied from four to seven letters (e.g., "con-c-er-t" ⠒⠉⠻⠞; contracted: four cells; uncontracted: seven letters). We found that the bilateral supramarginal gyrus in the PPC increased its activity as the uncontracted word length increased. By contrast, in the hand region of primary somatosensory cortex (S1), activity increased as a function of a low-level somatosensory feature: dot-number per word. The PPC also showed greater response to pseudowords than real words and distinguished between real and pseudowords in multivariate-pattern analysis. Parieto-occipital, early visual and ventral occipito-temporal, as well as prefrontal cortices also showed sensitivity to the real-versus-pseudoword distinction. We conclude that PPC is involved in orthographic processing for braille, that is, braille character and word recognition, possibly because of braille's tactile modality.
Affiliation(s)
- Yun-Fei Liu
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Brenda Rapp
- Department of Cognitive Science, Johns Hopkins University
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University
|
13
|
Beck J, Dzięgiel-Fivet G, Jednoróg K. Similarities and differences in the neural correlates of letter and speech sound integration in blind and sighted readers. Neuroimage 2023; 278:120296. [PMID: 37495199 DOI: 10.1016/j.neuroimage.2023.120296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Revised: 07/18/2023] [Accepted: 07/23/2023] [Indexed: 07/28/2023] Open
Abstract
Learning letter and speech sound (LS) associations is a major step in reading acquisition common for all alphabetic scripts, including Braille used by blind readers. The left superior temporal cortex (STC) plays an important role in audiovisual LS integration in sighted people, but it is still unknown what neural mechanisms are responsible for audiotactile LS integration in blind individuals. Here, we investigated the similarities and differences between LS integration in blind Braille (N = 42, age range: 9-60 y.o.) and sighted print (N = 47, age range: 9-60 y.o.) readers who acquired reading using different sensory modalities. In both groups, the STC responded to both isolated letters and isolated speech sounds, showed enhanced activation when they were presented together, and distinguished between congruent and incongruent letter and speech sound pairs. However, the direction of the congruency effect was different between the groups. Sighted subjects showed higher activity for incongruent LS pairs in the bilateral STC, similarly to previously studied typical readers of transparent orthographies. In the blind, congruent pairs resulted in an increased response in the right STC. These differences may be related to more sequential processing of Braille as compared to print reading. At the same time, behavioral efficiency in LS discrimination decisions and the congruency effect were found to be related to age and reading skill only in sighted participants, suggesting potential differences in the developmental trajectories of LS integration between blind and sighted readers.
Affiliation(s)
- Joanna Beck
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
|
14
|
Dȩbska A, Wójcik M, Chyl K, Dziȩgiel-Fivet G, Jednoróg K. Beyond the Visual Word Form Area - a cognitive characterization of the left ventral occipitotemporal cortex. Front Hum Neurosci 2023; 17:1199366. [PMID: 37576470 PMCID: PMC10416454 DOI: 10.3389/fnhum.2023.1199366] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Accepted: 07/10/2023] [Indexed: 08/15/2023] Open
Abstract
The left ventral occipitotemporal cortex has traditionally been viewed as a pathway for visual object recognition, including written letters and words. Its crucial role in reading was strengthened by studies of the functionally localized "Visual Word Form Area" responsible for processing word-like information. However, in the past 20 years, empirical studies have challenged the assumption that this brain region processes exclusively visual or even orthographic stimuli. In this review, we aim to trace the development of our understanding of the left ventral occipitotemporal cortex from a visually based letter area to a modality-independent, symbolic, language-related region. We discuss theoretical and empirical research that includes orthographic, phonological, and semantic properties of language. Existing results show that the involvement of the left ventral occipitotemporal cortex is not limited to unimodal activity but also includes multimodal processes. The idea of the integrative nature of this region is supported by its broad functional and structural connectivity with language-related and attentional brain networks. We conclude that although the function of the area in human cognition is not yet fully understood, its role goes beyond visual word form processing. The left ventral occipitotemporal cortex seems to be crucial for combining higher-level language information with abstract forms that convey meaning independently of modality.
Affiliation(s)
- Agnieszka Dȩbska
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Marta Wójcik
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Chyl
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- The Educational Research Institute, Warsaw, Poland
- Gabriela Dziȩgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
|
15
|
Damera SR, Malone PS, Stevens BW, Klein R, Eberhardt SP, Auer ET, Bernstein LE, Riesenhuber M. Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations. J Neurosci 2023; 43:4984-4996. [PMID: 37197979 PMCID: PMC10324991 DOI: 10.1523/jneurosci.1710-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 03/10/2023] [Accepted: 04/29/2023] [Indexed: 05/19/2023] Open
Abstract
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.

SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks.
This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
Affiliation(s)
- Srikanth R Damera
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Patrick S Malone
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Benson W Stevens
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Richard Klein
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Silvio P Eberhardt
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Edward T Auer
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Lynne E Bernstein
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
|
16
|
White AL, Kay KN, Tang KA, Yeatman JD. Engaging in word recognition elicits highly specific modulations in visual cortex. Curr Biol 2023; 33:1308-1320.e5. [PMID: 36889316 PMCID: PMC10089978 DOI: 10.1016/j.cub.2023.02.042] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 01/26/2023] [Accepted: 02/13/2023] [Indexed: 03/09/2023]
Abstract
A person's cognitive state determines how their brain responds to visual stimuli. The most common such effect is a response enhancement when stimuli are task relevant and attended rather than ignored. In this fMRI study, we report a surprising twist on such attention effects in the visual word form area (VWFA), a region that plays a key role in reading. We presented participants with strings of letters and visually similar shapes, which were either relevant for a specific task (lexical decision or gap localization) or ignored (during a fixation dot color task). In the VWFA, the enhancement of responses to attended stimuli occurred only for letter strings, whereas non-letter shapes evoked smaller responses when attended than when ignored. The enhancement of VWFA activity was accompanied by strengthened functional connectivity with higher-level language regions. These task-dependent modulations of response magnitude and functional connectivity were specific to the VWFA and absent in the rest of visual cortex. We suggest that language regions send targeted excitatory feedback into the VWFA only when the observer is trying to read. This feedback enables the discrimination of familiar and nonsense words and is distinct from generic effects of visual attention.
Affiliation(s)
- Alex L White
- Department of Neuroscience & Behavior, Barnard College, Columbia University, 76 Claremont Ave, New York, NY 10027, USA
- Kendrick N Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN 55455, USA
- Kenny A Tang
- Graduate School of Education and Department of Psychology, Stanford University, Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, 520 Galvez Mall, Stanford, CA 94305, USA
- Jason D Yeatman
- Graduate School of Education and Department of Psychology, Stanford University, Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, 520 Galvez Mall, Stanford, CA 94305, USA
|
17
|
Zhan M, Pallier C, Agrawal A, Dehaene S, Cohen L. Does the visual word form area split in bilingual readers? A millimeter-scale 7-T fMRI study. Sci Adv 2023; 9:eadf6140. [PMID: 37018408 PMCID: PMC10075963 DOI: 10.1126/sciadv.adf6140] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 03/06/2023] [Indexed: 05/29/2023]
Abstract
In expert readers, a brain region known as the visual word form area (VWFA) is highly sensitive to written words, exhibiting a posterior-to-anterior gradient of increasing sensitivity to orthographic stimuli whose statistics match those of real words. Using high-resolution 7-tesla functional magnetic resonance imaging (fMRI), we ask whether, in bilingual readers, distinct cortical patches specialize for different languages. In 21 English-French bilinguals, unsmoothed 1.2-millimeter fMRI revealed that the VWFA is actually composed of several small cortical patches highly selective for reading, with a posterior-to-anterior word-similarity gradient, but with near-complete overlap between the two languages. In 10 English-Chinese bilinguals, however, while most word-specific patches exhibited similar reading specificity and word-similarity gradients for reading in Chinese and English, additional patches responded specifically to Chinese writing and, unexpectedly, to faces. Our results show that the acquisition of multiple writing systems can indeed tune the visual cortex differently in bilinguals, sometimes leading to the emergence of cortical patches specialized for a single language.
Affiliation(s)
- Minye Zhan
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Aakash Agrawal
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Collège de France, Université Paris-Sciences-Lettres (PSL), 11 Place Marcelin Berthelot, 75005 Paris, France
- Laurent Cohen
- Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
|
18
|
Tian M, Saccone EJ, Kim JS, Kanjlia S, Bedny M. Sensory modality and spoken language shape reading network in blind readers of Braille. Cereb Cortex 2023; 33:2426-2440. [PMID: 35671478 PMCID: PMC10016046 DOI: 10.1093/cercor/bhac216] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 05/06/2022] [Accepted: 05/07/2022] [Indexed: 01/24/2023] Open
Abstract
The neural basis of reading is highly consistent across many languages and scripts. Are there alternative neural routes to reading? How does the sensory modality of symbols (tactile vs. visual) influence their neural representations? We examined these questions by comparing reading of visual print (sighted group, n = 19) and tactile Braille (congenitally blind group, n = 19). Blind and sighted readers were presented with written (words, consonant strings, non-letter shapes) and spoken stimuli (words, backward speech) that varied in word-likeness. Consistent with prior work, the ventral occipitotemporal cortex (vOTC) was active during Braille and visual reading. A posterior/anterior vOTC word-form gradient was observed only in sighted readers, with more anterior regions preferring larger orthographic units (words). No such gradient was observed in blind readers. Consistent with connectivity predictions, in blind compared to sighted readers, posterior parietal cortices were recruited to a greater degree and contained word-preferring patches. Lateralization of Braille in blind readers was predicted by the laterality of spoken language and of the reading hand. The effect of spoken language increased along a cortical hierarchy, whereas the effect of reading hand waned. These results suggest that the neural basis of reading is influenced by symbol modality and spoken language and support connectivity-based views of cortical function.
Affiliation(s)
- Mengyu Tian
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles St, Baltimore, MD 21218, United States (corresponding author)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Judy S Kim
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Yale University, 2 Hillhouse Ave., New Haven, CT 06511, United States
- Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, United States
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
|
19
|
Yizhar O, Tal Z, Amedi A. Loss of action-related function and connectivity in the blind extrastriate body area. Front Neurosci 2023; 17:973525. [PMID: 36968509 PMCID: PMC10035577 DOI: 10.3389/fnins.2023.973525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 02/23/2023] [Indexed: 03/11/2023] Open
Abstract
The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA’s perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA’s connectivity profile in a counterintuitive way—functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
Affiliation(s)
- Or Yizhar
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Zohar Tal
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Amir Amedi
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
|
20
|
Maimon A, Netzer O, Heimler B, Amedi A. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. [PMID: 36711132 PMCID: PMC9879291 DOI: 10.3389/fnins.2022.962817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 12/19/2022] [Indexed: 01/13/2023] Open
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target for scientific inquiries. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical periods theory and provides additional insight into Molyneux's problem: whether newly gained vision can be immediately correlated with touch. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception visual abilities that strengthen the idea that spontaneous geometry intuitions arise independently from visual experience (and education), thus replicating and extending previous studies. We adopt a previously unexplored approach, testing children after congenital cataract removal surgery who perform the tasks via vision, whereas previous work has explored these abilities in the congenitally blind via touch.
Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
|
21
|
Del Mauro G, Del Maschio N, Abutalebi J. The relationship between reading abilities and the left occipitotemporal sulcus: A dual perspective study. Brain Lang 2022; 235:105189. [PMID: 36260960 DOI: 10.1016/j.bandl.2022.105189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 10/07/2022] [Accepted: 10/07/2022] [Indexed: 06/16/2023]
Abstract
Reading activates a region within the left lateral occipitotemporal sulcus (OTS) known as the 'visual word form area' (VWFA). While several studies have investigated the impact of reading on brain structure through neuroplastic mechanisms, it has recently been suggested that individual differences in the sulcal pattern of the posterior OTS may predict reading skills in adults. In the present study, we first examined whether the structure, morphology, and anatomical connectivity of the left OTS are associated with reading ability. Second, we explored whether reading skills are predicted by the sulcal pattern of the left OTS. We found that reading skills were positively associated with increased connectivity between the left OTS and a network of reading-related regions in the left hemisphere. On the other hand, we did not observe an association between the sulcal pattern of the left OTS and reading skills. Finally, we found evidence that the morphology and connectivity of the left OTS are correlated with its sulcal pattern.
Affiliation(s)
- Gianpaolo Del Mauro
- Centre for Neurolinguistics and Psycholinguistics (CNPL), Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Nicola Del Maschio
- Centre for Neurolinguistics and Psycholinguistics (CNPL), Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Jubin Abutalebi
- Centre for Neurolinguistics and Psycholinguistics (CNPL), Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- The Arctic University of Norway, Tromsø, Norway
|
22
|
Lin J, Zhang L, Guo R, Jiao S, Song X, Feng S, Wang K, Li M, Luo Y, Han Z. The influence of visual deprivation on the development of the thalamocortical network: Evidence from congenitally blind children and adults. Neuroimage 2022; 264:119722. [PMID: 36323383 DOI: 10.1016/j.neuroimage.2022.119722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 10/23/2022] [Accepted: 10/29/2022] [Indexed: 11/06/2022] Open
Abstract
The thalamus is heavily involved in relaying sensory signals to the cerebral cortex. A relevant issue is how congenital deprivation of visual sensory information modulates the development of the thalamocortical network. The answer is unclear because previous studies on this topic did not investigate network development, structure-function combinations, and cognition-related behaviors in the same study. To overcome these limitations, we recruited 30 congenitally blind subjects (8 children, 22 adults) and 31 sighted subjects (10 children, 21 adults) and conducted multiple analyses [i.e., gray matter volume (GMV) analysis using the voxel-based morphometry (VBM) method, resting-state functional connectivity (FC), and brain-behavior correlation]. We found that congenital blindness elicited significant changes in the development of GMV in visual and somatosensory thalamic regions. Blindness also resulted in significant changes in the development of FC between somatosensory thalamic regions and visual cortical regions as well as advanced information processing regions. Moreover, the somatosensory thalamic regions and their FCs with visual cortical regions were reorganized to process high-level tactile language information in blind individuals. These findings provide a refined understanding of the neuroanatomical and functional plasticity of the thalamocortical network.
Affiliation(s)
- Junfeng Lin
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Linjun Zhang
- School of Chinese as a Second Language, Peking University, Beijing 100091, China
- Runhua Guo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Saiyi Jiao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Xiaomeng Song
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Suting Feng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Ke Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Mingyang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yudan Luo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
|
23
|
Lee B, Secora K. Fingerspelling and Its Role in Translanguaging. Languages (Basel) 2022; 7:278. [PMID: 37920277 PMCID: PMC10622114 DOI: 10.3390/languages7040278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/04/2023]
Abstract
Fingerspelling is a critical component of many sign languages. This manual representation of orthographic code is one key way in which signers engage in translanguaging, drawing from all of their linguistic and semiotic resources to support communication. Translanguaging in bimodal bilinguals is unique because it involves drawing from languages in different modalities, namely a signed language like American Sign Language and a spoken language like English (or its written form). Fingerspelling can be seen as a unique product of the unified linguistic system that translanguaging theories purport, as it blends features of both sign and print. The goals of this paper are twofold: to integrate existing research on fingerspelling in order to characterize it as a cognitive-linguistic phenomenon and to discuss the role of fingerspelling in translanguaging and communication. We will first review and synthesize research from linguistics and cognitive neuroscience to summarize our current understanding of fingerspelling, its production, comprehension, and acquisition. We will then discuss how fingerspelling relates to translanguaging theories and how it can be incorporated into translanguaging practices to support literacy and other communication goals.
Affiliation(s)
- Brittany Lee
- Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Kristen Secora
- Theory and Practice in Teacher Education, University of Tennessee Knoxville, Knoxville, TN 37996, USA
24
Gori M, Amadeo MB, Pavani F, Valzolgher C, Campus C. Temporal visual representation elicits early auditory-like responses in hearing but not in deaf individuals. Sci Rep 2022; 12:19036. [PMID: 36351944; PMCID: PMC9646881; DOI: 10.1038/s41598-022-22224-x]
Abstract
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition to study the reorganization of temporal representation when the audio signal is not available. Here we show that hearing, but not deaf, individuals show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific for building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with typical development of complex visual temporal representations.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Claudio Campus
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
25
Sabourin CJ, Merrikhi Y, Lomber SG. Do blind people hear better? Trends Cogn Sci 2022; 26:999-1012. [PMID: 36207258; DOI: 10.1016/j.tics.2022.08.016]
Abstract
For centuries, anecdotal evidence such as the perfect pitch of the blind piano tuner or blind musician has supported the notion that individuals who have lost their sight early in life have superior hearing abilities compared with sighted people. Recently, auditory psychophysical and functional imaging studies have identified that specific auditory enhancements in the early blind can be linked to activation in extrastriate visual cortex, suggesting crossmodal plasticity. Furthermore, the nature of the sensory reorganization in occipital cortex supports the concept of a task-based functional cartography for the cerebral cortex rather than a sensory-based organization. Taken together, studies of early-blind individuals provide valuable insights into mechanisms of cortical plasticity and principles of cerebral organization.
Affiliation(s)
- Carina J Sabourin
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Yaser Merrikhi
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Psychology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3G 1Y6, Canada.
26
Korczyk M, Zimmermann M, Bola Ł, Szwed M. Superior visual rhythm discrimination in expert musicians is most likely not related to cross-modal recruitment of the auditory cortex. Front Psychol 2022; 13:1036669. [PMID: 36337485; PMCID: PMC9632485; DOI: 10.3389/fpsyg.2022.1036669]
Abstract
Training can influence behavioral performance and lead to brain reorganization. In particular, training in one modality, for example auditory, can improve performance in another modality, for example visual. Previous research suggests that one of the mechanisms behind this phenomenon could be cross-modal recruitment of sensory areas such as the auditory cortex. Studying expert musicians offers a chance to explore this process. Rhythm is an aspect of music that can be presented in various modalities. We designed an fMRI experiment in which professional pianists and non-musicians discriminated between two sequences of rhythms presented auditorily (series of sounds) or visually (series of flashes). Behavioral results showed that musicians performed better than non-musicians in both the visual and auditory rhythm tasks. We found no significant between-group differences in fMRI activations within the auditory cortex. However, we observed that musicians had increased activation in the right inferior parietal lobe compared to non-musicians. We conclude that the musicians' superior visual rhythm discrimination is not related to cross-modal recruitment of the auditory cortex; instead, it could be related to activation in higher-level, multimodal cortical areas.
Affiliation(s)
- Łukasz Bola
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Institute of Psychology, Polish Academy of Sciences, Warszawa, Poland
- Marcin Szwed
- Institute of Psychology, Jagiellonian University, Kraków, Poland
27
Arbel R, Heimler B, Amedi A. Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience. Front Neurosci 2022; 16:921321. [PMID: 36263367; PMCID: PMC9576157; DOI: 10.3389/fnins.2022.921321]
Abstract
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Pediatrics, Hadassah University Hospital-Mount Scopus, Jerusalem, Israel
- Benedetta Heimler
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation, Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
28
Deane G. Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence. Artificial Life 2022; 28:289-309. [PMID: 35881678; DOI: 10.1162/artl_a_00368]
Abstract
What role do affective feelings (feelings/emotions/moods) play in adaptive behaviour? What are the implications of this for understanding and developing artificial general intelligence? Leading theoretical models of brain function are beginning to shed light on these questions. While artificial agents have excelled within narrowly circumscribed and specialised domains, domain-general intelligence has remained an elusive goal in artificial intelligence research. By contrast, humans and nonhuman animals are characterised by a capacity for flexible behaviour and general intelligence. In this article I argue that computational models of mental phenomena in predictive processing theories of the brain are starting to reveal the mechanisms underpinning domain-general intelligence in biological agents, and can inform the understanding and development of artificial general intelligence. I focus particularly on approaches to computational phenomenology in the active inference framework. Specifically, I argue that computational mechanisms of affective feelings in active inference (affective self-modelling) reveal how biological agents are able to achieve flexible behavioural repertoires and general intelligence. I argue that (i) affective self-modelling functions to "tune" organisms to the most tractable goals in the environmental context; and (ii) affective and agentic self-modelling is central to the capacity to perform mental actions in goal-directed imagination and creative cognition. I use this account as a basis to argue that general intelligence of the level and kind found in biological agents will likely require machines to be implemented with analogues of affective self-modelling.
Affiliation(s)
- George Deane
- University of Edinburgh, School of Philosophy, Psychology, and Language Sciences.
29
Campbell EE, Bergelson E. Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia 2022; 174:108320. [PMID: 35842021; DOI: 10.1016/j.neuropsychologia.2022.108320]
Abstract
The present article provides a narrative review on how language communicates sensory information and how knowledge of sight and sound develops in individuals born deaf or blind. Studying knowledge of the perceptually inaccessible sensory domain for these populations offers a lens into how humans learn about that which they cannot perceive. We first review the linguistic strategies within language that communicate sensory information. Highlighting the power of language to shape knowledge, we next review the detailed knowledge of sensory information by individuals with congenital sensory impairments, limitations therein, and neural representations of imperceptible phenomena. We suggest that the acquisition of sensory knowledge is supported by language, experience with multiple perceptual domains, and cognitive and social abilities which mature over the first years of life, both in individuals with and without sensory impairment. We conclude by proposing a developmental trajectory for acquiring sensory knowledge in the absence of sensory perception.
Affiliation(s)
- Erin E Campbell
- Duke University, Department of Psychology and Neuroscience, USA.
- Elika Bergelson
- Duke University, Department of Psychology and Neuroscience, USA
30
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268; PMCID: PMC9297294; DOI: 10.1016/j.neuropsychologia.2022.108305]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently received an Argus II retinal prosthesis implant, with the accompanying training, and underwent extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the outset. Yet following the extensive training program with the EyeMusic SSD, our subject reports that the sensory substitution device allowed him a richer, more complex perceptual experience that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts, mainly but not only of the colors portrayed by the EyeMusic SSD, are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices for the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
- Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
- Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
31
Raeding with the fingres: Towards a universal model of letter position coding. Psychon Bull Rev 2022; 29:2275-2283. [PMID: 35650465; DOI: 10.3758/s13423-022-02078-0]
Abstract
Letter position coding in word recognition has been widely investigated in the visual modality (e.g., labotarory is confusable with laboratory), but not as much in the tactile modality using braille, leading to an incomplete understanding of whether this process is modality-dependent. Unlike sighted readers, braille readers do not show a transposed-letter similarity effect with nonadjacent transpositions (e.g., labotarory = labodanory; Perea et al., 2012). While this latter finding was taken to suggest that the flexibility in letter position coding was due to visual factors (e.g., perceptual uncertainty in the location of visual objects (letters)), it is necessary to test whether transposed-letter effects occur with adjacent letters to reach firm conclusions. Indeed, in the auditory modality (i.e., another serial modality), a transposed-phoneme effect occurs for adjacent but not for nonadjacent transpositions. In a lexical decision task, we examined whether pseudowords created by transposing two adjacent letters of a word (e.g., laboartory) are more confusable with their base word (laboratory) than pseudowords created by replacing those letters (laboestory) in braille. Results showed that transposed-letter pseudowords produced more errors and slower responses than the orthographic controls. Thus, these findings suggest that the mechanism of serial order, while universal, can be shaped by the sensory modality at play.
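The stimulus manipulation described in this abstract, transposing two adjacent letters versus replacing them with a control pair, can be sketched in a few lines of Python. The function names are illustrative, not taken from the study's materials.

```python
def transpose_adjacent(word: str, i: int) -> str:
    """Swap the letters at positions i and i+1 (e.g., 'laboratory' -> 'laboartory')."""
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

def replace_adjacent(word: str, i: int, pair: str) -> str:
    """Replace the letters at positions i and i+1 with a control pair."""
    return word[:i] + pair + word[i + 2:]

# The abstract's examples: a transposed-letter pseudoword and its replacement control
transposed = transpose_adjacent("laboratory", 4)    # 'laboartory'
replaced = replace_adjacent("laboratory", 4, "es")  # 'laboestory'
```

In a lexical decision task, trials built with `transpose_adjacent` are compared against matched `replace_adjacent` controls; more errors and slower rejections for the transposed items constitute the transposed-letter similarity effect.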
32
Dai R, Huang Z, Weng X, He S. Early visual exposure primes future cross-modal specialization of the fusiform face area in tactile face processing in the blind. Neuroimage 2022; 253:119062. [PMID: 35263666; DOI: 10.1016/j.neuroimage.2022.119062]
Abstract
The fusiform face area (FFA) is a core cortical region for face information processing. Evidence suggests that its sensitivity to faces is largely innate and tuned by visual experience. However, how experience in different time windows shapes the plasticity of the FFA remains unclear. In this study, we investigated the role of visual experience at different time points of an individual's early development in the cross-modal face specialization of the FFA. Participants (n = 74) were classified into five groups: congenital blind, early blind, late blind, low vision, and sighted control. Functional magnetic resonance imaging data were acquired while the participants haptically processed carved faces and other objects. Our results showed a robust and highly consistent face-selective activation in the FFA region in the early blind participants, invariant to size and level of abstraction of the face stimuli. The cross-modal face activation in the FFA was much less consistent in the other groups. These results suggest that early visual experience primes cross-modal specialization of the FFA, and even after the absence of visual experience for more than 14 years in early blind participants, their FFA can engage in cross-modal processing of face information.
Affiliation(s)
- Rui Dai
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Zirui Huang
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Xuchu Weng
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou 510631, China.
- Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 20031, China; University of Chinese Academy of Sciences, Beijing 100049, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
33
Qu J, Pang Y, Liu X, Cao Y, Huang C, Mei L. Task modulates the orthographic and phonological representations in the bilateral ventral occipitotemporal cortex. Brain Imaging Behav 2022; 16:1695-1707. [PMID: 35247162; DOI: 10.1007/s11682-022-00641-w]
Abstract
As a key area in word reading, the left ventral occipitotemporal cortex is proposed to support abstract orthographic processing, and its middle part has even been labeled the visual word form area (VWFA). Because the definition of the VWFA varies widely and the reading task differs across studies, the function of the left ventral occipitotemporal cortex in word reading remains debated: is this region specific to orthographic processing, or is it involved in an interactive framework? By using representational similarity analysis (RSA), this study examined information representation in the VWFA at the individual level and the modulatory effect of reading task. Twenty-four subjects were scanned while performing an explicit (i.e., naming) and an implicit (i.e., perceptual) reading task. Activation analysis showed that the naming task elicited greater activation in regions related to phonological processing (e.g., the bilateral prefrontal cortex and temporoparietal cortex), while the perceptual task recruited greater activation in the visual cortex and default mode network (e.g., the bilateral middle frontal gyrus, angular gyrus, and the right middle temporal gyrus). More importantly, RSA showed that task modulated information representation in the bilateral anterior occipitotemporal cortex and the VWFA. Specifically, ROI-based RSA revealed enhanced orthographic and phonological representations in the bilateral anterior fusiform cortex and the VWFA in the naming task relative to the perceptual task. These results suggest that lexical representation in the VWFA is influenced by the demand of phonological processing, which supports the interactive account of the VWFA.
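The core of ROI-based RSA, as a technique, is to correlate a neural representational dissimilarity matrix (RDM), computed from voxel patterns in a region of interest, with a model RDM. The toy sketch below illustrates that logic only; it is not the authors' pipeline, and the data and function names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns: np.ndarray) -> np.ndarray:
    """Condensed RDM: correlation distance between item activation patterns.
    patterns: (n_items, n_voxels) array of ROI responses."""
    return pdist(patterns, metric="correlation")

def rsa_score(patterns: np.ndarray, model_rdm: np.ndarray) -> float:
    """Spearman correlation between the neural RDM and a model RDM."""
    return spearmanr(neural_rdm(patterns), model_rdm).correlation

# Toy ROI patterns for 4 "words" over 6 voxels
rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 6))
score = rsa_score(patterns, neural_rdm(patterns))  # self-comparison: rank correlation of 1
```

Comparing such scores for an orthographic versus a phonological model RDM, across tasks, is the kind of contrast the abstract describes.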
Affiliation(s)
- Jing Qu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Yingdan Pang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Xiaoyu Liu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Ying Cao
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Chengmei Huang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China.
34
Bottini R, Nava E, De Cuntis I, Benetti S, Collignon O. Synesthesia in a congenitally blind individual. Neuropsychologia 2022; 170:108226. [DOI: 10.1016/j.neuropsychologia.2022.108226]
35
Andin J, Holmer E. Reorganization of large-scale brain networks in deaf signing adults: The role of auditory cortex in functional reorganization following deafness. Neuropsychologia 2022; 166:108139. [PMID: 34990695; DOI: 10.1016/j.neuropsychologia.2021.108139]
Abstract
If the brain is deprived of input from one or more senses during development, functional and structural reorganization of the deprived regions takes place. However, little is known about how sensory deprivation affects large-scale brain networks. In the present study, we use data-driven independent component analysis (ICA) to characterize large-scale brain networks in 15 deaf early signers and 24 hearing non-signers based on resting-state functional MRI data. We found differences between the groups in independent components representing the left lateralized control network, the default network, the ventral somatomotor network, and the attention network. In addition, we showed stronger functional connectivity for deaf compared to hearing individuals from the middle and superior temporal cortices to the cingulate cortex, insular cortex, cuneus and precuneus, supramarginal gyrus, supplementary motor area, and cerebellum crus 1, and stronger connectivity for hearing non-signers to hippocampus, middle and superior frontal gyri, pre- and postcentral gyri, and cerebellum crus 8. These results show that deafness induces large-scale network reorganization, with the middle/superior temporal cortex as a central node of plasticity. Cross-modal reorganization may be associated with behavioral adaptations to the environment in deaf signers, including superior ability in some visual functions, such as visual working memory and visual attention.
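At bottom, the seed-based connectivity comparisons summarized in this abstract rest on correlating regional resting-state time courses. A minimal sketch with toy data (not the study's ICA pipeline; all signals here are synthetic):

```python
import numpy as np

def functional_connectivity(ts: np.ndarray) -> np.ndarray:
    """Pearson correlation matrix between regional time courses.
    ts: (n_timepoints, n_regions) array."""
    return np.corrcoef(ts, rowvar=False)

# Toy data: three "regions" sharing a common signal plus independent noise
rng = np.random.default_rng(1)
shared = rng.standard_normal((300, 1))
ts = np.hstack([shared + 0.5 * rng.standard_normal((300, 1)) for _ in range(3)])

fc = functional_connectivity(ts)  # 3x3; off-diagonal entries reflect coupling
```

Group comparisons then contrast such correlation values (after a variance-stabilizing transform such as Fisher's z) between deaf and hearing participants.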
Affiliation(s)
- Josefine Andin
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, SE, 581 83, Linköping, Sweden.
- Emil Holmer
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, SE, 581 83, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, Sweden.
36
Romanovska L, Bonte M. How Learning to Read Changes the Listening Brain. Front Psychol 2021; 12:726882. [PMID: 34987442; PMCID: PMC8721231; DOI: 10.3389/fpsyg.2021.726882]
Abstract
Reading acquisition reorganizes existing brain networks for speech and visual processing to form novel audio-visual language representations. This requires substantial cortical plasticity that is reflected in changes in brain activation and functional as well as structural connectivity between brain areas. The extent to which a child's brain can accommodate these changes may underlie the high variability in reading outcome in both typical and dyslexic readers. In this review, we focus on reading-induced functional changes of the dorsal speech network in particular and discuss how its reciprocal interactions with the ventral reading network contribute to reading outcome. We discuss how the dynamic and intertwined development of both reading networks may be best captured by approaching reading from a skill learning perspective, using audio-visual learning paradigms and longitudinal designs to follow neuro-behavioral changes while children's reading skills unfold.
37
Nordt M, Gomez J, Natu VS, Rezai AA, Finzi D, Kular H, Grill-Spector K. Cortical recycling in high-level visual cortex during childhood development. Nat Hum Behav 2021; 5:1686-1697. [PMID: 34140657; PMCID: PMC8678383; DOI: 10.1038/s41562-021-01141-5]
Abstract
Human ventral temporal cortex contains category-selective regions that respond preferentially to ecologically relevant categories such as faces, bodies, places and words and that are causally involved in the perception of these categories. How do these regions develop during childhood? We used functional magnetic resonance imaging to measure longitudinal development of category selectivity in school-age children over 1 to 5 years. We discovered that, from young childhood to the teens, face- and word-selective regions in ventral temporal cortex expand and become more category selective, but limb-selective regions shrink and lose their preference for limbs. Critically, as a child develops, increases in face and word selectivity are directly linked to decreases in limb selectivity, revealing that during childhood, limb selectivity in ventral temporal cortex is repurposed into word and face selectivity. These data provide evidence for cortical recycling during childhood development. This has important implications for understanding typical as well as atypical brain development and necessitates a rethinking of how cortical function develops during childhood.
Affiliation(s)
- Marisa Nordt
- Department of Psychology, Stanford University, Stanford, CA, USA
- Jesse Gomez
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Vaidehi S Natu
- Department of Psychology, Stanford University, Stanford, CA, USA
- Alex A Rezai
- Department of Psychology, Stanford University, Stanford, CA, USA
- Dawn Finzi
- Department of Psychology, Stanford University, Stanford, CA, USA
- Holly Kular
- Department of Psychology, Stanford University, Stanford, CA, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA, USA
- Neurosciences Program, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
38
Werth R. Is Developmental Dyslexia Due to a Visual and Not a Phonological Impairment? Brain Sci 2021; 11:1313. [PMID: 34679378 PMCID: PMC8534212 DOI: 10.3390/brainsci11101313] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 09/21/2021] [Accepted: 09/27/2021] [Indexed: 11/16/2022] Open
Abstract
It is a widely held belief that developmental dyslexia (DD) is a phonological disorder in which readers have difficulty associating graphemes with their corresponding phonemes. In contrast, the magnocellular theory of dyslexia assumes that DD is a visual disorder caused by dysfunctional magnocellular neural pathways. The review explores arguments for and against these theories. Recent results have shown that DD is caused by (1) a reduced ability to simultaneously recognize sequences of letters that make up words, (2) longer fixation times required to simultaneously recognize strings of letters, and (3) amplitudes of saccades that do not match the number of simultaneously recognized letters. It was shown that pseudowords that could not be recognized simultaneously were recognized almost without errors when the fixation time was extended. However, there is an individual maximum number of letters that each reader with DD can recognize simultaneously. Findings on the neurobiological basis of temporal summation have shown that a necessary prolongation of fixation times is due to impaired processing mechanisms of the visual system, presumably involving magnocells and parvocells. An area in the mid-fusiform gyrus also appears to play a significant role in the ability to simultaneously recognize words and pseudowords. The results also contradict the assumption that DD is due to a lack of eye movement control. The present research does not support the assumption that DD is caused by a phonological disorder but shows that DD is due to a visual processing dysfunction.
Affiliation(s)
- Reinhard Werth
- Institute for Social Pediatrics and Adolescent Medicine, University of Munich, Haydnstrasse 5, D-80336 Munich, Germany
39
Arcaro MJ, Livingstone MS. On the relationship between maps and domains in inferotemporal cortex. Nat Rev Neurosci 2021; 22:573-583. [PMID: 34345018 PMCID: PMC8865285 DOI: 10.1038/s41583-021-00490-4] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/24/2021] [Indexed: 02/07/2023]
Abstract
How does the brain encode information about the environment? Decades of research have led to the pervasive notion that the object-processing pathway in primate cortex consists of multiple areas that are each specialized to process different object categories (such as faces, bodies, hands, non-face objects and scenes). The anatomical consistency and modularity of these regions have been interpreted as evidence that these regions are innately specialized. Here, we propose that ventral-stream modules do not represent clusters of circuits that each evolved to process some specific object category particularly important for survival, but instead reflect the effects of experience on a domain-general architecture that evolved to be able to adapt, within a lifetime, to its particular environment. Furthermore, we propose that the mechanisms underlying the development of domains are both evolutionarily old and universal across cortex. Topographic maps are fundamental: they govern the development of specializations across systems and provide a framework for brain organization.
40
Sakai H, Ueda S, Ueno K, Kumada T. Neuroplastic Reorganization Induced by Sensory Augmentation for Self-Localization During Locomotion. FRONTIERS IN NEUROERGONOMICS 2021; 2:691993. [PMID: 38235242 PMCID: PMC10790880 DOI: 10.3389/fnrgo.2021.691993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 07/21/2021] [Indexed: 01/19/2024]
Abstract
Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.
Affiliation(s)
- Hiroyuki Sakai
- Human Science Laboratory, Toyota Central R&D Laboratories, Inc., Tokyo, Japan
- Sayako Ueda
- TOYOTA Collaboration Center, RIKEN Center for Brain Science, Wako, Japan
- Kenichi Ueno
- Support Unit for Functional Magnetic Resonance Imaging, RIKEN Center for Brain Science, Wako, Japan
41
Ankeeta A, Saxena R, Kumaran SS, Dwivedi SN, Jagannathan NR, Narang V. Evaluation of Memory and Language Network in Children and Adolescents with Visual Impairment: A Combined Functional Connectivity and Voxel-based Morphometry Study. Neuroophthalmology 2021; 45:147-161. [PMID: 34194122 DOI: 10.1080/01658107.2020.1855452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
Abstract
Functional network changes associated with Braille reading differ between early blind (EB) and late blind (LB) participants. The objectives were to study functional connectivity (of memory and language areas, based on blood oxygen level-dependent [BOLD] mapping) and structural changes in EB and LB children and adolescents. A total of 110 right-handed participants were recruited in two age groups, 6-12 years (children) and 13-19 years (adolescents), each consisting of EB (n = 20), LB (n = 20), and sighted controls (SC, n = 15). Group differences were estimated between the children and adolescent groups. Compared with the corresponding blind children's groups, blind adolescents showed structural changes in the visual cortex and medial temporal area, increased BOLD activation, and altered functional connectivity in the primary visual cortex, inferior frontal gyrus, middle temporal gyrus, and hippocampus during a Braille reading task (pFDR corrected <0.05). Functional results were positively correlated with duration of Braille reading and age at onset in the EB and LB groups (p ≤ 0.01). Visual, language, and learning-memory networks differed between adolescents and children of both EB and LB groups, and also between the EB and LB groups, suggesting cross-modal plasticity. The functional and structural results revealed education-dependent cross-modal plasticity in visually impaired participants. Memory and language networks were affected more in the LB group than the EB group, and more in children than in adolescents.
Affiliation(s)
- A Ankeeta
- Department of NMR & MRI Facility, All India Institute of Medical Sciences, New Delhi, India
- Rohit Saxena
- Dr. R. P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- S Senthil Kumaran
- Department of NMR & MRI Facility, All India Institute of Medical Sciences, New Delhi, India
- Sada Nand Dwivedi
- Department of Biostatistics, All India Institute of Medical Sciences, New Delhi, India
- Vaishna Narang
- Centre for Linguistics, School of Language, Literature and Culture Studies, Jawaharlal Nehru University, New Delhi, India
42
Yeatman JD, White AL. Reading: The Confluence of Vision and Language. Annu Rev Vis Sci 2021; 7.
Abstract
The scientific study of reading has a rich history that spans disciplines from vision science to linguistics, psychology, cognitive neuroscience, neurology, and education. The study of reading can elucidate important general mechanisms in spatial vision, attentional control, object recognition, and perceptual learning, as well as the principles of plasticity and cortical topography. However, literacy also prompts the development of specific neural circuits to process a unique and artificial stimulus. In this review, we describe the sequence of operations that transforms visual features into language, how the key neural circuits are sculpted by experience during development, and what goes awry in children for whom learning to read is a struggle. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Jason D Yeatman
- Graduate School of Education, Stanford University, Stanford, California 94305, USA
- Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford, California 94305, USA
- Department of Psychology, Stanford University, Stanford, California 94305, USA
- Alex L White
- Graduate School of Education, Stanford University, Stanford, California 94305, USA
- Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford, California 94305, USA
- Department of Neuroscience and Behavior, Barnard College, New York, New York 10027, USA
43
Crollen V, Warusfel H, Noël MP, Collignon O. Early visual deprivation does not prevent the emergence of basic numerical abilities in blind children. Cognition 2021; 210:104586. [DOI: 10.1016/j.cognition.2021.104586] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 12/21/2020] [Accepted: 01/06/2021] [Indexed: 11/26/2022]
44
Araneda R, Silva Moura S, Dricot L, De Volder AG. Beat Detection Recruits the Visual Cortex in Early Blind Subjects. Life (Basel) 2021; 11:life11040296. [PMID: 33807372 PMCID: PMC8066101 DOI: 10.3390/life11040296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 03/25/2021] [Accepted: 03/29/2021] [Indexed: 11/16/2022] Open
Abstract
Using functional magnetic resonance imaging, here we monitored the brain activity in 12 early blind subjects and 12 blindfolded control subjects, matched for age, gender and musical experience, during a beat detection task. Subjects were required to discriminate regular ("beat") from irregular ("no beat") rhythmic sequences composed of sounds or vibrotactile stimulations. In both sensory modalities, the brain activity differences between the two groups involved heteromodal brain regions including parietal and frontal cortical areas and occipital brain areas, that were recruited in the early blind group only. Accordingly, early blindness induced brain plasticity changes in the cerebral pathways involved in rhythm perception, with a participation of the visually deprived occipital brain areas whatever the sensory modality for input. We conclude that the visually deprived cortex switches its input modality from vision to audition and vibrotactile sense to perform this temporal processing task, supporting the concept of a metamodal, multisensory organization of this cortex.
Affiliation(s)
- Rodrigo Araneda
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Sandra Silva Moura
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Laurence Dricot
- Institute of Neuroscience (IoNS; NEUR Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Anne G. De Volder
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Correspondence: Tel.: +32-2-764-54-82
45
Can rotated words be processed automatically? Evidence from rotated repetition priming. Mem Cognit 2021; 49:1163-1171. [PMID: 33721262 PMCID: PMC7958561 DOI: 10.3758/s13421-021-01147-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/28/2021] [Indexed: 11/08/2022]
Abstract
Visual word processing has its own dedicated neural system that, due to the novelty of this activity, is unlikely to have acquired its specialization through natural selection. Understanding the properties of this system could shed light on its recruitment and the background of its disorders. Although recognition of simple visual objects is orientation invariant, this is not necessarily the case for written words. We used a masked repetition priming paradigm to find out whether words retain their readability when viewed in atypical orientations. Subjects had to read out upright target words that were preceded by rotated prime words of the same or different identity. Prime duration was varied in Experiment 1 to assess the temporal emergence of a rotated priming effect. In Experiment 2, the letter order of the prime words was reversed in order to differentiate the processing stage where priming occurs. The orientation-dependent pattern of priming effects in our results largely confirms earlier word recognition models, but also offers a more detailed view of how orientation affects word-form processing.
46
Perea M, Baciero A, Marcet A, Fernández-López M, Gómez P. Do Grading Gray Stimuli Help to Encode Letter Position? Vision (Basel) 2021; 5:12. [PMID: 33806403 PMCID: PMC8005957 DOI: 10.3390/vision5010012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/04/2021] [Accepted: 03/01/2021] [Indexed: 11/16/2022] Open
Abstract
Numerous experiments in the past decades recurrently showed that a transposed-letter pseudoword (e.g., JUGDE) is much more wordlike than a replacement-letter control (e.g., JUPTE). Critically, there is an ongoing debate as to whether this effect arises at a perceptual level (e.g., perceptual uncertainty at assigning letter position of an array of visual objects) or at an abstract language-specific level (e.g., via a level of "open bigrams" between the letter and word levels). Here, we designed an experiment to test the limits of perceptual accounts of letter position coding. The stimuli in a lexical decision task were presented either with a homogeneous letter intensity or with a graded gray intensity, which indicated an unambiguous letter order. The pseudowords were either transposed-letter pseudowords or replaced-letter pseudowords (e.g., jugde vs. jupte). The results showed much longer response times and substantially more errors in the transposed-letter pseudowords than in the replacement-letter pseudowords, regardless of visual format. These findings favor the idea that language-specific orthographic element factors play an essential role when encoding letter position during word recognition.
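The transposed- vs. replacement-letter manipulation described above (e.g., JUGDE vs. JUPTE from JUDGE) can be sketched as a small stimulus generator. This is a generic illustration of the manipulation only, not the authors' actual materials; the function names and parameters are hypothetical:

```python
import random

def transposed_letter(word, i):
    """Swap two adjacent internal letters: 'judge' -> 'jugde' (i = 2)."""
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

def replacement_letter(word, i, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Replace the same two internal letters with different ones,
    preserving word length: 'judge' -> e.g. 'jupte'."""
    letters = list(word)
    for j in (i, i + 1):
        letters[j] = random.choice([c for c in alphabet if c != letters[j]])
    return "".join(letters)

print(transposed_letter("judge", 2))  # -> jugde
```

Matching both pseudoword types on the manipulated positions (here, the two internal letters) is what isolates letter-position coding from letter identity.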
Affiliation(s)
- Manuel Perea
- Departamento de Metodología and ERI-Lectura, Universitat de València, 46010 Valencia, Spain
- Centro de Ciencia Cognitiva, Universidad Antonio de Nebrija, 28015 Madrid, Spain
- Ana Baciero
- Centro de Ciencia Cognitiva, Universidad Antonio de Nebrija, 28015 Madrid, Spain
- Ana Marcet
- Departamento de Didáctica de la Lengua y la Literatura, Universitat de València, 46022 Valencia, Spain
- María Fernández-López
- Departamento de Metodología and ERI-Lectura, Universitat de València, 46010 Valencia, Spain
- Pablo Gómez
- Department of Psychology, Palm Desert Campus, California State University, San Bernardino, CA 92407, USA
47
Dzięgiel-Fivet G, Plewko J, Szczerbiński M, Marchewka A, Szwed M, Jednoróg K. Neural network for Braille reading and the speech-reading convergence in the blind: Similarities and differences to visual reading. Neuroimage 2021; 231:117851. [PMID: 33582273 DOI: 10.1016/j.neuroimage.2021.117851] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 02/04/2021] [Accepted: 02/05/2021] [Indexed: 10/22/2022] Open
Abstract
All writing systems represent units of spoken language. Studies on the neural correlates of reading in different languages show that this skill relies on access to brain areas dedicated to speech processing. Speech-reading convergence onto a common perisylvian network is therefore considered universal among different writing systems. Using fMRI, we test whether this holds true also for tactile Braille reading in the blind. The neural networks for Braille and visual reading overlapped in the left ventral occipitotemporal (vOT) cortex. Even though we showed similar perisylvian specialization for speech in both groups, blind subjects did not engage this speech system for reading. In contrast to the sighted, speech-reading convergence in the blind was absent in the perisylvian network. Instead, the blind engaged vOT not only in reading but also in speech processing. The involvement of the vOT in speech processing and its engagement in reading in the blind suggests that vOT is included in a modality independent language network in the blind, also evidenced by functional connectivity results. The analysis of individual speech-reading convergence suggests that there may be segregated neuronal populations in the vOT for speech processing and reading in the blind.
Affiliation(s)
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Joanna Plewko
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Marcin Szwed
- Department of Psychology, Jagiellonian University, Cracow, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
48
Visual motion processing recruits regions selective for auditory motion in early deaf individuals. Neuroimage 2021; 230:117816. [PMID: 33524580 DOI: 10.1016/j.neuroimage.2021.117816] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 01/18/2021] [Accepted: 01/25/2021] [Indexed: 01/24/2023] Open
Abstract
In early deaf individuals, the auditory deprived temporal brain regions become engaged in visual processing. In our study we tested further the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed enhanced response in the 'deaf' mid-lateral planum temporale, a region selective to auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase of its functional interactions with hMT+/V5 and is now part of a large-scale visual motion selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex region that also show preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
49
Matuszewski J, Kossowski B, Bola Ł, Banaszkiewicz A, Paplińska M, Gyger L, Kherif F, Szwed M, Frackowiak RS, Jednoróg K, Draganski B, Marchewka A. Brain plasticity dynamics during tactile Braille learning in sighted subjects: Multi-contrast MRI approach. Neuroimage 2020; 227:117613. [PMID: 33307223 DOI: 10.1016/j.neuroimage.2020.117613] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/20/2020] [Accepted: 11/29/2020] [Indexed: 01/11/2023] Open
Abstract
A growing body of empirical evidence supports the notion of diverse neurobiological processes underlying learning-induced plasticity changes in the human brain. Open questions remain about how brain plasticity depends on cognitive task complexity, how it supports interactions between brain systems, and with what temporal and spatial trajectory. We investigated brain and behavioural changes in sighted adults during 8 months of tactile Braille reading training, monitoring brain structure and function at 5 time points. We adopted a novel multivariate approach that combines behavioural data with MRI protocols sensitive to tissue properties to assess local functional, structural, and myelin changes over time. Our results show that while the reading network, located in the ventral occipitotemporal cortex, rapidly adapts to tactile input, sensory areas show changes in grey matter volume and intra-cortical myelin at different times. This approach allowed us to examine and describe, at a mesoscopic level, the neuroplastic mechanisms underlying complex cognitive systems and their (sensory) inputs and (motor) outputs.
Affiliation(s)
- Jacek Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
- Bartosz Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Łukasz Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Institute of Psychology, Jagiellonian University, Krakow, Poland
- Anna Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Lucien Gyger
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
- Ferath Kherif
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
- Marcin Szwed
- Institute of Psychology, Jagiellonian University, Krakow, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Bogdan Draganski
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
50
Arbel R, Heimler B, Amedi A. The sound of reading: Color-to-timbre substitution boosts reading performance via OVAL, a novel auditory orthography optimized for visual-to-auditory mapping. PLoS One 2020; 15:e0242619. [PMID: 33237931 PMCID: PMC7688106 DOI: 10.1371/journal.pone.0242619] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Accepted: 11/05/2020] [Indexed: 11/25/2022] Open
Abstract
Reading is a unique human cognitive skill, and its acquisition has been shown to extensively affect both brain organization and neuroanatomy. In contrast to literacy rates among sighted individuals, literacy rates via tactile reading systems such as Braille are declining, posing an alarming threat to literacy among non-visual readers. This decline has many causes, including the length of training needed to master Braille (which must also include extensive tactile-sensitivity exercises), the lack of proper Braille instruction, and the high cost of Braille devices. The far-reaching consequences of low literacy rates raise the need to develop alternative, cheap, and easy-to-master non-visual reading systems. To this aim, we developed OVAL, a new auditory orthography based on a visual-to-auditory sensory-substitution algorithm. Here we present its efficacy for successful word reading and investigate the extent to which redundant features defining characters (i.e., adding specific colors to letters, conveyed into audition via different musical instruments) facilitate or impede auditory reading outcomes. We tested two groups of blindfolded sighted participants who were exposed either to a monochromatic or to a color version of OVAL. First, we showed that even before training, all participants were able to discriminate between 11 OVAL characters significantly above chance level. Following 6 hours of specific OVAL training, participants were able to identify all the learned characters, differentiate them from untrained letters, and read short words/pseudowords of up to 5 characters. The Color group outperformed the Monochromatic group in all tasks, suggesting that redundant character features are beneficial for auditory reading.
Overall, these results suggest that OVAL is a promising auditory-reading tool that can be used by blind individuals and by people with reading deficits, as well as for the investigation of reading-specific processing dissociated from the visual modality.
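The general visual-to-auditory substitution principle behind systems like OVAL can be sketched generically: a glyph bitmap is scanned column by column over time, with active rows mapped to pitches (in OVAL's color version, timbre would carry the additional color channel). The actual OVAL mapping is not specified in the abstract; the bitmap, frequencies, and function below are illustrative assumptions only:

```python
# Toy visual-to-auditory substitution: each glyph column becomes a time
# slice; each active row maps to a pitch (higher rows -> higher pitch).
# The bitmap, base frequency, and timing are made-up illustration values,
# not OVAL's actual parameters.
GLYPH_T = [
    "111",
    "010",
    "010",
]  # a 3x3 bitmap of the letter 'T'

def glyph_to_events(glyph, base_hz=300.0, step_hz=100.0, slice_ms=50):
    """Return (onset_ms, frequency_hz) tone events for a column-wise scan."""
    events = []
    n_rows = len(glyph)
    for col in range(len(glyph[0])):
        onset = col * slice_ms
        for row in range(n_rows):
            if glyph[row][col] == "1":
                events.append((onset, base_hz + (n_rows - 1 - row) * step_hz))
    return events

print(glyph_to_events(GLYPH_T))
# -> [(0, 500.0), (50, 500.0), (50, 400.0), (50, 300.0), (100, 500.0)]
```

Rendering each event as a sine tone (or, for a color channel, a different instrument timbre) would turn any glyph into a short, learnable sound sequence.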
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel
- Amir Amedi
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Carem, Jerusalem, Israel
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzliya, Israel