1. Botch TL, Finn ES. Neural Representations of Concreteness and Concrete Concepts Are Specific to the Individual. J Neurosci 2024; 44:e0288242024. [PMID: 39349055] [DOI: 10.1523/jneurosci.0288-24.2024]
Abstract
Different people listening to the same story may converge upon a largely shared interpretation while still developing idiosyncratic experiences atop that shared foundation. What linguistic properties support this individualized experience of natural language? Here, we investigate how the "concrete-abstract" axis (the extent to which a word is grounded in sensory experience) relates to within- and across-subject variability in the neural representations of language. Leveraging a dataset of human participants of both sexes who each listened to four auditory stories while undergoing functional magnetic resonance imaging, we demonstrate that neural representations of "concreteness" are both reliable across stories and relatively unique to individuals, while neural representations of "abstractness" are variable both within individuals and across the population. Using natural language processing tools, we show that concrete words exhibit similar neural representations despite spanning larger distances within a high-dimensional semantic space, which potentially reflects an underlying representational signature of sensory experience, namely imageability, shared by concrete words but absent from abstract words. Our findings situate the concrete-abstract axis as a core dimension that supports both shared and individualized representations of natural language.
Affiliation(s)
- Thomas L Botch
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Emily S Finn
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
2. Zhang R, An G, Hao Y, Wu DO. Bridging Visual and Textual Semantics: Towards Consistency for Unbiased Scene Graph Generation. IEEE Trans Pattern Anal Mach Intell 2024; 46:7102-7119. [PMID: 38625774] [DOI: 10.1109/tpami.2024.3389030]
Abstract
Scene Graph Generation (SGG) aims to detect visual relationships in an image. However, due to long-tailed bias, SGG is far from practical. Most methods depend heavily on statistical co-occurrence to generate a balanced dataset, so they are dataset-specific and easily affected by noise. The fundamental cause is that SGG is simplified as a classification task rather than a reasoning task; as a result, the ability to capture fine-grained details is limited and ambiguity becomes harder to handle. Imitating the dual-process account in cognitive psychology, a Visual-Textual Semantics Consistency Network (VTSCN) is proposed to model the SGG task as a reasoning process and significantly relieve the long-tailed bias. In VTSCN, as the rapid autonomous process (Type 1 process), we design a Hybrid Union Representation (HUR) module, which is divided into two steps for spatial awareness and working-memory modeling. In addition, as the higher-order reasoning process (Type 2 process), a Global Textual Semantics Modeling (GTS) module is designed to individually model the textual contexts with the word embeddings of pairwise objects. As the final associative process of cognition, a Heterogeneous Semantics Consistency (HSC) module is designed to balance the Type 1 and Type 2 processes. Lastly, our VTSCN opens a new way to design SGG models by fully considering the human cognitive process. Experiments on the Visual Genome, GQA, and PSG datasets show that our method is superior to state-of-the-art methods, and ablation studies validate the effectiveness of VTSCN.
3. Conwell C, Prince JS, Kay KN, Alvarez GA, Konkle T. A large-scale examination of inductive biases shaping high-level visual representation in brains and machines. Nat Commun 2024; 15:9383. [PMID: 39477923] [PMCID: PMC11526138] [DOI: 10.1038/s41467-024-53147-y]
Abstract
The rapid release of high-performing computer vision models offers new potential to study the impact of different inductive biases on the emergent brain alignment of learned representations. Here, we perform controlled comparisons among a curated set of 224 diverse models to test the impact of specific model properties on visual brain predictivity, a process requiring over 1.8 billion regressions and 50.3 thousand representational similarity analyses. We find that models with qualitatively different architectures (e.g., CNNs versus Transformers) and task objectives (e.g., purely visual contrastive learning versus vision-language alignment) achieve near-equivalent brain predictivity when other factors are held constant. Instead, variation across visual training diets yields the largest, most consistent effect on brain predictivity. Many models achieve similarly high brain predictivity despite clear variation in their underlying representations, suggesting that standard methods used to link models to brains may be too flexible. Broadly, these findings challenge common assumptions about the factors underlying emergent brain alignment and outline how we can leverage controlled model comparison to probe the common computational principles underlying biological and artificial visual systems.
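The core step of the representational similarity analyses mentioned in this abstract can be sketched as follows: build a representational dissimilarity matrix (RDM) for a model and one for brain responses, then correlate their upper triangles. This is a minimal illustration of the general RSA technique only; all data, dimensions, and function names below are invented and do not reproduce the study's pipeline.

```python
# Minimal RSA sketch: correlate two representational dissimilarity matrices.
# All values here are toy data, not the study's models or brain recordings.
from itertools import combinations
from math import sqrt

def dissimilarity(u, v):
    """1 - Pearson correlation between two response vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return 1.0 - cov / (su * sv)

def rdm_upper(patterns):
    """Flattened upper triangle of the RDM over a list of patterns."""
    return [dissimilarity(patterns[i], patterns[j])
            for i, j in combinations(range(len(patterns)), 2)]

def rsa_score(model_patterns, brain_patterns):
    """Pearson correlation between the two RDMs' upper triangles."""
    a, b = rdm_upper(model_patterns), rdm_upper(brain_patterns)
    return 1.0 - dissimilarity(a, b)

# Toy example: three stimuli; model and "brain" share the same relative geometry.
model = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [0.0, 1.0, 0.8]]
brain = [[2.0, 0.1, 0.4], [1.8, 0.2, 0.5], [0.1, 2.0, 1.6]]
score = rsa_score(model, brain)  # close to 1: similar representational geometry
```

A high score indicates that the two systems place the same stimuli close together and far apart, which is the sense in which a model can "predict" brain representations here.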
Affiliation(s)
- Colin Conwell
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Jacob S Prince
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Kendrick N Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- George A Alvarez
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Kempner Institute for Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA
4. Steel A, Angeli PA, Silson EH, Robertson CE. Retinotopic coding organizes the opponent dynamic between internally and externally oriented brain networks. bioRxiv [Preprint] 2024:2024.09.25.615084. [PMID: 39386717] [PMCID: PMC11463438] [DOI: 10.1101/2024.09.25.615084]
Abstract
How the human brain integrates internally oriented (i.e., mnemonic) and externally oriented (i.e., perceptual) information is a long-standing puzzle in neuroscience. In particular, internally oriented networks like the default network (DN) and the externally oriented dorsal attention networks (dATNs) are thought to be globally competitive, which implies DN disengagement during cognitive states that drive the dATNs and vice versa. If these networks are globally opposed, how is internal and external information integrated across them? Here, using precision neuroimaging methods, we show that these internal/external networks are not as dissociated as traditionally thought. Using densely sampled high-resolution fMRI data, we defined individualized whole-brain networks from participants at rest, and the retinotopic preferences of individual voxels within these networks during an independent visual mapping task. We show that while the overall network activity between the DN and dATN is opponent at rest, a latent retinotopic code structures this global opponency. Specifically, the anti-correlation (i.e., global opponency) between the DN and dATN at rest is structured at the voxel level by each voxel's retinotopic preferences, such that the spontaneous activity of voxels preferring similar visual field locations is more anti-correlated than that of voxels preferring different visual field locations. Further, this retinotopic scaffold integrates with the domain-specific preferences of subregions within these networks, enabling efficient, parallel processing of retinotopic and domain-specific information. Thus, DN and dATN dynamics are opponent, but not competitive: voxel-scale anti-correlation between these networks preserves and encodes information in the negative BOLD responses, even in the absence of visual input or task demands. These findings suggest that retinotopic coding may serve as a fundamental organizing principle for brain-wide communication, providing a new framework for understanding how the brain balances and integrates internal cognition with external perception.
Affiliation(s)
- Adam Steel
- Beckman Institute, University of Illinois, Urbana, Illinois, USA
- Department of Psychology, University of Illinois, Urbana, Illinois, USA
- Department of Psychology, Dartmouth College, Hanover, NH, USA
- Lead contact
- Peter A. Angeli
- Department of Psychology, Dartmouth College, Hanover, NH, USA
- Edward H. Silson
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
5. Dima DC, Janarthanan S, Culham JC, Mohsenzadeh Y. Shared representations of human actions across vision and language. Neuropsychologia 2024; 202:108962. [PMID: 39047974] [DOI: 10.1016/j.neuropsychologia.2024.108962]
Abstract
Humans can recognize and communicate about many actions performed by others. How are actions organized in the mind, and is this organization shared across vision and language? We collected similarity judgments of human actions depicted through naturalistic videos and sentences, and tested four models of action categorization, defining actions at different levels of abstraction ranging from specific (action verb) to broad (action target: whether an action is directed towards an object, another person, or the self). The similarity judgments reflected a shared organization of action representations across videos and sentences, determined mainly by the target of actions, even after accounting for other semantic features. Furthermore, language model embeddings predicted the behavioral similarity of action videos and sentences, and captured information about the target of actions alongside unique semantic information. Together, our results show that action concepts are similarly organized in the mind across vision and language, and that this organization reflects socially relevant goals.
Affiliation(s)
- Diana C Dima
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Jody C Culham
- Dept of Psychology, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Dept of Computer Science, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
6. Querella P, Attout L, Fias W, Majerus S. From long-term to short-term: Distinct neural networks underlying semantic knowledge and its recruitment in working memory. Neuropsychologia 2024; 202:108949. [PMID: 38971371] [DOI: 10.1016/j.neuropsychologia.2024.108949]
Abstract
Although numerous studies suggest that working memory (WM) and semantic long-term knowledge interact, the nature and underlying neural mechanisms of this interaction remain poorly understood. Using functional magnetic resonance imaging (fMRI), this study investigated the extent to which neural markers of semantic knowledge in long-term memory (LTM) are activated during the WM maintenance stage in 32 young adults. First, the multivariate neural patterns associated with four semantic categories were determined via an implicit semantic activation task. Next, the participants maintained words - the names of the four semantic categories implicitly activated in the first task - in a verbal WM task. Multi-voxel pattern analyses showed reliable neural decoding of the four semantic categories in both the implicit semantic activation and the verbal WM tasks. Critically, however, no between-task classification of semantic categories was observed. Searchlight analyses showed that for the WM task, semantic category information could be decoded in anterior temporal areas associated with abstract semantic category knowledge. In the implicit semantic activation task, semantic category information was decoded in superior temporal, occipital and frontal cortices associated with domain-specific semantic feature representations. These results indicate that item-level semantic activation during verbal WM involves shallow rather than deep semantic information.
Affiliation(s)
- Pauline Querella
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium
- Lucie Attout
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium; National Fund for Scientific Research, Belgium, Department of Psychology, Psychology and Cognitive Neuroscience Research Unit, University of Liège, Place des Orateurs 1 (B33), 4000, Liège, Belgium
- Wim Fias
- Department of Experimental Psychology, Ghent University, Belgium
- Steve Majerus
- Psychology and Cognitive Neuroscience Research Unit, University of Liège, Belgium; National Fund for Scientific Research, Belgium, Department of Psychology, Psychology and Cognitive Neuroscience Research Unit, University of Liège, Place des Orateurs 1 (B33), 4000, Liège, Belgium
7. Reilly J, Shain C, Borghesani V, Kuhnke P, Vigliocco G, Peelle JE, Mahon BZ, Buxbaum LJ, Majid A, Brysbaert M, Borghi AM, De Deyne S, Dove G, Papeo L, Pexman PM, Poeppel D, Lupyan G, Boggio P, Hickok G, Gwilliams L, Fernandino L, Mirman D, Chrysikou EG, Sandberg CW, Crutch SJ, Pylkkänen L, Yee E, Jackson RL, Rodd JM, Bedny M, Connell L, Kiefer M, Kemmerer D, de Zubicaray G, Jefferies E, Lynott D, Siew CSQ, Desai RH, McRae K, Diaz MT, Bolognesi M, Fedorenko E, Kiran S, Montefinese M, Binder JR, Yap MJ, Hartwigsen G, Cantlon J, Bi Y, Hoffman P, Garcea FE, Vinson D. What we mean when we say semantic: Toward a multidisciplinary semantic glossary. Psychon Bull Rev 2024:10.3758/s13423-024-02556-7. [PMID: 39231896] [DOI: 10.3758/s13423-024-02556-7]
Abstract
Tulving characterized semantic memory as a vast repository of meaning that underlies language and many other cognitive processes. This perspective on lexical and conceptual knowledge galvanized a new era of research undertaken by numerous fields, each with their own idiosyncratic methods and terminology. For example, "concept" has different meanings in philosophy, linguistics, and psychology. As such, many fundamental constructs used to delineate semantic theories remain underspecified and/or opaque. Weak construct specificity is among the leading causes of the replication crisis now facing psychology and related fields. Term ambiguity hinders cross-disciplinary communication, falsifiability, and incremental theory-building. Numerous cognitive subdisciplines (e.g., vision, affective neuroscience) have recently addressed these limitations via the development of consensus-based guidelines and definitions. The project to follow represents our effort to produce a multidisciplinary semantic glossary consisting of succinct definitions, background, principled dissenting views, ratings of agreement, and subjective confidence for 17 target constructs (e.g., abstractness, abstraction, concreteness, concept, embodied cognition, event semantics, lexical-semantic, modality, representation, semantic control, semantic feature, simulation, semantic distance, semantic dimension). We discuss potential benefits and pitfalls (e.g., implicit bias, prescriptiveness) of these efforts to specify a common nomenclature that other researchers might index in specifying their own theoretical perspectives (e.g., They said X, but I mean Y).
Affiliation(s)
- Cory Shain
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Philipp Kuhnke
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Laurel J Buxbaum
- Thomas Jefferson University, Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Guy Dove
- University of Louisville, Louisville, KY, USA
- Liuba Papeo
- Centre National de La Recherche Scientifique (CNRS), University Claude-Bernard Lyon, Lyon, France
- Paulo Boggio
- Universidade Presbiteriana Mackenzie, São Paulo, Brazil
- Eiling Yee
- University of Connecticut, Storrs, CT, USA
- Ken McRae
- Western University, London, ON, Canada
- Melvin J Yap
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- National University of Singapore, Singapore, Singapore
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Leipzig University, Leipzig, Germany
- Yanchao Bi
- University of Edinburgh, Edinburgh, UK
- Beijing Normal University, Beijing, China
8. Arcaro M, Livingstone M. A Whole-Brain Topographic Ontology. Annu Rev Neurosci 2024; 47:21-40. [PMID: 38360565] [DOI: 10.1146/annurev-neuro-082823-073701]
Abstract
It is a common view that the intricate array of specialized domains in the ventral visual pathway is innately prespecified. This review postulates that it is not. We explore the origins of domain specificity, hypothesizing that the adult brain emerges from an interplay between a domain-general map-based architecture, shaped by intrinsic mechanisms, and experience. We argue that the most fundamental innate organization of cortex in general, and not just the visual pathway, is a map-based topography that governs how the environment maps onto the brain, how brain areas interconnect, and ultimately, how the brain processes information.
Affiliation(s)
- Michael Arcaro
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
9. Do J, James O, Kim YJ. Choice-dependent delta-band neural trajectory during semantic category decision making in the human brain. iScience 2024; 27:110173. [PMID: 39040068] [PMCID: PMC11260863] [DOI: 10.1016/j.isci.2024.110173]
Abstract
Recent human brain imaging studies have identified widely distributed cortical areas that represent information about the meaning of language. Yet, the dynamic nature of widespread neural activity as a correlate of semantic information processing remains poorly explored. Our state-space analysis of electroencephalograms (EEGs) recorded during a semantic match-to-category task shows that, depending on the semantic category and decision path chosen by participants, whole-brain delta-band dynamics follow distinct trajectories that are correlated with participants' response times on a trial-by-trial basis. In particular, the proximity of the neural trajectory to the category-decision-specific region of the state space was predictive of participants' decision-making reaction times. We also found that posterolateral regions primarily encoded word categories, while postero-central regions encoded category decisions. Our results demonstrate the role of neural dynamics, embedded in evolving multivariate delta-band activity patterns, in processing the semantic relatedness of words and semantic category-based decision-making.
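The trajectory-proximity idea in this abstract can be illustrated with a toy sketch: measure each trial's distance from the end of its state-space trajectory to a hypothetical decision-region centroid, then correlate those distances with reaction times. Every value, the 2-D state space, and the proximity measure below are invented for illustration and are not the study's actual EEG analysis.

```python
# Toy sketch: trials whose delta-band trajectories end closer to a decision
# centroid should have faster responses (a positive distance-RT correlation).
from math import sqrt

def euclidean(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def proximity_to_region(trajectory, region_centroid):
    """Distance from the trajectory's final state to a decision centroid."""
    return euclidean(trajectory[-1], region_centroid)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Fabricated trials: trajectories in a 2-D state space, one decision centroid.
centroid = [1.0, 1.0]
trials = [
    [[0.0, 0.0], [0.5, 0.4], [0.9, 1.0]],  # ends near the centroid
    [[0.0, 0.0], [0.3, 0.2], [0.6, 0.5]],
    [[0.0, 0.0], [0.1, 0.1], [0.2, 0.1]],  # ends far from the centroid
]
reaction_times = [0.45, 0.70, 1.10]  # seconds, fabricated

distances = [proximity_to_region(t, centroid) for t in trials]
r = pearson(distances, reaction_times)  # strongly positive on this toy data
```

The positive correlation is the toy analogue of the reported result: proximity to the decision-specific region predicts reaction time on a trial-by-trial basis.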
Affiliation(s)
- Jongrok Do
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
- Oliver James
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
- Yee-Joon Kim
- Center for Cognition and Sociality, Institute for Basic Science, Daejeon 34126, Republic of Korea
10. Mazurchuk S, Fernandino L, Tong JQ, Conant LL, Binder JR. The neural representation of body part concepts. Cereb Cortex 2024; 34:bhae213. [PMID: 38863113] [PMCID: PMC11166504] [DOI: 10.1093/cercor/bhae213]
Abstract
Neuropsychological and neuroimaging studies provide evidence for a degree of category-related organization of conceptual knowledge in the brain. Some of this evidence indicates that body part concepts are distinctly represented from other categories; yet, the neural correlates and mechanisms underlying these dissociations are unclear. We expand on the limited prior data by measuring functional magnetic resonance imaging responses induced by body part words and performing a series of analyses investigating the cortical representation of this semantic category. Across voxel-level contrasts, pattern classification, representational similarity analysis, and vertex-wise encoding analyses, we find converging evidence that the posterior middle temporal gyrus, the supramarginal gyrus, and the ventral premotor cortex in the left hemisphere play important roles in the preferential representation of this category compared to other concrete objects.
Affiliation(s)
- Stephen Mazurchuk
- Department of Neurology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Department of Biophysics, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Leonardo Fernandino
- Department of Neurology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Department of Biomedical Engineering, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Jia-Qing Tong
- Department of Neurology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Department of Biophysics, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
- Department of Biophysics, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, United States
11. Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, Charest I. Neural computations in prosopagnosia. Cereb Cortex 2024; 34:bhae211. [PMID: 38795358] [PMCID: PMC11127037] [DOI: 10.1093/cercor/bhae211]
Abstract
We report an investigation of the neural processes involved in the processing of faces and objects by brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). We found that the computations underlying PS's brain activity bore a closer resemblance to the early layers of a visual DNN than those of controls did. However, the brain representations in neurotypicals became more akin to those of the later layers of the model than PS's did. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
Affiliation(s)
- Simon Faghel-Soubeyrand
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Woodstock Rd, Oxford OX2 6GG
- Anne-Raphaelle Richoz
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Delphine Waeber
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Jessica Woodhams
- School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Frédéric Gosselin
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Ian Charest
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
12. Zuanazzi A, Ripollés P, Lin WM, Gwilliams L, King JR, Poeppel D. Negation mitigates rather than inverts the neural representations of adjectives. PLoS Biol 2024; 22:e3002622. [PMID: 38814982] [PMCID: PMC11139306] [DOI: 10.1371/journal.pbio.3002622]
Abstract
Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation ("not") and intensifiers ("really") on the representation of scalar adjectives (e.g., "good") in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly to, the opposite meaning. Decoding analyses of neural activity further reveal significant above-chance decoding accuracy for negated adjectives within 600 ms of adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., "not bad" represented as "good"); furthermore, decoding accuracy for negated adjectives is significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a stepping stone to understanding how the human brain represents changes of meaning over time.
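The decoding analyses this abstract refers to can be illustrated in miniature: train a classifier to recover a condition label from activity patterns and compare its cross-validated accuracy against chance. The sketch below uses a leave-one-out nearest-centroid decoder on toy "good"/"bad" patterns; the data, labels, and function names are invented and do not reproduce the study's MEG pipeline.

```python
# Toy decoding sketch: leave-one-out nearest-centroid classification of
# condition labels from (fabricated) activity patterns, scored against chance.
from math import sqrt

def euclidean(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_centroid_loo(patterns, labels):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(patterns)):
        # Hold out trial i; compute a centroid per label from the rest.
        train = [(p, l) for j, (p, l) in enumerate(zip(patterns, labels)) if j != i]
        centroids = {}
        for lab in set(l for _, l in train):
            members = [p for p, l in train if l == lab]
            centroids[lab] = [sum(dim) / len(members) for dim in zip(*members)]
        pred = min(centroids, key=lambda lab: euclidean(patterns[i], centroids[lab]))
        correct += pred == labels[i]
    return correct / len(patterns)

# Fabricated, well-separated "good" vs "bad" patterns (2-D for clarity).
patterns = [[1.0, 0.2], [0.9, 0.1], [1.1, 0.3], [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]
labels = ["good", "good", "good", "bad", "bad", "bad"]
accuracy = nearest_centroid_loo(patterns, labels)  # well above 0.5 chance
```

In the study's terms, "above-chance but lower than affirmative" would correspond to negated-condition accuracy sitting between the 0.5 chance level and the affirmative-condition accuracy.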
Affiliation(s)
- Arianna Zuanazzi
- Department of Psychology, New York University, New York, New York, United States of America
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
- Center for Language, Music and Emotion (ClaME), New York University, New York, New York, United States of America
- Wy Ming Lin
- Hector Research Institute for Education Sciences and Psychology, University of Tübingen, Tübingen, Germany
- Laura Gwilliams
- Department of Psychology, Stanford University, Stanford, California, United States of America
- Jean-Rémi King
- Department of Psychology, New York University, New York, New York, United States of America
- Ecole Normale Supérieure, PSL University, Paris, France
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Language, Music and Emotion (ClaME), New York University, New York, New York, United States of America
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
13. Zhang X, Lian J, Yu Z, Tang H, Liang D, Liu J, Liu JK. Revealing the mechanisms of semantic satiation with deep learning models. Commun Biol 2024; 7:487. [PMID: 38649503] [PMCID: PMC11035687] [DOI: 10.1038/s42003-024-06162-0]
Abstract
Semantic satiation, the loss of a word's or phrase's meaning after many repetitions, is a well-known psychological phenomenon. However, the microscopic neural computational principles underlying it remain unknown. In this study, we use a deep learning model of continuous coupled neural networks to investigate the mechanism of semantic satiation and to describe this process precisely in terms of neuronal components. Our results suggest that, from a mesoscopic perspective, semantic satiation may be a bottom-up process. Unlike existing macroscopic psychological studies that suggest semantic satiation is a top-down process, our simulations use an experimental paradigm similar to that of classical psychology experiments and observe similar results. Satiation of semantic objectives, similar to the learning process of our network model used for object recognition, relies on continuous learning and switching between objects. The underlying neural coupling strengthens or weakens satiation. Taken together, both neural and network mechanisms play a role in controlling semantic satiation.
Affiliation(s)
- Xinyu Zhang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Jing Lian
- School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, Gansu, China
- Zhaofei Yu
- School of Computer Science, Peking University, Beijing, 100871, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, Beijing, China
- Huajin Tang
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, 310027, Zhejiang, China
- The MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, 310027, Zhejiang, China
- Dong Liang
- Department of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, Jiangsu, China
- Jizhao Liu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
- Jian K Liu
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, B15 2TT, UK
14
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. [PMID: 38394708 PMCID: PMC10899073 DOI: 10.1016/j.dcn.2024.101360]
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
15
Jain S, Vo VA, Wehbe L, Huth AG. Computational Language Modeling and the Promise of In Silico Experimentation. Neurobiol Lang (Camb) 2024; 5:80-106. [PMID: 38645624 PMCID: PMC11025654 DOI: 10.1162/nol_a_00101]
Abstract
Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages which allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm-in silico experimentation using deep learning-based encoding models-that has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.
Affiliation(s)
- Shailee Jain
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Labs, Hillsboro, OR, USA
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Alexander G. Huth
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX, USA
16
Noda T, Aschauer DF, Chambers AR, Seiler JPH, Rumpel S. Representational maps in the brain: concepts, approaches, and applications. Front Cell Neurosci 2024; 18:1366200. [PMID: 38584779 PMCID: PMC10995314 DOI: 10.3389/fncel.2024.1366200]
Abstract
Neural systems have evolved to process sensory stimuli in a way that allows for efficient and adaptive behavior in a complex environment. Recent technological advances enable us to investigate sensory processing in animal models by simultaneously recording the activity of large populations of neurons with single-cell resolution, yielding high-dimensional datasets. In this review, we discuss concepts and approaches for assessing the population-level representation of sensory stimuli in the form of a representational map. In such a map, not only are the identities of stimuli distinctly represented, but their relational similarity is also mapped onto the space of neuronal activity. We highlight example studies in which the structure of representational maps in the brain is estimated from recordings in humans as well as animals and compare their methodological approaches. Finally, we integrate these aspects and provide an outlook for how the concept of representational maps could be applied to various fields in basic and clinical neuroscience.
Affiliation(s)
- Takahiro Noda
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Dominik F. Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Anna R. Chambers
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
- Johannes P.-H. Seiler
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
17
Keller TA, Mason RA, Legg AE, Just MA. The neural and cognitive basis of expository text comprehension. NPJ Sci Learn 2024; 9:21. [PMID: 38514702 PMCID: PMC10957871 DOI: 10.1038/s41539-024-00232-y]
Abstract
As science and technology rapidly progress, it becomes increasingly important to understand how individuals comprehend expository technical texts that explain these advances. This study examined differences in individual readers' technical comprehension performance and differences among texts, using functional brain imaging to measure regional brain activity while students read passages on technical topics and then took a comprehension test. Better comprehension of the technical passages was related to higher activation in regions of the left inferior frontal gyrus, left superior parietal lobe, bilateral dorsolateral prefrontal cortex, and bilateral hippocampus. These areas are associated with the construction of a mental model of the passage and with the integration of new and prior knowledge in memory. Poorer comprehension of the passages was related to greater activation of the ventromedial prefrontal cortex and the precuneus, areas involved in autobiographical and episodic memory retrieval. More comprehensible passages elicited more brain activation associated with establishing links among different types of information in the text and activation associated with establishing conceptual coherence within the text representation. These findings converge with previous behavioral research in their implications for teaching technical learners to become better comprehenders and for improving the structure of instructional texts, to facilitate scientific and technological comprehension.
Affiliation(s)
- Timothy A Keller
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Robert A Mason
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Aliza E Legg
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Marcel Adam Just
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
18
Faghel-Soubeyrand S, Ramon M, Bamps E, Zoia M, Woodhams J, Richoz AR, Caldara R, Gosselin F, Charest I. Decoding face recognition abilities in the human brain. PNAS Nexus 2024; 3:pgae095. [PMID: 38516275 PMCID: PMC10957238 DOI: 10.1093/pnasnexus/pgae095]
Abstract
Why are some individuals better at recognizing faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multimodal data-driven approach combining neuroimaging, computational modeling, and behavioral tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities-super-recognizers-and typical recognizers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 s of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared representations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognizers, we found stronger associations between early brain representations of super-recognizers and midlevel representations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognizers and representations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multimodal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain.
Affiliation(s)
- Simon Faghel-Soubeyrand
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
- Meike Ramon
- Institute of Psychology, University of Lausanne, Lausanne CH-1015, Switzerland
- Eva Bamps
- Center for Contextual Psychiatry, Department of Neurosciences, KU Leuven, Leuven ON5, Belgium
- Matteo Zoia
- Department for Biomedical Research, University of Bern, Bern 3008, Switzerland
- Jessica Woodhams
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
- School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara
- Département de psychologie, Université de Fribourg, Fribourg CH-1700, Switzerland
- Frédéric Gosselin
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
- Ian Charest
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
19
Rezaii N, Hochberg D, Quimby M, Wong B, McGinnis S, Dickerson BC, Putcha D. Language uncovers visuospatial dysfunction in posterior cortical atrophy: a natural language processing approach. Front Neurosci 2024; 18:1342909. [PMID: 38379764 PMCID: PMC10876777 DOI: 10.3389/fnins.2024.1342909]
Abstract
Introduction: Posterior Cortical Atrophy (PCA) is a syndrome characterized by a progressive decline in higher-order visuospatial processing, leading to symptoms such as space perception deficit, simultanagnosia, and object perception impairment. While PCA is primarily known for its impact on visuospatial abilities, recent studies have documented language abnormalities in PCA patients. This study aims to delineate the nature and origin of language impairments in PCA, hypothesizing that language deficits reflect the visuospatial processing impairments of the disease.
Methods: We compared the language samples of 25 patients with PCA with age-matched cognitively normal (CN) individuals across two distinct tasks: a visually-dependent picture description and a visually-independent job description task. We extracted word frequency, word utterance latency, and spatial relational words for this comparison. We then conducted an in-depth analysis of the language used in the picture description task to identify specific linguistic indicators that reflect the visuospatial processing deficits of PCA.
Results: Patients with PCA showed significant language deficits in the visually-dependent task, characterized by higher word frequency, prolonged utterance latency, and fewer spatial relational words, but not in the visually-independent task. An in-depth analysis of the picture description task further showed that PCA patients struggled to identify certain visual elements as well as the overall theme of the picture. A predictive model based on these language features distinguished PCA patients from CN individuals with high classification accuracy.
Discussion: The findings indicate that language is a sensitive behavioral construct to detect visuospatial processing abnormalities of PCA. These insights offer theoretical and clinical avenues for understanding and managing PCA, underscoring language as a crucial marker for the visuospatial deficits of this atypical variant of Alzheimer's disease.
Affiliation(s)
- Neguine Rezaii
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Daisy Hochberg
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Megan Quimby
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Bonnie Wong
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Scott McGinnis
- Center for Brain Mind Medicine, Department of Neurology, Brigham and Women’s Hospital, Boston, MA, United States
- Bradford C. Dickerson
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Alzheimer’s Disease Research Center, Massachusetts General Hospital, Charlestown, MA, United States
- Deepti Putcha
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
20
Eisenhauer S, Gonzalez Alam TRDJ, Cornelissen PL, Smallwood J, Jefferies E. Individual word representations dissociate from linguistic context along a cortical unimodal to heteromodal gradient. Hum Brain Mapp 2024; 45:e26607. [PMID: 38339897 PMCID: PMC10836172 DOI: 10.1002/hbm.26607]
Abstract
Language comprehension involves multiple hierarchical processing stages across time, space, and levels of representation. When processing a word, the sensory input is transformed into increasingly abstract representations that need to be integrated with the linguistic context. Thus, language comprehension involves both input-driven as well as context-dependent processes. While neuroimaging research has traditionally focused on mapping individual brain regions to the distinct underlying processes, recent studies indicate that whole-brain distributed patterns of cortical activation might be highly relevant for cognitive functions, including language. One such pattern, based on resting-state connectivity, is the 'principal cortical gradient', which dissociates sensory from heteromodal brain regions. The present study investigated the extent to which this gradient provides an organizational principle underlying language function, using a multimodal neuroimaging dataset of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings from 102 participants during sentence reading. We found that the brain response to individual representations of a word (word length, orthographic distance, and word frequency), which reflect visual, orthographic, and lexical properties, gradually increases towards the sensory end of the gradient. Although these properties showed opposite effect directions in fMRI and MEG, their association with the sensory end of the gradient was consistent across both neuroimaging modalities. In contrast, MEG revealed that properties reflecting a word's relation to its linguistic context (semantic similarity and position within the sentence) involve the heteromodal end of the gradient to a stronger extent.
This dissociation between individual word and contextual properties was stable across earlier and later time windows during word presentation, indicating interactive processing of word representations and linguistic context at opposing ends of the principal gradient. To conclude, our findings indicate that the principal gradient underlies the organization of a range of linguistic representations while supporting a gradual distinction between context-independent and context-dependent representations. Furthermore, the gradient reveals convergent patterns across neuroimaging modalities (similar location along the gradient) in the presence of divergent responses (opposite effect directions).
Affiliation(s)
- Susanne Eisenhauer
- Department of Psychology, University of York, York, UK
- York Neuroimaging Centre, Innovation Way, York, UK
- Elizabeth Jefferies
- Department of Psychology, University of York, York, UK
- York Neuroimaging Centre, Innovation Way, York, UK
21
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. Nat Neurosci 2024; 27:339-347. [PMID: 38168931 PMCID: PMC10923171 DOI: 10.1038/s41593-023-01512-3]
Abstract
Conventional views of brain organization suggest that regions at the top of the cortical hierarchy process internally oriented information using an abstract amodal neural code. Despite this, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here we report that retinotopic coding structures interactions between internally oriented (mnemonic) and externally oriented (perceptual) brain areas. Using functional magnetic resonance imaging, we observed robust inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. These functionally linked retinotopic populations in mnemonic and perceptual areas exhibit spatially specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually inhibitory dynamic. These results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, providing a scaffold for their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Edward H Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK
- Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
22
Rezaii N, Hochberg D, Quimby M, Wong B, McGinnis S, Dickerson BC, Putcha D. Language Uncovers Visuospatial Dysfunction in Posterior Cortical Atrophy: A Natural Language Processing Approach. medRxiv 2023:2023.11.21.23298864. [PMID: 38045263 PMCID: PMC10690359 DOI: 10.1101/2023.11.21.23298864]
Abstract
Introduction: Posterior Cortical Atrophy (PCA) is a syndrome characterized by a progressive decline in higher-order visuospatial processing, leading to symptoms such as space perception deficit, simultanagnosia, and object perception impairment. While PCA is primarily known for its impact on visuospatial abilities, recent studies have documented language abnormalities in PCA patients. This study aims to delineate the nature and origin of language impairments in PCA, hypothesizing that language deficits reflect the visuospatial processing impairments of the disease.
Methods: We compared the language samples of 25 patients with PCA with age-matched cognitively normal (CN) individuals across two distinct tasks: a visually-dependent picture description and a visually-independent job description task. We extracted word frequency, word utterance latency, and spatial relational words for this comparison. We then conducted an in-depth analysis of the language used in the picture description task to identify specific linguistic indicators that reflect the visuospatial processing deficits of PCA.
Results: Patients with PCA showed significant language deficits in the visually-dependent task, characterized by higher word frequency, prolonged utterance latency, and fewer spatial relational words, but not in the visually-independent task. An in-depth analysis of the picture description task further showed that PCA patients struggled to identify certain visual elements as well as the overall theme of the picture. A predictive model based on these language features distinguished PCA patients from CN individuals with high classification accuracy.
Discussion: The findings indicate that language is a sensitive behavioral construct to detect visuospatial processing abnormalities of PCA. These insights offer theoretical and clinical avenues for understanding and managing PCA, underscoring language as a crucial marker for the visuospatial deficits of this atypical variant of Alzheimer's disease.
Affiliation(s)
- Neguine Rezaii
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Daisy Hochberg
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Megan Quimby
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Bonnie Wong
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Scott McGinnis
- Center for Brain Mind Medicine, Department of Neurology, Brigham & Women’s Hospital, Boston, MA 02115
- Bradford C Dickerson
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Alzheimer’s Disease Research Center, Massachusetts General Hospital, Charlestown, MA 02129
- Deepti Putcha
- Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
23
Aho K, Roads BD, Love BC. Signatures of cross-modal alignment in children's early concepts. Proc Natl Acad Sci U S A 2023; 120:e2309688120. [PMID: 37819984 PMCID: PMC10589699 DOI: 10.1073/pnas.2309688120]
Abstract
Whether supervised or unsupervised, human and machine learning is usually characterized as event-based. However, learning may also proceed by systems alignment in which mappings are inferred between entire systems, such as visual and linguistic systems. Systems alignment is possible because items that share similar visual contexts, such as a car and a truck, will also tend to share similar linguistic contexts. Because of the mirrored similarity relationships across systems, the visual and linguistic systems can be aligned at some later time absent either input. In a series of simulation studies, we considered whether children's early concepts support systems alignment. We found that children's early concepts are close to optimal for inferring novel concepts through systems alignment, enabling agents to correctly infer more than 85% of visual-word mappings absent supervision. One possible explanation for why children's early concepts support systems alignment is that they are distinguished structurally by their dense semantic neighborhoods. Artificial agents using these structural features to select concepts proved highly effective, both in environments mirroring children's conceptual world and those that exclude the concepts that children commonly acquire. For children, systems alignment and event-based learning likely complement one another. Likewise, artificial systems can benefit from incorporating these developmental principles.
Affiliation(s)
- Kaarina Aho
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Brett D. Roads
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Bradley C. Love
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- The Alan Turing Institute, London NW1 2DB, United Kingdom
24
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. bioRxiv 2023:2023.05.15.540807. [PMID: 37292758 PMCID: PMC10245578 DOI: 10.1101/2023.05.15.540807]
Abstract
Conventional views of brain organization suggest that the cortical apex processes internally-oriented information using an abstract, amodal neural code. Yet, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here, we report that retinotopic coding structures interactions between internally-oriented (mnemonic) and externally-oriented (perceptual) brain areas. Using fMRI, we observed robust, inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. Specifically, these functionally-linked retinotopic populations in mnemonic and perceptual areas exhibit spatially-specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually-inhibitory dynamic. Together, these results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, thereby scaffolding their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
- Edward H. Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK EH8 9JZ
- Brenda D. Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
25
Kocsis Z, Jenison RL, Taylor PN, Calmus RM, McMurray B, Rhone AE, Sarrett ME, Deifelt Streese C, Kikuchi Y, Gander PE, Berger JI, Kovach CK, Choi I, Greenlee JD, Kawasaki H, Cope TE, Griffiths TD, Howard MA, Petkov CI. Immediate neural impact and incomplete compensation after semantic hub disconnection. Nat Commun 2023; 14:6264. [PMID: 37805497 PMCID: PMC10560235 DOI: 10.1038/s41467-023-42088-7]
Abstract
The human brain extracts meaning using an extensive neural system for semantic knowledge. Whether such broadly distributed systems depend on, or can compensate for the loss of, a highly interconnected hub is controversial. We report intracranial recordings from two patients during a speech prediction task, obtained minutes before and after neurosurgical treatment requiring disconnection of the left anterior temporal lobe (ATL), a candidate semantic knowledge hub. Informed by modern diaschisis and predictive coding frameworks, we tested hypotheses ranging from neural network disruption alone to complete compensation by the indirectly affected language-related and speech-processing sites. Immediately after ATL disconnection, we observed neurophysiological alterations in the recorded frontal and auditory sites, providing direct evidence for the importance of the ATL as a semantic hub. We also obtained evidence for rapid, albeit incomplete, attempts at neural network compensation, with the neural impact largely taking the forms stipulated by the predictive coding framework specifically and the modern diaschisis framework more generally. The overall results validate these frameworks and reveal both the immediate impact of losing a brain hub and the human brain's capability to adjust after such a loss.
Affiliation(s)
- Zsuzsanna Kocsis
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Rick L Jenison
- Departments of Neuroscience and Psychology, University of Wisconsin, Madison, WI, USA
- Peter N Taylor
- CNNP Lab, Interdisciplinary Computing and Complex BioSystems Group, School of Computing, Newcastle University, Newcastle upon Tyne, UK
- UCL Institute of Neurology, Queen Square, London, UK
- Ryan M Calmus
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Ariane E Rhone
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Phillip E Gander
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Iowa Neuroscience Institute, University of Iowa, Iowa City, IA, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Thomas E Cope
- Department of Clinical Neurosciences, Cambridge University, Cambridge, UK
- MRC Cognition and Brain Sciences Unit, Cambridge University, Cambridge, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Christopher I Petkov
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
26
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254 DOI: 10.1146/annurev-vision-100120-025301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/10/2023]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
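The decoding logic surveyed in this review can be illustrated with a minimal, self-contained sketch. Everything below is synthetic and hypothetical (trial counts, voxel counts, noise level are invented), and a simple nearest-centroid classifier stands in for the range of machine-learning methods the authors discuss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activity patterns": 40 trials x 50 voxels, two stimulus classes
# whose mean patterns differ (all numbers invented for illustration).
n_trials, n_vox = 40, 50
labels = np.repeat([0, 1], n_trials // 2)
class_means = rng.normal(size=(2, n_vox))
X = class_means[labels] + 0.8 * rng.normal(size=(n_trials, n_vox))

# Split trials into train/test and decode with a nearest-centroid classifier.
train = np.arange(n_trials) % 2 == 0
test = ~train
centroids = np.stack([X[train & (labels == c)].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == labels[test]).mean()
# Above-chance accuracy on held-out trials indicates the patterns carry
# class information -- the basic inference behind neural decoding.
```

The held-out split matters: decoding accuracy is only evidence about representation when evaluated on trials the classifier never saw.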
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
27
Du C, Fu K, Li J, He H. Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:10760-10777. [PMID: 37030711 DOI: 10.1109/tpami.2023.3263181] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Decoding human visual neural representations is a challenging task with great scientific significance in revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods are difficult to generalize to novel categories that have no corresponding neural data for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the small number of paired (stimuli-responses) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-product-of-experts formulation to infer a latent code that enables a coherent joint generation of all three modalities. To learn a more consistent joint representation and improve the data efficiency in the case of limited brain activity data, we exploit both intra- and inter-modality mutual information maximization regularization terms. In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate the visual and textual features obtained from the extra categories. Finally, we construct three trimodal matching datasets, and the extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models using the combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.
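BraVL's mixture-of-products-of-experts machinery is too large to reproduce here, but the product-of-experts rule at its core is compact: for Gaussian experts, precisions add. The sketch below shows only that rule, with invented one-dimensional numbers; it is not the authors' implementation:

```python
import numpy as np

# Product-of-experts for diagonal Gaussians: each modality (brain, visual,
# linguistic) contributes an expert posterior over the shared latent code;
# their product is again Gaussian, with precisions summing and the mean
# precision-weighted. Values below are illustrative only.
def product_of_experts(means, variances):
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (precisions * means).sum(axis=0)
    return mu, var

# Three 1-D experts: one confident expert near 1.0, two vaguer ones near 0.
mu, var = product_of_experts([[1.0], [0.0], [0.2]], [[0.1], [1.0], [1.0]])
# The combined mean is pulled toward the most confident expert, and the
# combined variance is smaller than any individual expert's.
```

This weighting is what lets a trimodal model fall back gracefully when one modality (e.g., brain data for a novel category) is missing: the remaining experts still define a coherent joint posterior.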
28
Waterhouse L. Why multiple intelligences theory is a neuromyth. Front Psychol 2023; 14:1217288. [PMID: 37701872 PMCID: PMC10493274 DOI: 10.3389/fpsyg.2023.1217288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 08/14/2023] [Indexed: 09/14/2023] Open
Abstract
A neuromyth is a commonly accepted but unscientific claim about brain function. Many researchers have claimed Howard Gardner's multiple intelligences (MI) theory is a neuromyth because they have seen no evidence supporting his proposal for independent brain-based intelligences for different types of cognitive abilities. Although Gardner has made claims that there are dedicated neural networks or modules for each of the intelligences, nonetheless Gardner has stated his theory could not be a neuromyth because he never claimed it was a neurological theory. This paper explains the lack of evidence to support MI theory. Most important, no researcher has directly looked for a brain basis for the intelligences. Moreover, factor studies have not shown the intelligences to be independent, and studies of MI teaching effects have not explored alternate causes for positive effects and have not been conducted by standard scientific methods. Gardner's MI theory was not a neuromyth initially because it was based on theories of the 1980s of brain modularity for cognition, and few researchers then were concerned by the lack of validating brain studies. However, in the past 40 years neuroscience research has shown that the brain is not organized in separate modules dedicated to specific forms of cognition. Despite the lack of empirical support for Gardner's theory, MI teaching strategies are widely used in classrooms all over the world. Crucially, belief in MI and use of MI in the classroom limit the effort to find evidence-based teaching methods. Studies of possible interventions to try to change student and teacher belief in neuromyths are currently being undertaken. Intervention results are variable: One research group found that teachers who knew more about the brain still believed education neuromyths. Teachers need to learn to detect and reject neuromyths. Widespread belief in a neuromyth does not make a theory legitimate. Theories must be based on sound empirical evidence. 
It is now time for MI theory to be rejected, once and for all, and for educators to turn to evidence-based teaching strategies.
Affiliation(s)
- Lynn Waterhouse
- The College of New Jersey, Ewing Township, NJ, United States
29
Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. [PMID: 37621984 PMCID: PMC10445666 DOI: 10.1093/nc/niad018] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 07/03/2023] [Accepted: 07/14/2023] [Indexed: 08/26/2023] Open
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
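The paper's central prediction can be caricatured with a toy divisive-competition ladder. This is emphatically not the authors' model (theirs is a dynamical network simulation); the level count, decay factor, and gains below are invented solely to illustrate the qualitative claim:

```python
# Toy hierarchy: bottom-up (sensory) drive enters at level 0 and decays
# going up; top-down (imagery) drive enters at the top and decays going
# down; at each level the two compete divisively. All parameters invented.
n_levels, decay = 5, 0.5

def imagery_share(sensory_gain, imagery_gain):
    shares = []
    for level in range(n_levels):  # level 0 ~ early visual cortex
        bottom_up = sensory_gain * decay ** level
        top_down = imagery_gain * decay ** (n_levels - 1 - level)
        shares.append(top_down / (top_down + bottom_up + 1e-12))
    return shares

with_input = imagery_share(1.0, 1.0)     # imagery during normal waking input
without_input = imagery_share(0.0, 1.0)  # imagery with sensory input dampened
# with_input: imagery loses at low levels but wins at high levels;
# without_input: imagery dominates every level (hallucination-like regime).
```

Even this caricature reproduces the paper's key asymmetry: competition from ongoing sensory input confines imagery to high-level representations, while removing that input lets internally generated signals dominate throughout the hierarchy.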
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia
- Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
30
Dirani J, Pylkkänen L. The time course of cross-modal representations of conceptual categories. Neuroimage 2023; 277:120254. [PMID: 37391047 DOI: 10.1016/j.neuroimage.2023.120254] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 05/29/2023] [Accepted: 06/27/2023] [Indexed: 07/02/2023] Open
Abstract
To what extent does language production activate cross-modal conceptual representations? In picture naming, we view specific exemplars of concepts and then name them with a label, like "dog". In overt reading, the written word does not express a specific exemplar. Here we used a decoding approach with magnetoencephalography (MEG) to address whether picture naming and overt word reading involve shared representations of superordinate categories (e.g., animal). This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that does not require explicit categorization judgment and that controls for word form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data of one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of cross-modal semantic category representations for both pictures and words later than their respective modality-specific representations. Cross-modal representations were activated at 150 ms and lasted until around 450 ms. The time course of lexical activation was also assessed revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category in pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of cross-modal semantic categories in picture naming and word reading. These results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning.
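The train-on-one-modality, test-on-the-other logic used here is a cross-modal generalization analysis applied at each time point. A rough sketch with synthetic MEG-like data (the dimensions, onset time, and nearest-centroid decoder are stand-ins, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic MEG-like data: trials x sensors x time, for two "modalities"
# (pictures, words). A shared category signal (animal vs tool) appears in
# both modalities from time index 15 onward. All numbers invented.
n_trials, n_sens, n_time, onset = 60, 30, 40, 15
labels = np.repeat([0, 1], n_trials // 2)
category_pattern = rng.normal(size=(2, n_sens))

def make_modality():
    X = rng.normal(size=(n_trials, n_sens, n_time))
    X[:, :, onset:] += category_pattern[labels][:, :, None]
    return X

pictures, words = make_modality(), make_modality()

# Train a nearest-centroid decoder on pictures at each time point and test
# it on words at the same time point (cross-modal generalization).
acc = np.empty(n_time)
for t in range(n_time):
    cent = np.stack([pictures[labels == c, :, t].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(words[:, :, t][:, None, :] - cent[None], axis=2)
    acc[t] = (d.argmin(axis=1) == labels).mean()
# acc hovers near 0.5 (chance) before `onset` and rises well above chance
# after it -- evidence for a representation shared across modalities.
```

The full method additionally trains at one time and tests at every other time, yielding a time-by-time generalization matrix; the loop above is its diagonal.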
Affiliation(s)
- Julien Dirani
- Department of Psychology, New York University, New York, NY, 10003, USA
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY, 10003, USA; Department of Linguistics, New York University, New York, NY, 10003, USA; NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi, 129188, UAE
31
Steel A, Garcia BD, Goyal K, Mynick A, Robertson CE. Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex. J Neurosci 2023; 43:5723-5737. [PMID: 37474310 PMCID: PMC10401646 DOI: 10.1523/jneurosci.2043-22.2023] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 07/10/2023] [Accepted: 07/14/2023] [Indexed: 07/22/2023] Open
Abstract
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.SIGNIFICANCE STATEMENT As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. 
Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
Affiliation(s)
- Adam Steel
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Brenda D Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Kala Goyal
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Anna Mynick
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
32
Meschke EX, Castello MVDO, la Tour TD, Gallant JL. Model connectivity: leveraging the power of encoding models to overcome the limitations of functional connectivity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.17.549356. [PMID: 37503232 PMCID: PMC10370105 DOI: 10.1101/2023.07.17.549356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Functional connectivity (FC) is the most popular method for recovering functional networks of brain areas with fMRI. However, because FC is defined as temporal correlations in brain activity, FC networks are confounded by noise and lack a precise functional role. To overcome these limitations, we developed model connectivity (MC). MC is defined as similarities in encoding model weights, which quantify reliable functional activity in terms of interpretable stimulus- or task-related features. To compare FC and MC, both methods were applied to a naturalistic story listening dataset. FC recovered spatially broad networks that are confounded by noise, and that lack a clear role during natural language comprehension. By contrast, MC recovered spatially localized networks that are robust to noise, and that represent distinct categories of semantic concepts. Thus, MC is a powerful data-driven approach for recovering and interpreting the functional networks that support complex cognitive processes.
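The definitional contrast between FC and MC fits in a few lines: FC correlates raw response time courses, while MC correlates fitted encoding-model weight vectors. The simulation below is hypothetical (two voxels with shared feature tuning and heavy independent noise) and is only meant to show why weight similarity can survive noise that attenuates time-course correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two voxels tuned to the same stimulus features, recorded with heavy,
# independent noise (all quantities synthetic).
n_time, n_feat = 500, 10
S = rng.normal(size=(n_time, n_feat))   # stimulus features over time
w_shared = rng.normal(size=n_feat)      # shared functional tuning
v1 = S @ w_shared + 3.0 * rng.normal(size=n_time)
v2 = S @ w_shared + 3.0 * rng.normal(size=n_time)

# Functional connectivity: correlation of raw time courses (noise-limited).
fc = np.corrcoef(v1, v2)[0, 1]

def ridge_weights(S, y, alpha=1.0):
    # Closed-form ridge regression: w = (S'S + alpha*I)^-1 S'y
    return np.linalg.solve(S.T @ S + alpha * np.eye(S.shape[1]), S.T @ y)

# Model connectivity: correlation of fitted encoding-model weights.
mc = np.corrcoef(ridge_weights(S, v1), ridge_weights(S, v2))[0, 1]
# Here mc exceeds fc: fitting the weights averages away the noise that
# directly corrupts the raw time courses.
```

Because the weights are tied to named stimulus features, an MC network also inherits an interpretation (which features its members share), which raw temporal correlation cannot provide.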
33
Xu Y, Vignali L, Sigismondi F, Crepaldi D, Bottini R, Collignon O. Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 2023; 21:e3001930. [PMID: 37490508 PMCID: PMC10368275 DOI: 10.1371/journal.pbio.3001930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 06/23/2023] [Indexed: 07/27/2023] Open
Abstract
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results conclusively support that the ILOTC selectively implements shape representation independently of visual experience, and this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
Affiliation(s)
- Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Lorenzo Vignali
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Crepaldi
- International School for Advanced Studies (SISSA), Trieste, Italy
- Roberto Bottini
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Psychological Sciences Research Institute (IPSY) and Institute of NeuroScience (IoNS), University of Louvain, Louvain-la-Neuve, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
34
Tang J, LeBel A, Jain S, Huth AG. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci 2023; 26:858-866. [PMID: 37127759 PMCID: PMC11304553 DOI: 10.1038/s41593-023-01304-9] [Citation(s) in RCA: 54] [Impact Index Per Article: 54.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 03/15/2023] [Indexed: 05/03/2023]
Abstract
A brain-computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain-computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain-computer interfaces.
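The published decoder couples a language model with an encoding model and beam search; the sketch below strips that down to the scoring step alone: rank candidate stimuli by how well the encoding model's predicted brain response matches the recording. The candidates, feature dimensions, and noise levels here are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Core idea of encoding-model-based decoding: generate candidate stimuli,
# predict the brain response each would evoke, and keep the candidate whose
# prediction best matches the recorded response. (In the actual decoder the
# candidates come from a language model; here they are random vectors.)
n_feat, n_vox = 8, 100
W = rng.normal(size=(n_feat, n_vox))     # fitted encoding model (given)

true_features = rng.normal(size=n_feat)  # features of the real stimulus
recorded = true_features @ W + 2.0 * rng.normal(size=n_vox)

candidates = [rng.normal(size=n_feat) for _ in range(19)]
candidates.insert(0, true_features)      # candidate 0 is the real stimulus

def score(feat):
    # Correlation between predicted and recorded responses.
    return np.corrcoef(feat @ W, recorded)[0, 1]

best = int(np.argmax([score(c) for c in candidates]))
# The true stimulus (index 0) wins: its predicted response correlates
# best with the recording despite the measurement noise.
```

Note how this framing also explains the privacy result reported above: if a subject's attention decouples the recorded responses from the stimulus features, no candidate scores well and decoding fails.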
Affiliation(s)
- Jerry Tang
- Department of Computer Science, The University of Texas at Austin, Austin, TX, USA
- Amanda LeBel
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Shailee Jain
- Department of Computer Science, The University of Texas at Austin, Austin, TX, USA
- Alexander G Huth
- Department of Computer Science, The University of Texas at Austin, Austin, TX, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, USA
35
Kramer MA, Hebart MN, Baker CI, Bainbridge WA. The features underlying the memorability of objects. SCIENCE ADVANCES 2023; 9:eadd2981. [PMID: 37126552 PMCID: PMC10132746 DOI: 10.1126/sciadv.add2981] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Accepted: 03/29/2023] [Indexed: 05/03/2023]
Abstract
What makes certain images more memorable than others? While much of memory research has focused on participant effects, recent studies using a stimulus-centric perspective have sparked debate on the determinants of memory, including the roles of semantic and visual features and whether the most prototypical or atypical items are best remembered. Prior studies have typically relied on constrained stimulus sets, limiting a generalized view of the features underlying what we remember. Here, we collected more than 1 million memory ratings for a naturalistic dataset of 26,107 object images designed to comprehensively sample concrete objects. We establish a model of object features that is predictive of image memorability and examined whether memorability could be accounted for by the typicality of the objects. We find that semantic features exert a stronger influence than perceptual features on what we remember and that the relationship between memorability and typicality is more complex than a simple positive or negative association alone.
Affiliation(s)
- Max A. Kramer
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Martin N. Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Chris I. Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Wilma A. Bainbridge
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Neuroscience Institute, University of Chicago, Chicago, IL, USA
36
Deniz F, Tseng C, Wehbe L, Dupré la Tour T, Gallant JL. Semantic Representations during Language Comprehension Are Affected by Context. J Neurosci 2023; 43:3144-3158. [PMID: 36973013 PMCID: PMC10146529 DOI: 10.1523/jneurosci.2459-21.2023] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 02/17/2023] [Accepted: 02/26/2023] [Indexed: 03/29/2023] Open
Abstract
The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.SIGNIFICANCE STATEMENT Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. 
Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuro-imaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.
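The voxelwise encoding-model approach used throughout this study reduces, in outline, to ridge-regressing each voxel's response onto stimulus features and scoring predictions on held-out data. A self-contained sketch on synthetic data (the feature and voxel counts, noise level, and single regularization value are illustrative only; the real pipeline tunes regularization per voxel):

```python
import numpy as np

rng = np.random.default_rng(4)

# Voxelwise encoding sketch: regress each voxel's response onto stimulus
# features (e.g., semantic features of a narrated story), then test how
# well the fitted model predicts responses to held-out stimuli.
n_train, n_test, n_feat, n_vox = 400, 100, 12, 6
F_train = rng.normal(size=(n_train, n_feat))
F_test = rng.normal(size=(n_test, n_feat))
true_W = rng.normal(size=(n_feat, n_vox))
Y_train = F_train @ true_W + 2.0 * rng.normal(size=(n_train, n_vox))
Y_test = F_test @ true_W + 2.0 * rng.normal(size=(n_test, n_vox))

# Closed-form ridge fit for all voxels at once: W = (F'F + aI)^-1 F'Y
alpha = 1.0
W_hat = np.linalg.solve(F_train.T @ F_train + alpha * np.eye(n_feat),
                        F_train.T @ Y_train)
pred = F_test @ W_hat

# Per-voxel prediction performance on held-out data.
r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
# Voxels whose r is reliably above zero "represent" the feature space;
# comparing r across stimulus conditions is the logic of this study.
```

In this framework, "context improves semantic representation" cashes out as higher held-out `r` for the same feature space when the model is fit on richer-context stimuli.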
Affiliation(s)
- Fatma Deniz
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin 10623, Germany
- Christine Tseng
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Tom Dupré la Tour
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Department of Psychology, University of California, Berkeley, California 94720
37
Nakai T, Nishimoto S. Artificial neural network modelling of the neural population code underlying mathematical operations. Neuroimage 2023; 270:119980. [PMID: 36848969 DOI: 10.1016/j.neuroimage.2023.119980] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2022] [Revised: 02/10/2023] [Accepted: 02/23/2023] [Indexed: 02/28/2023] Open
Abstract
Mathematical operations have long been regarded as a sparse, symbolic process in neuroimaging studies. In contrast, advances in artificial neural networks (ANN) have enabled extracting distributed representations of mathematical operations. Recent neuroimaging studies have compared distributed representations of the visual, auditory and language domains in ANNs and biological neural networks (BNNs). However, such a relationship has not yet been examined in mathematics. Here we hypothesise that ANN-based distributed representations can explain brain activity patterns of symbolic mathematical operations. We used the fMRI data of a series of mathematical problems with nine different combinations of operators to construct voxel-wise encoding/decoding models using both sparse operator and latent ANN features. Representational similarity analysis demonstrated shared representations between ANN and BNN, an effect particularly evident in the intraparietal sulcus. Feature-brain similarity (FBS) analysis served to reconstruct a sparse representation of mathematical operations based on distributed ANN features in each cortical voxel. Such reconstruction was more efficient when using features from deeper ANN layers. Moreover, latent ANN features allowed the decoding of novel operators not used during model training from brain activity. The current study provides novel insights into the neural code underlying mathematical thought.
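Representational similarity analysis, as used here to relate ANN and brain representations, compares the pairwise dissimilarity structure of two sets of condition patterns rather than the patterns themselves. A minimal sketch with synthetic data; the nine "operator" conditions and all dimensions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (conditions x features)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Nine synthetic 'operator' conditions: ANN-layer features, and brain-like
# patterns that are a noisy linear readout of those features
ann = rng.standard_normal((9, 64))
brain = ann @ rng.standard_normal((64, 40)) + 0.1 * rng.standard_normal((9, 40))

score = rsa_score(rdm(ann), rdm(brain))  # shared representational geometry
```

A high score indicates that conditions close together in the ANN's feature space are also close together in the measured patterns, without requiring any voxel-to-unit mapping.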
Affiliation(s)
- Tomoya Nakai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Lyon Neuroscience Research Center (CRNL), INSERM U1028 - CNRS UMR5292, University of Lyon, Bron, France.
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; Graduate School of Medicine, Osaka University, Suita, Japan
38
Tambini A, Miller J, Ehlert L, Kiyonaga A, D’Esposito M. Structured memory representations develop at multiple time scales in hippocampal-cortical networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.04.06.535935. [PMID: 37066263 PMCID: PMC10104124 DOI: 10.1101/2023.04.06.535935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/18/2023]
Abstract
Influential views of systems memory consolidation posit that the hippocampus rapidly forms representations of specific events, while neocortical networks extract regularities across events, forming the basis of schemas and semantic knowledge. Neocortical extraction of schematic memory representations is thought to occur on a protracted timescale of months, especially for information that is unrelated to prior knowledge. However, this theorized evolution of memory representations across extended timescales, and differences in the temporal dynamics of consolidation across brain regions, lack reliable empirical support. To examine the temporal dynamics of memory representations, we repeatedly exposed human participants to structured information via sequences of fractals, while undergoing longitudinal fMRI for three months. Sequence-specific activation patterns emerged in the hippocampus during the first 1-2 weeks of learning, followed one week later by high-level visual cortex, and subsequently the medial prefrontal and parietal cortices. Schematic, sequence-general representations emerged in the prefrontal cortex after 3 weeks of learning, followed by the medial temporal lobe and anterior temporal cortex. Moreover, hippocampal and most neocortical representations showed sustained rather than time-limited dynamics, suggesting that representations tend to persist across learning. These results show that specific hippocampal representations emerge early, followed by both specific and schematic representations at a gradient of timescales across hippocampal-cortical networks as learning unfolds. Thus, memory representations do not exist only in specific brain regions at a given point in time, but are simultaneously present at multiple levels of abstraction across hippocampal-cortical networks.
Affiliation(s)
- Arielle Tambini
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY
- Jacob Miller
- Wu Tsai Institute, Department of Psychiatry, Yale University, New Haven, CT
- Luke Ehlert
- Department of Neurobiology and Behavior, University of California, Irvine, CA
- Anastasia Kiyonaga
- Department of Cognitive Science, University of California, San Diego, CA
- Mark D'Esposito
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA
- Department of Psychology, University of California, Berkeley, CA
39
Noah S, Meyyappan S, Ding M, Mangun GR. Time Courses of Attended and Ignored Object Representations. J Cogn Neurosci 2023; 35:645-658. [PMID: 36735619 PMCID: PMC10024573 DOI: 10.1162/jocn_a_01972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Selective attention prioritizes information that is relevant to behavioral goals. Previous studies have shown that attended visual information is processed and represented more efficiently, but distracting visual information is not fully suppressed, and may also continue to be represented in the brain. In natural vision, to-be-attended and to-be-ignored objects may be present simultaneously in the scene. Understanding precisely how each is represented in the visual system, and how these neural representations evolve over time, remains a key goal in cognitive neuroscience. In this study, we recorded EEG while participants performed a cued object-based attention task that involved attending to target objects and ignoring simultaneously presented and spatially overlapping distractor objects. We performed support vector machine classification on the stimulus-evoked EEG data to separately track the temporal dynamics of target and distractor representations. We found that (1) both target and distractor objects were decodable during the early phase of object processing (∼100 msec to ∼200 msec after target onset), and (2) the representations of both objects were sustained over time, remaining decodable above chance until ∼1000-msec latency. However, (3) the distractor object information faded significantly beginning after about 300-msec latency. These findings provide information about the fate of attended and ignored visual information in complex scene perception.
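Time-resolved decoding of the kind described above trains and tests a classifier separately at each timepoint, tracing when target and distractor information is present in the signal. The study used support vector machines; the sketch below substitutes a simple cross-validated nearest-centroid classifier so it stays self-contained, and all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def decode_timecourse(X, y, n_folds=5):
    """Cross-validated decoding accuracy at each timepoint, using a
    nearest-centroid classifier (a simple stand-in for an SVM).
    X: trials x channels x timepoints; y: binary labels per trial."""
    n_trials, _, n_time = X.shape
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    acc = np.zeros(n_time)
    for t in range(n_time):
        correct = 0
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            c0 = X[train][y[train] == 0, :, t].mean(axis=0)  # class centroids
            c1 = X[train][y[train] == 1, :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / n_trials
    return acc

# Toy EEG epochs: class-specific signal only in the second half of the epoch
n_trials, n_chan, n_time = 80, 16, 20
y = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_chan, n_time))
X[y == 1, :, n_time // 2:] += rng.standard_normal(n_chan)[:, None]

acc = decode_timecourse(X, y)  # near chance early, above chance late
```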
Affiliation(s)
- Sean Noah
- University of California, Davis
- University of California, Berkeley
40
Santavirta S, Karjalainen T, Nazari-Farsani S, Hudson M, Putkinen V, Seppälä K, Sun L, Glerean E, Hirvonen J, Karlsson HK, Nummenmaa L. Functional organization of social perception in the human brain. Neuroimage 2023; 272:120025. [PMID: 36958619 PMCID: PMC10112277 DOI: 10.1016/j.neuroimage.2023.120025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 03/07/2023] [Accepted: 03/11/2023] [Indexed: 03/25/2023] Open
Abstract
Humans rapidly extract diverse and complex information from ongoing social interactions, but the perceptual and neural organization of the different aspects of social perception remains unresolved. We showed short movie clips with rich social content to 97 healthy participants while their haemodynamic brain activity was measured with fMRI. The clips were annotated moment-to-moment for a large set of social features, and 45 of the features were rated reliably across annotators. Cluster analysis of the social features revealed that 13 dimensions were sufficient for describing the social perceptual space. Three different analysis methods were used to map the social perceptual processes in the human brain. Regression analysis mapped regional neural response profiles for different social dimensions. Multivariate pattern analysis then established the spatial specificity of the responses, and intersubject correlation analysis connected social perceptual processing with neural synchronization. The results revealed a gradient in the processing of social information in the brain. Posterior temporal and occipital regions were broadly tuned to most social dimensions, and the classifier revealed that these responses showed spatial specificity for social dimensions; in contrast, Heschl's gyri and parietal areas were also broadly associated with different social signals, yet the spatial patterns of responses did not differentiate social dimensions. Frontal and subcortical regions responded only to a limited number of social dimensions, and the spatial response patterns did not differentiate social dimensions. Altogether, these results highlight the distributed nature of social processing in the brain.
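Intersubject correlation analysis, one of the three methods mentioned, measures how strongly a region's timecourse is driven by the shared stimulus rather than idiosyncratic noise. A minimal leave-one-out sketch on synthetic timecourses; the subject count and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def isc(data):
    """Leave-one-out intersubject correlation per subject.
    data: subjects x timepoints (a single voxel or region)."""
    n_sub = data.shape[0]
    out = np.zeros(n_sub)
    for s in range(n_sub):
        others = np.delete(data, s, axis=0).mean(axis=0)  # average of all other subjects
        out[s] = np.corrcoef(data[s], others)[0, 1]
    return out

# Toy data: a shared stimulus-driven signal plus subject-specific noise
shared = rng.standard_normal(100)
data = shared + 0.8 * rng.standard_normal((10, 100))

isc_scores = isc(data)  # high values indicate stimulus-locked synchronization
```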
Affiliation(s)
- Severi Santavirta
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland.
- Tomi Karjalainen
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Sanaz Nazari-Farsani
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Matthew Hudson
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; School of Psychology, University of Plymouth, Plymouth, United Kingdom
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Kerttu Seppälä
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Medical Physics, Turku University Hospital, Turku, Finland
- Lihua Sun
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Nuclear Medicine, Pudong Hospital, Fudan University, Shanghai, China
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Jussi Hirvonen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland; Medical Imaging Center, Department of Radiology, Tampere University and Tampere University Hospital, Tampere, Finland
- Henry K Karlsson
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Psychology, University of Turku, Turku, Finland
41
Frisby SL, Halai AD, Cox CR, Lambon Ralph MA, Rogers TT. Decoding semantic representations in mind and brain. Trends Cogn Sci 2023; 27:258-281. [PMID: 36631371 DOI: 10.1016/j.tics.2022.12.006] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Revised: 12/12/2022] [Accepted: 12/13/2022] [Indexed: 01/11/2023]
Abstract
A key goal for cognitive neuroscience is to understand the neurocognitive systems that support semantic memory. Recent multivariate analyses of neuroimaging data have contributed greatly to this effort, but the rapid development of these novel approaches has made it difficult to track the diversity of findings and to understand how and why they sometimes lead to contradictory conclusions. We address this challenge by reviewing cognitive theories of semantic representation and their neural instantiation. We then consider contemporary approaches to neural decoding and assess which types of representation each can possibly detect. The analysis suggests why the results are heterogeneous and identifies crucial links between cognitive theory, data collection, and analysis that can help to better connect neuroimaging to mechanistic theories of semantic cognition.
Affiliation(s)
- Saskia L Frisby
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK.
- Ajay D Halai
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, LA 70803, USA
- Matthew A Lambon Ralph
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK
- Timothy T Rogers
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
42
Zhao J. Network neurosurgery. Chin Neurosurg J 2023; 9:4. [PMID: 36732790 PMCID: PMC9893634 DOI: 10.1186/s41016-023-00317-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2022] [Accepted: 01/03/2023] [Indexed: 02/04/2023] Open
Affiliation(s)
- Jizong Zhao
- China National Clinical Research Center for Neurological Diseases, Beijing, China
- National Center for Neurological Disorders, Beijing, China
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
43
Bonham LW, Geier EG, Sirkis DW, Leong JK, Ramos EM, Wang Q, Karydas A, Lee SE, Sturm VE, Sawyer RP, Friedberg A, Ichida JK, Gitler AD, Sugrue L, Cordingley M, Bee W, Weber E, Kramer JH, Rankin KP, Rosen HJ, Boxer AL, Seeley WW, Ravits J, Miller BL, Yokoyama JS. Radiogenomics of C9orf72 Expansion Carriers Reveals Global Transposable Element Derepression and Enables Prediction of Thalamic Atrophy and Clinical Impairment. J Neurosci 2023. [PMID: 36446586 DOI: 10.1101/2022.04.29.490104] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/10/2023] Open
Abstract
Hexanucleotide repeat expansion (HRE) within C9orf72 is the most common genetic cause of frontotemporal dementia (FTD). Thalamic atrophy occurs in both sporadic and familial FTD but is thought to distinctly affect HRE carriers. Separately, emerging evidence suggests widespread derepression of transposable elements (TEs) in the brain in several neurodegenerative diseases, including C9orf72 HRE-mediated FTD (C9-FTD). Whether TE activation can be measured in peripheral blood and how the reduction in peripheral C9orf72 expression observed in HRE carriers relates to atrophy and clinical impairment remain unknown. We used FreeSurfer software to assess the effects of C9orf72 HRE and clinical diagnosis (n = 78 individuals, male and female) on atrophy of thalamic nuclei. We also generated a novel, human, whole-blood RNA-sequencing dataset to determine the relationships among peripheral C9orf72 expression, TE activation, thalamic atrophy, and clinical severity (n = 114 individuals, male and female). We confirmed global thalamic atrophy and reduced C9orf72 expression in HRE carriers. Moreover, we identified disproportionate atrophy of the right mediodorsal lateral nucleus in HRE carriers and showed that C9orf72 expression was associated with clinical severity, independent of thalamic atrophy. Strikingly, we found global peripheral activation of TEs, including the human endogenous LINE-1 element L1HS. L1HS levels were associated with atrophy of multiple pulvinar nuclei, a thalamic region implicated in C9-FTD. Integration of peripheral transcriptomic and neuroimaging data from human HRE carriers revealed atrophy of specific thalamic nuclei, demonstrated that C9orf72 levels relate to clinical severity, and identified marked derepression of TEs, including L1HS, which predicted atrophy of FTD-relevant thalamic nuclei.

SIGNIFICANCE STATEMENT Pathogenic repeat expansion in C9orf72 is the most frequent genetic cause of FTD and amyotrophic lateral sclerosis (ALS; C9-FTD/ALS). The clinical, neuroimaging, and pathologic features of C9-FTD/ALS are well characterized, whereas the intersections of transcriptomic dysregulation and brain structure remain largely unexplored. Herein, we used a novel radiogenomic approach to examine the relationship between peripheral blood transcriptomics and thalamic atrophy, a neuroimaging feature disproportionately impacted in C9-FTD/ALS. We confirmed reduction of C9orf72 in blood and found broad dysregulation of transposable elements (genetic elements typically repressed in the human genome) in symptomatic C9orf72 expansion carriers, which was associated with atrophy of thalamic nuclei relevant to FTD. C9orf72 expression was also associated with clinical severity, suggesting that peripheral C9orf72 levels capture disease-relevant information.
Affiliation(s)
- Luke W Bonham
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Ethan G Geier
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Transposon Therapeutics, San Diego, California 92122
- Daniel W Sirkis
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Josiah K Leong
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Psychological Science, University of Arkansas, Fayetteville, Arkansas 72701
- Eliana Marisa Ramos
- Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
- Qing Wang
- Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
- Anna Karydas
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Suzee E Lee
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Virginia E Sturm
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Russell P Sawyer
- Department of Neurology and Rehabilitation Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Adit Friedberg
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Justin K Ichida
- Department of Stem Cell Biology and Regenerative Medicine, Keck School of Medicine of USC, University of Southern California, Los Angeles, California 90033
- Aaron D Gitler
- Department of Genetics, Stanford University School of Medicine, Stanford, California 94305
- Leo Sugrue
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Walter Bee
- Transposon Therapeutics, San Diego, California 92122
- Eckard Weber
- Transposon Therapeutics, San Diego, California 92122
- Joel H Kramer
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Katherine P Rankin
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Howard J Rosen
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Adam L Boxer
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- William W Seeley
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Pathology, University of California, San Francisco, San Francisco, California 94158
- John Ravits
- Department of Neurosciences, ALS Translational Research, University of California, San Diego, La Jolla, California 92093
- Bruce L Miller
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
- Jennifer S Yokoyama
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
44
Steel A, Garcia BD, Silson EH, Robertson CE. Evaluating the efficacy of multi-echo ICA denoising on model-based fMRI. Neuroimage 2022; 264:119723. [PMID: 36328274 DOI: 10.1016/j.neuroimage.2022.119723] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 09/30/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022] Open
Abstract
fMRI is an indispensable tool for neuroscience investigation, but this technique is limited by multiple sources of physiological and measurement noise. These noise sources are particularly problematic for analysis techniques that require high signal-to-noise ratio for stable model fitting, such as voxel-wise modeling. Multi-echo data acquisition in combination with echo-time dependent ICA denoising (ME-ICA) represents one promising strategy to mitigate physiological and hardware-related noise sources as well as motion-related artifacts. However, most studies employing ME-ICA to date are resting-state fMRI studies, and therefore we have a limited understanding of the impact of ME-ICA on complex task or model-based fMRI paradigms. Here, we addressed this knowledge gap by comparing data quality and model fitting performance of data acquired during a visual population receptive field (pRF) mapping (N = 13 participants) experiment after applying one of three preprocessing procedures: ME-ICA, optimally combined multi-echo data without ICA-denoising, and typical single echo processing. As expected, multi-echo fMRI improved temporal signal-to-noise compared to single echo fMRI, with ME-ICA amplifying the improvement compared to optimal combination alone. However, unexpectedly, this boost in temporal signal-to-noise did not directly translate to improved model fitting performance: compared to single echo acquisition, model fitting was only improved after ICA-denoising. Specifically, compared to single echo acquisition, ME-ICA resulted in improved variance explained by our pRF model throughout the visual system, including anterior regions of the temporal and parietal lobes where SNR is typically low, while optimal combination without ICA did not. ME-ICA also improved reliability of parameter estimates compared to single echo and optimally combined multi-echo data without ICA-denoising. 
Collectively, these results suggest that ME-ICA is effective for denoising task-based fMRI data for modeling analyses and maintains the integrity of the original data. Therefore, ME-ICA may be beneficial for complex fMRI experiments, including voxel-wise modeling and naturalistic paradigms.
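Temporal signal-to-noise ratio (tSNR), the data-quality metric compared across preprocessing pipelines here, is simply each voxel's temporal mean divided by its temporal standard deviation. The sketch below uses a plain average of echoes as a stand-in for the echo-time-weighted "optimal combination" step; all numbers are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def tsnr(timeseries):
    """Temporal SNR per voxel: mean over time divided by std over time.
    timeseries: timepoints x voxels."""
    return timeseries.mean(axis=0) / timeseries.std(axis=0)

# Synthetic voxels: constant baseline plus independent thermal noise
n_t, n_vox, baseline, sigma = 300, 50, 100.0, 5.0
single_echo = baseline + sigma * rng.standard_normal((n_t, n_vox))
echoes = [baseline + sigma * rng.standard_normal((n_t, n_vox)) for _ in range(3)]
combined = np.mean(echoes, axis=0)  # stand-in for weighted optimal combination

# Averaging independent echoes reduces noise, so tSNR rises (roughly sqrt(3) here)
gain = tsnr(combined).mean() / tsnr(single_echo).mean()
```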
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, US.
- Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, US
- Edward H Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
- Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, US
45
Prince JS, Charest I, Kurzawski JW, Pyles JA, Tarr MJ, Kay KN. Improving the accuracy of single-trial fMRI response estimates using GLMsingle. eLife 2022; 11:77599. [PMID: 36444984 PMCID: PMC9708069 DOI: 10.7554/elife.77599] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 10/15/2022] [Indexed: 11/30/2022] Open
Abstract
Advances in artificial intelligence have inspired a paradigm shift in human neuroscience, yielding large-scale functional magnetic resonance imaging (fMRI) datasets that provide high-resolution brain responses to thousands of naturalistic visual stimuli. Because such experiments necessarily involve brief stimulus durations and few repetitions of each stimulus, achieving sufficient signal-to-noise ratio can be a major challenge. We address this challenge by introducing GLMsingle, a scalable, user-friendly toolbox available in MATLAB and Python that enables accurate estimation of single-trial fMRI responses (glmsingle.org). Requiring only fMRI time-series data and a design matrix as inputs, GLMsingle integrates three techniques for improving the accuracy of trial-wise general linear model (GLM) beta estimates. First, for each voxel, a custom hemodynamic response function (HRF) is identified from a library of candidate functions. Second, cross-validation is used to derive a set of noise regressors from voxels unrelated to the experiment. Third, to improve the stability of beta estimates for closely spaced trials, betas are regularized on a voxel-wise basis using ridge regression. Applying GLMsingle to the Natural Scenes Dataset and BOLD5000, we find that GLMsingle substantially improves the reliability of beta estimates across visually-responsive cortex in all subjects. Comparable improvements in reliability are also observed in a smaller-scale auditory dataset from the StudyForrest experiment. These improvements translate into tangible benefits for higher-level analyses relevant to systems and cognitive neuroscience. We demonstrate that GLMsingle: (i) helps decorrelate response estimates between trials nearby in time; (ii) enhances representational similarity between subjects within and across datasets; and (iii) boosts one-versus-many decoding of visual stimuli. 
GLMsingle is a publicly available tool that can significantly improve the quality of past, present, and future neuroimaging datasets sampling brain activity across many experimental conditions.
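The first of the three techniques described above (identifying a custom HRF per voxel from a library of candidates) can be illustrated with a toy gamma-like HRF family. This is not the GLMsingle API, just a sketch of the idea; the HRF shapes, noise level, and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_hrf(peak, n=20):
    """Toy gamma-like HRF peaking at `peak` samples, normalized to max 1."""
    t = np.arange(n, dtype=float)
    h = t ** peak * np.exp(-t)
    return h / h.max()

# Simulate one voxel: random event onsets convolved with a 'true' HRF, plus noise
n_t = 200
onsets = (rng.random(n_t) < 0.1).astype(float)
voxel = np.convolve(onsets, make_hrf(5.0))[:n_t] + 0.05 * rng.standard_normal(n_t)

# Select the library HRF whose regression fit explains the most variance
best_peak, best_r2 = None, -np.inf
for peak in [3.0, 4.0, 5.0, 6.0, 7.0]:
    reg = np.convolve(onsets, make_hrf(peak))[:n_t]
    X = np.column_stack([reg, np.ones(n_t)])   # regressor plus intercept
    beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
    r2 = 1.0 - (voxel - X @ beta).var() / voxel.var()
    if r2 > best_r2:
        best_peak, best_r2 = peak, r2
```

In this simulation the selection recovers the generating HRF because a mismatched HRF shape leaves structured residual variance that an overall scaling cannot absorb.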
Affiliation(s)
- Jacob S Prince
- Department of Psychology, Harvard University, Cambridge, United States
- Ian Charest
- Center for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- cerebrUM, Département de Psychologie, Université de Montréal, Montréal, Canada
- Jan W Kurzawski
- Department of Psychology, New York University, New York, United States
- John A Pyles
- Center for Human Neuroscience, Department of Psychology, University of Washington, Seattle, United States
- Michael J Tarr
- Department of Psychology, Neuroscience Institute, Carnegie Mellon University, Pittsburgh, United States
- Kendrick N Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, United States
46
Miller JA, Tambini A, Kiyonaga A, D'Esposito M. Long-term learning transforms prefrontal cortex representations during working memory. Neuron 2022; 110:3805-3819.e6. [PMID: 36240768 PMCID: PMC9768795 DOI: 10.1016/j.neuron.2022.09.019] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 06/28/2022] [Accepted: 09/14/2022] [Indexed: 11/06/2022]
Abstract
The role of the lateral prefrontal cortex (lPFC) in working memory (WM) is debated. Non-human primate (NHP) electrophysiology shows that the lPFC stores WM representations, but human neuroimaging suggests that the lPFC controls WM content in sensory cortices. These accounts are confounded by differences in task training and stimulus exposure. We tested whether long-term training alters lPFC function by densely sampling WM activity using functional MRI. Over 3 months, participants trained on both a WM and serial reaction time (SRT) task, wherein fractal stimuli were embedded within sequences. WM performance improved for trained (but not novel) fractals and, neurally, delay activity increased in distributed lPFC voxels across learning. Item-level WM representations became detectable within lPFC patterns, and lPFC activity reflected sequence relationships from the SRT task. These findings demonstrate that human lPFC develops stimulus-selective responses with learning, and WM representations are shaped by long-term experience, which could reconcile competing accounts of WM functioning.
Affiliation(s)
- Jacob A Miller
- Wu Tsai Institute, Department of Psychiatry, Yale University, New Haven, CT, USA.
- Arielle Tambini
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Anastasia Kiyonaga
- Department of Cognitive Science, University of California, San Diego, CA, USA
- Mark D'Esposito
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, CA, USA
47
Representations and decodability of diverse cognitive functions are preserved across the human cortex, cerebellum, and subcortex. Commun Biol 2022; 5:1245. [PMCID: PMC9663596 DOI: 10.1038/s42003-022-04221-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 11/03/2022] [Indexed: 11/16/2022] Open
Abstract
Which part of the brain contributes to our complex cognitive processes? Studies have revealed contributions of the cerebellum and subcortex to higher-order cognitive functions; however, it has been unclear whether such functional representations are preserved across the cortex, cerebellum, and subcortex. In this study, we use functional magnetic resonance imaging data with 103 cognitive tasks and construct three voxel-wise encoding and decoding models independently using cortical, cerebellar, and subcortical voxels. Representational similarity analysis reveals that the structure of task representations is preserved across the three brain parts. Principal component analysis visualizes distinct organizations of abstract cognitive functions in each part of the cerebellum and subcortex. More than 90% of the cognitive tasks are decodable from cerebellar and subcortical activity, even for novel tasks not included in model training. Furthermore, we show that the cerebellum and subcortex have sufficient information to reconstruct activity in the cerebral cortex.
48
Aho K, Roads BD, Love BC. System alignment supports cross-domain learning and zero-shot generalisation. Cognition 2022; 227:105200. [PMID: 35717766 PMCID: PMC10469439 DOI: 10.1016/j.cognition.2022.105200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 05/31/2022] [Accepted: 06/02/2022] [Indexed: 11/23/2022]
Abstract
Recent findings suggest conceptual relationships hold across modalities. For instance, if two concepts occur in similar linguistic contexts, they also likely occur in similar visual contexts. These similarity structures may provide a valuable signal for alignment when learning to map between domains, such as when learning the names of objects. To assess this possibility, we conducted a paired-associate learning experiment in which participants mapped objects that varied on two visual features to locations that varied along two spatial dimensions. We manipulated whether the featural and spatial systems were aligned or misaligned. Although system alignment was not required to complete this supervised learning task, we found that participants learned more efficiently when systems aligned and that aligned systems facilitated zero-shot generalisation. We fit a variety of models to individuals' responses and found that models which included an offline unsupervised alignment mechanism best accounted for human performance. Our results provide empirical evidence that people align entire representation systems to accelerate learning, even when learning seemingly arbitrary associations between two domains.
Affiliation(s)
- Kaarina Aho
- University College London, Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, United Kingdom.
- Brett D Roads
- University College London, Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, United Kingdom
- Bradley C Love
- University College London, Department of Experimental Psychology, 26 Bedford Way, London WC1H 0AP, United Kingdom; The Alan Turing Institute, 96 Euston Road, London NW1 2DB, United Kingdom
49
Sereno MI, Sood MR, Huang RS. Topological Maps and Brain Computations From Low to High. Front Syst Neurosci 2022; 16:787737. [PMID: 35747394 PMCID: PMC9210993 DOI: 10.3389/fnsys.2022.787737] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 03/29/2022] [Indexed: 01/02/2023] Open
Abstract
We first briefly summarize data from microelectrode studies on visual maps in non-human primates and other mammals, and characterize differences among the features of the approximately topological maps in the three main sensory modalities. We then explore the almost 50% of human neocortex that contains straightforward topological visual, auditory, and somatomotor maps by presenting a new parcellation as well as a movie atlas of cortical area maps on the FreeSurfer average surface, fsaverage. Third, we review data on moveable map phenomena as well as a recent study showing that cortical activity during sensorimotor actions may involve spatially locally coherent traveling wave and bump activity. Finally, by analogy with remapping phenomena and sensorimotor activity, we speculate briefly on the testable possibility that coherent localized spatial activity patterns might be able to ‘escape’ from topologically mapped cortex during ‘serial assembly of content’ operations such as scene and language comprehension, to form composite ‘molecular’ patterns that can move across some cortical areas and possibly return to topologically mapped cortex to generate motor output there.
Collapse
Affiliation(s)
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Mariam Reeny Sood
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Macau, Macao SAR, China
50
The connectional anatomy of visual mental imagery: evidence from a patient with left occipito-temporal damage. Brain Struct Funct 2022; 227:3075-3083. [PMID: 35622159 DOI: 10.1007/s00429-022-02505-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 04/29/2022] [Indexed: 01/14/2023]
Abstract
Most of us can use our "mind's eye" to mentally visualize things that are not in our direct line of sight, an ability known as visual mental imagery. Extensive left temporal damage can impair patients' visual mental imagery experience, but the critical locus of lesion is unknown. Our recent meta-analysis of 27 fMRI studies of visual mental imagery highlighted a well-delimited region in the left lateral midfusiform gyrus, which was consistently activated during visual mental imagery, and which we called the Fusiform Imagery Node (FIN). Here, we describe the connectional anatomy of FIN in neurotypical participants and in RDS, a right-handed patient with an extensive occipito-temporal stroke in the left hemisphere. The stroke provoked right homonymous hemianopia, alexia without agraphia, and color anomia. Despite these deficits, RDS had normal subjective experience of visual mental imagery and reasonably preserved behavioral performance on tests of visual mental imagery of object shape, object color, letters, faces, and spatial relationships. We found that the FIN was spared by the lesion. We then assessed the connectional anatomy of the FIN in the MNI space and in the patient's native space, by visualizing the fibers of the inferior longitudinal fasciculus (ILF) and of the arcuate fasciculus (AF) passing through the FIN. In both spaces, the ILF connected the FIN with the anterior temporal lobe, and the AF linked it with frontal regions. Our evidence is consistent with the hypothesis that the FIN is a node of a brain network dedicated to voluntary visual mental imagery. The FIN could act as a bridge between visual information and semantic knowledge processed in the anterior temporal lobe and in the language circuits.