1. Tarder-Stoll H, Baldassano C, Aly M. The brain hierarchically represents the past and future during multistep anticipation. Nat Commun 2024;15:9094. PMID: 39438448; PMCID: PMC11496687; DOI: 10.1038/s41467-024-53293-3. Received 08/18/2023; accepted 10/01/2024.
Abstract
Memory for temporal structure enables both planning of future events and retrospection of past events. We investigated how the brain flexibly represents extended temporal sequences into the past and future during anticipation. Participants learned sequences of environments in immersive virtual reality. Pairs of sequences had the same environments in a different order, enabling context-specific learning. During fMRI, participants anticipated upcoming environments multiple steps into the future in a given sequence. Temporal structure was represented in the hippocampus and across higher-order visual regions (1) bidirectionally, with graded representations into the past and future and (2) hierarchically, with further events into the past and future represented in successively more anterior brain regions. In hippocampus, these bidirectional representations were context-specific, and suppression of far-away environments predicted response time costs in anticipation. Together, this work sheds light on how we flexibly represent sequential structure to enable planning over multiple timescales.
Affiliation(s)
- Hannah Tarder-Stoll
- Department of Psychology, Columbia University, New York, USA
- Rotman Research Institute, Baycrest Health Sciences, Toronto, Canada
- Mariam Aly
- Department of Psychology, Columbia University, New York, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
2. Tarder-Stoll H, Baldassano C, Aly M. The brain hierarchically represents the past and future during multistep anticipation. bioRxiv [Preprint] 2024:2023.07.24.550399. PMID: 37546761; PMCID: PMC10402095; DOI: 10.1101/2023.07.24.550399.
3. Lahner B, Dwivedi K, Iamshchinina P, Graumann M, Lascelles A, Roig G, Gifford AT, Pan B, Jin S, Ratan Murty NA, Kay K, Oliva A, Cichy R. Modeling short visual events through the BOLD moments video fMRI dataset and metadata. Nat Commun 2024;15:6241. PMID: 39048577; PMCID: PMC11269733; DOI: 10.1038/s41467-024-50310-3. Received 08/14/2023; accepted 07/04/2024.
Abstract
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Affiliation(s)
- Benjamin Lahner
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Kshitij Dwivedi
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Department of Computer Science, Goethe University Frankfurt, Frankfurt am Main, Germany
- Polina Iamshchinina
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Monika Graumann
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Alex Lascelles
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Gemma Roig
- Department of Computer Science, Goethe University Frankfurt, Frankfurt am Main, Germany
- The Hessian Center for AI (hessian.AI), Darmstadt, Germany
- Bowen Pan
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- SouYoung Jin
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- N Apurva Ratan Murty
- Department of Brain and Cognitive Science, MIT, Cambridge, MA, USA
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Kendrick Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Radoslaw Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
4. Deen B, Husain G, Freiwald WA. A familiar face and person processing area in the human temporal pole. Proc Natl Acad Sci U S A 2024;121:e2321346121. PMID: 38954551; PMCID: PMC11252731; DOI: 10.1073/pnas.2321346121. Received 12/04/2023; accepted 05/24/2024.
Abstract
How does the brain process the faces of familiar people? Neuropsychological studies have argued for an area of the temporal pole (TP) linking faces with person identities, but magnetic susceptibility artifacts in this region have hampered its study with fMRI. Using data acquisition and analysis methods optimized to overcome this artifact, we identify a familiar face response in TP, reliably observed in individual brains. This area responds strongly to visual images of familiar faces over unfamiliar faces, objects, and scenes. However, TP did not just respond to images of faces, but also to a variety of high-level social cognitive tasks, including semantic, episodic, and theory of mind tasks. The response profile of TP contrasted with a nearby region of the perirhinal cortex (PR) that responded specifically to faces, but not to social cognition tasks. TP was functionally connected with a distributed network in the association cortex associated with social cognition, while PR was functionally connected with face-preferring areas of the ventral visual cortex. This work identifies a missing link in the human face processing system: an area that specifically processes familiar faces and is well placed to integrate visual information about faces with higher-order conceptual information about other people. The results suggest that separate streams for person and face processing reach anterior temporal areas positioned at the top of the cortical hierarchy.
Affiliation(s)
- Ben Deen
- Department of Psychology and Brain Institute, Tulane University, New Orleans, LA 70118
- Laboratory of Neural Systems, The Rockefeller University, New York, NY 10065
- Gazi Husain
- Hunter College, City University of New York, New York, NY 10065
5. Li SPD, Shao J, Lu Z, McCloskey M, Park S. A scene with an invisible wall - navigational experience shapes visual scene representation. bioRxiv [Preprint] 2024:2024.07.03.601933. PMID: 39005327; PMCID: PMC11244994; DOI: 10.1101/2024.07.03.601933.
Abstract
Human navigation relies heavily on visual information. Although many previous studies have investigated how navigational information is inferred from visual features of scenes, little is understood about the impact of navigational experience on visual scene representation. In this study, we examined how navigational experience influences both behavioral and neural responses to a visual scene. During training, participants navigated virtual reality (VR) environments in which we manipulated navigational experience while holding the visual properties of scenes constant. Half of the environments allowed free navigation (navigable), while the other half featured an 'invisible wall' that prevented participants from continuing forward even though the scene was visually navigable (non-navigable). During testing, participants viewed scene images from the VR environments while completing either a behavioral perceptual identification task (Experiment 1) or an fMRI scan (Experiment 2). Behaviorally, we found that participants judged a scene pair to be significantly more visually different if their prior navigational experience varied, even after accounting for visual similarities between the scene pairs. Neurally, multivoxel patterns in the parahippocampal place area (PPA) distinguished visual scenes based on prior navigational experience alone. These results suggest that the human visual scene cortex represents information about navigability obtained through prior experience, beyond what is computable from the visual properties of the scene. Taken together, these results suggest that scene representation is modulated by prior navigational experience to help us construct a functionally meaningful visual environment.
Affiliation(s)
- Shi Pui Donald Li
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Jiayu Shao
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Zhengang Lu
- Department of Psychology, New York University, New York City, NY, USA
- Michael McCloskey
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Soojin Park
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Psychology, Yonsei University, Seoul, Republic of Korea
6. Kosakowski HL, Cohen MA, Herrera L, Nichoson I, Kanwisher N, Saxe R. Cortical Face-Selective Responses Emerge Early in Human Infancy. eNeuro 2024;11:ENEURO.0117-24.2024. PMID: 38871455; PMCID: PMC11258539; DOI: 10.1523/eneuro.0117-24.2024. Received 03/18/2024; revised 05/28/2024; accepted 06/04/2024.
Abstract
In human adults, multiple cortical regions respond robustly to faces, including the occipital face area (OFA) and fusiform face area (FFA), implicated in face perception, and the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC), implicated in higher-level social functions. When in development does face selectivity arise in each of these regions? Here, we combined two awake infant functional magnetic resonance imaging (fMRI) datasets to create a sample twice the size of previous reports (n = 65 infants; 2.6-9.6 months). Infants watched movies of faces, bodies, objects, and scenes, while fMRI data were collected. Despite variable amounts of data from each infant, individual subject whole-brain activation maps revealed responses to faces compared to nonface visual categories in the approximate location of OFA, FFA, STS, and MPFC. To determine the strength and nature of face selectivity in these regions, we used cross-validated functional region of interest analyses. Across this larger sample, face responses in OFA, FFA, STS, and MPFC were significantly greater than responses to bodies, objects, and scenes. Even the youngest infants (2-5 months) showed significantly face-selective responses in FFA, STS, and MPFC, but not OFA. These results demonstrate that face selectivity is present in multiple cortical regions within months of birth, providing powerful constraints on theories of cortical development.
Affiliation(s)
- Heather L Kosakowski
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Michael A Cohen
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Department of Psychology and Program in Neuroscience, Amherst College, Amherst, Massachusetts 01002
- Lyneé Herrera
- Psychology Department, University of Denver, Denver, Colorado 80210
- Isabel Nichoson
- Tulane Brain Institute, Tulane University, New Orleans, Louisiana 70118
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
7. Kauf C, Kim HS, Lee EJ, Jhingan N, She JS, Taliaferro M, Gibson E, Fedorenko E. Linguistic inputs must be syntactically parsable to fully engage the language network. bioRxiv [Preprint] 2024:2024.06.21.599332. PMID: 38948870; PMCID: PMC11212959; DOI: 10.1101/2024.06.21.599332.
Abstract
Human language comprehension is remarkably robust to ill-formed inputs (e.g., word transpositions). This robustness has led some to argue that syntactic parsing is largely an illusion, and that incremental comprehension is more heuristic, shallow, and semantics-based than is often assumed. However, the available data are also consistent with the possibility that humans always perform rule-like symbolic parsing and simply deploy error correction mechanisms to reconstruct ill-formed inputs when needed. We put these hypotheses to a new stringent test by examining brain responses to a) stimuli that should pose a challenge for syntactic reconstruction but allow for complex meanings to be built within local contexts through associative/shallow processing (sentences presented in a backward word order), and b) grammatically well-formed but semantically implausible sentences that should impede semantics-based heuristic processing. Using a novel behavioral syntactic reconstruction paradigm, we demonstrate that backward-presented sentences indeed impede the recovery of grammatical structure during incremental comprehension. Critically, these backward-presented stimuli elicit a relatively low response in the language areas, as measured with fMRI. In contrast, semantically implausible but grammatically well-formed sentences elicit a response in the language areas similar in magnitude to naturalistic (plausible) sentences. In other words, the ability to build syntactic structures during incremental language processing is both necessary and sufficient to fully engage the language network. Taken together, these results provide the strongest support to date for a generalized reliance of human language comprehension on syntactic parsing.
Affiliation(s)
- Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Hee So Kim
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Elizabeth J. Lee
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Niharika Jhingan
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Jingyuan Selena She
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Maya Taliaferro
- Department of Psychology, New York University, New York, NY 10012 USA
- Edward Gibson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138 USA
8. Huang C, Li A, Pang Y, Yang J, Zhang J, Wu X, Mei L. How the intrinsic functional connectivity patterns of the semantic network support semantic processing. Brain Imaging Behav 2024;18:539-554. PMID: 38261218; DOI: 10.1007/s11682-024-00849-y. Accepted 01/05/2024.
Abstract
Semantic processing, a core component of language comprehension, involves the activation of brain regions dispersed extensively across the frontal, temporal, and parietal cortices that compose the semantic network. To comprehend the functional structure of this semantic network and how it prepares for semantic processing, we investigated its intrinsic functional connectivity (FC) and the relation between this pattern and semantic processing ability in a large sample from the Human Connectome Project (HCP) dataset. We first defined a well-studied brain network for semantic processing, and then we characterized the within-network connectivity (WNC) and the between-network connectivity (BNC) within this network using a voxel-based global brain connectivity (GBC) method based on resting-state functional magnetic resonance imaging (fMRI). The results showed that 97.73% of the voxels in the semantic network displayed considerably greater WNC than BNC, demonstrating that the semantic network is a fairly encapsulated network. Moreover, multiple connector hubs in the semantic network were identified after applying the criterion of WNC > 1 SD above the mean WNC of the semantic network. More importantly, three of these connector hubs (i.e., the left anterior temporal lobe, angular gyrus, and orbital part of the inferior frontal gyrus) were reliably associated with semantic processing ability. Our findings suggest that the three identified regions use WNC as the central mechanism for supporting semantic processing and that task-independent spontaneous connectivity in the semantic network is essential for semantic processing.
Affiliation(s)
- Chengmei Huang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Aqian Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Yingdan Pang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Jiayi Yang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Jingxian Zhang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Xiaoyan Wu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, 510631, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, 510631, China
9. Bougou V, Vanhoyland M, Bertrand A, Van Paesschen W, Op De Beeck H, Janssen P, Theys T. Neuronal tuning and population representations of shape and category in human visual cortex. Nat Commun 2024;15:4608. PMID: 38816391; PMCID: PMC11139926; DOI: 10.1038/s41467-024-49078-3. Received 06/28/2023; accepted 05/22/2024.
Abstract
Object recognition and categorization are essential cognitive processes which engage considerable neural resources in the human ventral visual stream. However, the tuning properties of human ventral stream neurons for object shape and category are virtually unknown. We performed large-scale recordings of spiking activity in human Lateral Occipital Complex in response to stimuli in which the shape dimension was dissociated from the category dimension. Consistent with studies in nonhuman primates, the neuronal representations were primarily shape-based, although we also observed category-like encoding for images of animals. Surprisingly, linear decoders could reliably classify stimulus category even in data sets that were entirely shape-based. In addition, many recording sites showed an interaction between shape and category tuning. These results provide a detailed characterization of shape and category coding at the neuronal level in the human ventral visual stream, furnishing essential evidence that reconciles human imaging and macaque single-cell studies.
Affiliation(s)
- Vasiliki Bougou
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Michaël Vanhoyland
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
- Wim Van Paesschen
- Department of Neurology, University Hospitals Leuven, Leuven, Belgium
- Laboratory for Epilepsy Research, KU Leuven, Leuven, Belgium
- Hans Op De Beeck
- Laboratory Biological Psychology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Peter Janssen
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Tom Theys
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
10. Chen YY, Areti A, Yoshor D, Foster BL. Perception and Memory Reinstatement Engage Overlapping Face-Selective Regions within Human Ventral Temporal Cortex. J Neurosci 2024;44:e2180232024. PMID: 38627090; PMCID: PMC11140664; DOI: 10.1523/jneurosci.2180-23.2024. Received 11/22/2023; revised 04/03/2024; accepted 04/05/2024.
Abstract
Humans have the remarkable ability to vividly retrieve sensory details of past events. According to the theory of sensory reinstatement, during remembering, brain regions specialized for processing specific sensory stimuli are reactivated to support content-specific retrieval. Recently, several studies have emphasized transformations in the spatial organization of these reinstated activity patterns. Specifically, studies of scene stimuli suggest a clear anterior shift in the location of retrieval activations compared with the activity observed during perception. However, it is not clear that such transformations occur universally, with inconsistent evidence for other important stimulus categories, particularly faces. One challenge in addressing this question is the careful delineation of face-selective cortices, which are interdigitated with other selective regions, in configurations that spatially differ across individuals. Therefore, we conducted a multisession neuroimaging study to first carefully map individual participants' (nine males and seven females) face-selective regions within ventral temporal cortex (VTC), followed by a second session to examine the activity patterns within these regions during face memory encoding and retrieval. While face-selective regions were expectedly engaged during face perception at encoding, memory retrieval engagement exhibited a more selective and constricted reinstatement pattern within these regions, but did not show any consistent direction of spatial transformation (e.g., anteriorization). We also report on unique human intracranial recordings from VTC under the same experimental conditions. These findings highlight the importance of considering the complex configuration of category-selective cortex in elucidating principles shaping the neural transformations that occur from perception to memory.
Affiliation(s)
- Yvonne Y Chen
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Daniel Yoshor
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Brett L Foster
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
11. Molloy MF, Saygin ZM, Osher DE. Predicting high-level visual areas in the absence of task fMRI. Sci Rep 2024;14:11376. PMID: 38762549; PMCID: PMC11102456; DOI: 10.1038/s41598-024-62098-9. Received 01/03/2024; accepted 05/13/2024.
Abstract
The ventral visual stream is organized into units, or functional regions of interest (fROIs), specialized for processing high-level visual categories. Task-based fMRI scans ("localizers") are typically used to identify each individual's nuanced set of fROIs. The unique landscape of an individual's functional activation may rely in large part on their specialized connectivity patterns; recent studies corroborate this by showing that connectivity can predict individual differences in neural responses. We focus on the ventral visual stream and ask: how well can an individual's resting-state functional connectivity localize their fROIs for face, body, scene, and object perception? And are the neural processors for any particular visual category better predicted by connectivity than others, suggesting a tighter mechanistic relationship between connectivity and function? We found that, of the 18 fROIs predicted from connectivity for each subject, all but one was selective for its preferred visual category. Defining an individual's fROIs based on their connectivity patterns yielded regions that were more selective than regions identified from previous studies or atlases in nearly all cases. Overall, we found that in the absence of a domain-specific localizer task, a 10-min resting-state scan can be reliably used to define these fROIs.
Affiliation(s)
- M Fiona Molloy
- Department of Psychology, The Ohio State University, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- Zeynep M Saygin
- Department of Psychology, The Ohio State University, 1835 Neil Avenue, Columbus, OH, 43210, USA
- David E Osher
- Department of Psychology, The Ohio State University, 1835 Neil Avenue, Columbus, OH, 43210, USA
12. Tsantani M, Yon D, Cook R. Neural Representations of Observed Interpersonal Synchrony/Asynchrony in the Social Perception Network. J Neurosci 2024;44:e2009222024. PMID: 38527811; PMCID: PMC11097257; DOI: 10.1523/jneurosci.2009-22.2024. Received 10/26/2022; revised 12/19/2023; accepted 01/10/2024.
Abstract
The visual perception of individuals is thought to be mediated by a network of regions in the occipitotemporal cortex that supports specialized processing of faces, bodies, and actions. In comparison, we know relatively little about the neural mechanisms that support the perception of multiple individuals and the interactions between them. The present study sought to elucidate the visual processing of social interactions by identifying which regions of the social perception network represent interpersonal synchrony. In an fMRI study with 32 human participants (26 female, 6 male), we used multivoxel pattern analysis to investigate whether activity in face-selective, body-selective, and interaction-sensitive regions across the social perception network supports the decoding of synchronous versus asynchronous head-nodding and head-shaking. Several regions were found to support significant decoding of synchrony/asynchrony, including extrastriate body area (EBA), face-selective and interaction-sensitive mid/posterior right superior temporal sulcus, and occipital face area. We also saw robust cross-classification across actions in the EBA, suggestive of movement-invariant representations of synchrony/asynchrony. Exploratory whole-brain analyses also identified a region of the right fusiform cortex that responded more strongly to synchronous than to asynchronous motion. Critically, perceiving interpersonal synchrony/asynchrony requires the simultaneous extraction and integration of dynamic information from more than one person. Hence, the representation of synchrony/asynchrony cannot be attributed to augmented or additive processing of individual actors. Our findings therefore provide important new evidence that social interactions recruit dedicated visual processing within the social perception network that extends beyond that engaged by the faces and bodies of the constituent individuals.
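Decoding analyses like the one reported here are typically run as leave-one-run-out classification on voxel patterns. Below is a minimal, self-contained sketch with synthetic data (not the study's data; a correlation-based nearest-centroid classifier stands in for whatever MVPA method was actually used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for trial-wise beta patterns from one ROI (e.g., EBA):
# n_runs x trials x voxels, two conditions (0 = synchronous, 1 = asynchronous).
n_runs, n_trials, n_vox = 8, 10, 50
signal = rng.normal(size=(2, n_vox))               # condition-specific patterns
X = np.empty((n_runs, 2 * n_trials, n_vox))
y = np.tile(np.repeat([0, 1], n_trials), (n_runs, 1))
for r in range(n_runs):
    for i, c in enumerate(y[r]):
        X[r, i] = signal[c] + rng.normal(scale=2.0, size=n_vox)

def correlation_classifier(train_X, train_y, test_X):
    """Nearest-centroid classifier on Pearson correlation with class centroids."""
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    cz = centroids - centroids.mean(axis=1, keepdims=True)
    cz /= cz.std(axis=1, keepdims=True)
    tz = (test_X - test_X.mean(axis=1, keepdims=True)) / test_X.std(axis=1, keepdims=True)
    sims = tz @ cz.T / test_X.shape[1]             # correlation with each centroid
    return sims.argmax(axis=1)

# Leave-one-run-out cross-validation.
accs = []
for r in range(n_runs):
    train = np.delete(np.arange(n_runs), r)
    preds = correlation_classifier(X[train].reshape(-1, n_vox),
                                   y[train].ravel(), X[r])
    accs.append(float((preds == y[r]).mean()))

print(round(float(np.mean(accs)), 3))
```

Cross-classification (training on head-nodding, testing on head-shaking) would follow the same logic with the train and test sets drawn from different actions.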
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
| | - Daniel Yon
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
| | - Richard Cook
- School of Psychology, University of Leeds, Leeds LS2 9JU, United Kingdom
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
| |
|
13
|
Kamps FS, Chen EM, Kanwisher N, Saxe R. Representation of navigational affordances and ego-motion in the occipital place area. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.04.30.591964. [PMID: 38746251 PMCID: PMC11092631 DOI: 10.1101/2024.04.30.591964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2024]
Abstract
Humans effortlessly use vision to plan and guide navigation through the local environment, or "scene". A network of three cortical regions responds selectively to visual scene information, including the occipital (OPA), parahippocampal (PPA), and medial place areas (MPA) - but how this network supports visually-guided navigation is unclear. Recent evidence suggests that one region in particular, the OPA, supports visual representations for navigation, while PPA and MPA support other aspects of scene processing. However, most previous studies tested only static scene images, which lack the dynamic experience of navigating through scenes. We used dynamic movie stimuli to test whether OPA, PPA, and MPA represent two critical kinds of navigationally-relevant information: navigational affordances (e.g., can I walk to the left, right, or both?) and ego-motion (e.g., am I walking forward or backward? turning left or right?). We found that OPA is sensitive to both affordances and ego-motion, as well as the conflict between these cues - e.g., turning toward versus away from an open doorway. These effects were significantly weaker or absent in PPA and MPA. Responses in OPA were also dissociable from those in early visual cortex, consistent with the idea that OPA responses are not merely explained by lower-level visual features. OPA responses to affordances and ego-motion were stronger in the contralateral than ipsilateral visual field, suggesting that OPA encodes navigationally relevant information within an egocentric reference frame. Taken together, these results support the hypothesis that OPA contains visual representations that are useful for planning and guiding navigation through scenes.
|
14
|
Ferris CS, Inman CS, Hamann S. FMRI correlates of autobiographical memory: Comparing silent retrieval with narrated retrieval. Neuropsychologia 2024; 196:108842. [PMID: 38428520 PMCID: PMC11299904 DOI: 10.1016/j.neuropsychologia.2024.108842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 12/31/2023] [Accepted: 02/25/2024] [Indexed: 03/03/2024]
Abstract
FMRI studies of autobiographical memory (AM) retrieval typically ask subjects to retrieve memories silently to avoid speech-related motion artifacts. Recently, some fMRI studies have started to use overt (spoken) retrieval to probe moment-to-moment retrieved content. However, the extent to which the overt retrieval method alters fMRI activations during retrieval is unknown. Here we examined this question by eliciting unrehearsed AMs during fMRI scanning either overtly or silently, in the same subjects, in different runs. Differences between retrieval modalities (silent vs. narrated) included greater activation for silent retrieval in the anterior hippocampus, left angular gyrus, PCC, and superior PFC, and greater activation for narrated retrieval in speech production regions, posterior hippocampus, and the DLPFC. To probe temporal dynamics, we divided each retrieval period into an initial search phase and a later elaboration phase. The activations during the search and elaboration phases were broadly similar regardless of modality, and these activations were in line with previous fMRI studies of AM temporal dynamics employing silent retrieval. For both retrieval modalities, search activated the hippocampus, mPFC, ACC, and PCC, and elaboration activated the left DLPFC and middle temporal gyri. To examine content-specific reactivation during retrieval, the timecourse of narrated memory content was transcribed and modeled. We observed dynamic activation associated with object content in the lateral occipital complex, and activation associated with scene content in the retrosplenial cortex. The current findings show that both silent and narrated AMs activate a broadly similar memory network, with some key differences, and add to current knowledge regarding the content-specific dynamics of AM retrieval. However, these observed differences between retrieval modalities suggest that studies using overt retrieval should carefully consider this method's possible effects on cognitive and neural processing.
Affiliation(s)
- Charles S Ferris
- Emory University, Department of Psychology, 36 Eagle Row, Atlanta, GA, 30322, USA.
| | - Cory S Inman
- University of Utah, Department of Psychology, 380 S 1530 E Beh S 502, Salt Lake City, UT, 84112, USA.
| | - Stephan Hamann
- Emory University, Department of Psychology, 36 Eagle Row, Atlanta, GA, 30322, USA.
| |
|
15
|
Schneider JM, Scott TL, Legault J, Qi Z. Limited but specific engagement of the mature language network during linguistic statistical learning. Cereb Cortex 2024; 34:bhae123. [PMID: 38566510 PMCID: PMC10987970 DOI: 10.1093/cercor/bhae123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Revised: 03/04/2024] [Accepted: 03/05/2024] [Indexed: 04/04/2024] Open
Abstract
Statistical learning (SL) is the ability to detect and learn regularities from input and is foundational to language acquisition. Despite the dominant role of SL as a theoretical construct for language development, there is a lack of direct evidence supporting the shared neural substrates underlying language processing and SL. It is also not clear whether the similarities, if any, are related to linguistic processing, or statistical regularities in general. The current study tests whether the brain regions involved in natural language processing are similarly recruited during auditory, linguistic SL. Twenty-two adults performed an auditory linguistic SL task, an auditory nonlinguistic SL task, and a passive story listening task as their neural activation was monitored. Within the language network, the left posterior temporal gyrus showed sensitivity to embedded speech regularities during auditory, linguistic SL, but not auditory, nonlinguistic SL. Using a multivoxel pattern similarity analysis, we uncovered similarities between the neural representation of auditory, linguistic SL, and language processing within the left posterior temporal gyrus. No other brain regions showed similarities between linguistic SL and language comprehension, suggesting that a shared neurocomputational process for auditory SL and natural language processing within the left posterior temporal gyrus is specific to linguistic stimuli.
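A multivoxel pattern similarity analysis of the kind described can be sketched as a Pearson correlation between two voxel patterns from the same ROI, tested against a voxel-permutation null. Everything below (pattern sizes, noise levels, the shared-signal construction) is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voxel patterns from one ROI (left posterior temporal gyrus):
# mean pattern for linguistic SL and for story listening, built around a
# shared component to mimic a common neural code.
n_vox = 120
shared = rng.normal(size=n_vox)
sl_pattern   = shared + rng.normal(scale=0.8, size=n_vox)
lang_pattern = shared + rng.normal(scale=0.8, size=n_vox)

def pattern_similarity(a, b):
    """Pearson correlation between two voxel patterns, Fisher z-transformed."""
    r = np.corrcoef(a, b)[0, 1]
    return float(np.arctanh(r))

observed = pattern_similarity(sl_pattern, lang_pattern)

# Permutation null: break the voxel correspondence 1,000 times.
null = np.array([
    pattern_similarity(sl_pattern, rng.permutation(lang_pattern))
    for _ in range(1000)
])
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(observed > 0, p < 0.05)
```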
Affiliation(s)
- Julie M Schneider
- Department of Communication Sciences and Disorders, Louisiana State University, 77 Hatcher Hall, Field House Dr., Baton Rouge, LA 70803, United States
- Department of Linguistics & Cognitive Science, University of Delaware, 125 E Main St, Newark, DE 19716, United States
| | - Terri L Scott
- School of Medicine, University of California San Francisco, 533 Parnassus Ave, San Francisco, CA 94143, United States
| | - Jennifer Legault
- Department of Psychology, Elizabethtown College, One Alpha Dr, Elizabethtown, PA 17022, United States
| | - Zhenghan Qi
- Department of Linguistics & Cognitive Science, University of Delaware, 125 E Main St, Newark, DE 19716, United States
- Bouvé College of Health Sciences, Northeastern University, 360 Huntington Ave, Boston, MA 02115, United States
| |
|
16
|
Jung Y, Hsu D, Dilks DD. "Walking selectivity" in the occipital place area in 8-year-olds, not 5-year-olds. Cereb Cortex 2024; 34:bhae101. [PMID: 38494889 PMCID: PMC10945045 DOI: 10.1093/cercor/bhae101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2023] [Revised: 02/20/2024] [Accepted: 02/21/2024] [Indexed: 03/19/2024] Open
Abstract
A recent neuroimaging study in adults found that the occipital place area (OPA)-a cortical region involved in "visually guided navigation" (i.e. moving about the immediately visible environment, avoiding boundaries and obstacles)-represents visual information about walking, not crawling, suggesting that OPA is late developing, emerging only when children are walking, not beforehand. But when precisely does this "walking selectivity" in OPA emerge-when children first begin to walk in early childhood, or, perhaps counterintuitively, much later in childhood, around 8 years of age, when children walk in an adult-like manner? To directly test these two hypotheses, using functional magnetic resonance imaging (fMRI) in two groups of children, 5- and 8-year-olds, we measured the responses in OPA to first-person perspective videos through scenes from a "walking" perspective, as well as three control perspectives ("crawling," "flying," and "scrambled"). We found that the OPA in 8-year-olds-like adults-exhibited walking selectivity (i.e. responding significantly more to the walking videos than to any of the others, and no significant differences across the crawling, flying, and scrambled videos), while the OPA in 5-year-olds exhibited no walking selectivity. These findings reveal that OPA undergoes protracted development, with walking selectivity only emerging around 8 years of age.
Affiliation(s)
- Yaelan Jung
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - Debbie Hsu
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| |
|
17
|
Nara S, Kaiser D. Integrative processing in artificial and biological vision predicts the perceived beauty of natural images. SCIENCE ADVANCES 2024; 10:eadi9294. [PMID: 38427730 PMCID: PMC10906925 DOI: 10.1126/sciadv.adi9294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Accepted: 01/29/2024] [Indexed: 03/03/2024]
Abstract
Previous research shows that the beauty of natural images is already determined during perceptual analysis. However, it is unclear which perceptual computations give rise to the perception of beauty. Here, we tested whether perceived beauty is predicted by spatial integration across an image, a perceptual computation that reduces processing demands by aggregating image parts into more efficient representations of the whole. We quantified integrative processing in an artificial deep neural network model, where the degree of integration was determined by the amount of deviation between activations for the whole image and its constituent parts. This quantification of integration predicted beauty ratings for natural images across four studies with different stimuli and designs. In a complementary functional magnetic resonance imaging study, we show that integrative processing in human visual cortex similarly predicts perceived beauty. Together, our results establish integration as a computational principle that facilitates perceptual analysis and thereby mediates the perception of beauty.
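The integration measure described, deviation between the network's response to the whole image and to its parts, can be sketched as follows. The exact deviation metric here (one minus the Pearson correlation between the whole-image activation vector and the mean of the part activations) is an illustrative assumption, as are the random stand-in activations:

```python
import numpy as np

rng = np.random.default_rng(2)

def integration_score(act_whole, act_parts):
    """Deviation between the activation to the whole image and the average
    activation to its parts; larger deviation = more integrative processing."""
    mean_parts = np.mean(act_parts, axis=0)
    r = np.corrcoef(act_whole, mean_parts)[0, 1]
    return 1.0 - r

# Hypothetical layer activations (e.g., a DNN layer flattened to a vector)
# for a whole image and its four quadrant parts.
n_units = 256
parts = rng.normal(size=(4, n_units))
whole_additive   = parts.mean(axis=0) + rng.normal(scale=0.1, size=n_units)
whole_integrated = rng.normal(size=n_units)   # whole departs from its parts

# An image processed additively scores lower than one processed integratively.
print(integration_score(whole_additive, parts) <
      integration_score(whole_integrated, parts))
```

In the study's framing, these per-image scores would then be correlated with behavioral beauty ratings across images.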
Affiliation(s)
- Sanjeev Nara
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, Gießen, Germany
| | - Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, Gießen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg and Justus Liebig University Gießen, Marburg, Germany
| |
|
18
|
Kreichman O, Gilaie‐Dotan S. Parafoveal vision reveals qualitative differences between fusiform face area and parahippocampal place area. Hum Brain Mapp 2024; 45:e26616. [PMID: 38379465 PMCID: PMC10879909 DOI: 10.1002/hbm.26616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2023] [Revised: 01/02/2024] [Accepted: 01/22/2024] [Indexed: 02/22/2024] Open
Abstract
The center-periphery visual field axis guides the organization of the early visual system, with enhanced resources devoted to central vision; for many visual functions, this leads to reduced peripheral performance relative to central vision (i.e., the behavioral eccentricity effect). The center-periphery organization extends to high-order visual cortex where, for example, the well-studied face-sensitive fusiform face area (FFA) shows sensitivity to central vision and the place-sensitive parahippocampal place area (PPA) shows sensitivity to peripheral vision. As we have recently found that face perception is more sensitive to eccentricity than place perception, here we examined whether these behavioral findings reflect differences in FFA's and PPA's sensitivities to eccentricity. We assumed FFA would show higher sensitivity to eccentricity than PPA would, but that both regions' modulation by eccentricity would be invariant to the viewed category. We parametrically investigated (fMRI, n = 32) how FFA's and PPA's activations are modulated by eccentricity (≤8°) and category (upright/inverted faces/houses) while keeping stimulus size constant. As expected, FFA showed an overall higher sensitivity to eccentricity than PPA. However, both regions' activation modulations by eccentricity were dependent on the viewed category. In FFA, a reduction of activation with growing eccentricity ("BOLD eccentricity effect") was found (with different amplitudes) for all categories. In PPA, however, qualitatively different BOLD eccentricity effect modulations were found (e.g., at 8°, a mild BOLD eccentricity effect for houses but a reverse BOLD eccentricity effect for faces and no modulation for inverted faces). Our results emphasize that peripheral vision investigations are critical to further our understanding of visual processing.
Affiliation(s)
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
| | - Sharon Gilaie‐Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
| |
|
19
|
Soulos P, Isik L. Disentangled deep generative models reveal coding principles of the human face processing network. PLoS Comput Biol 2024; 20:e1011887. [PMID: 38408105 PMCID: PMC10919870 DOI: 10.1371/journal.pcbi.1011887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 03/07/2024] [Accepted: 02/02/2024] [Indexed: 02/28/2024] Open
Abstract
Despite decades of research, much is still unknown about the computations carried out in the human face processing network. Recently, deep networks have been proposed as a computational account of human visual processing, but while they provide a good match to neural data throughout visual cortex, they lack interpretability. We introduce a method for interpreting brain activity using a new class of deep generative models, disentangled representation learning models, which learn a low-dimensional latent space that "disentangles" different semantically meaningful dimensions of faces, such as rotation, lighting, or hairstyle, in an unsupervised manner by enforcing statistical independence between dimensions. We find that the majority of our model's learned latent dimensions are interpretable by human raters. Further, these latent dimensions serve as a good encoding model for human fMRI data. We next investigate the representation of different latent dimensions across face-selective voxels. We find that low- and high-level face features are represented in posterior and anterior face-selective regions, respectively, corroborating prior models of human face recognition. Interestingly, though, we find identity-relevant and irrelevant face features across the face processing network. Finally, we provide new insight into the few "entangled" (uninterpretable) dimensions in our model by showing that they match responses in the ventral stream and carry information about facial identity. Disentangled face encoding models provide an exciting alternative to standard "black box" deep learning approaches for modeling and interpreting human brain data.
Affiliation(s)
- Paul Soulos
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
| |
|
20
|
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. Nat Neurosci 2024; 27:339-347. [PMID: 38168931 PMCID: PMC10923171 DOI: 10.1038/s41593-023-01512-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 10/31/2023] [Indexed: 01/05/2024]
Abstract
Conventional views of brain organization suggest that regions at the top of the cortical hierarchy process internally oriented information using an abstract amodal neural code. Despite this, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here we report that retinotopic coding structures interactions between internally oriented (mnemonic) and externally oriented (perceptual) brain areas. Using functional magnetic resonance imaging, we observed robust inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. These functionally linked retinotopic populations in mnemonic and perceptual areas exhibit spatially specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually inhibitory dynamic. These results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, providing a scaffold for their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA.
| | - Edward H Silson
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
| | - Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA.
| |
|
21
|
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. [PMID: 38010320 DOI: 10.1162/jocn_a_02092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
|
22
|
Roseman M, Elias U, Kletenik I, Ferguson MA, Fox MD, Horowitz Z, Marshall GA, Spiers HJ, Arzy S. A neural circuit for spatial orientation derived from brain lesions. Cereb Cortex 2024; 34:bhad486. [PMID: 38100330 PMCID: PMC10793567 DOI: 10.1093/cercor/bhad486] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Revised: 11/27/2023] [Accepted: 11/27/2023] [Indexed: 12/17/2023] Open
Abstract
There is disagreement regarding the major components of the brain network supporting spatial cognition. To address this issue, we applied a lesion mapping approach to the clinical phenomenon of topographical disorientation. Topographical disorientation is the inability to maintain accurate knowledge about the physical environment and use it for navigation. A review of published topographical disorientation cases identified 65 different lesion sites. Our lesion mapping analysis yielded a topographical disorientation brain map encompassing the classic regions of the navigation network: medial parietal, medial temporal, and temporo-parietal cortices. We also identified a ventromedial region of the prefrontal cortex, which has been absent from prior descriptions of this network. Moreover, we revealed that the regions mapped are correlated with the Default Mode Network sub-network C. Taken together, this study provides causal evidence for the distribution of the spatial cognitive system, demarcating the major components and identifying novel regions.
Affiliation(s)
- Moshe Roseman
- Neuropsychiatry Lab, Department of Medical Neurosciences, Faculty of Medicine, Hadassah Ein Kerem Campus, Hebrew University of Jerusalem, Jerusalem 9112001, Israel
| | - Uri Elias
- Neuropsychiatry Lab, Department of Medical Neurosciences, Faculty of Medicine, Hadassah Ein Kerem Campus, Hebrew University of Jerusalem, Jerusalem 9112001, Israel
| | - Isaiah Kletenik
- Center for Brain Circuit Therapeutics, Departments of Neurology, Psychiatry, and Radiology, Brigham & Women’s Hospital, Boston, MA 02115, United States
- Harvard Medical School, Boston, MA 02115, United States
- Division of Cognitive and Behavioral Neurology, Department of Neurology, Brigham and Women’s Hospital, Boston, MA 02115, United States
| | - Michael A Ferguson
- Center for Brain Circuit Therapeutics, Departments of Neurology, Psychiatry, and Radiology, Brigham & Women’s Hospital, Boston, MA 02115, United States
- Harvard Medical School, Boston, MA 02115, United States
| | - Michael D Fox
- Center for Brain Circuit Therapeutics, Departments of Neurology, Psychiatry, and Radiology, Brigham & Women’s Hospital, Boston, MA 02115, United States
- Harvard Medical School, Boston, MA 02115, United States
| | - Zalman Horowitz
- Neuropsychiatry Lab, Department of Medical Neurosciences, Faculty of Medicine, Hadassah Ein Kerem Campus, Hebrew University of Jerusalem, Jerusalem 9112001, Israel
| | - Gad A Marshall
- Harvard Medical School, Boston, MA 02115, United States
- Division of Cognitive and Behavioral Neurology, Department of Neurology, Brigham and Women’s Hospital, Boston, MA 02115, United States
- Center for Alzheimer Research and Treatment, Department of Neurology, Brigham and Women’s Hospital, Boston, MA 02115, United States
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, United States
| | - Hugo J Spiers
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
| | - Shahar Arzy
- Neuropsychiatry Lab, Department of Medical Neurosciences, Faculty of Medicine, Hadassah Ein Kerem Campus, Hebrew University of Jerusalem, Jerusalem 9112001, Israel
- Department of Neurology, Hadassah Hebrew University Medical School, Jerusalem 9112001, Israel
- Department of Brain and Cognitive Sciences, Hebrew University of Jerusalem, Jerusalem 9190501, Israel
| |
|
23
|
Nugiel T, Demeter DV, Mitchell ME, Garza A, Hernandez AE, Juranek J, Church JA. Brain connectivity and academic skills in English learners. Cereb Cortex 2024; 34:bhad414. [PMID: 38044467 PMCID: PMC10793574 DOI: 10.1093/cercor/bhad414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 10/11/2023] [Accepted: 10/23/2023] [Indexed: 12/05/2023] Open
Abstract
English learners (ELs) are a rapidly growing population in schools in the United States with limited experience and proficiency in English. To better understand the path to ELs' academic success in school, it is important to understand how their brain systems are used for academic learning in English. In a cohort of Hispanic middle-schoolers (n = 45, 22 female) with limited English proficiency and a wide range of reading and math abilities, we studied brain network properties related to academic abilities. We applied a method for localizing brain regions of interest (ROIs) that are group-constrained yet individually specific to test how resting-state functional connectivity between regions that are important for academic learning (reading, math, and cognitive control regions) relates to academic abilities. ROIs were selected from task localizers probing reading and math skills in the same participants. We found that connectivity across all ROIs, as well as connectivity of just the cognitive control ROIs, was positively related to measures of reading skill but not math skill. This work suggests that cognitive control brain systems have a central role for reading in ELs. Our results also indicate that an individualized approach to localizing brain function may clarify brain-behavior relationships.
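The brain-behavior analysis described, relating mean resting-state connectivity among ROIs to reading scores across subjects, can be sketched with simulated data (all time series, scores, and the coupling model below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

n_subj, n_time, n_roi = 45, 200, 6   # e.g., a set of cognitive-control ROIs

# Simulate resting-state time series in which between-ROI coupling
# scales with a hypothetical reading score.
reading = rng.uniform(80, 120, size=n_subj)
coupling = (reading - 80) / 40 * 0.6          # 0..0.6 shared-signal weight

mean_conn = np.empty(n_subj)
for s in range(n_subj):
    shared = rng.normal(size=(n_time, 1))
    ts = coupling[s] * shared + rng.normal(size=(n_time, n_roi))
    cm = np.corrcoef(ts.T)                    # ROI x ROI connectivity matrix
    iu = np.triu_indices(n_roi, k=1)
    mean_conn[s] = np.arctanh(cm[iu]).mean()  # mean Fisher-z connectivity

brain_behavior_r = float(np.corrcoef(mean_conn, reading)[0, 1])
print(round(brain_behavior_r, 2))
```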
Affiliation(s)
- Tehila Nugiel
- Department of Psychology, Florida State University, Tallahassee, FL 32304, United States
| | - Damion V Demeter
- Department of Cognitive Science, University of California San Diego, La Jolla, CA 92037, United States
| | - Mackenzie E Mitchell
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
| | - AnnaCarolina Garza
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712, United States
| | - Arturo E Hernandez
- Department of Psychology, University of Houston, Houston, TX 77204, United States
| | - Jenifer Juranek
- Department of Pediatrics, University of Texas Health Science Center, Houston, TX 77225, United States
| | - Jessica A Church
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712, United States
- Biomedical Imaging Center, The University of Texas at Austin, Austin, TX 78712, United States
| |
|
24
|
Lee JJ, Scott TL, Perrachione TK. Efficient functional localization of language regions in the brain. Neuroimage 2024; 285:120489. [PMID: 38065277 PMCID: PMC10999251 DOI: 10.1016/j.neuroimage.2023.120489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 11/25/2023] [Accepted: 12/05/2023] [Indexed: 12/17/2023] Open
Abstract
Important recent advances in the cognitive neuroscience of language have been made using functional localizers to demarcate language-selective regions in individual brains. Although single-subject localizers offer insights that are unavailable in classic group analyses, they require additional scan time that imposes costs on investigators and participants. In particular, the unique practical challenges of scanning children and other special populations have led to less adoption of localizers for neuroimaging research with these theoretically and clinically important groups. Here, we examined how measurements of the spatial extent and functional response profiles of language regions are affected by the duration of an auditory language localizer. We compared how parametrically smaller amounts of data collected from one scanning session affected (i) consistency of group-level whole-brain parcellations, (ii) functional selectivity of subject-level activation in individually defined functional regions of interest (fROIs), (iii) sensitivity and specificity of subject-level whole-brain and fROI activation, and (iv) test-retest reliability of subject-level whole-brain and fROI activation. For many of these metrics, the localizer duration could be reduced by 50-75% while preserving the stability and reliability of both the spatial extent and functional response profiles of language areas. These results indicate that, for most measures relevant to cognitive neuroimaging studies, the brain's language network can be localized just as effectively with 3.5 min of scan time as it can with 12 min. Minimizing the time required to reliably localize the brain's language network allows more effective localizer use in situations where each minute of scan time is particularly precious.
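One common way to quantify how well a shortened localizer reproduces the full-duration activation map, along the lines of the spatial-extent comparisons described here, is the Dice overlap between thresholded maps. The simulation below is a hedged illustration with synthetic t-maps (the noise levels, threshold, and proportion of active voxels are all assumptions), not the study's data or its exact metrics:

```python
import numpy as np

rng = np.random.default_rng(4)

def dice(mask_a, mask_b):
    """Dice overlap between two binary activation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2 * inter / (mask_a.sum() + mask_b.sum())

# Hypothetical voxel-wise t-statistics: the reduced-duration map is the
# same true effect plus extra estimation noise (less data = noisier map).
n_vox = 5000
true_effect = np.where(rng.random(n_vox) < 0.1, 3.0, 0.0)  # 10% active
t_full    = true_effect + rng.normal(scale=1.0, size=n_vox)
t_reduced = true_effect + rng.normal(scale=1.5, size=n_vox)

thr = 2.0
overlap = float(dice(t_full > thr, t_reduced > thr))
print(round(overlap, 2))
```

Repeating this for parametrically shorter durations traces out how quickly the spatial extent of the localized network degrades.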
Collapse
Affiliation(s)
- Jayden J Lee
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215, United States
| | - Terri L Scott
- Department of Neurological Surgery, University of California - San Francisco, San Francisco, CA, United States
| | - Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215, United States.
| |
Collapse
|
25
|
McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. [PMID: 37918399 PMCID: PMC10841461 DOI: 10.1016/j.cub.2023.10.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 09/01/2023] [Accepted: 10/10/2023] [Indexed: 11/04/2023]
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing each other), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
Collapse
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA.
| | - Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
| | - Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
| |
Collapse
|
26
|
Ayzenberg V, Granovetter MC, Robert S, Patterson C, Behrmann M. Differential functional reorganization of ventral and dorsal visual pathways following childhood hemispherectomy. Dev Cogn Neurosci 2023; 64:101323. [PMID: 37976921 PMCID: PMC10682827 DOI: 10.1016/j.dcn.2023.101323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/28/2023] [Accepted: 11/09/2023] [Indexed: 11/19/2023] Open
Abstract
Hemispherectomy is a surgical procedure in which an entire hemisphere of a patient's brain is resected or functionally disconnected to manage seizures in individuals with drug-resistant epilepsy. Despite the extensive loss of both ventral and dorsal visual pathways in one hemisphere, pediatric patients who have undergone hemispherectomy show a remarkably high degree of perceptual function across many domains. In the current study, we sought to understand the extent to which functions of the ventral and dorsal visual pathways reorganize to the contralateral hemisphere following childhood hemispherectomy. To this end, we collected fMRI data from an equal number of left and right hemispherectomy patients who completed tasks that typically elicit lateralized responses from the ventral or the dorsal pathway, namely, word (left ventral), face (right ventral), tool (left dorsal), and global form (right dorsal) perception. Overall, there was greater evidence of functional reorganization in the ventral pathway than in the dorsal pathway. Importantly, because ventral and dorsal reorganization was tested within the very same patients, these results cannot be explained by idiosyncratic factors such as disease etiology, age at the time of surgery, or age at testing. These findings suggest that because the dorsal pathway may mature earlier, it may have a shorter developmental window of plasticity than the ventral pathway and, hence, be less malleable after perturbation.
Collapse
Affiliation(s)
- Vladislav Ayzenberg
- Department of Psychology, University of Pennsylvania, PA, USA; Department of Psychology and Neuroscience Institute, Carnegie Mellon University, PA, USA.
| | - Michael C Granovetter
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, PA, USA; School of Medicine, University of Pittsburgh, PA, USA
| | - Sophia Robert
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, PA, USA
| | - Christina Patterson
- School of Medicine, University of Pittsburgh, PA, USA; Department of Pediatrics, University of Pittsburgh, PA, USA
| | - Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, PA, USA; Department of Pediatrics, University of Pittsburgh, PA, USA; Department of Ophthalmology, University of Pittsburgh, PA, USA.
| |
Collapse
|
27
|
Chen L, Cichy RM, Kaiser D. Alpha-frequency feedback to early visual cortex orchestrates coherent naturalistic vision. SCIENCE ADVANCES 2023; 9:eadi2321. [PMID: 37948520 PMCID: PMC10637741 DOI: 10.1126/sciadv.adi2321] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 10/12/2023] [Indexed: 11/12/2023]
Abstract
During naturalistic vision, the brain generates coherent percepts by integrating sensory inputs scattered across the visual field. Here, we asked whether this integration process is mediated by rhythmic cortical feedback. In electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) experiments, we experimentally manipulated integrative processing by changing the spatiotemporal coherence of naturalistic videos presented across visual hemifields. Our EEG data revealed that information about incoherent videos is coded in feedforward-related gamma activity while information about coherent videos is coded in feedback-related alpha activity, indicating that integration is indeed mediated by rhythmic activity. Our fMRI data identified scene-selective cortex and human middle temporal complex (hMT) as likely sources of this feedback. Analytically combining our EEG and fMRI data further revealed that feedback-related representations in the alpha band shape the earliest stages of visual processing in cortex. Together, our findings indicate that the construction of coherent visual experiences relies on cortical feedback rhythms that fully traverse the visual hierarchy.
Collapse
Affiliation(s)
- Lixiang Chen
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
| | - Radoslaw M. Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
| | - Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg 35032, Germany
| |
Collapse
|
28
|
Molloy MF, Osher DE. A personalized cortical atlas for functional regions of interest. J Neurophysiol 2023; 130:1067-1080. [PMID: 37727907 PMCID: PMC10994647 DOI: 10.1152/jn.00108.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 09/18/2023] [Accepted: 09/18/2023] [Indexed: 09/21/2023] Open
Abstract
Advances in functional MRI (fMRI) allow mapping an individual's brain function in vivo. Task fMRI can localize domain-specific regions of cognitive processing, or functional regions of interest (fROIs), within an individual. Moreover, data from resting-state (no task) fMRI can be used to define an individual's connectome, which can characterize that individual's functional organization via connectivity-based parcellations. However, can connectivity-based parcellations alone predict an individual's fROIs? Here, we describe an approach to compute individualized rs-fROIs (i.e., regions that correspond to a given fROI but are constructed using only resting-state data) for motor control, working memory, high-level vision, and language comprehension. The rs-fROIs were computed and validated using a large sample of young adults (n = 1,018) with resting-state and task fMRI from the Human Connectome Project. First, resting-state parcellations were defined across a sequence of resolutions, from broadscale to fine-grained networks, in a training group of 500 individuals. Second, 21 rs-fROIs were defined from the training group by identifying the rs network that most closely matched task-defined fROIs across all individuals. Third, the selectivity of the rs-fROIs was investigated in a held-out set of the remaining 518 individuals. All computed rs-fROIs were indeed selective for their preferred category. Critically, the rs-fROIs had higher selectivity than probabilistic atlas parcels for nearly all fROIs. In conclusion, we present a potential approach to define selective fROIs at the individual level, circumventing the need for multiple task-based localizers. NEW & NOTEWORTHY We compute individualized resting-state parcels that identify an individual's own functional regions of interest (fROIs) for high-level vision, language comprehension, motor control, and working memory, using only their functional connectome. This approach demonstrates a rapid and powerful alternative for finding a large set of fROIs in an individual, using only their unique connectivity pattern, without the costly acquisition of multiple fMRI localizer tasks.
Collapse
Affiliation(s)
- M. Fiona Molloy
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
- Department of Psychiatry, University of Michigan, Ann Arbor, Michigan, United States
| | - David E. Osher
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
| |
Collapse
|
29
|
Liu Q, Gao F, Wang X, Xia J, Yuan G, Zheng S, Zhong M, Zhu X. Cognitive inflexibility is linked to abnormal frontoparietal-related activation and connectivity in obsessive-compulsive disorder. Hum Brain Mapp 2023; 44:5460-5470. [PMID: 37683103 PMCID: PMC10543351 DOI: 10.1002/hbm.26457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 05/23/2023] [Accepted: 08/02/2023] [Indexed: 09/10/2023] Open
Abstract
Although it is well recognized that patients with obsessive-compulsive disorder (OCD) exhibit cognitive inflexibility, the underlying neural mechanism has not been fully clarified. Therefore, this study aimed to investigate the neural substrates of cognitive inflexibility in individuals with OCD. A total of 42 patients with OCD and 48 healthy controls (HCs) completed clinical assessment and functional magnetic resonance imaging (fMRI) data collection during cued task switching. Behavioral performance and fMRI activation were compared between the OCD and HC groups. Psychophysiological interaction (PPI) analyses were applied to explore functional connectivity related to task switching. Pearson correlation was used to investigate the relationships among behavioral performance, fMRI activity, and obsessive-compulsive symptoms in OCD. The OCD group had a greater switch cost than HCs (χ2 = 5.89, p < .05). A significant group difference in reaction time was found during switch (χ2 = 17.72, p < .001) and repeat (χ2 = 16.60, p = .018) trials, while there was no significant group difference in accuracy. The OCD group showed increased activation in the right superior parietal cortex (rSPL) during task switching and exhibited increased connectivity between the frontoparietal network and default mode network (FPN-DMN; i.e., middle frontal gyrus [MFG]/inferior parietal cortex-precuneus, MFG-middle/posterior cingulate gyrus) and within the FPN (inferior parietal cortex-postcentral gyrus). In the OCD group, the compulsion score was positively correlated with accuracy during switch trials (r = .405, p = .008, FDR q < .05) and negatively correlated with activation of the rSPL (r = -.328, p = .034, FDR q > .05). Patients with OCD had impaired cognitive flexibility and a cautious response strategy. The neural mechanism of cognitive inflexibility in OCD may involve increased activation in the rSPL, as well as hyperconnectivity within the FPN and between the FPN and DMN.
Collapse
Affiliation(s)
- Qian Liu
- Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Medical Psychological Institute of Central South University, Changsha, Hunan, China
- National Clinical Research Center for Mental Disorders, Changsha, Hunan, China
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Feng Gao
- Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Medical Psychological Institute of Central South University, Changsha, Hunan, China
- National Clinical Research Center for Mental Disorders, Changsha, Hunan, China
| | - Xiang Wang
- Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Medical Psychological Institute of Central South University, Changsha, Hunan, China
- National Clinical Research Center for Mental Disorders, Changsha, Hunan, China
| | - Jie Xia
- Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Medical Psychological Institute of Central South University, Changsha, Hunan, China
- National Clinical Research Center for Mental Disorders, Changsha, Hunan, China
| | - Gangxuan Yuan
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Shuxin Zheng
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Mingtian Zhong
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Xiongzhao Zhu
- Medical Psychological Center, the Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Medical Psychological Institute of Central South University, Changsha, Hunan, China
- National Clinical Research Center for Mental Disorders, Changsha, Hunan, China
| |
Collapse
|
30
|
Tuckute G, Sathe A, Srikant S, Taliaferro M, Wang M, Schrimpf M, Kay K, Fedorenko E. Driving and suppressing the human language network using large language models. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.04.16.537080. [PMID: 37090673 PMCID: PMC10120732 DOI: 10.1101/2023.04.16.537080] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/25/2023]
Abstract
Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
Collapse
Affiliation(s)
- Greta Tuckute
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Aalok Sathe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Shashank Srikant
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- MIT-IBM Watson AI Lab, Cambridge, MA 02142, USA
| | - Maya Taliaferro
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Mingye Wang
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
| | - Martin Schrimpf
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Quest for Intelligence, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Neuro-X Institute, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
| | - Kendrick Kay
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455 USA
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138 USA
| |
Collapse
|
31
|
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.15.540807. [PMID: 37292758 PMCID: PMC10245578 DOI: 10.1101/2023.05.15.540807] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Conventional views of brain organization suggest that the cortical apex processes internally-oriented information using an abstract, amodal neural code. Yet, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here, we report that retinotopic coding structures interactions between internally-oriented (mnemonic) and externally-oriented (perceptual) brain areas. Using fMRI, we observed robust, inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. Specifically, these functionally-linked retinotopic populations in mnemonic and perceptual areas exhibit spatially-specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually-inhibitory dynamic. Together, these results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, thereby scaffolding their dynamic interaction.
Collapse
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
| | - Edward H. Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK EH8 9JZ
| | - Brenda D. Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
| | | |
Collapse
|
32
|
Benn Y, Ivanova AA, Clark O, Mineroff Z, Seikus C, Silva JS, Varley R, Fedorenko E. The language network is not engaged in object categorization. Cereb Cortex 2023; 33:10380-10400. [PMID: 37557910 PMCID: PMC10545444 DOI: 10.1093/cercor/bhad289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 07/12/2023] [Accepted: 07/13/2023] [Indexed: 08/11/2023] Open
Abstract
The relationship between language and thought is the subject of long-standing debate. One claim states that language facilitates categorization of objects based on a certain feature (e.g. color) through the use of category labels that reduce interference from other, irrelevant features. Therefore, language impairment is expected to affect categorization of items grouped by a single feature (low-dimensional categories, e.g. "Yellow Things") more than categorization of items that share many features (high-dimensional categories, e.g. "Animals"). To test this account, we conducted two behavioral studies with individuals with aphasia and an fMRI experiment with healthy adults. The aphasia studies showed that selective low-dimensional categorization impairment was present in some, but not all, individuals with severe anomia and was not characteristic of aphasia in general. fMRI results revealed little activity in language-responsive brain regions during both low- and high-dimensional categorization; instead, categorization recruited the domain-general multiple-demand network (involved in wide-ranging cognitive tasks). Combined, results demonstrate that the language system is not implicated in object categorization. Instead, selective low-dimensional categorization impairment might be caused by damage to brain regions responsible for cognitive control. Our work adds to the growing evidence of the dissociation between the language system and many cognitive tasks in adults.
Collapse
Affiliation(s)
- Yael Benn
- Department of Psychology, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
| | - Anna A Ivanova
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
| | - Oliver Clark
- Department of Psychology, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
| | - Zachary Mineroff
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
| | - Chloe Seikus
- Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
| | - Jack Santos Silva
- Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
| | - Rosemary Varley
- Division of Psychology & Language Sciences, University College London, London WC1E 6BT, UK
| | - Evelina Fedorenko
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, United States
| |
Collapse
|
33
|
Martin S, Frieling R, Saur D, Hartwigsen G. TMS over the pre-SMA enhances semantic cognition via remote network effects on task-based activity and connectivity. Brain Stimul 2023; 16:1346-1357. [PMID: 37704032 PMCID: PMC10615837 DOI: 10.1016/j.brs.2023.09.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 09/05/2023] [Accepted: 09/08/2023] [Indexed: 09/15/2023] Open
Abstract
BACKGROUND The continuous decline of executive abilities with age is mirrored by increased neural activity of domain-general networks during task processing. So far, it remains unclear how much domain-general networks contribute to domain-specific processes such as language when cognitive demands increase. The current neuroimaging study explored the potential of intermittent theta-burst stimulation (iTBS) over a domain-general hub to enhance executive and semantic processing in healthy middle-aged to older adults. METHODS We implemented a cross-over within-subject study design with three task-based neuroimaging sessions per participant. Using an individualized stimulation approach, each participant received once effective and once sham iTBS over the pre-supplementary motor area (pre-SMA), a region of domain-general control. Subsequently, task-specific stimulation effects were assessed in functional MRI using a semantic and a non-verbal executive task with varying cognitive demand. RESULTS Effective stimulation increased activity only during semantic processing in visual and dorsal attention networks. Further, iTBS induced increased seed-based connectivity in task-specific networks for semantic and executive conditions with high cognitive load but overall reduced whole-brain coupling between domain-general networks. Notably, stimulation-induced changes in activity and connectivity related differently to behavior: While stronger activity of the parietal dorsal attention network was linked to poorer semantic performance, its enhanced coupling with the pre-SMA was associated with more efficient semantic processing. CONCLUSIONS iTBS modulates networks in a task-dependent manner and generates effects at regions remote to the stimulation site. These neural changes are linked to more efficient semantic processing, which underlines the general potential of network stimulation approaches in cognitive aging.
Collapse
Affiliation(s)
- Sandra Martin
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103, Leipzig, Germany; Language & Aphasia Laboratory, Department of Neurology, University of Leipzig Medical Center, Liebigstrasse 20, 04103, Leipzig, Germany.
| | - Regine Frieling
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103, Leipzig, Germany
| | - Dorothee Saur
- Language & Aphasia Laboratory, Department of Neurology, University of Leipzig Medical Center, Liebigstrasse 20, 04103, Leipzig, Germany
| | - Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103, Leipzig, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Neumarkt 9-19, 04109, Leipzig, Germany
| |
Collapse
|
34
|
Chen YY, Areti A, Yoshor D, Foster BL. Individual-specific memory reinstatement patterns within human face-selective cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.08.06.552130. [PMID: 37609262 PMCID: PMC10441346 DOI: 10.1101/2023.08.06.552130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
Humans have the remarkable ability to vividly retrieve sensory details of past events. According to the theory of sensory reinstatement, during remembering, brain regions involved in the sensory processing of prior events are reactivated to support this perception of the past. Recently, several studies have emphasized potential transformations in the spatial organization of reinstated activity patterns. In particular, studies of scene stimuli suggest a clear anterior shift in the location of retrieval activations compared with those during perception. However, it is not clear that such transformations occur universally, and evidence is lacking for other important stimulus categories, particularly faces. Critical to addressing these questions, and to studies of reinstatement more broadly, is the growing importance of considering meaningful variations in the organization of sensory systems across individuals. Therefore, we conducted a multi-session neuroimaging study: a first session to carefully map individual participants' face-selective regions within ventral temporal cortex (VTC), followed by a second session to examine the correspondence of activity patterns during face memory encoding and retrieval. Our results showed distinct configurations of face-selective regions within the VTC across individuals. While a significant degree of overlap was observed between face perception and memory encoding, memory retrieval engagement exhibited a more selective and constricted reinstatement pattern within these regions. Importantly, these activity patterns were consistently tied to individual-specific neural substrates but did not show any consistent direction of spatial transformation (e.g., anteriorization). To provide further insight into these findings, we also report unique human intracranial recordings from VTC under the same experimental conditions. Our findings highlight the importance of considering individual variations in functional neuroanatomy when assessing the nature of cortical reinstatement. Consideration of such factors will be important for establishing general principles shaping the neural transformations that occur from perception to memory.
Collapse
Affiliation(s)
- Yvonne Y Chen
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, 19104, USA
| | | | - Daniel Yoshor
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, 19104, USA
| | - Brett L Foster
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, 19104, USA
| |
Collapse
|
35
|
Ayzenberg V, Granovetter MC, Robert S, Patterson C, Behrmann M. Differential functional reorganization of ventral and dorsal visual pathways following childhood hemispherectomy. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.08.03.551494. [PMID: 37577633 PMCID: PMC10418255 DOI: 10.1101/2023.08.03.551494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/15/2023]
Abstract
Hemispherectomy is a surgical procedure in which an entire hemisphere of a patient's brain is resected or functionally disconnected to manage seizures in individuals with drug-resistant epilepsy. Despite the extensive loss of input from both ventral and dorsal visual pathways of one hemisphere, pediatric patients who have undergone hemispherectomy show a remarkably high degree of perceptual function across many domains. In the current study, we sought to understand the extent to which functions of the ventral and dorsal visual pathways reorganize to the contralateral hemisphere following childhood hemispherectomy. To this end, we collected fMRI data from an equal number of left and right hemispherectomy patients who completed tasks that typically elicit lateralized responses from the ventral or the dorsal pathway, namely, word (left ventral), face (right ventral), tool (left dorsal), and global form (right dorsal) perception. Overall, there was greater evidence of functional reorganization in the ventral pathway than in the dorsal pathway. Importantly, because ventral and dorsal reorganization was tested in the very same patients, these results cannot be explained by idiosyncratic factors such as disease etiology, age at the time of surgery, or age at testing. These findings suggest that because the dorsal pathway may mature earlier, it may have a shorter developmental window of plasticity than the ventral pathway and, hence, be less malleable.
Collapse
Affiliation(s)
- Vladislav Ayzenberg
- Department of Psychology, University of Pennsylvania
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University
| | - Michael C Granovetter
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University
- School of Medicine, University of Pittsburgh
| | - Sophia Robert
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University
| | - Christina Patterson
- School of Medicine, University of Pittsburgh
- Department of Pediatrics, University of Pittsburgh
| | - Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University
- Department of Pediatrics, University of Pittsburgh
| |
Collapse
|
36
|
Steel A, Garcia BD, Goyal K, Mynick A, Robertson CE. Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex. J Neurosci 2023; 43:5723-5737. [PMID: 37474310 PMCID: PMC10401646 DOI: 10.1523/jneurosci.2043-22.2023] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 07/10/2023] [Accepted: 07/14/2023] [Indexed: 07/22/2023] Open
Abstract
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single field of view (FOV) to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex. SIGNIFICANCE STATEMENT: As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context.
Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
Collapse
Affiliation(s)
- Adam Steel
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
| | - Brenda D Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
| | - Kala Goyal
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
| | - Anna Mynick
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
| | - Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
| |
Collapse
|
37
|
Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? IMAGING NEUROSCIENCE (CAMBRIDGE, MASS.) 2023; 1:1-20. [PMID: 37719835 PMCID: PMC10503480 DOI: 10.1162/imag_a_00003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 09/19/2023]
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROIs). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality-specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Collapse
Affiliation(s)
- Julia Landsiedel
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
| | - Kami Koldewyn
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
| |
Collapse
|
38
|
Lee J, Park S. Multi-modal representation of the size of space in the human brain. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.24.550343. [PMID: 37546991 PMCID: PMC10402083 DOI: 10.1101/2023.07.24.550343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds depicting small and large indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different levels of reverberation. Using a multi-voxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus (AG) and the right inferior frontal gyrus (IFG) pars opercularis had modality-integrated representations, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared to single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Collapse
Affiliation(s)
- Jaeeun Lee
- Department of Psychology, University of Minnesota, Minneapolis, MN
| | - Soojin Park
- Department of Psychology, Yonsei University, Seoul, South Korea
| |
Collapse
|
39
|
Boch M, Wagner IC, Karl S, Huber L, Lamm C. Functionally analogous body- and animacy-responsive areas are present in the dog (Canis familiaris) and human occipito-temporal lobe. Commun Biol 2023; 6:645. [PMID: 37369804 PMCID: PMC10300132 DOI: 10.1038/s42003-023-05014-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 06/05/2023] [Indexed: 06/29/2023] Open
Abstract
Comparing the neural correlates of socio-cognitive skills across species provides insights into the evolution of the social brain and has revealed face- and body-sensitive regions in the primate temporal lobe. Although from a different lineage, dogs share convergent visuo-cognitive skills with humans and a temporal lobe which evolved independently in carnivorans. We investigated the neural correlates of face and body perception in dogs (N = 15) and humans (N = 40) using functional MRI. Combining univariate and multivariate analysis approaches, we found functionally analogous occipito-temporal regions involved in the perception of animate entities and bodies in both species, as well as face-sensitive regions in humans. Unexpectedly, we also observed neural representations of faces compared with inanimate objects, and of dog compared with human bodies, in dog olfactory regions. These findings shed light on the evolutionary foundations of human and dog social cognition and the predominant role of the temporal lobe.
Collapse
Affiliation(s)
- Magdalena Boch
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria.
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria.
| | - Isabella C Wagner
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Centre for Microbiology and Environmental Systems Science, University of Vienna, Vienna, Austria
| | - Sabrina Karl
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
| | - Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
| | - Claus Lamm
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
| |
Collapse
|
40
|
Yan C, Ehinger BV, Pérez-Bellido A, Peelen MV, de Lange FP. Humans predict the forest, not the trees: statistical learning of spatiotemporal structure in visual scenes. Cereb Cortex 2023; 33:8300-8311. [PMID: 37005064 PMCID: PMC7614728 DOI: 10.1093/cercor/bhad115] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 03/11/2023] [Accepted: 03/13/2023] [Indexed: 04/04/2023] Open
Abstract
The human brain is capable of using statistical regularities to predict future inputs. In the real world, such inputs typically comprise a collection of objects (e.g. a forest constitutes numerous trees). The present study aimed to investigate whether perceptual anticipation relies on lower-level or higher-level information. Specifically, we examined whether the human brain anticipates each object in a scene individually or anticipates the scene as a whole. To explore this issue, we first trained participants to associate co-occurring objects within fixed spatial arrangements. Meanwhile, participants implicitly learned temporal regularities between these displays. We then tested how spatial and temporal violations of the structure modulated behavior and neural activity in the visual system using fMRI. We found that participants only showed a behavioral advantage of temporal regularities when the displays conformed to their previously learned spatial structure, demonstrating that humans form configuration-specific temporal expectations instead of predicting individual objects. Similarly, we found suppression of neural responses for temporally expected compared with temporally unexpected objects in lateral occipital cortex only when the objects were embedded within expected configurations. Overall, our findings indicate that humans form expectations about object configurations, demonstrating the prioritization of higher-level over lower-level information in temporal expectation.
Collapse
Affiliation(s)
- Chuyao Yan
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
- School of Psychology, Nanjing Normal University, Nanjing 210098, China
| | - Benedikt V Ehinger
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
- Stuttgart Center for Simulation Science, University of Stuttgart, Stuttgart 70049, Germany
| | - Alexis Pérez-Bellido
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona 17108035, Spain
- Institute of Neurosciences, University of Barcelona, Barcelona 17108035, Spain
| | - Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
| | - Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
| |
Collapse
|
41
|
Shain C, Paunov A, Chen X, Lipkin B, Fedorenko E. No evidence of theory of mind reasoning in the human language network. Cereb Cortex 2023; 33:6299-6319. [PMID: 36585774 PMCID: PMC10183748 DOI: 10.1093/cercor/bhac505] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 11/30/2022] [Accepted: 12/01/2022] [Indexed: 01/01/2023] Open
Abstract
Language comprehension and the ability to infer others' thoughts (theory of mind [ToM]) are interrelated during development and language use. However, neural evidence that bears on the relationship between language and ToM mechanisms is mixed. Although robust dissociations have been reported in brain disorders, brain activations for contrasts that target language and ToM bear similarities, and some have reported overlap. We take another look at the language-ToM relationship by evaluating the response of the language network, as measured with fMRI, to verbal and nonverbal ToM across 151 participants. Individual-participant analyses reveal that all core language regions respond more strongly when participants read vignettes about false beliefs compared to the control vignettes. However, we show that these differences are largely due to linguistic confounds, and no such effects appear in a nonverbal ToM task. These results argue against cognitive and neural overlap between language processing and ToM. In exploratory analyses, we find responses to social processing in the "periphery" of the language network-right-hemisphere homotopes of core language areas and areas in bilateral angular gyri-but these responses are not selectively ToM-related and may reflect general visual semantic processing.
Collapse
Affiliation(s)
- Cory Shain
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT Bldg 46-3160, 77 Massachusetts Avenue, Cambridge, MA 02139, United States
| | - Alexander Paunov
- INSERM-CEA Cognitive Neuroimaging Unit (UNICOG), NeuroSpin Center, Gif sur Yvette 91191, France
| | - Xuanyi Chen
- Department of Cognitive Sciences, Rice University, 6100 Main Street, Houston, TX 77005, United States
| | - Benjamin Lipkin
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT Bldg 46-3160, 77 Massachusetts Avenue, Cambridge, MA 02139, United States
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT Bldg 46-3160, 77 Massachusetts Avenue, Cambridge, MA 02139, United States
- Program in Speech Hearing in Bioscience and Technology, Harvard Medical School, 260 Longwood Avenue, TMEC 333, Boston, MA 02115, United States
| |
Collapse
|
42
|
Hauptman M, Blank I, Fedorenko E. Non-literal language processing is jointly supported by the language and theory of mind networks: Evidence from a novel meta-analytic fMRI approach. Cortex 2023; 162:96-114. [PMID: 37023480 PMCID: PMC10210011 DOI: 10.1016/j.cortex.2023.01.013] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 11/08/2022] [Accepted: 01/11/2023] [Indexed: 03/12/2023]
Abstract
Going beyond the literal meaning of language is key to communicative success. However, the mechanisms that support non-literal inferences remain debated. Using a novel meta-analytic approach, we evaluate the contribution of linguistic, social-cognitive, and executive mechanisms to non-literal interpretation. We identified 74 fMRI experiments (n = 1,430 participants) from 2001 to 2021 that contrasted non-literal language comprehension with a literal control condition, spanning ten phenomena (e.g., metaphor, irony, indirect speech). Applying the activation likelihood estimation approach to the 825 activation peaks yielded six left-lateralized clusters. We then evaluated the locations of both the individual-study peaks and the clusters against probabilistic functional atlases (cf. anatomical locations, as is typically done) for three candidate brain networks-the language-selective network (Fedorenko, Behr, & Kanwisher, 2011), which supports language processing, the Theory of Mind (ToM) network (Saxe & Kanwisher, 2003), which supports social inferences, and the domain-general Multiple-Demand (MD) network (Duncan, 2010), which supports executive control. These atlases were created by overlaying individual activation maps of participants who performed robust and extensively validated 'localizer' tasks that selectively target each network in question (n = 806 for language; n = 198 for ToM; n = 691 for MD). We found that both the individual-study peaks and the ALE clusters fell primarily within the language network and the ToM network. These results suggest that non-literal processing is supported by both i) mechanisms that process literal linguistic meaning, and ii) mechanisms that support general social inference. They thus undermine a strong divide between literal and non-literal aspects of language and challenge the claim that non-literal processing requires additional executive resources.
Collapse
Affiliation(s)
- Miriam Hauptman
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA.
| | - Idan Blank
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Department of Psychology, UCLA, Los Angeles, CA 90095, USA; Department of Linguistics, UCLA, Los Angeles, CA 90095, USA
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Program in Speech and Hearing in Bioscience and Technology, Harvard University, Boston, MA 02114, USA.
| |
Collapse
|
43
|
Yargholi E, Op de Beeck H. Category Trumps Shape as an Organizational Principle of Object Space in the Human Occipitotemporal Cortex. J Neurosci 2023; 43:2960-2972. [PMID: 36922027 PMCID: PMC10124953 DOI: 10.1523/jneurosci.2179-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 02/22/2023] [Accepted: 03/03/2023] [Indexed: 03/17/2023] Open
Abstract
The organizational principles of the object space represented in the human ventral visual cortex are debated. Here we contrast two prominent proposals that, in addition to an organization in terms of animacy, propose either a representation related to aspect ratio (stubby-spiky) or to the distinction between faces and bodies. We designed a critical test that dissociates the latter two categories from aspect ratio and investigated responses from human fMRI (of either sex) and deep neural networks (BigBiGAN). Representational similarity and decoding analyses showed that the object space in the occipitotemporal cortex and BigBiGAN was partially explained by animacy but not by aspect ratio. Data-driven approaches showed clusters for face and body stimuli and animate-inanimate separation in the representational space of occipitotemporal cortex and BigBiGAN, but no arrangement related to aspect ratio. In sum, the findings go in favor of a model in terms of an animacy representation combined with strong selectivity for faces and bodies. SIGNIFICANCE STATEMENT: We contrasted animacy, aspect ratio, and face-body as principal dimensions characterizing object space in the occipitotemporal cortex. This is difficult to test, as typically faces and bodies differ in aspect ratio (faces are mostly stubby and bodies are mostly spiky). To dissociate the face-body distinction from the difference in aspect ratio, we created a new stimulus set in which faces and bodies have a similar and very wide distribution of values along the shape dimension of the aspect ratio. Brain imaging (fMRI) with this new stimulus set showed that, in addition to animacy, the object space is mainly organized by the face-body distinction and selectivity for aspect ratio is minor (despite its wide distribution).
Collapse
Affiliation(s)
- Elahe' Yargholi
- Department of Brain and Cognition, Leuven Brain Institute, Faculty of Psychology & Educational Sciences, KU Leuven, 3000 Leuven, Belgium
| | - Hans Op de Beeck
- Department of Brain and Cognition, Leuven Brain Institute, Faculty of Psychology & Educational Sciences, KU Leuven, 3000 Leuven, Belgium
| |
Collapse
|
44
|
Hu J, Small H, Kean H, Takahashi A, Zekelman L, Kleinman D, Ryan E, Nieto-Castañón A, Ferreira V, Fedorenko E. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb Cortex 2023; 33:4384-4404. [PMID: 36130104 PMCID: PMC10110436 DOI: 10.1093/cercor/bhac350] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 08/01/2022] [Accepted: 08/02/2022] [Indexed: 11/13/2022] Open
Abstract
A fronto-temporal brain network has long been implicated in language comprehension. However, this network's role in language production remains debated. In particular, it remains unclear whether all or only some language regions contribute to production, and which aspects of production these regions support. Across 3 functional magnetic resonance imaging experiments that rely on robust individual-subject analyses, we characterize the language network's response to high-level production demands. We report 3 novel results. First, sentence production, spoken or typed, elicits a strong response throughout the language network. Second, the language network responds to both phrase-structure building and lexical access demands, although the response to phrase-structure building is stronger and more spatially extensive, present in every language region. Finally, contra some proposals, we find no evidence of brain regions-within or outside the language network-that selectively support phrase-structure building in production relative to comprehension. Instead, all language regions respond more strongly during production than comprehension, suggesting that production incurs a greater cost for the language network. Together, these results align with the idea that language comprehension and production draw on the same knowledge representations, which are stored in a distributed manner within the language-selective network and are used to both interpret and generate linguistic utterances.
Collapse
Affiliation(s)
- Jennifer Hu
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
| | - Hannah Small
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, United States
| | - Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Atsushi Takahashi
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
| | - Leo Zekelman
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
| | | | - Elizabeth Ryan
- St. George’s Medical School, St. George’s University, Grenada, West Indies
| | - Alfonso Nieto-Castañón
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215, United States
| | - Victor Ferreira
- Department of Psychology, UCSD, La Jolla, CA 92093, United States
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
| |
Collapse
|
45
|
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 2023; 12:e82580. [PMID: 36847339 PMCID: PMC10038662 DOI: 10.7554/elife.82580] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 02/25/2023] [Indexed: 03/01/2023] Open
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
Collapse
Affiliation(s)
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
| | - Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Adam H Rockter
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Charles Y Zheng
- Machine Learning Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| |
Collapse
|
46
|
Li J, Kean H, Fedorenko E, Saygin Z. Intact reading ability despite lacking a canonical visual word form area in an individual born without the left superior temporal lobe. Cogn Neuropsychol 2023; 39:249-275. [PMID: 36653302 DOI: 10.1080/02643294.2023.2164923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The visual word form area (VWFA), a region canonically located within left ventral temporal cortex (VTC), is specialized for orthography in literate adults, presumably due to its connectivity with frontotemporal language regions. But is a typical, left-lateralized language network critical for the VWFA's emergence? We investigated this question in an individual (EG) born without the left superior temporal lobe but who has normal reading ability. EG showed typical face-selectivity bilaterally but no word-selectivity either in the right VWFA or in the spared left VWFA. Moreover, in contrast with the idea that the VWFA is simply part of the language network, no part of EG's VTC showed selectivity to higher-level linguistic processing. Interestingly, EG's VWFA showed reliable multivariate patterns that distinguished words from other categories. These results suggest that a typical left-hemisphere language network is necessary for a canonical VWFA, and that orthographic processing can otherwise be supported by a distributed neural code.
Collapse
Affiliation(s)
- Jin Li
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Hope Kean
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
| | - Zeynep Saygin
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| |
Collapse
|
47
|
Ayzenberg V, Simmons C, Behrmann M. Temporal asymmetries and interactions between dorsal and ventral visual pathways during object recognition. Cereb Cortex Commun 2023; 4:tgad003. [PMID: 36726794 PMCID: PMC9883614 DOI: 10.1093/texcom/tgad003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 12/30/2022] [Accepted: 01/02/2023] [Indexed: 01/15/2023] Open
Abstract
Despite their anatomical and functional distinctions, there is growing evidence that the dorsal and ventral visual pathways interact to support object recognition. However, the exact nature of these interactions remains poorly understood. Is the presence of identity-relevant object information in the dorsal pathway simply a byproduct of ventral input? Or, might the dorsal pathway be a source of input to the ventral pathway for object recognition? In the current study, we used high-density EEG (a technique with high temporal precision and spatial resolution sufficient to distinguish parietal and temporal lobes) to characterise the dynamics of dorsal and ventral pathways during object viewing. Using multivariate analyses, we found that category decoding in the dorsal pathway preceded that in the ventral pathway. Importantly, the dorsal pathway predicted the multivariate responses of the ventral pathway in a time-dependent manner, rather than the other way around. Together, these findings suggest that the dorsal pathway is a critical source of input to the ventral pathway for object recognition.
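Time-resolved multivariate decoding of the kind described here is typically run independently at each timepoint of the epoched EEG. A minimal numpy-only sketch follows; it uses a nearest-class-centroid classifier with leave-one-out cross-validation, which is an assumption of convenience, not necessarily the classifier the authors used:

```python
import numpy as np

def decode_timecourse(X, y):
    """Leave-one-out nearest-centroid decoding accuracy at each timepoint.

    X: array (n_trials, n_channels, n_timepoints) of epoched EEG.
    y: array (n_trials,) of binary class labels (0 or 1).
    Returns an array of accuracies (one per timepoint); chance is 0.5.
    """
    n_trials, _, n_time = X.shape
    acc = np.zeros(n_time)
    for t in range(n_time):
        correct = 0
        for left_out in range(n_trials):
            train = np.arange(n_trials) != left_out
            # class centroids estimated without the held-out trial
            c0 = X[train & (y == 0), :, t].mean(axis=0)
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            pred = int(np.linalg.norm(X[left_out, :, t] - c1)
                       < np.linalg.norm(X[left_out, :, t] - c0))
            correct += pred == y[left_out]
        acc[t] = correct / n_trials
    return acc
```

Comparing the earliest timepoint at which accuracy reliably exceeds chance across sensor groups is one way to operationalize the "dorsal precedes ventral" claim.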
Collapse
Affiliation(s)
- Vladislav Ayzenberg
- Neuroscience Institute and Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Claire Simmons
- School of Medicine, University of Pittsburgh, Pittsburgh, PA 15213, USA
| | - Marlene Behrmann
- Neuroscience Institute and Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
| |
Collapse
|
48
|
Chen X, Liu X, Parker BJ, Zhen Z, Weiner KS. Functionally and structurally distinct fusiform face area(s) in over 1000 participants. Neuroimage 2023. [PMID: 36427753 DOI: 10.1101/2022.04.08.487562] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/22/2023] Open
Abstract
The fusiform face area (FFA) is a widely studied region causally involved in face perception. Even though cognitive neuroscientists have been studying the FFA for over two decades, answers to foundational questions regarding the function, architecture, and connectivity of the FFA from a large (N>1000) group of participants are still lacking. To fill this gap in knowledge, we quantified these multimodal features of fusiform face-selective regions in 1053 participants in the Human Connectome Project. After manually defining over 4,000 fusiform face-selective regions, we report five main findings. First, 68.76% of hemispheres have two cortically separate regions (pFus-faces/FFA-1 and mFus-faces/FFA-2). Second, in 26.69% of hemispheres, pFus-faces/FFA-1 and mFus-faces/FFA-2 are spatially contiguous, yet are distinct based on functional, architectural, and connectivity metrics. Third, pFus-faces/FFA-1 is more face-selective than mFus-faces/FFA-2, and the two regions have distinct functional connectivity fingerprints. Fourth, pFus-faces/FFA-1 is cortically thinner and more heavily myelinated than mFus-faces/FFA-2. Fifth, face-selective patterns and functional connectivity fingerprints of each region are more similar in monozygotic than dizygotic twins and more so than architectural gradients. As we share our areal definitions with the field, future studies can explore how structural and functional features of these regions will inform theories regarding how visual categories are represented in the brain.
Collapse
Affiliation(s)
- Xiayu Chen
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
| | - Xingyu Liu
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China
| | - Benjamin J Parker
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States
| | - Zonglei Zhen
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China.
| | - Kevin S Weiner
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States; Department of Psychology, University of California, Berkeley, CA 94720, United States
| |
Collapse
|
49
|
Chen X, Liu X, Parker BJ, Zhen Z, Weiner KS. Functionally and structurally distinct fusiform face area(s) in over 1000 participants. Neuroimage 2023; 265:119765. [PMID: 36427753 PMCID: PMC9889174 DOI: 10.1016/j.neuroimage.2022.119765] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 11/19/2022] [Accepted: 11/21/2022] [Indexed: 11/24/2022] Open
Abstract
The fusiform face area (FFA) is a widely studied region causally involved in face perception. Even though cognitive neuroscientists have been studying the FFA for over two decades, answers to foundational questions regarding the function, architecture, and connectivity of the FFA from a large (N>1000) group of participants are still lacking. To fill this gap in knowledge, we quantified these multimodal features of fusiform face-selective regions in 1053 participants in the Human Connectome Project. After manually defining over 4,000 fusiform face-selective regions, we report five main findings. First, 68.76% of hemispheres have two cortically separate regions (pFus-faces/FFA-1 and mFus-faces/FFA-2). Second, in 26.69% of hemispheres, pFus-faces/FFA-1 and mFus-faces/FFA-2 are spatially contiguous, yet are distinct based on functional, architectural, and connectivity metrics. Third, pFus-faces/FFA-1 is more face-selective than mFus-faces/FFA-2, and the two regions have distinct functional connectivity fingerprints. Fourth, pFus-faces/FFA-1 is cortically thinner and more heavily myelinated than mFus-faces/FFA-2. Fifth, face-selective patterns and functional connectivity fingerprints of each region are more similar in monozygotic than dizygotic twins and more so than architectural gradients. As we share our areal definitions with the field, future studies can explore how structural and functional features of these regions will inform theories regarding how visual categories are represented in the brain.
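A "functional connectivity fingerprint," as used here, is simply the vector of a seed region's timecourse correlations with a fixed set of target regions; two regions are functionally distinct when their fingerprints differ. A small sketch under assumed inputs (the array names and shapes are illustrative, not drawn from the paper's pipeline):

```python
import numpy as np

def connectivity_fingerprint(seed_ts, target_ts):
    """Correlate one seed region's timecourse with each target region.

    seed_ts: array (n_timepoints,); target_ts: array (n_targets, n_timepoints).
    Returns array (n_targets,) of Pearson correlations: the fingerprint.
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    targets = target_ts - target_ts.mean(axis=1, keepdims=True)
    targets /= target_ts.std(axis=1, keepdims=True)
    return targets @ seed / len(seed)

def fingerprint_similarity(fp1, fp2):
    """Cosine similarity between two connectivity fingerprints."""
    return float(fp1 @ fp2 / (np.linalg.norm(fp1) * np.linalg.norm(fp2)))
```

Under this framing, "pFus-faces/FFA-1 and mFus-faces/FFA-2 have distinct fingerprints" means their fingerprint similarity is reliably lower than within-region similarity across scan splits.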
Collapse
Affiliation(s)
- Xiayu Chen
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
| | - Xingyu Liu
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China
| | - Benjamin J Parker
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States
| | - Zonglei Zhen
- Faculty of Psychology, Beijing Normal University, Beijing 100875, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China.
| | - Kevin S Weiner
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States; Department of Psychology, University of California, Berkeley, CA 94720, United States
| |
Collapse
|
50
|
Steel A, Garcia BD, Silson EH, Robertson CE. Evaluating the efficacy of multi-echo ICA denoising on model-based fMRI. Neuroimage 2022; 264:119723. [PMID: 36328274 DOI: 10.1016/j.neuroimage.2022.119723] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 09/30/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022] Open
Abstract
fMRI is an indispensable tool for neuroscience investigation, but this technique is limited by multiple sources of physiological and measurement noise. These noise sources are particularly problematic for analysis techniques that require high signal-to-noise ratio for stable model fitting, such as voxel-wise modeling. Multi-echo data acquisition in combination with echo-time dependent ICA denoising (ME-ICA) represents one promising strategy to mitigate physiological and hardware-related noise sources as well as motion-related artifacts. However, most studies employing ME-ICA to date are resting-state fMRI studies, and therefore we have a limited understanding of the impact of ME-ICA on complex task or model-based fMRI paradigms. Here, we addressed this knowledge gap by comparing data quality and model fitting performance of data acquired during a visual population receptive field (pRF) mapping (N = 13 participants) experiment after applying one of three preprocessing procedures: ME-ICA, optimally combined multi-echo data without ICA-denoising, and typical single echo processing. As expected, multi-echo fMRI improved temporal signal-to-noise compared to single echo fMRI, with ME-ICA amplifying the improvement compared to optimal combination alone. However, unexpectedly, this boost in temporal signal-to-noise did not directly translate to improved model fitting performance: compared to single echo acquisition, model fitting was only improved after ICA-denoising. Specifically, compared to single echo acquisition, ME-ICA resulted in improved variance explained by our pRF model throughout the visual system, including anterior regions of the temporal and parietal lobes where SNR is typically low, while optimal combination without ICA did not. ME-ICA also improved reliability of parameter estimates compared to single echo and optimally combined multi-echo data without ICA-denoising. Collectively, these results suggest that ME-ICA is effective for denoising task-based fMRI data for modeling analyses and maintains the integrity of the original data. Therefore, ME-ICA may be beneficial for complex fMRI experiments, including voxel-wise modeling and naturalistic paradigms.
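Two quantities in this abstract are easy to make concrete: temporal SNR (a voxel's mean over its standard deviation across time) and the "optimally combined" multi-echo signal, conventionally a weighted sum of echoes with weights proportional to TE·exp(−TE/T2*). A hedged sketch follows; the array layout is assumed, and a single fixed T2* stands in for the voxel-wise fitted T2* map a real pipeline would use:

```python
import numpy as np

def tsnr(ts):
    """Temporal SNR of a timeseries: mean / standard deviation over time."""
    return ts.mean(axis=-1) / ts.std(axis=-1)

def optimally_combine(echo_data, tes, t2star=30.0):
    """Weighted average of multi-echo data using TE * exp(-TE/T2*) weights.

    echo_data: array (n_echoes, ..., n_timepoints); tes: echo times in ms.
    Weights are normalized to sum to 1, then broadcast over all
    non-echo dimensions.
    """
    tes = np.asarray(tes, dtype=float)
    w = tes * np.exp(-tes / t2star)
    w /= w.sum()
    return np.tensordot(w, echo_data, axes=(0, 0))
```

Comparing `tsnr` maps of single-echo, optimally combined, and ICA-denoised data is the kind of contrast the study reports, with the combination step alone boosting tSNR but ICA-denoising needed to improve pRF model fits.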
Collapse
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
| | - Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
| | - Edward H Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
| | - Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
| |
Collapse
|