1. Zhang Y, Zhou K, Bao P, Liu J. A biologically inspired computational model of human ventral temporal cortex. Neural Netw 2024;178:106437. PMID: 38936111. DOI: 10.1016/j.neunet.2024.106437.
Abstract
Our minds represent miscellaneous objects in the physical world metaphorically in an abstract and complex high-dimensional object space, which is implemented in a two-dimensional surface of the ventral temporal cortex (VTC) with topologically organized object selectivity. Here we investigated principles guiding the topographical organization of object selectivities in the VTC by constructing a hybrid Self-Organizing Map (SOM) model that harnesses a biologically inspired algorithm of wiring cost minimization and adheres to the constraints of the lateral wiring span of human VTC neurons. In a series of in silico experiments with functional brain neuroimaging and neurophysiological single-unit data from humans and non-human primates, the VTC-SOM predicted the topographical structure of fine-scale category-selective regions (face-, tool-, body-, and place-selective regions) and the boundary in large-scale abstract functional maps (animate vs. inanimate, real-world small-size vs. big-size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity and view-invariant representations). In addition, when the same principle was applied to V1 orientation preferences, a pinwheel-like topology emerged, suggesting the model's broad applicability. In summary, our study illustrates that the simple principle of wiring cost minimization, coupled with the appropriate biological constraint of lateral wiring span, is able to implement the high-dimensional object space in a two-dimensional cortical surface.
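The core mechanism this abstract describes can be illustrated with a minimal Self-Organizing Map. Everything below is a toy sketch: the grid size, learning schedule, and two-cluster input data are hypothetical, and the paper's lateral wiring-span constraint is not modeled.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a toy 2-D Self-Organizing Map: high-dimensional inputs get mapped
    onto a 2-D sheet so that neighboring units acquire similar preferences."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n_units, dim = h * w, data.shape[1]
    weights = rng.normal(size=(n_units, dim))
    # (row, col) position of every unit on the simulated cortical sheet
    coords = np.stack(np.unravel_index(np.arange(n_units), grid), axis=1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood radius
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbr = np.exp(-d2 / (2 * sigma ** 2))  # update falls off with sheet distance
            weights += lr * nbr[:, None] * (x - weights)
    return weights.reshape(h, w, dim)

# Toy "object space": two clusters of 16-D feature vectors (e.g., two categories).
rng = np.random.default_rng(1)
cat_a = rng.normal(0.0, 0.1, size=(50, 16)) + 1.0
cat_b = rng.normal(0.0, 0.1, size=(50, 16)) - 1.0
som = train_som(np.vstack([cat_a, cat_b]))
# Each unit ends up preferring one category, and preferences form contiguous
# patches on the sheet - a topographic map of the two "categories".
pref = som.mean(axis=2) > 0
```

The neighborhood update is the wiring-cost-minimizing ingredient: units close on the sheet are pulled toward the same inputs, so similar selectivities end up spatially clustered.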
Affiliation(s)
- Yiyuan Zhang
- Tsinghua Laboratory of Brain & Intelligence, Department of Psychology, Tsinghua University, Beijing, 100084, China
- Ke Zhou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Pinglei Bao
- Department of Psychology, Peking University, Beijing, 100871, China
- Jia Liu
- Tsinghua Laboratory of Brain & Intelligence, Department of Psychology, Tsinghua University, Beijing, 100084, China
2. Contier O, Baker CI, Hebart MN. Distributed representations of behaviour-derived object dimensions in the human visual system. Nat Hum Behav 2024. PMID: 39251723. DOI: 10.1038/s41562-024-01980-y.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioural goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgements directly onto the brain. Our results reveal broadly distributed representations of behaviourally relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behaviour-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
3. Özdemir Ş, Şentürk YD, Ünver N, Demircan C, Olivers CNL, Egner T, Günseli E. Effects of Context Changes on Memory Reactivation. J Neurosci 2024;44:e2096232024. PMID: 39103222. PMCID: PMC11376331. DOI: 10.1523/jneurosci.2096-23.2024.
Abstract
While the influence of context on long-term memory (LTM) is well documented, its effects on the interaction between working memory (WM) and LTM remain less understood. In this study, we explored these interactions using a delayed match-to-sample task, where participants (6 males, 16 females) encountered the same target object across six consecutive trials, facilitating the transition from WM to LTM. During half of these target repetitions, the background color changed. We measured the WM storage of the target using the contralateral delay activity in electroencephalography. Our results reveal that task-irrelevant context changes trigger the reactivation of long-term memories in WM. This reactivation may be attributed to content-context binding in WM and hippocampal pattern separation.
Affiliation(s)
- Şahcan Özdemir
- Department of Psychology, Sabancı University, Istanbul 34956, Turkey
- Nursima Ünver
- Department of Psychology, Sabancı University, Istanbul 34956, Turkey
- Can Demircan
- Department of Psychology, Sabancı University, Istanbul 34956, Turkey
- Christian N L Olivers
- Department of Experimental and Applied Psychology, Vrije Universiteit, Amsterdam 1081 BT, the Netherlands
- Tobias Egner
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Eren Günseli
- Department of Psychology, Sabancı University, Istanbul 34956, Turkey
4. Cohanpour M, Aly M, Gottlieb J. Neural Representations of Sensory Uncertainty and Confidence Are Associated with Perceptual Curiosity. J Neurosci 2024;44:e0974232024. PMID: 38969505. PMCID: PMC11326865. DOI: 10.1523/jneurosci.0974-23.2024.
Abstract
Humans are immensely curious and motivated to reduce uncertainty, but little is known about the neural mechanisms that generate curiosity. Curiosity is inversely associated with confidence, suggesting that it is triggered by states of low confidence (subjective uncertainty), but the neural mechanisms of this link have been little investigated. Inspired by studies of sensory uncertainty, we hypothesized that visual areas provide multivariate representations of uncertainty, which are read out by higher-order structures to generate signals of confidence and, ultimately, curiosity. We scanned participants (17 female, 15 male) using fMRI while they performed a new task in which they rated their confidence in identifying distorted images of animals and objects and their curiosity to see the clear image. We measured the activity evoked by each image in the occipitotemporal cortex (OTC) and devised a new metric of "OTC Certainty" indicating the strength of evidence this activity conveys about the animal versus object categories. We show that perceptual curiosity peaked at low confidence and that OTC Certainty negatively correlated with curiosity, establishing a link between curiosity and a multivariate representation of sensory uncertainty. Moreover, univariate (average) activity in two frontal areas (vmPFC and ACC) correlated positively with confidence and negatively with curiosity, and the vmPFC mediated the relationship between OTC Certainty and curiosity. The results reveal novel mechanisms through which uncertainty about an event generates curiosity about that event.
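The multivariate certainty readout described here can be sketched in a toy form. The prototype-correlation formula, voxel count, and simulated data below are illustrative assumptions, not the paper's actual "OTC Certainty" metric.

```python
import numpy as np

def otc_certainty(pattern, proto_animal, proto_object):
    """Toy certainty readout: absolute difference between a response pattern's
    correlations with an animal prototype and an object prototype. Near 0 means
    the pattern is ambiguous between categories; larger means more certain."""
    r_animal = np.corrcoef(pattern, proto_animal)[0, 1]
    r_object = np.corrcoef(pattern, proto_object)[0, 1]
    return abs(r_animal - r_object)

rng = np.random.default_rng(0)
proto_a = rng.normal(size=200)  # hypothetical animal prototype (200 "voxels")
proto_o = rng.normal(size=200)  # hypothetical object prototype
clear_animal = proto_a + rng.normal(scale=0.3, size=200)            # clear image
distorted = 0.5 * proto_a + 0.5 * proto_o + rng.normal(scale=0.3, size=200)
c_clear = otc_certainty(clear_animal, proto_a, proto_o)
c_distorted = otc_certainty(distorted, proto_a, proto_o)
# The clear image yields higher certainty; per the abstract, low-certainty
# states like the distorted image are the ones associated with curiosity.
```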
Affiliation(s)
- Michael Cohanpour
- Department of Neuroscience, Columbia University, New York, New York 10025
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10025
- Mariam Aly
- Department of Psychology, Columbia University, New York, New York 10025
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, New York 10025
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10025
- Kavli Institute for Brain Science, Columbia University, New York, New York 10025
5. Arcaro M, Livingstone M. A Whole-Brain Topographic Ontology. Annu Rev Neurosci 2024;47:21-40. PMID: 38360565. DOI: 10.1146/annurev-neuro-082823-073701.
Abstract
It is a common view that the intricate array of specialized domains in the ventral visual pathway is innately prespecified. What this review postulates is that it is not. We explore the origins of domain specificity, hypothesizing that the adult brain emerges from an interplay between a domain-general map-based architecture, shaped by intrinsic mechanisms, and experience. We argue that the most fundamental innate organization of cortex in general, and not just the visual pathway, is a map-based topography that governs how the environment maps onto the brain, how brain areas interconnect, and ultimately, how the brain processes information.
Affiliation(s)
- Michael Arcaro
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
6. Contier O, Baker CI, Hebart MN. Distributed representations of behavior-derived object dimensions in the human visual system. bioRxiv [Preprint] 2024:2023.08.23.553812. PMID: 37662312. PMCID: PMC10473665. DOI: 10.1101/2023.08.23.553812.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioral goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgments directly onto the brain. Our results reveal broadly distributed representations of behaviorally-relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behavior-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- O Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Leipzig, Germany
- C I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, USA
- M N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
7. Harnett NG, Fleming LL, Clancy KJ, Ressler KJ, Rosso IM. Affective Visual Circuit Dysfunction in Trauma and Stress-Related Disorders. Biol Psychiatry 2024:S0006-3223(24)01433-1. PMID: 38996901. DOI: 10.1016/j.biopsych.2024.07.003.
Abstract
Posttraumatic stress disorder (PTSD) is widely recognized as involving disruption of core neurocircuitry that underlies processing, regulation, and response to threat. In particular, the prefrontal cortex-hippocampal-amygdala circuit is a major contributor to posttraumatic dysfunction. However, the functioning of core threat neurocircuitry is partially dependent on sensorial inputs, and previous research has demonstrated that dense, reciprocal connections exist between threat circuits and the ventral visual stream. Furthermore, emergent evidence suggests that trauma exposure and resultant PTSD symptoms are associated with altered structure and function of the ventral visual stream. In the current review, we discuss evidence that both threat and visual circuitry together are an integral part of PTSD pathogenesis. An overview of the relevance of visual processing to PTSD is discussed in the context of both basic and translational research, highlighting the impact of stress on affective visual circuitry. This review further synthesizes emergent literature to suggest potential timing-dependent effects of traumatic stress on threat and visual circuits that may contribute to PTSD development. We conclude with recommendations for future research to move the field toward a more complete understanding of PTSD neurobiology.
Affiliation(s)
- Nathaniel G Harnett
- Division of Depression and Anxiety, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Leland L Fleming
- Division of Depression and Anxiety, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Kevin J Clancy
- Division of Depression and Anxiety, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Kerry J Ressler
- Division of Depression and Anxiety, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Isabelle M Rosso
- Division of Depression and Anxiety, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
8. Bezsudnova Y, Quinn AJ, Wynn SC, Jensen O. Spatiotemporal Properties of Common Semantic Categories for Words and Pictures. J Cogn Neurosci 2024;36:1760-1769. DOI: 10.1162/jocn_a_02182.
Abstract
The timing of semantic processing during object recognition in the brain is a topic of ongoing discussion. One way of addressing this question is by applying multivariate pattern analysis to human electrophysiological responses to object images of different semantic categories. However, although multivariate pattern analysis can reveal whether neuronal activity patterns are distinct for different stimulus categories, concerns remain about whether low-level visual features also contribute to the classification results. To circumvent this issue, we applied a cross-decoding approach to magnetoencephalography data from stimuli from two different modalities: images and their corresponding written words. We employed items from three categories and presented them in a randomized order. We show that if the classifier is trained on words, pictures are classified between 150 and 430 msec after stimulus onset, and when training on pictures, words are classified between 225 and 430 msec. The topographical map, identified using a searchlight approach for cross-modal activation in both directions, showed left lateralization, confirming the involvement of linguistic representations. These results point to semantic activation of pictorial stimuli occurring at ∼150 msec, whereas for words, the semantic activation occurs at ∼230 msec.
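The cross-decoding logic (train on one modality, test on the other) can be sketched on simulated data. The nearest-centroid classifier and all simulation parameters below are illustrative assumptions, standing in for the time-resolved MVPA used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_trials = 30, 40  # hypothetical sensor count / trials per category

# Each of three semantic categories has a modality-independent "semantic"
# pattern shared by words and pictures, plus a modality-specific offset and
# trial noise - the structure that cross-modal decoding exploits.
semantic = rng.normal(size=(3, n_ch))

def simulate(modality_offset):
    X, y = [], []
    for c in range(3):
        X.append(semantic[c] + modality_offset + rng.normal(size=(n_trials, n_ch)))
        y += [c] * n_trials
    return np.vstack(X), np.array(y)

words_X, words_y = simulate(rng.normal(scale=0.5, size=n_ch))
pics_X, pics_y = simulate(rng.normal(scale=0.5, size=n_ch))

# Train on word-evoked patterns, test on picture-evoked patterns,
# using a simple nearest-centroid classifier.
centroids = np.stack([words_X[words_y == c].mean(axis=0) for c in range(3)])
pred = ((pics_X[:, None, :] - centroids) ** 2).sum(axis=2).argmin(axis=1)
acc = (pred == pics_y).mean()  # above chance (1/3) only because the two
                               # modalities share category structure
```

Because low-level visual features differ completely between a word and a picture, above-chance transfer of this kind is evidence for a shared (semantic) representation rather than visual-feature leakage.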
Affiliation(s)
- Syanah C Wynn
- University of Birmingham
- Gutenberg University Medical Center Mainz
9. Tian S, Chen L, Wang X, Li G, Fu Z, Ji Y, Lu J, Wang X, Shan S, Bi Y. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024;174:241-255. DOI: 10.1016/j.cortex.2024.02.016.
Abstract
Shape is a property that can be perceived by both vision and touch and is classically considered supramodal. While there is mounting evidence for the shared cognitive and neural representation space between visual and tactile shape, previous research tended to rely on dissimilarity structures between objects and had not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to stronger differences in animals than in tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools with rich tactile/manipulation experience, the blind group produced shapes comparable overall to those of the sighted, yet also showed intriguing differences. The blind group had more variations and a systematic bias in the geometric property of tools (making them stubbier than the sighted), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, the object shape representation reflects the intricate orchestration of vision, touch and language.
Affiliation(s)
- Shuang Tian
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lingjuan Chen
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Guochao Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ze Fu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yufeng Ji
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jiahui Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shiguang Shan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China; Chinese Institute for Brain Research, Beijing, China
10. Hagen S, Zhao Y, Moonen L, Ulken N, Peelen MV. What drives the automatic retrieval of real-world object size knowledge? J Exp Psychol Hum Percept Perform 2024;50:358-369. PMID: 38300565. PMCID: PMC7616435. DOI: 10.1037/xhp0001189.
Abstract
Real-world object size is a behaviorally relevant object property that is automatically retrieved when viewing object images: participants are faster to indicate the bigger of two object images when this object is also bigger in the real world. What drives this size Stroop effect? One possibility is that it reflects the automatic retrieval of real-world size after objects are recognized at the basic level (e.g., recognizing an object as a plane activates large real-world size). An alternative possibility is that the size Stroop effect is driven by automatic associations between low-/mid-level visual features (e.g., rectilinearity) and real-world size, bypassing object recognition. Here, we tested both accounts. In Experiment 1, objects were displayed upright and inverted, slowing down recognition while equating visual features. Inversion strongly reduced the Stroop effect, indicating that object recognition contributed to the Stroop effect. Independently of inversion, however, trial-wise differences in rectilinearity also contributed to the Stroop effect. In Experiment 2, the Stroop effect was compared between manmade objects (for which rectilinearity was associated with size) and animals (no association between rectilinearity and size). The Stroop effect was larger for animals than for manmade objects, indicating that rectilinear feature differences were not necessary for the Stroop effect. Finally, in Experiment 3, unrecognizable "texform" objects that maintained size-related visual feature differences were displayed upright and inverted. Results revealed a small Stroop effect for both upright and inverted conditions. Altogether, these results indicate that the size Stroop effect partly follows object recognition with an additional contribution from visual feature associations.
Affiliation(s)
- Simen Hagen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Yuanfang Zhao
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Lydia Moonen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Neele Ulken
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
11. Feng X, Xu S, Li Y, Liu J. Body size as a metric for the affordable world. eLife 2024;12:RP90583. PMID: 38547366. PMCID: PMC10987089. DOI: 10.7554/elife.90583.
Abstract
The physical body of an organism serves as a vital interface for interactions with its environment. Here, we investigated the impact of human body size on the perception of action possibilities (affordances) offered by the environment. We found that the body size delineated a distinct boundary on affordances, dividing objects of continuous real-world sizes into two discrete categories with each affording distinct action sets. Additionally, the boundary shifted with imagined body sizes, suggesting a causal link between body size and affordance perception. Intriguingly, ChatGPT, a large language model lacking physical embodiment, exhibited a modest yet comparable affordance boundary at the scale of human body size, suggesting the boundary is not exclusively derived from organism-environment interactions. A subsequent fMRI experiment offered preliminary evidence of affordance processing exclusively for objects within the body size range, but not for those beyond. This suggests that, in the eyes of an organism, only objects that can be manipulated offer affordances. In summary, our study suggests a novel definition of object-ness in an affordance-based context, advocating the concept of embodied cognition in understanding the emergence of intelligence constrained by an organism's physical attributes.
Affiliation(s)
- Xinran Feng
- Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Shan Xu
- Faculty of Psychology, Beijing Normal University, Beijing, China
- Yuannan Li
- Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Jia Liu
- Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
12. Lu Z, Golomb JD. Human EEG and artificial neural networks reveal disentangled representations of object real-world size in natural images. bioRxiv [Preprint] 2024:2023.08.19.553999. PMID: 37662197. PMCID: PMC10473678. DOI: 10.1101/2023.08.19.553999.
Abstract
Remarkably, human brains have the ability to accurately perceive and process the real-world size of objects, despite vast differences in distance and perspective. While previous studies have delved into this phenomenon, distinguishing this ability from other visual perceptions, like depth, has been challenging. Using the THINGS EEG2 dataset with high time-resolution human brain recordings and more ecologically valid naturalistic stimuli, our study uses an innovative approach to disentangle neural representations of object real-world size from retinal size and perceived real-world depth in a way that was not previously possible. Leveraging this state-of-the-art dataset, our EEG representational similarity results reveal a pure representation of object real-world size in human brains. We report a representational timeline of visual object processing: object real-world depth appeared first, then retinal size, and finally, real-world size. Additionally, we input both these naturalistic images and object-only images without natural background into artificial neural networks. Consistent with the human EEG findings, we also successfully disentangled representation of object real-world size from retinal size and real-world depth in all three types of artificial neural networks (visual-only ResNet, visual-language CLIP, and language-only Word2Vec). Moreover, our multi-modal representational comparison framework across human EEG and artificial neural networks reveals real-world size as a stable and higher-level dimension in object space incorporating both visual and semantic information. Our research provides a detailed and clear characterization of the object processing process, which offers further advances and insights into our understanding of object space and the construction of more brain-like visual models.
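The representational-similarity comparison between brains and networks that this abstract relies on can be sketched generically. Everything below is a toy: the simulated "EEG-like" and "ANN-like" data and the size-coding scheme are assumptions, not the study's pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    the response patterns evoked by every pair of stimuli."""
    return 1 - np.corrcoef(patterns)

def rsa(patterns_a, patterns_b):
    """Correlate the upper triangles of two systems' RDMs - the basic
    representational-similarity comparison across measurement modalities."""
    iu = np.triu_indices(patterns_a.shape[0], k=1)
    return np.corrcoef(rdm(patterns_a)[iu], rdm(patterns_b)[iu])[0, 1]

# Hypothetical data: 20 stimuli ordered by real-world size. The "EEG-like" and
# "ANN-like" systems each encode size along their own random direction, plus
# system-specific noise; a third system is unrelated to size.
rng = np.random.default_rng(0)
size = np.linspace(0, 1, 20)[:, None]
eeg_like = size * rng.normal(size=(1, 64)) + rng.normal(scale=0.3, size=(20, 64))
ann_like = size * rng.normal(size=(1, 128)) + rng.normal(scale=0.3, size=(20, 128))
unrelated = rng.normal(size=(20, 128))
shared = rsa(eeg_like, ann_like)     # high: both RDMs reflect the size ordering
baseline = rsa(eeg_like, unrelated)  # near zero: no shared geometry
```

RSA works here because RDMs abstract away from the measurement space (channels vs. units), so EEG and networks of any architecture become directly comparable.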
13. Fairchild GT, Holler DE, Fabbri S, Gomez MA, Walsh-Snow JC. Naturalistic Object Representations Depend on Distance and Size Cues. bioRxiv [Preprint] 2024:2024.03.16.585308. PMID: 38559105. PMCID: PMC10980039. DOI: 10.1101/2024.03.16.585308.
Abstract
Egocentric distance and real-world size are important cues for object perception and action. Nevertheless, most studies of human vision rely on two-dimensional pictorial stimuli that convey ambiguous distance and size information. Here, we use fMRI to test whether pictures are represented differently in the human brain from real, tangible objects that convey unambiguous distance and size cues. Participants directly viewed stimuli in two display formats (real objects and matched printed pictures of those objects) presented at different egocentric distances (near and far). We measured the effects of format and distance on fMRI response amplitudes and response patterns. We found that fMRI response amplitudes in the lateral occipital and posterior parietal cortices were stronger overall for real objects than for pictures. In these areas and many others, including regions involved in action guidance, responses to real objects were stronger for near vs. far stimuli, whereas distance had little effect on responses to pictures-suggesting that distance determines relevance to action for real objects, but not for pictures. Although stimulus distance especially influenced response patterns in dorsal areas that operate in the service of visually guided action, distance also modulated representations in ventral cortex, where object responses are thought to remain invariant across contextual changes. We observed object size representations for both stimulus formats in ventral cortex but predominantly only for real objects in dorsal cortex. Together, these results demonstrate that whether brain responses reflect physical object characteristics depends on whether the experimental stimuli convey unambiguous information about those characteristics. 
Significance Statement: Classic frameworks of vision attribute perception of inherent object characteristics, such as size, to the ventral visual pathway, and processing of spatial characteristics relevant to action, such as distance, to the dorsal visual pathway. However, these frameworks are based on studies that used projected images of objects whose actual size and distance from the observer were ambiguous. Here, we find that when object size and distance information in the stimulus is less ambiguous, these characteristics are widely represented in both visual pathways. Our results provide valuable new insights into the brain representations of objects and their various physical attributes in the context of naturalistic vision.
14
|
Stoinski LM, Perkuhn J, Hebart MN. THINGSplus: New norms and metadata for the THINGS database of 1854 object concepts and 26,107 natural object images. Behav Res Methods 2024; 56:1583-1603. [PMID: 37095326 PMCID: PMC10991023 DOI: 10.3758/s13428-023-02110-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/13/2023] [Indexed: 04/26/2023]
Abstract
To study visual and semantic object representations, the need for well-curated object concepts and images has grown significantly in recent years. To address this, we have previously developed THINGS, a large-scale database of 1854 systematically sampled object concepts with 26,107 high-quality naturalistic images of these concepts. With THINGSplus, we significantly extend THINGS by adding concept- and image-specific norms and metadata for all 1854 concepts and one copyright-free image example per concept. Concept-specific norms were collected for the properties of real-world size, manmadeness, preciousness, liveliness, heaviness, naturalness, ability to move or be moved, graspability, holdability, pleasantness, and arousal. Further, we provide 53 superordinate categories as well as typicality ratings for all their members. Image-specific metadata includes a nameability measure, based on human-generated labels of the objects depicted in the 26,107 images. Finally, we identified one new public domain image per concept. Property (M = 0.97, SD = 0.03) and typicality ratings (M = 0.97, SD = 0.01) demonstrate excellent consistency, with the subsequently collected arousal ratings as the only exception (r = 0.69). Our property (M = 0.85, SD = 0.11) and typicality (r = 0.72, 0.74, 0.88) data correlated strongly with external norms, again with the lowest validity for arousal (M = 0.41, SD = 0.08). To summarize, THINGSplus provides a large-scale, externally validated extension to existing object norms and an important extension to THINGS, allowing detailed selection of stimuli and control variables for a wide range of research on visual object processing, language, and semantic memory.
Affiliation(s)
- Laura M Stoinski
- Max Planck Institute for Human Cognitive & Brain Sciences, Leipzig, Germany
- Jonas Perkuhn
- Max Planck Institute for Human Cognitive & Brain Sciences, Leipzig, Germany
- Martin N Hebart
- Max Planck Institute for Human Cognitive & Brain Sciences, Leipzig, Germany
- Justus Liebig University, Gießen, Germany
15. Morales-Torres R, Egner T. Beyond stimulus-response rules: Task sets incorporate information about performance difficulty. J Exp Psychol Learn Mem Cogn 2024:2024-56645-001. [PMID: 38407129 PMCID: PMC11345883 DOI: 10.1037/xlm0001337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/27/2024]
Abstract
The capacity for goal-directed behavior relies on the generation and implementation of task sets. While task sets are traditionally defined as mnemonic ensembles linking task goals to stimulus-response mappings, here we asked whether they also entail information about task difficulty: does the level of focus required for performing a task become incorporated within the task set? We addressed this question by employing a cued task-switching protocol, wherein participants engaged in two intermixed tasks with trial-unique stimuli. Both tasks were equally challenging during a baseline and a transfer phase, while their difficulty was manipulated during an intermediate learning phase by varying the proportion of trials with congruent versus incongruent response mappings between the two tasks. Comparing congruency effects between the baseline and transfer phases, Experiment 1 showed that the task with a low (high) proportion of congruent trials in the learning phase displayed reduced (increased) cross-task interference effects in the transfer phase, indicating that the level of task focus required in the learning phase had become associated with each task set. Experiment 2 indicated that strengthening of task focus level in the task with a low proportion of congruent trials was the primary driver of this effect. Experiment 3 ruled out the possibility of cue-control associations mediating this effect. Taken together, our results show that task sets can become associated with the focus level required to successfully implement them, thus significantly expanding our concept of the type of information that makes up a task set. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Ricardo Morales-Torres
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
- Tobias Egner
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
16. Hauptman M, Elli G, Pant R, Bedny M. Neural specialization for 'visual' concepts emerges in the absence of vision. bioRxiv 2024:2023.08.23.552701. [PMID: 37662234 PMCID: PMC10473738 DOI: 10.1101/2023.08.23.552701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/05/2023]
Abstract
Vision provides a key source of information about many concepts, including 'living things' (e.g., tiger) and visual events (e.g., sparkle). According to a prominent theoretical framework, neural specialization for different conceptual categories is shaped by sensory features, e.g., living things are neurally dissociable from navigable places because concepts of living things depend more on visual features. We tested this framework by comparing the neural basis of 'visual' concepts across sighted (n=22) and congenitally blind (n=21) adults. Participants judged the similarity of words varying in their reliance on vision while undergoing fMRI. We compared neural responses to living-thing nouns (birds, mammals) and place nouns (natural, manmade). In addition, we compared visual event verbs (e.g., 'sparkle') to non-visual events (sound emission, hand motion, mouth motion). People born blind exhibited distinctive univariate and multivariate responses to living things in a temporo-parietal semantic network activated by nouns, including the precuneus (PC). To our knowledge, this is the first demonstration that neural selectivity for living things does not require vision. We additionally observed preserved neural signatures of 'visual' light events in the left middle temporal gyrus (LMTG+). Across a wide range of semantic types, neural representations of sensory concepts develop independently of sensory experience.
Affiliation(s)
- Miriam Hauptman
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Giulia Elli
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Rashi Pant
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Department of Biological Psychology & Neuropsychology, Universität Hamburg, Germany
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
17. Şentürk YD, Ünver N, Demircan C, Egner T, Günseli E. The reactivation of task rules triggers the reactivation of task-relevant items. Cortex 2024; 171:465-480. [PMID: 38141571 DOI: 10.1016/j.cortex.2023.10.024] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 07/25/2023] [Accepted: 10/10/2023] [Indexed: 12/25/2023]
Abstract
Working memory (WM) describes the temporary storage of task-relevant items and procedural rules to guide action. Despite its central importance for goal-directed behavior, the interplay between WM and long-term memory (LTM) remains poorly understood. Recent studies have shown that repeated use of the same task-relevant item in WM results in a hand-off of the storage of that item to LTM, and switching to a new item reactivates WM. To further elucidate the rules governing WM-LTM interactions, we here planned to probe whether a change in task rules, independent of a switch in task-relevant items, would also lead to WM reactivation of maintained items. To this end, we used scalp-recorded electroencephalogram (EEG) data, specifically the contralateral delay activity (CDA), to track WM item storage while manipulating repetitions and changes in task rules and task-relevant items across trials in a visual WM task. We tested two rival hypotheses: If changes in task rules result in a reactivation of the target item representation, then the CDA should increase when a task change is cued even when the same target has been repeated across trials. However, if the reactivation of a task-relevant item only depends on the mnemonic availability of the item itself instead of the task it is used for, then only the changes in task-relevant items should reactivate the representations. Accordingly, the CDA amplitude should decrease for repeated task-relevant items independently of a task change. We found a larger CDA on task-switch compared to task-repeat trials, suggesting that the reactivation of task rules triggers the reactivation of task-relevant items in WM. By demonstrating that WM reactivation of LTM is interdependent for task rules and task-relevant items, this study informs our understanding of visual WM and its interplay with LTM. PREREGISTERED STAGE 1 PROTOCOL: https://osf.io/zp9e8 (date of in-principle acceptance: 19/12/2021).
Affiliation(s)
- Yağmur D Şentürk
- Department of Psychology, Sabancı University, Istanbul, Türkiye
- Nursima Ünver
- Department of Psychology, Sabancı University, Istanbul, Türkiye
- Department of Psychology, University of Toronto, Canada
- Can Demircan
- Department of Psychology, Sabancı University, Istanbul, Türkiye
- Tobias Egner
- Department of Psychology & Neuroscience, Duke University, Durham, NC, USA
- Eren Günseli
- Department of Psychology, Sabancı University, Istanbul, Türkiye
18. Karakose-Akbiyik S, Sussman O, Wurm MF, Caramazza A. The Role of Agentive and Physical Forces in the Neural Representation of Motion Events. J Neurosci 2024; 44:e1363232023. [PMID: 38050107 PMCID: PMC10860628 DOI: 10.1523/jneurosci.1363-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/19/2023] [Revised: 11/14/2023] [Accepted: 11/19/2023] [Indexed: 12/06/2023]
Abstract
How does the brain represent information about motion events in relation to agentive and physical forces? In this study, we investigated the neural activity patterns associated with observing animated actions of agents (e.g., an agent hitting a chair) in comparison to similar movements of inanimate objects that were either shaped solely by the physics of the scene (e.g., gravity causing an object to fall down a hill and hit a chair) or initiated by agents (e.g., a visible agent causing an object to hit a chair). Using an fMRI-based multivariate pattern analysis (MVPA), this design allowed testing where in the brain the neural activity patterns associated with motion events change as a function of, or are invariant to, agentive versus physical forces behind them. A total of 29 human participants (nine male) participated in the study. Cross-decoding revealed a shared neural representation of animate and inanimate motion events that is invariant to agentive or physical forces in regions spanning frontoparietal and posterior temporal cortices. In contrast, the right lateral occipitotemporal cortex showed a higher sensitivity to agentive events, while the left dorsal premotor cortex was more sensitive to information about inanimate object events that were solely shaped by the physics of the scene.
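The cross-decoding logic described above can be illustrated with a minimal sketch: train a classifier on patterns from one condition and test it on the other. This is not the authors' analysis code; the "fMRI" patterns, dimensions, and the nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multivoxel patterns (trials x voxels) for three motion events,
# each observed under two conditions: agentive and physical causation.
# Patterns = shared event code + condition-specific component + trial noise.
n_vox, n_trials = 50, 40
event_codes = rng.normal(0, 1, (3, n_vox))           # shared neural code per event
cond_shift = {"agent": rng.normal(0, 0.3, n_vox),    # condition-specific component
              "physical": rng.normal(0, 0.3, n_vox)}

def simulate(cond):
    X, y = [], []
    for ev in range(3):
        X.append(event_codes[ev] + cond_shift[cond]
                 + rng.normal(0, 0.5, (n_trials, n_vox)))
        y += [ev] * n_trials
    return np.vstack(X), np.array(y)

X_train, y_train = simulate("agent")
X_test, y_test = simulate("physical")

# Cross-decoding with a nearest-centroid classifier: train on agentive events,
# test on physical events. Above-chance accuracy indicates a representation of
# the motion event that is invariant to the force behind it.
centroids = np.array([X_train[y_train == ev].mean(0) for ev in range(3)])
pred = np.argmin(((X_test[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y_test).mean()
print(f"cross-decoding accuracy: {accuracy:.2f} (chance = 0.33)")
```

Regions where such cross-condition accuracy stays above chance would, on this logic, carry a force-invariant event code; regions where it drops are sensitive to agentive versus physical causation.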
Affiliation(s)
- Oliver Sussman
- Department of Psychology, Harvard University, Cambridge, Massachusetts 02138
- Moritz F Wurm
- Center for Mind/Brain Sciences - CIMeC, University of Trento, 38068 Rovereto, Italy
- Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, Massachusetts 02138
- Center for Mind/Brain Sciences - CIMeC, University of Trento, 38068 Rovereto, Italy
19. Peristeri E, Andreou M, Ketseridou SN, Machairas I, Papadopoulou V, Stravoravdi AS, Bamidis PD, Frantzidis CA. Animacy Processing in Autism: Event-Related Potentials Reflect Social Functioning Skills. Brain Sci 2023; 13:1656. [PMID: 38137104 PMCID: PMC10742338 DOI: 10.3390/brainsci13121656] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/27/2023] [Revised: 11/14/2023] [Accepted: 11/28/2023] [Indexed: 12/24/2023]
Abstract
Though previous studies with autistic individuals have provided behavioral evidence of animacy perception difficulties, the spatio-temporal dynamics of animacy processing in autism remain underexplored. This study investigated how animacy is neurally encoded in autistic adults, and whether potential deficits in animacy processing have cascading deleterious effects on their social functioning skills. We employed a picture naming paradigm that recorded accuracy and response latencies to animate and inanimate pictures in young autistic adults and age- and IQ-matched healthy individuals, while also employing high-density EEG analysis to map the spatio-temporal dynamics of animacy processing. Participants' social skills were also assessed through a social comprehension task. The autistic adults exhibited lower accuracy than controls on the animate pictures of the task and also exhibited altered brain responses, including larger and smaller N100 amplitudes than controls on inanimate and animate stimuli, respectively. At late stages of processing, there were shorter slow negative wave latencies for the autistic group as compared to controls for the animate trials only. The autistic individuals' altered brain responses negatively correlated with their social difficulties. The results suggest deficits in brain responses to animacy in the autistic group, which were related to the individuals' social functioning skills.
Affiliation(s)
- Eleni Peristeri
- Language Development Lab, Department of English Studies, Faculty of Philosophy, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- Maria Andreou
- Department of Speech and Language Therapy, University of Peloponnese, PC 24100 Kalamata, Greece
- Smaranda-Nafsika Ketseridou
- Laboratory of Medical Physics & Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- Ilias Machairas
- Laboratory of Medical Physics & Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- Valentina Papadopoulou
- Department of Psychology, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- Panagiotis D. Bamidis
- Laboratory of Medical Physics & Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- Christos A. Frantzidis
- Laboratory of Medical Physics & Digital Innovation, Faculty of Health Sciences, School of Medicine, Aristotle University of Thessaloniki, PC 54124 Thessaloniki, Greece
- School of Computer Science, University of Lincoln, Lincoln PC LN6 7TS, UK
20. Wilmskoetter J, Roth R, McDowell K, Munsell B, Fontenot S, Andrews K, Chang A, Johnson LP, Sangtian S, Behroozmand R, van Mierlo P, Fridriksson J, Bonilha L. Semantic Categorization of Naming Responses Based on Prearticulatory Electrical Brain Activity. J Clin Neurophysiol 2023; 40:608-615. [PMID: 37931162 PMCID: PMC10628367 DOI: 10.1097/wnp.0000000000000933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/26/2022]
Abstract
PURPOSE: Object naming requires visual decoding, conceptualization, semantic categorization, and phonological encoding, all within 400 to 600 ms of stimulus presentation and before a word is spoken. In this study, we sought to predict semantic categories of naming responses based on prearticulatory brain activity recorded with scalp EEG in healthy individuals. METHODS: We assessed 19 healthy individuals who completed a naming task while undergoing EEG. The naming task consisted of 120 drawings of animate/inanimate objects or abstract drawings. We applied a one-dimensional, two-layer, neural network to predict the semantic categories of naming responses based on prearticulatory brain activity. RESULTS: Classifications of animate, inanimate, and abstract responses had an average accuracy of 80%, sensitivity of 72%, and specificity of 87% across participants. Across participants, time points with the highest average weights were between 470 and 490 milliseconds after stimulus presentation, and electrodes with the highest weights were located over the left and right frontal brain areas. CONCLUSIONS: Scalp EEG can be successfully used in predicting naming responses through prearticulatory brain activity. Interparticipant variability in feature weights suggests that individualized models are necessary for highest accuracy. Our findings may inform future applications of EEG in reconstructing speech for individuals with and without speech impairments.
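A small two-layer network of the kind the abstract describes can be sketched as follows. This is purely illustrative, not the authors' model: the "EEG" features are synthetic, well-separated clusters standing in for channel-by-time features, and the layer sizes and training settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for prearticulatory EEG feature vectors, three semantic
# categories (animate / inanimate / abstract), 60 trials each.
n_feat, n_per_class = 20, 60
means = rng.normal(0, 3, (3, n_feat))
X = np.vstack([m + rng.normal(0, 0.5, (n_per_class, n_feat)) for m in means])
y = np.repeat(np.arange(3), n_per_class)
Y = np.eye(3)[y]                                   # one-hot targets

# Two-layer network: hidden ReLU layer, softmax output, cross-entropy loss,
# full-batch gradient descent.
W1 = rng.normal(0, 0.1, (n_feat, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 3));      b2 = np.zeros(3)

def softmax(z):
    z = z - z.max(1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(1, keepdims=True)

lr = 0.05
for _ in range(300):
    H = np.maximum(X @ W1 + b1, 0)                 # forward pass
    P = softmax(H @ W2 + b2)
    G2 = (P - Y) / len(X)                          # softmax + cross-entropy gradient
    G1 = (G2 @ W2.T) * (H > 0)                     # backprop through ReLU
    W2 -= lr * H.T @ G2; b2 -= lr * G2.sum(0)
    W1 -= lr * X.T @ G1; b1 -= lr * G1.sum(0)

acc = (P.argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the study itself, the interesting quantities are the learned weights: inspecting which time points and electrodes carry the largest weights is what localized the 470-490 ms window over frontal sites.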
Affiliation(s)
- Janina Wilmskoetter
- Department of Rehabilitation Sciences, College of Health Professions, Medical University of South Carolina, Charleston, SC 29425, USA
- Rebecca Roth
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Konnor McDowell
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Brent Munsell
- Department of Computer Science, College of Arts and Sciences, University of North Carolina-Chapel Hill, Chapel Hill, NC 27599, USA
- Skyler Fontenot
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Keeghan Andrews
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Allen Chang
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Lorelei Phillip Johnson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Stacey Sangtian
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Roozbeh Behroozmand
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Leonardo Bonilha
- Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
21. Yao M, Wen B, Yang M, Guo J, Jiang H, Feng C, Cao Y, He H, Chang L. High-dimensional topographic organization of visual features in the primate temporal lobe. Nat Commun 2023; 14:5931. [PMID: 37739988 PMCID: PMC10517140 DOI: 10.1038/s41467-023-41584-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/17/2023] [Accepted: 09/07/2023] [Indexed: 09/24/2023]
Abstract
The inferotemporal cortex supports our supreme object recognition ability. Numerous studies have been conducted to elucidate the functional organization of this brain area, but there are still important questions that remain unanswered, including how this organization differs between humans and non-human primates. Here, we use deep neural networks trained on object categorization to construct a 25-dimensional space of visual features, and systematically measure the spatial organization of feature preference in both male monkey brains and human brains using fMRI. These feature maps allow us to predict the selectivity of a previously unknown region in monkey brains, which is corroborated by additional fMRI and electrophysiology experiments. These maps also enable quantitative analyses of the topographic organization of the temporal lobe, demonstrating the existence of a pair of orthogonal gradients that differ in spatial scale and revealing significant differences in the functional organization of high-level visual areas between monkey and human brains.
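The first step the abstract describes, building a low-dimensional space of visual features from deep-network activations, can be sketched with PCA. Everything here is an assumption for illustration (random "activations", layer size, and the least-squares readout), not the authors' pipeline; only the choice of 25 dimensions comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for deep-network unit activations to a set of images
# (images x units); in the study these would come from an object-trained DNN.
n_images, n_units, n_dims = 500, 2048, 25
acts = rng.normal(0, 1, (n_images, n_units))

# PCA via SVD of the centered activation matrix: the top 25 right singular
# vectors define the axes of the visual-feature space.
centered = acts - acts.mean(0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ Vt[:n_dims].T                # images x 25 feature space

# A voxel's feature preference can then be summarized as a direction in this
# space: regress its responses onto the 25 features and normalize.
voxel = features @ rng.normal(0, 1, n_dims) + rng.normal(0, 0.1, n_images)
coef, *_ = np.linalg.lstsq(features, voxel, rcond=None)
pref_direction = coef / np.linalg.norm(coef)
print(features.shape, pref_direction.shape)
```

Mapping such preferred directions across the cortical sheet is what yields the feature-preference maps, and gradients in those maps, that the paper analyzes.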
Affiliation(s)
- Mengna Yao
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Bincheng Wen
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Mingpo Yang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Jiebin Guo
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Haozhou Jiang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Chao Feng
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Yilei Cao
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- Huiguang He
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Le Chang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
22. Sharma S, Vinken K, Livingstone MS. When the whole is only the parts: non-holistic object parts predominate face-cell responses to illusory faces. bioRxiv 2023:2023.09.22.558887. [PMID: 37790322 PMCID: PMC10542491 DOI: 10.1101/2023.09.22.558887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/05/2023]
Abstract
Humans are inclined to perceive faces in everyday objects with a face-like configuration. This illusion, known as face pareidolia, is often attributed to a specialized network of 'face cells' in primates. We found that face cells in macaque inferotemporal cortex responded selectively to pareidolia images, but this selectivity did not require a holistic, face-like configuration, nor did it encode human faceness ratings. Instead, it was driven mostly by isolated object parts that are perceived as eyes only within a face-like context. These object parts lack usual characteristics of primate eyes, pointing to the role of lower-level features. Our results suggest that face-cell responses are dominated by local, generic features, unlike primate visual perception, which requires holistic information. These findings caution against interpreting neural activity through the lens of human perception. Doing so could impose human perceptual biases, like seeing faces where none exist, onto our understanding of neural activity.
Affiliation(s)
- Saloni Sharma
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- Kasper Vinken
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
23. Almeida J, Fracasso A, Kristensen S, Valério D, Bergström F, Chakravarthi R, Tal Z, Walbrin J. Neural and behavioral signatures of the multidimensionality of manipulable object processing. Commun Biol 2023; 6:940. [PMID: 37709924 PMCID: PMC10502059 DOI: 10.1038/s42003-023-05323-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 03/01/2023] [Accepted: 09/04/2023] [Indexed: 09/16/2023]
Abstract
Understanding how we recognize objects requires unravelling the variables that govern the way we think about objects and the neural organization of object representations. A tenable hypothesis is that the organization of object knowledge follows key object-related dimensions. Here, we explored, behaviorally and neurally, the multidimensionality of object processing. We focused on within-domain object information as a proxy for the decisions we typically make in our daily lives - e.g., identifying a hammer in the context of other tools. We extracted object-related dimensions from subjective human judgments on a set of manipulable objects. We show that the extracted dimensions are cognitively interpretable and relevant - i.e., participants are able to consistently label them, and these dimensions can guide object categorization; and are important for the neural organization of knowledge - i.e., they predict neural signals elicited by manipulable objects. This shows that multidimensionality is a hallmark of the organization of manipulable object knowledge.
Affiliation(s)
- Jorge Almeida
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Alessio Fracasso
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Stephanie Kristensen
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Daniela Valério
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Fredrik Bergström
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- Zohar Tal
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Jonathan Walbrin
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
24. Vinken K, Prince JS, Konkle T, Livingstone MS. The neural code for "face cells" is not face-specific. Sci Adv 2023; 9:eadg1736. [PMID: 37647400 PMCID: PMC10468123 DOI: 10.1126/sciadv.adg1736] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 12/07/2022] [Accepted: 07/27/2023] [Indexed: 09/01/2023]
Abstract
Face cells are neurons that respond more to faces than to non-face objects. They are found in clusters in the inferotemporal cortex, thought to process faces specifically, and, hence, studied using faces almost exclusively. Analyzing neural responses in and around macaque face patches to hundreds of objects, we found graded response profiles for non-face objects that predicted the degree of face selectivity and provided information on face-cell tuning beyond that from actual faces. This relationship between non-face and face responses was not predicted by color and simple shape properties but by information encoded in deep neural networks trained on general objects rather than face classification. These findings contradict the long-standing assumption that face versus non-face selectivity emerges from face-specific features and challenge the practice of focusing on only the most effective stimulus. They provide evidence instead that category-selective neurons are best understood by their tuning directions in a domain-general object space.
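The core idea, that a face cell's graded responses to non-face objects predict its response to faces because the cell is a tuning direction in a domain-general object space, can be sketched with simulated data. This is not the authors' analysis: the embeddings, noise levels, and the linear-tuning assumption are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical object-space embeddings (e.g., from a DNN trained on general
# objects): 200 non-face objects, and 20 faces occupying a shifted region.
d = 30
objects = rng.normal(0, 1, (200, d))
faces = rng.normal(0, 1, (20, d)) + 2.0

# A simulated "face cell": a linear tuning direction in object space, observed
# through noisy responses.
tuning = rng.normal(0, 1, d)
resp_objects = objects @ tuning + rng.normal(0, 0.1, 200)

# Estimate the tuning direction from non-face responses alone (least squares),
# then predict the held-out face responses.
est, *_ = np.linalg.lstsq(objects, resp_objects, rcond=None)
pred_faces = faces @ est
true_faces = faces @ tuning
r = np.corrcoef(pred_faces, true_faces)[0, 1]
print(f"predicted vs. actual face responses: r = {r:.2f}")
```

Under this linear model, face selectivity is simply how far the tuning direction points toward the face region of object space, which is why non-face responses alone suffice to predict it.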
Affiliation(s)
- Kasper Vinken
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Jacob S. Prince
- Department of Psychology, Harvard University, Cambridge, MA 02478, USA
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA 02478, USA
25. Gong Z, Zhou M, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for the visual processing of naturalistic scenes. Sci Data 2023; 10:559. [PMID: 37612327 PMCID: PMC10447576 DOI: 10.1038/s41597-023-02471-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/27/2023] [Accepted: 08/14/2023] [Indexed: 08/25/2023]
Abstract
One ultimate goal of visual neuroscience is to understand how the brain processes visual stimuli encountered in the natural environment. Achieving this goal requires records of brain responses under massive amounts of naturalistic stimuli. Although the scientific community has invested considerable effort in collecting large-scale functional magnetic resonance imaging (fMRI) data under naturalistic stimuli, more naturalistic fMRI datasets are still urgently needed. We present here the Natural Object Dataset (NOD), a large-scale fMRI dataset containing responses to 57,120 naturalistic images from 30 participants. NOD strives for a balance between sampling variation between individuals and sampling variation between stimuli. This enables NOD to be utilized not only for determining whether an observation is generalizable across many individuals, but also for testing whether a response pattern generalizes to a variety of naturalistic stimuli. We anticipate that the NOD together with existing naturalistic neuroimaging datasets will serve as a new impetus for our understanding of the visual processing of naturalistic stimuli.
Affiliation(s)
- Zhengxin Gong
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Ming Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China

26
Gweon H, Fan J, Kim B. Socially intelligent machines that learn from humans and help humans learn. Philos Trans A Math Phys Eng Sci 2023; 381:20220048. PMID: 37271177; DOI: 10.1098/rsta.2022.0048.
Abstract
A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human-machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific studies of its counterpart: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.
Affiliation(s)
- Hyowon Gweon
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Judith Fan
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Department of Psychology, University of California, San Diego, CA 92093, USA
- Been Kim
- Google Research, Mountain View, CA 94043, USA

27
Doshi FR, Konkle T. Cortical topographic motifs emerge in a self-organized map of object space. Sci Adv 2023; 9:eade8187. PMID: 37343093; DOI: 10.1126/sciadv.ade8187.
Abstract
The human ventral visual stream has a highly systematic organization of object information, but the causal pressures driving these topographic motifs are highly debated. Here, we use self-organizing principles to learn a topographic representation of the data manifold of a deep neural network representational space. We find that a smooth mapping of this representational space showed many brain-like motifs, with a large-scale organization by animacy and real-world object size, supported by mid-level feature tuning, with naturally emerging face- and scene-selective regions. While some theories of the object-selective cortex posit that these differently tuned regions of the brain reflect a collection of distinctly specified functional modules, the present work provides computational support for an alternate hypothesis that the tuning and topography of the object-selective cortex reflect a smooth mapping of a unified representational space.
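The self-organizing machinery behind this approach can be sketched minimally (a toy Kohonen SOM on synthetic 2-D data; the grid size and learning-rate/neighborhood schedules are arbitrary illustration choices, not the paper's setup, which maps deep-network features):

```python
import random, math

random.seed(1)
GRID = 8       # 8x8 cortical sheet of units (toy size)
DIM = 2        # toy feature dimensionality
STEPS = 2000

# Each grid unit starts with a random weight vector in feature space.
weights = {(i, j): [random.random() for _ in range(DIM)]
           for i in range(GRID) for j in range(GRID)}

def bmu(x):
    """Best-matching unit: the grid cell whose weights are closest to x."""
    return min(weights, key=lambda u: sum((w - xi) ** 2
                                          for w, xi in zip(weights[u], x)))

for t in range(STEPS):
    x = [random.random(), random.random()]   # a toy stimulus
    bi, bj = bmu(x)
    lr = 0.5 * (1 - t / STEPS)               # decaying learning rate
    sigma = 1 + 3 * (1 - t / STEPS)          # shrinking neighborhood radius
    for (i, j), w in weights.items():
        d2 = (i - bi) ** 2 + (j - bj) ** 2
        h = math.exp(-d2 / (2 * sigma ** 2))  # neighborhood kernel
        for k in range(DIM):
            w[k] += lr * h * (x[k] - w[k])    # pull unit toward stimulus

# After training, neighboring units carry similar weights: a smooth
# map of the data manifold, the property the paper exploits.
neighbor_gap = sum(
    sum((weights[(i, j)][k] - weights[(i, j + 1)][k]) ** 2 for k in range(DIM))
    for i in range(GRID) for j in range(GRID - 1)
) / (GRID * (GRID - 1))
print(round(neighbor_gap, 3))
```

The "brain-like motifs" in the study arise from exactly this smoothness constraint, applied to a far higher-dimensional representational space than this two-dimensional toy.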
Affiliation(s)
- Fenil R Doshi
- Department of Psychology and Center for Brain Sciences, Harvard University, Cambridge, MA, USA
- Talia Konkle
- Department of Psychology and Center for Brain Sciences, Harvard University, Cambridge, MA, USA

28
Coggan DD, Tong F. Spikiness and animacy as potential organizing principles of human ventral visual cortex. Cereb Cortex 2023; 33:8194-8217. PMID: 36958809; PMCID: PMC10321104; DOI: 10.1093/cercor/bhad108.
Abstract
Considerable research has been devoted to understanding the fundamental organizing principles of the ventral visual pathway. A recent study revealed a series of 3-4 topographical maps arranged along the macaque inferotemporal (IT) cortex. The maps articulated a two-dimensional space based on the spikiness and animacy of visual objects, with "inanimate-spiky" and "inanimate-stubby" regions of the maps constituting two previously unidentified cortical networks. The goal of our study was to determine whether a similar functional organization might exist in human IT. To address this question, we presented the same object stimuli and images from "classic" object categories (bodies, faces, houses) to humans while recording fMRI activity at 7 Tesla. Contrasts designed to reveal the spikiness-animacy object space evoked extensive significant activation across human IT. However, unlike the macaque, we did not observe a clear sequence of complete maps, and selectivity for the spikiness-animacy space was deeply and mutually entangled with category-selectivity. Instead, we observed multiple new stimulus preferences in category-selective regions, including functional sub-structure related to object spikiness in scene-selective cortex. Taken together, these findings highlight spikiness as a promising organizing principle of human IT and provide new insights into the role of category-selective regions in visual object processing.
Affiliation(s)
- David D Coggan
- Department of Psychology, Vanderbilt University, 111 21st Ave S, Nashville, TN 37240, United States
- Frank Tong
- Department of Psychology, Vanderbilt University, 111 21st Ave S, Nashville, TN 37240, United States

29
Henderson MM, Tarr MJ, Wehbe L. A Texture Statistics Encoding Model Reveals Hierarchical Feature Selectivity across Human Visual Cortex. J Neurosci 2023; 43:4144-4161. PMID: 37127366; PMCID: PMC10255092; DOI: 10.1523/jneurosci.1822-22.2023.
Abstract
Midlevel features, such as contour and texture, provide a computational link between low- and high-level visual representations. Although the nature of midlevel representations in the brain is not fully understood, past work has suggested a texture statistics model, called the P-S model (Portilla and Simoncelli, 2000), is a candidate for predicting neural responses in areas V1-V4 as well as human behavioral data. However, it is not currently known how well this model accounts for the responses of higher visual cortex to natural scene images. To examine this, we constructed single-voxel encoding models based on P-S statistics and fit the models to fMRI data from human subjects (both sexes) from the Natural Scenes Dataset (Allen et al., 2022). We demonstrate that the texture statistics encoding model can predict the held-out responses of individual voxels in early retinotopic areas and higher-level category-selective areas. The ability of the model to reliably predict signal in higher visual cortex suggests that the representation of texture statistics features is widespread throughout the brain. Furthermore, using variance partitioning analyses, we identify which features are most uniquely predictive of brain responses and show that the contributions of higher-order texture features increase from early areas to higher areas on the ventral and lateral surfaces. We also demonstrate that patterns of sensitivity to texture statistics can be used to recover broad organizational axes within visual cortex, including dimensions that capture semantic image content. These results provide a key step forward in characterizing how midlevel feature representations emerge hierarchically across the visual system. SIGNIFICANCE STATEMENT: Intermediate visual features, like texture, play an important role in cortical computations and may contribute to tasks like object and scene recognition. Here, we used a texture model proposed in past work to construct encoding models that predict the responses of neural populations in human visual cortex (measured with fMRI) to natural scene stimuli. We show that responses of neural populations at multiple levels of the visual system can be predicted by this model, and that the model is able to reveal an increase in the complexity of feature representations from early retinotopic cortex to higher areas of ventral and lateral visual cortex. These results support the idea that texture-like representations may play a broad underlying role in visual processing.
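The voxelwise encoding-model logic — fit a regularized linear map from stimulus features to a voxel's response, then evaluate prediction on held-out stimuli — can be sketched generically. The features and responses below are synthetic stand-ins, not P-S texture statistics or NSD data:

```python
import random

random.seed(2)
N_TRAIN, N_TEST, N_FEAT = 150, 50, 3

# Hypothetical true weights linking 3 toy "texture features" to one
# voxel's response; responses get small Gaussian measurement noise.
true_w = [0.8, -0.5, 0.3]

def make(n):
    X = [[random.gauss(0, 1) for _ in range(N_FEAT)] for _ in range(n)]
    y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.1)
         for row in X]
    return X, y

Xtr, ytr = make(N_TRAIN)   # training stimuli
Xte, yte = make(N_TEST)    # held-out stimuli

# Fit weights by gradient descent on squared error with a small
# ridge penalty lam (regularization, as in typical encoding models).
w = [0.0] * N_FEAT
lam, lr = 0.01, 0.05
for _ in range(500):
    grad = [lam * wi for wi in w]
    for row, yi in zip(Xtr, ytr):
        err = sum(wi * xi for wi, xi in zip(w, row)) - yi
        for k in range(N_FEAT):
            grad[k] += err * row[k] / N_TRAIN
    for k in range(N_FEAT):
        w[k] -= lr * grad[k]

# Score: correlation between predicted and observed held-out responses.
pred = [sum(wi * xi for wi, xi in zip(w, row)) for row in Xte]
my, mp = sum(yte) / N_TEST, sum(pred) / N_TEST
cov = sum((a - my) * (b - mp) for a, b in zip(yte, pred))
sd_y = sum((a - my) ** 2 for a in yte) ** 0.5
sd_p = sum((b - mp) ** 2 for b in pred) ** 0.5
r = cov / (sd_y * sd_p)
print(round(r, 2))
```

In the actual study this fit is repeated independently per voxel with the P-S feature set, and variance partitioning compares nested feature subsets; the held-out correlation above is the basic prediction-accuracy measure.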
Affiliation(s)
- Margaret M Henderson
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Department of Psychology
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Michael J Tarr
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Department of Psychology
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Leila Wehbe
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Department of Psychology
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213

30
He BJ. Towards a pluralistic neurobiological understanding of consciousness. Trends Cogn Sci 2023; 27:420-432. PMID: 36842851; PMCID: PMC10101889; DOI: 10.1016/j.tics.2023.02.001.
Abstract
Theories of consciousness are often based on the assumption that a single, unified neurobiological account will explain different types of conscious awareness. However, recent findings show that, even within a single modality such as conscious visual perception, the anatomical location, timing, and information flow of neural activity related to conscious awareness vary depending on both external and internal factors. This suggests that the search for generic neural correlates of consciousness may not be fruitful. I argue that consciousness science requires a more pluralistic approach and propose a new framework: joint determinant theory (JDT). This theory may be capable of accommodating different brain circuit mechanisms for conscious contents as varied as percepts, wills, memories, emotions, and thoughts, as well as their integrated experience.
Affiliation(s)
- Biyu J He
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Departments of Neurology, Neuroscience and Physiology, and Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA

31
Kramer MA, Hebart MN, Baker CI, Bainbridge WA. The features underlying the memorability of objects. Sci Adv 2023; 9:eadd2981. PMID: 37126552; PMCID: PMC10132746; DOI: 10.1126/sciadv.add2981.
Abstract
What makes certain images more memorable than others? While much of memory research has focused on participant effects, recent studies taking a stimulus-centric perspective have sparked debate on the determinants of memory, including the roles of semantic and visual features and whether the most prototypical or the most atypical items are best remembered. Prior studies have typically relied on constrained stimulus sets, limiting a generalized view of the features underlying what we remember. Here, we collected more than 1 million memory ratings for a naturalistic dataset of 26,107 object images designed to comprehensively sample concrete objects. We establish a model of object features that is predictive of image memorability and examine whether memorability can be accounted for by the typicality of the objects. We find that semantic features exert a stronger influence than perceptual features on what we remember and that the relationship between memorability and typicality is more complex than a simple positive or negative association alone.
Affiliation(s)
- Max A. Kramer
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Martin N. Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Chris I. Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Wilma A. Bainbridge
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Neuroscience Institute, University of Chicago, Chicago, IL, USA

32
Leshinskaya A, Bajaj M, Thompson-Schill SL. Novel objects with causal event schemas elicit selective responses in tool- and hand-selective lateral occipitotemporal cortex. Cereb Cortex 2023; 33:5557-5573. PMID: 36469589; PMCID: PMC10152094; DOI: 10.1093/cercor/bhac442.
Abstract
Tool-selective lateral occipitotemporal cortex (LOTC) responds preferentially to images of tools (hammers, brushes) relative to non-tool objects (clocks, shoes). What drives these responses? Unlike other objects, tools exert effects on their surroundings. We tested whether LOTC responses are influenced by event schemas that denote different temporal relations. Participants learned about novel objects embedded in different event sequences. Causer objects moved prior to the appearance of an environmental event (e.g. stars), while Reactor objects moved after an event. Visual features and motor association were controlled. During functional magnetic resonance imaging, participants viewed still images of the objects. We localized tool-selective LOTC and non-tool-selective parahippocampal cortex (PHC) by contrasting neural responses to images of familiar tools and non-tools. We found that LOTC responded more to Causers than Reactors, while PHC did not. We also measured responses to images of hands, which elicit overlapping responses with tools. Across inferior temporal cortex, voxels' tool and hand selectivity positively predicted a preferential response to Causers. We conclude that an event schema typical of tools is sufficient to drive LOTC and that category-preferential responses across the temporal lobe may reflect relational event structures typical of those domains.
Affiliation(s)
- Anna Leshinskaya
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States
- Center for Neuroscience, University of California, Davis, 1544 Newton Court, Room 209, Davis, CA, United States
- Mira Bajaj
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States
- The Johns Hopkins University School of Medicine, 733 N Broadway, Baltimore, MD 21205, United States
- Sharon L Thompson-Schill
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States

33
Tian M, Saccone EJ, Kim JS, Kanjlia S, Bedny M. Sensory modality and spoken language shape reading network in blind readers of Braille. Cereb Cortex 2023; 33:2426-2440. PMID: 35671478; PMCID: PMC10016046; DOI: 10.1093/cercor/bhac216.
Abstract
The neural basis of reading is highly consistent across many languages and scripts. Are there alternative neural routes to reading? How does the sensory modality of symbols (tactile vs. visual) influence their neural representations? We examined these questions by comparing reading of visual print (sighted group, n = 19) and tactile Braille (congenitally blind group, n = 19). Blind and sighted readers were presented with written (words, consonant strings, non-letter shapes) and spoken stimuli (words, backward speech) that varied in word-likeness. Consistent with prior work, the ventral occipitotemporal cortex (vOTC) was active during Braille and visual reading. A posterior/anterior vOTC word-form gradient was observed only in sighted readers, with more anterior regions preferring larger orthographic units (words). No such gradient was observed in blind readers. Consistent with connectivity predictions, in blind compared to sighted readers, posterior parietal cortices were recruited to a greater degree and contained word-preferring patches. Lateralization of Braille in blind readers was predicted by the laterality of spoken language and by reading hand. The effect of spoken language increased along the cortical hierarchy, whereas the effect of reading hand waned. These results suggest that the neural basis of reading is influenced by symbol modality and spoken language and support connectivity-based views of cortical function.
Affiliation(s)
- Mengyu Tian
- Corresponding author: Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles St, Baltimore, MD 21218, United States
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Judy S Kim
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Yale University, 2 Hillhouse Ave., New Haven, CT 06511, United States
- Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, United States
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States

34
Abstract
Visual cortex contains regions of selectivity for domains of ecological importance. Food is an evolutionarily critical category whose visual heterogeneity may make the identification of selectivity more challenging. We investigate neural responsiveness to food using natural images combined with large-scale human fMRI. Leveraging the improved sensitivity of modern designs and statistical analyses, we identify two food-selective regions in the ventral visual cortex. Our results are robust across 8 subjects from the Natural Scenes Dataset (NSD), multiple independent image sets and multiple analysis methods. We then test our findings of food selectivity in an fMRI "localizer" using grayscale food images. These independent results confirm the existence of food selectivity in ventral visual cortex and help illuminate why earlier studies may have failed to do so. Our identification of food-selective regions stands alongside prior findings of functional selectivity and adds to our understanding of the organization of knowledge within the human visual system.
35
Aveni K, Ahmed J, Borovsky A, McRae K, Jenkins ME, Sprengel K, Fraser JA, Orange JB, Knowles T, Roberts AC. Predictive language comprehension in Parkinson's disease. PLoS One 2023; 18:e0262504. PMID: 36753529; PMCID: PMC9907838; DOI: 10.1371/journal.pone.0262504.
Abstract
Verb and action knowledge deficits are reported in persons with Parkinson's disease (PD), even in the absence of dementia or mild cognitive impairment. However, the impact of these deficits on combinatorial semantic processing is less well understood. Following on previous verb and action knowledge findings, we tested the hypothesis that PD impairs the ability to integrate event-based thematic fit information during online sentence processing. Specifically, we anticipated persons with PD with age-typical cognitive abilities would perform more poorly than healthy controls during a visual world paradigm task requiring participants to predict a target object constrained by the thematic fit of the agent-verb combination. Twenty-four PD and 24 healthy age-matched participants completed comprehensive neuropsychological assessments. We recorded participants' eye movements as they heard predictive sentences (The fisherman rocks the boat) alongside target, agent-related, verb-related, and unrelated images. We tested effects of group (PD/control) on gaze using growth curve models. There were no significant differences between PD and control participants, suggesting that PD participants successfully and rapidly use combinatory thematic fit information to predict upcoming language. Baseline sentences with no predictive information (e.g., Look at the drum) confirmed that groups showed equivalent sentence processing and eye movement patterns. Additionally, we conducted an exploratory analysis contrasting PD and controls' performance on low-motion-content versus high-motion-content verbs. This analysis revealed fewer predictive fixations in high-motion sentences only for healthy older adults. PD participants may adapt to their disease by relying on spared, non-action-simulation-based language processing mechanisms, although this conclusion is speculative, as the analyses of high- vs. low-motion items was highly limited by the study design. These findings provide novel evidence that individuals with PD match healthy adults in their ability to use verb meaning to predict upcoming nouns despite previous findings of verb semantic impairment in PD across a variety of tasks.
Affiliation(s)
- Katharine Aveni
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- Juweiriya Ahmed
- Department of Psychology, Western University, London, ON, Canada
- Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States of America
- Ken McRae
- Department of Psychology, Western University, London, ON, Canada
- Mary E. Jenkins
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Katherine Sprengel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- J. Alexander Fraser
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Ophthalmology, Western University, St. Joseph’s Health Care, London, ON, Canada
- Joseph B. Orange
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
- Canadian Centre for Activity and Aging, Western University, London, ON, Canada
- Thea Knowles
- Department of Psychology, Western University, London, ON, Canada
- Angela C. Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- School of Communication Sciences and Disorders, Western University, London, ON, Canada

36
Chiou R, Cox CR, Lambon Ralph MA. Bipartite functional fractionation within the neural system for social cognition supports the psychological continuity of self versus other. Cereb Cortex 2023; 33:1277-1299. PMID: 35394005; PMCID: PMC9930627; DOI: 10.1093/cercor/bhac135.
Abstract
Research in social neuroscience establishes that regions in the brain's default-mode network (DN) and semantic network (SN) are engaged by socio-cognitive tasks. Research on the human connectome shows that DN and SN regions are both situated at the transmodal end of a cortical gradient but differ in their loci along this gradient. Here we integrated these two bodies of research, used the psychological continuity of self versus other as a "test case," and used functional magnetic resonance imaging to investigate whether these two networks encode social concepts differently. We found a robust dissociation between the DN and SN: while both networks contained sufficient information for decoding broad-stroke distinctions among social categories, the DN carried more generalizable information for cross-classifying across social distance and emotive valence than did the SN. We also found that the overarching distinction of self versus other was a principal divider of the representational space while social distance was an auxiliary factor (a subdivision nested within the principal dimension), and this representational landscape was more manifest in the DN than in the SN. Taken together, our findings demonstrate how insights from connectome research can benefit social neuroscience and have implications for clarifying the two networks' differential contributions to social cognition.
Affiliation(s)
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, LA 70803, United States
- Matthew A Lambon Ralph
- MRC Cognition & Brain Science Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom

37
Pulvinar Response Profiles and Connectivity Patterns to Object Domains. J Neurosci 2023; 43:812-826. PMID: 36596697; PMCID: PMC9899088; DOI: 10.1523/jneurosci.0613-22.2022.
Abstract
Distributed cortical regions show differential responses to visual objects belonging to different domains varying by animacy (e.g., animals vs tools), yet it remains unclear whether this is an organizing principle that also applies to subcortical structures. Combining multiple fMRI activation experiments (two main experiments and six validation datasets; 12 females and 9 males in the main Experiment 1; 10 females and 10 males in the main Experiment 2), resting-state functional connectivity, and task-based dynamic causal modeling analysis in human subjects, we found that visual processing of images of animals and tools elicited different patterns of response in the pulvinar, with robust left lateralization for tools, and distinct, bilateral (with rightward tendency) clusters for animals. Such domain-preferring activity distribution in the pulvinar was associated with the magnitude with which the voxels were intrinsically connected with the corresponding domain-preferring regions in the cortex. The pulvinar-to-right-amygdala path showed a one-way shortcut supporting the perception of animals, and the modulatory connection from pulvinar to parietal cortex showed an advantage for the perception of tools. These results incorporate the subcortical regions into the object processing network and highlight that domain organization appears to be an overarching principle across various processing stages in the brain. SIGNIFICANCE STATEMENT: Viewing objects belonging to different domains engages different cortical regions, but whether the domain organization applied to subcortical structures (e.g., pulvinar) was unknown. Multiple fMRI activation experiments revealed that object pictures belonging to different domains elicited differential patterns of response in the pulvinar, with robust left lateralization for tool pictures, and distinct, bilateral (with rightward tendency) clusters for animals. Combining resting-state functional connectivity and dynamic causal modeling analysis of task-based fMRI data, we found that the domain-preferring activity distribution in the pulvinar aligned with that in cortical regions. These results highlight the need for coherent visual theories that explain the mechanisms underlying the domain organization across various processing stages.
38
Disentangling Object Category Representations Driven by Dynamic and Static Visual Input. J Neurosci 2023; 43:621-634. PMID: 36639892; PMCID: PMC9888510; DOI: 10.1523/jneurosci.0371-22.2022.
Abstract
Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of "object kinematograms" to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of 6 object categories and applying the motion to limited-lifetime random dot patterns. Using functional magnetic resonance imaging (fMRI) (n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, while more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing. SIGNIFICANCE STATEMENT: Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
Collapse
|
39
|
Li J, Kean H, Fedorenko E, Saygin Z. Intact reading ability despite lacking a canonical visual word form area in an individual born without the left superior temporal lobe. Cogn Neuropsychol 2023; 39:249-275. [PMID: 36653302 DOI: 10.1080/02643294.2023.2164923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The visual word form area (VWFA), a region canonically located within left ventral temporal cortex (VTC), is specialized for orthography in literate adults, presumably due to its connectivity with frontotemporal language regions. But is a typical, left-lateralized language network critical for the VWFA's emergence? We investigated this question in an individual (EG) born without the left superior temporal lobe but who has normal reading ability. EG showed typical face-selectivity bilaterally but no word-selectivity in either the right VWFA or the spared left VWFA. Moreover, in contrast with the idea that the VWFA is simply part of the language network, no part of EG's VTC showed selectivity to higher-level linguistic processing. Interestingly, EG's VWFA showed reliable multivariate patterns that distinguished words from other categories. These results suggest that a typical left-hemisphere language network is necessary for a canonical VWFA, and that orthographic processing can otherwise be supported by a distributed neural code.
Collapse
Affiliation(s)
- Jin Li
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Hope Kean
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
| | - Evelina Fedorenko
- Department of Brain and Cognitive Sciences / McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
| | - Zeynep Saygin
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| |
Collapse
|
40
|
Bonham LW, Geier EG, Sirkis DW, Leong JK, Ramos EM, Wang Q, Karydas A, Lee SE, Sturm VE, Sawyer RP, Friedberg A, Ichida JK, Gitler AD, Sugrue L, Cordingley M, Bee W, Weber E, Kramer JH, Rankin KP, Rosen HJ, Boxer AL, Seeley WW, Ravits J, Miller BL, Yokoyama JS. Radiogenomics of C9orf72 Expansion Carriers Reveals Global Transposable Element Derepression and Enables Prediction of Thalamic Atrophy and Clinical Impairment. J Neurosci 2023. [PMID: 36446586 DOI: 10.1101/2022.04.29.490104] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/10/2023] Open
Abstract
Hexanucleotide repeat expansion (HRE) within C9orf72 is the most common genetic cause of frontotemporal dementia (FTD). Thalamic atrophy occurs in both sporadic and familial FTD but is thought to distinctly affect HRE carriers. Separately, emerging evidence suggests widespread derepression of transposable elements (TEs) in the brain in several neurodegenerative diseases, including C9orf72 HRE-mediated FTD (C9-FTD). Whether TE activation can be measured in peripheral blood and how the reduction in peripheral C9orf72 expression observed in HRE carriers relates to atrophy and clinical impairment remain unknown. We used FreeSurfer software to assess the effects of C9orf72 HRE and clinical diagnosis (n = 78 individuals, male and female) on atrophy of thalamic nuclei. We also generated a novel, human, whole-blood RNA-sequencing dataset to determine the relationships among peripheral C9orf72 expression, TE activation, thalamic atrophy, and clinical severity (n = 114 individuals, male and female). We confirmed global thalamic atrophy and reduced C9orf72 expression in HRE carriers. Moreover, we identified disproportionate atrophy of the right mediodorsal lateral nucleus in HRE carriers and showed that C9orf72 expression was associated with clinical severity, independent of thalamic atrophy. Strikingly, we found global peripheral activation of TEs, including the human endogenous LINE-1 element L1HS. L1HS levels were associated with atrophy of multiple pulvinar nuclei, a thalamic region implicated in C9-FTD. Integration of peripheral transcriptomic and neuroimaging data from human HRE carriers revealed atrophy of specific thalamic nuclei, demonstrated that C9orf72 levels relate to clinical severity, and identified marked derepression of TEs, including L1HS, which predicted atrophy of FTD-relevant thalamic nuclei. SIGNIFICANCE STATEMENT Pathogenic repeat expansion in C9orf72 is the most frequent genetic cause of FTD and amyotrophic lateral sclerosis (ALS; C9-FTD/ALS). 
The clinical, neuroimaging, and pathologic features of C9-FTD/ALS are well characterized, whereas the intersections of transcriptomic dysregulation and brain structure remain largely unexplored. Herein, we used a novel radiogenomic approach to examine the relationship between peripheral blood transcriptomics and thalamic atrophy, a neuroimaging feature disproportionately impacted in C9-FTD/ALS. We confirmed reduction of C9orf72 in blood and found broad dysregulation of transposable elements (genetic elements typically repressed in the human genome) in symptomatic C9orf72 expansion carriers, which was associated with atrophy of thalamic nuclei relevant to FTD. C9orf72 expression was also associated with clinical severity, suggesting that peripheral C9orf72 levels capture disease-relevant information.
Collapse
Affiliation(s)
- Luke W Bonham
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
| | - Ethan G Geier
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Transposon Therapeutics, San Diego, California 92122
| | - Daniel W Sirkis
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
| | - Josiah K Leong
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Psychological Science, University of Arkansas, Fayetteville, Arkansas 72701
| | - Eliana Marisa Ramos
- Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
| | - Qing Wang
- Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095
| | - Anna Karydas
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
| | - Suzee E Lee
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
| | - Virginia E Sturm
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| | - Russell P Sawyer
- Department of Neurology and Rehabilitation Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
| | - Adit Friedberg
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| | - Justin K Ichida
- Department of Stem Cell Biology and Regenerative Medicine, Keck School of Medicine of USC, University of Southern California, Los Angeles, California 90033
| | - Aaron D Gitler
- Department of Genetics, Stanford University School of Medicine, Stanford, California 94305
| | - Leo Sugrue
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
| | | | - Walter Bee
- Transposon Therapeutics, San Diego, California 92122
| | - Eckard Weber
- Transposon Therapeutics, San Diego, California 92122
| | - Joel H Kramer
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| | - Katherine P Rankin
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
| | - Howard J Rosen
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| | - Adam L Boxer
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
| | - William W Seeley
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Pathology, University of California, San Francisco, San Francisco, California 94158
| | - John Ravits
- Department of Neurosciences, ALS Translational Research, University of California, San Diego, La Jolla, California 92093
| | - Bruce L Miller
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| | - Jennifer S Yokoyama
- Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California 94158
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California 94158
- Global Brain Health Institute, University of California, San Francisco, San Francisco, California 94158, and Trinity College Dublin, Dublin, Ireland
| |
Collapse
|
41
|
Gert AL, Ehinger BV, Timm S, Kietzmann TC, König P. WildLab: A naturalistic free viewing experiment reveals previously unknown electroencephalography signatures of face processing. Eur J Neurosci 2022; 56:6022-6038. [PMID: 36113866 DOI: 10.1111/ejn.15824] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 08/26/2022] [Accepted: 08/30/2022] [Indexed: 12/29/2022]
Abstract
Neural mechanisms of face perception are predominantly studied in well-controlled experimental settings that involve random stimulus sequences and fixed eye positions. Although powerful, the employed paradigms are far from what constitutes natural vision. Here, we demonstrate the feasibility of ecologically more valid experimental paradigms using natural viewing behaviour, by combining a free viewing paradigm on natural scenes, free of photographer bias, with advanced data processing techniques that correct for overlap effects and co-varying non-linear dependencies of multiple eye movement parameters. We validate this approach by replicating classic N170 effects in neural responses, triggered by fixation onsets (fixation event-related potentials [fERPs]). Importantly, besides finding a strong correlation between both experiments, our more natural stimulus paradigm yielded smaller variability between subjects than the classic setup. Moving beyond classic temporal and spatial effect locations, our experiment furthermore revealed previously unknown signatures of face processing: This includes category-specific modulation of the event-related potential (ERP)'s amplitude even before fixation onset, as well as adaptation effects across subsequent fixations depending on their history.
Collapse
Affiliation(s)
- Anna L Gert
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
| | - Benedikt V Ehinger
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.,Stuttgart Center for Simulation Science, University of Stuttgart, Stuttgart, Germany
| | - Silja Timm
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
| | - Tim C Kietzmann
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.,MRC Cognition and Brain Sciences Unit, Cambridge University, Cambridge, UK
| | - Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany.,Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| |
Collapse
|
42
|
Ramp-shaped neural tuning supports graded population-level representation of the object-to-scene continuum. Sci Rep 2022; 12:18081. [PMID: 36302932 PMCID: PMC9613906 DOI: 10.1038/s41598-022-21768-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 09/30/2022] [Indexed: 01/24/2023] Open
Abstract
We can easily perceive the spatial scale depicted in a picture, regardless of whether it is a small space (e.g., a close-up view of a chair) or a much larger space (e.g., an entire classroom). How does the human visual system encode this continuous dimension? Here, we investigated the underlying neural coding of depicted spatial scale, by examining the voxel tuning and topographic organization of brain responses. We created naturalistic yet carefully-controlled stimuli by constructing virtual indoor environments, and rendered a series of snapshots to smoothly sample between a close-up view of the central object and a far-scale view of the full environment (object-to-scene continuum). Human brain responses were measured to each position using functional magnetic resonance imaging. We did not find evidence for a smooth topographic mapping for the object-to-scene continuum on the cortex. Instead, we observed large swaths of cortex with opposing ramp-shaped profiles, with highest responses to one end of the object-to-scene continuum or the other, and a small region showing a weak tuning to intermediate scale views. However, when we considered the population code of the entire ventral occipito-temporal cortex, we found smooth and linear representation of the object-to-scene continuum. Our results together suggest that depicted spatial scale information is encoded parametrically in large-scale population codes across the entire ventral occipito-temporal cortex.
Collapse
|
43
|
Khosla M, Ratan Murty NA, Kanwisher N. A highly selective response to food in human visual cortex revealed by hypothesis-free voxel decomposition. Curr Biol 2022; 32:4159-4171.e9. [PMID: 36027910 PMCID: PMC9561032 DOI: 10.1016/j.cub.2022.08.009] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Revised: 08/03/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022]
Abstract
Prior work has identified cortical regions selectively responsive to specific categories of visual stimuli. However, this hypothesis-driven work cannot reveal how prominent these category selectivities are in the overall functional organization of the visual cortex, or what others might exist that scientists have not thought to look for. Furthermore, standard voxel-wise tests cannot detect distinct neural selectivities that coexist within voxels. To overcome these limitations, we used data-driven voxel decomposition methods to identify the main components underlying fMRI responses to thousands of complex photographic images. Our hypothesis-neutral analysis rediscovered components selective for faces, places, bodies, and words, validating our method and showing that these selectivities are dominant features of the ventral visual pathway. The analysis also revealed an unexpected component with a distinct anatomical distribution that responded highly selectively to images of food. Alternative accounts based on low- to mid-level visual features, such as color, shape, or texture, failed to account for the food selectivity of this component. High-throughput testing and control experiments with matched stimuli on a highly accurate computational model of this component confirm its selectivity for food. We registered our methods and hypotheses before replicating them on held-out participants and in a novel dataset. These findings demonstrate the power of data-driven methods and show that the dominant neural responses of the ventral visual pathway include not only selectivities for faces, scenes, bodies, and words but also the visually heterogeneous category of food, thus constraining accounts of when and why functional specialization arises in the cortex.
Collapse
Affiliation(s)
- Meenakshi Khosla
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - N Apurva Ratan Murty
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Nancy Kanwisher
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| |
Collapse
|
44
|
Arbel R, Heimler B, Amedi A. Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience. Front Neurosci 2022; 16:921321. [PMID: 36263367 PMCID: PMC9576157 DOI: 10.3389/fnins.2022.921321] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/05/2022] [Indexed: 11/16/2022] Open
Abstract
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via visual-to-auditory sensory substitution (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions, for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained in life-long sensory deprivation, independently of previous perceptual experience. They also highlight that if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.
Collapse
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Pediatrics, Hadassah University Hospital-Mount Scopus, Jerusalem, Israel
- *Correspondence: Roni Arbel,
| | - Benedetta Heimler
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation, Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
| |
Collapse
|
45
|
Jiang Z, Sanders DMW, Cowell RA. Visual and semantic similarity norms for a photographic image stimulus set containing recognizable objects, animals and scenes. Behav Res Methods 2022; 54:2364-2380. [PMID: 35088365 PMCID: PMC9325926 DOI: 10.3758/s13428-021-01732-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/21/2021] [Indexed: 11/08/2022]
Abstract
We collected visual and semantic similarity norms for a set of photographic images comprising 120 recognizable objects/animals and 120 indoor/outdoor scenes. Human observers rated the similarity of pairs of images within four categories of stimuli (inanimate objects, animals, indoor scenes, and outdoor scenes) via Amazon's Mechanical Turk. We performed multidimensional scaling (MDS) on the collected similarity ratings to visualize the perceived similarity for each image category, for both visual and semantic ratings. The MDS solutions revealed the expected similarity relationships between images within each category, along with intuitively sensible differences between visual and semantic similarity relationships for each category. Stress tests performed on the MDS solutions indicated that the MDS analyses captured meaningful levels of variance in the similarity data. These stimuli, associated norms and naming data are made available to all researchers, and should provide a useful resource for researchers of vision, memory and conceptual knowledge wishing to run experiments using well-parameterized stimulus sets.
Collapse
Affiliation(s)
- Zhuohan Jiang
- Neuroscience Program, Smith College, Northampton, MA, USA
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
| | - D Merika W Sanders
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, 135 Hicks Way, Amherst, MA, 01003, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Rosemary A Cowell
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, 135 Hicks Way, Amherst, MA, 01003, USA.
| |
Collapse
|
46
|
An evaluation of how connectopic mapping reveals visual field maps in V1. Sci Rep 2022; 12:16249. [PMID: 36171242 PMCID: PMC9519585 DOI: 10.1038/s41598-022-20322-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 09/09/2022] [Indexed: 11/25/2022] Open
Abstract
Functional gradients, in which response properties change gradually across the cortical surface, have been proposed as a key organising principle of the brain. However, the presence of these gradients remains undetermined in many brain regions. Resting-state neuroimaging studies have suggested these gradients can be reconstructed from patterns of functional connectivity. Here we investigate the accuracy of these reconstructions and establish whether it is connectivity or the functional properties within a region that determine these "connectopic maps". Different manifold learning techniques were used to recover visual field maps while participants were at rest or engaged in natural viewing. We benchmarked these reconstructions against maps measured by traditional visual field mapping. We report an initial exploratory experiment of a publicly available naturalistic imaging dataset, followed by a preregistered replication using larger resting-state and naturalistic imaging datasets from the Human Connectome Project. Connectopic mapping accurately predicted visual field maps in primary visual cortex, with better predictions for eccentricity than polar angle maps. Non-linear manifold learning methods outperformed simpler linear embeddings. We also found more accurate predictions during natural viewing compared to resting-state. Varying the source of the connectivity estimates had minimal impact on the connectopic maps, suggesting the key factor is the functional topography within a brain region. The application of these standardised methods for connectopic mapping will allow the discovery of functional gradients across the brain. Protocol registration The stage 1 protocol for this Registered Report was accepted in principle on 19 April 2022. The protocol, as accepted by the journal, can be found at 10.6084/m9.figshare.19771717.
Collapse
|
47
|
Bola Ł. Rethinking the representation of sound. eLife 2022; 11:e82747. [PMID: 36070353 PMCID: PMC9451532 DOI: 10.7554/elife.82747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Blindness triggers a reorganization of the visual and auditory cortices in the brain.
Collapse
Affiliation(s)
- Łukasz Bola
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
| |
Collapse
|
48
|
Harnett NG, Finegold KE, Lebois LAM, van Rooij SJH, Ely TD, Murty VP, Jovanovic T, Bruce SE, House SL, Beaudoin FL, An X, Zeng D, Neylan TC, Clifford GD, Linnstaedt SD, Germine LT, Bollen KA, Rauch SL, Haran JP, Storrow AB, Lewandowski C, Musey PI, Hendry PL, Sheikh S, Jones CW, Punches BE, Kurz MC, Swor RA, Hudak LA, Pascual JL, Seamon MJ, Harris E, Chang AM, Pearson C, Peak DA, Domeier RM, Rathlev NK, O'Neil BJ, Sergot P, Sanchez LD, Miller MW, Pietrzak RH, Joormann J, Barch DM, Pizzagalli DA, Sheridan JF, Harte SE, Elliott JM, Kessler RC, Koenen KC, McLean SA, Nickerson LD, Ressler KJ, Stevens JS. Structural covariance of the ventral visual stream predicts posttraumatic intrusion and nightmare symptoms: a multivariate data fusion analysis. Transl Psychiatry 2022; 12:321. [PMID: 35941117 PMCID: PMC9360028 DOI: 10.1038/s41398-022-02085-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 07/14/2022] [Accepted: 07/20/2022] [Indexed: 01/16/2023] Open
Abstract
Visual components of trauma memories are often vividly re-experienced by survivors with deleterious consequences for normal function. Neuroimaging research on trauma has primarily focused on threat-processing circuitry as core to trauma-related dysfunction. Conversely, limited attention has been given to visual circuitry which may be particularly relevant to posttraumatic stress disorder (PTSD). Prior work suggests that the ventral visual stream is directly related to the cognitive and affective disturbances observed in PTSD and may be predictive of later symptom expression. The present study used multimodal magnetic resonance imaging data (n = 278) collected two weeks after trauma exposure from the AURORA study, a longitudinal, multisite investigation of adverse posttraumatic neuropsychiatric sequelae. Indices of gray and white matter were combined using data fusion to identify a structural covariance network (SCN) of the ventral visual stream 2 weeks after trauma. Participants' loadings on the SCN were positively associated with both intrusion symptoms and intensity of nightmares. Further, SCN loadings moderated connectivity between a previously observed amygdala-hippocampal functional covariance network and the inferior temporal gyrus. Follow-up MRI data at 6 months showed an inverse relationship between SCN loadings and negative alterations in cognition and mood. Further, individuals who showed decreased strength of the SCN between 2 weeks and 6 months had generally higher PTSD symptom severity over time. The present findings highlight a role for structural integrity of the ventral visual stream in the development of PTSD. The ventral visual stream may be particularly important for the consolidation or retrieval of trauma memories and may contribute to efficient reactivation of visual components of the trauma memory, thereby exacerbating PTSD symptoms. 
Potentially, chronic engagement of the network may lead to reduced structural integrity, which becomes a risk factor for lasting PTSD symptoms.
Collapse
Affiliation(s)
- Nathaniel G Harnett
- Division of Depression and Anxiety, McLean Hospital, Belmont, MA, USA.
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA

Lauren A M Lebois
- Division of Depression and Anxiety, McLean Hospital, Belmont, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA

Sanne J H van Rooij
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, GA, USA

Timothy D Ely
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, GA, USA

Vishnu P Murty
- Department of Psychology, Temple University, Philadelphia, PA, USA

Tanja Jovanovic
- Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, USA

Steven E Bruce
- Department of Psychological Sciences, University of Missouri - St. Louis, St. Louis, MO, USA

Stacey L House
- Department of Emergency Medicine, Washington University School of Medicine, St. Louis, MO, USA

Francesca L Beaudoin
- Department of Emergency Medicine & Department of Health Services, Policy, and Practice, The Alpert Medical School of Brown University, Rhode Island Hospital and The Miriam Hospital, Providence, RI, USA

Xinming An
- Institute for Trauma Recovery, Department of Anesthesiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Donglin Zeng
- Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA

Thomas C Neylan
- Departments of Psychiatry and Neurology, University of California San Francisco, San Francisco, CA, USA

Gari D Clifford
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA

Sarah D Linnstaedt
- Institute for Trauma Recovery, Department of Anesthesiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Laura T Germine
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA
- The Many Brains Project, Belmont, MA, USA

Kenneth A Bollen
- Department of Psychology and Neuroscience & Department of Sociology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Scott L Rauch
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Institute for Technology in Psychiatry, McLean Hospital, Belmont, MA, USA
- Department of Psychiatry, McLean Hospital, Belmont, MA, USA

John P Haran
- Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA

Alan B Storrow
- Department of Emergency Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Paul I Musey
- Department of Emergency Medicine, Indiana University School of Medicine, Indianapolis, IN, USA

Phyllis L Hendry
- Department of Emergency Medicine, University of Florida College of Medicine-Jacksonville, Jacksonville, FL, USA

Sophia Sheikh
- Department of Emergency Medicine, University of Florida College of Medicine-Jacksonville, Jacksonville, FL, USA

Christopher W Jones
- Department of Emergency Medicine, Cooper Medical School of Rowan University, Camden, NJ, USA

Brittany E Punches
- Department of Emergency Medicine, Ohio State University College of Medicine, Columbus, OH, USA
- Ohio State University College of Nursing, Columbus, OH, USA

Michael C Kurz
- Department of Emergency Medicine, University of Alabama School of Medicine, Birmingham, AL, USA
- Department of Surgery, Division of Acute Care Surgery, University of Alabama School of Medicine, Birmingham, AL, USA
- Center for Injury Science, University of Alabama at Birmingham, Birmingham, AL, USA

Robert A Swor
- Department of Emergency Medicine, Oakland University William Beaumont School of Medicine, Rochester, MI, USA

Lauren A Hudak
- Department of Emergency Medicine, Emory University School of Medicine, Atlanta, GA, USA

Jose L Pascual
- Department of Surgery, Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA

Mark J Seamon
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Surgery, Division of Traumatology, Surgical Critical Care and Emergency Surgery, University of Pennsylvania, Philadelphia, PA, USA

Anna M Chang
- Department of Emergency Medicine, Jefferson University Hospitals, Philadelphia, PA, USA

Claire Pearson
- Department of Emergency Medicine, Wayne State University, Ascension St. John Hospital, Detroit, MI, USA

David A Peak
- Department of Emergency Medicine, Massachusetts General Hospital, Boston, MA, USA

Robert M Domeier
- Department of Emergency Medicine, Saint Joseph Mercy Hospital, Ypsilanti, MI, USA

Niels K Rathlev
- Department of Emergency Medicine, University of Massachusetts Medical School-Baystate, Springfield, MA, USA

Brian J O'Neil
- Department of Emergency Medicine, Wayne State University, Detroit Receiving Hospital, Detroit, MI, USA

Paulina Sergot
- Department of Emergency Medicine, McGovern Medical School, University of Texas Health, Houston, TX, USA

Leon D Sanchez
- Department of Emergency Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA

Mark W Miller
- National Center for PTSD, Behavioral Science Division, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Boston University School of Medicine, Boston, MA, USA

Robert H Pietrzak
- National Center for PTSD, Clinical Neurosciences Division, VA Connecticut Healthcare System, West Haven, CT, USA
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA

Jutta Joormann
- Department of Psychology, Yale University, New Haven, CT, USA

Deanna M Barch
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA

Diego A Pizzagalli
- Division of Depression and Anxiety, McLean Hospital, Belmont, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA

John F Sheridan
- Division of Biosciences, Ohio State University College of Dentistry, Columbus, OH, USA
- Institute for Behavioral Medicine Research, OSU Wexner Medical Center, Columbus, OH, USA

Steven E Harte
- Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, MI, USA
- Department of Internal Medicine-Rheumatology, University of Michigan Medical School, Ann Arbor, MI, USA

James M Elliott
- Kolling Institute, University of Sydney, St Leonards, New South Wales, Australia
- Faculty of Medicine and Health, University of Sydney, Northern Sydney Local Health District, New South Wales, Australia
- Physical Therapy & Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA

Ronald C Kessler
- Department of Health Care Policy, Harvard Medical School, Boston, MA, USA

Karestan C Koenen
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA

Samuel A McLean
- Department of Emergency Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Institute for Trauma Recovery, Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Lisa D Nickerson
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- McLean Imaging Center, McLean Hospital, Belmont, MA, USA

Kerry J Ressler
- Division of Depression and Anxiety, McLean Hospital, Belmont, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA

Jennifer S Stevens
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, GA, USA
49
Chen YC, Deza A, Konkle T. How big should this object be? Perceptual influences on viewing-size preferences. Cognition 2022; 225:105114. [DOI: 10.1016/j.cognition.2022.105114] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 03/24/2022] [Accepted: 03/28/2022] [Indexed: 11/29/2022]
50
Wang R, Janini D, Konkle T. Mid-level Feature Differences Support Early Animacy and Object Size Distinctions: Evidence from Electroencephalography Decoding. J Cogn Neurosci 2022; 34:1670-1680. [PMID: 35704550 PMCID: PMC9438936 DOI: 10.1162/jocn_a_01883] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts: unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.