1
Walbrin J, Sossounov N, Mahdiani M, Vaz I, Almeida J. Fine-grained knowledge about manipulable objects is well-predicted by contrastive language image pre-training. iScience 2024; 27:110297. PMID: 39040066; PMCID: PMC11261149; DOI: 10.1016/j.isci.2024.110297.
Abstract
Object recognition is an important ability that relies on distinguishing between similar objects (e.g., deciding which utensil(s) to use at different stages of meal preparation). Recent work describes the fine-grained organization of knowledge about manipulable objects via the study of the constituent dimensions that are most relevant to human behavior, for example, vision-, manipulation-, and function-based properties. A logical extension of this work concerns whether these dimensions are uniquely human or can be approximated by deep learning. Here, we show that these behavioral dimensions are generally well predicted by CLIP-ViT, a multimodal network trained on a large and diverse set of image-text pairs. Moreover, this model outperforms comparison networks pre-trained on smaller, image-only datasets. These results demonstrate the impressive capacity of CLIP-ViT to approximate fine-grained object knowledge. We discuss the possible sources of this benefit relative to other models (e.g., multimodal vs. image-only pre-training, dataset size, architecture).
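The analysis this abstract implies (frozen network embeddings used to predict behaviorally derived object dimensions) can be sketched with the standard cross-validated ridge-regression recipe. This is not the authors' code: the CLIP-ViT embeddings and human dimension ratings are replaced by simulated stand-ins so the sketch is self-contained.

```python
# Hedged sketch: predicting behavioral object dimensions from frozen
# network embeddings via cross-validated ridge regression.
# In the actual study, `embeddings` would be CLIP-ViT image-encoder
# activations for object photographs and `ratings` the behavioral
# dimension scores; here both are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_objects, n_features, n_dims = 80, 512, 3

embeddings = rng.normal(size=(n_objects, n_features))   # stand-in for CLIP features
weights = rng.normal(size=(n_features, n_dims))         # hidden embedding-to-rating map
ratings = embeddings @ weights + rng.normal(scale=5.0, size=(n_objects, n_dims))

model = RidgeCV(alphas=np.logspace(-2, 4, 13))
predicted = cross_val_predict(model, embeddings, ratings, cv=5)

# Per-dimension accuracy: correlation between held-out predictions and ratings
scores = [np.corrcoef(predicted[:, d], ratings[:, d])[0, 1] for d in range(n_dims)]
```

Comparing `scores` across different feature sets (e.g., image-only networks vs. a multimodal one) is the kind of model comparison the abstract describes.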
Affiliation(s)
- Jon Walbrin
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Nikita Sossounov
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Igor Vaz
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
2
Abdel-Ghaffar SA, Huth AG, Lescroart MD, Stansbury D, Gallant JL, Bishop SJ. Occipital-temporal cortical tuning to semantic and affective features of natural images predicts associated behavioral responses. Nat Commun 2024; 15:5531. PMID: 38982092; PMCID: PMC11233618; DOI: 10.1038/s41467-024-49073-8.
Abstract
In everyday life, people need to respond appropriately to many types of emotional stimuli. Here, we investigate whether human occipital-temporal cortex (OTC) shows co-representation of the semantic category and affective content of visual stimuli. We also explore whether OTC transformation of semantic and affective features extracts information of value for guiding behavior. Participants viewed 1620 emotional natural images while functional magnetic resonance imaging data were acquired. Using voxel-wise modeling, we show widespread tuning to semantic and affective image features across OTC. The top three principal components underlying OTC voxel-wise responses to image features encoded stimulus animacy, stimulus arousal, and interactions of animacy with stimulus valence and arousal. At low to moderate dimensionality, OTC tuning patterns predicted behavioral responses linked to each image better than regressors directly based on image features. This is consistent with OTC representing stimulus semantic category and affective content in a manner suited to guiding behavior.
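The voxel-wise modeling plus principal-component step described here follows a common encoding-model recipe: regress each voxel's response onto stimulus features, then apply PCA to the voxel-by-feature weight matrix to find the main tuning axes. The sketch below is not the authors' pipeline; all data are simulated stand-ins.

```python
# Hedged sketch of a voxel-wise encoding analysis followed by PCA on
# the fitted tuning weights. `features` stands in for the semantic and
# affective regressors; `bold` for the per-stimulus voxel responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_stimuli, n_features, n_voxels = 200, 20, 500

features = rng.normal(size=(n_stimuli, n_features))
true_tuning = rng.normal(size=(n_features, n_voxels))
bold = features @ true_tuning + rng.normal(scale=2.0, size=(n_stimuli, n_voxels))

# One multi-output ridge fit recovers weights for all voxels at once
weights = Ridge(alpha=1.0).fit(features, bold).coef_   # shape: (n_voxels, n_features)

# Principal components of voxel tuning: candidate organizing axes
# (in the study, these encoded animacy, arousal, and their interactions)
pca = PCA(n_components=3).fit(weights)
top_axes = pca.components_                              # shape: (3, n_features)
```

Projecting voxel weights onto `top_axes` would give each voxel a coordinate in the low-dimensional tuning space, analogous to the maps the abstract describes.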
Affiliation(s)
- Samy A Abdel-Ghaffar
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Google LLC, San Francisco, CA, USA
- Alexander G Huth
- Centre for Theoretical and Computational Neuroscience, UT Austin, Austin, TX, 78712, USA
- Mark D Lescroart
- Department of Psychology, University of Nevada Reno, Reno, NV, 89557, USA
- Dustin Stansbury
- Program in Vision Sciences, UC Berkeley, Berkeley, CA, 94720, USA
- Jack L Gallant
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Program in Vision Sciences, UC Berkeley, Berkeley, CA, 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, 94720, USA
- Sonia J Bishop
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, 94720, USA
- School of Psychology, Trinity College Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland
3
Retsa C, Turpin H, Geiser E, Ansermet F, Müller-Nix C, Murray MM. Longstanding Auditory Sensory and Semantic Differences in Preterm Born Children. Brain Topogr 2024; 37:536-551. PMID: 38010487; PMCID: PMC11199270; DOI: 10.1007/s10548-023-01022-2.
Abstract
More than 10% of births are preterm, and the long-term consequences on sensory and semantic processing of non-linguistic information remain poorly understood. Seventeen very preterm-born children (born at <33 weeks gestational age) and 15 full-term controls were tested at 10 years old with an auditory object recognition task, while 64-channel auditory evoked potentials (AEPs) were recorded. Sounds consisted of living (animal and human vocalizations) and manmade objects (e.g. household objects, instruments, and tools). Despite similar recognition behavior, AEPs strikingly differed between full-term and preterm children. Starting at 50 ms post-stimulus onset, AEPs from preterm children differed topographically from their full-term counterparts. Over the 108-224 ms post-stimulus period, full-term children showed stronger AEPs in response to living objects, whereas preterm-born children showed the reverse pattern, i.e. stronger AEPs in response to manmade objects. Differential brain activity between semantic categories could reliably classify children according to their preterm status. Moreover, this opposing pattern of differential responses to semantic categories of sounds was also observed in source estimations within a network of occipital, temporal and frontal regions. This study highlights how early life experience in terms of preterm birth shapes sensory and object processing later on in life.
Affiliation(s)
- Chrysa Retsa
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Hélène Turpin
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Eveline Geiser
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- François Ansermet
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Department of Child and Adolescent Psychiatry, University Hospital, Geneva, Switzerland
- Carole Müller-Nix
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
4
Ulep MG, Liénard P. Free-listing and Semantic Knowledge: A Tool for Detecting Alzheimer Disease? Cogn Behav Neurol 2024:00146965-990000000-00068. PMID: 38899852; DOI: 10.1097/wnn.0000000000000370.
Abstract
BACKGROUND: Impairment in semantic knowledge contributes to Alzheimer disease (AD)-related decline. However, the particulars of the impact AD has on specific domains of knowledge remain debatable.
OBJECTIVE: To investigate the impact of AD on specific semantic categories that are integral to daily functions: living things and man-made objects.
METHOD: We administered a free-listing task (written version) to 19 individuals with AD and 15 cognitively normal older adults and assessed the task's relationship with other cognitive and functional tests in clinical use. We compared the contents of the lists of salient concepts generated by the AD and control groups.
RESULTS: Group membership (AD or control), after controlling for age, sex, formal education, and an estimate of premorbid intellectual ability, predicted performance on the free-listing task across the two categories. Functional status was inversely related to performance on the free-listing task, holding demographic variables constant. A comparison of the contents of the free lists generated by the two groups showed that, in individuals with AD, conceptual knowledge central to the respective categories was well preserved, whereas peripheral conceptual material showed evidence of degradation.
CONCLUSION: The free-listing task, an easy-to-administer and cost-effective tool, could aid in the preliminary detection of semantic knowledge dysfunction, revealing which concepts are better preserved and, possibly, aiding the characterization of AD. Cognitive assessment tools that can be applied across cultures are needed, and the free-listing task has the potential to address this gap.
Affiliation(s)
- Maileen G Ulep
- Cognitive Disorders Clinic, Cleveland Clinic Nevada, Lou Ruvo Center for Brain Health, Las Vegas, Nevada
- Department of Anthropology, University of Nevada Las Vegas, Las Vegas, Nevada
- Pierre Liénard
- Department of Anthropology, University of Nevada Las Vegas, Las Vegas, Nevada
5
Tian S, Chen L, Wang X, Li G, Fu Z, Ji Y, Lu J, Wang X, Shan S, Bi Y. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024; 174:241-255. PMID: 38582629; DOI: 10.1016/j.cortex.2024.02.016.
Abstract
Shape is a property that can be perceived by vision and touch, and is classically considered to be supramodal. While there is mounting evidence for a shared cognitive and neural representation space between visual and tactile shape, previous research tended to rely on dissimilarity structures between objects and had not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to stronger differences in animals than in tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools with rich tactile/manipulation experience, the blind group produced overall good shapes comparable to the sighted, yet also showed intriguing differences. The blind group had more variation and a systematic bias in the geometric properties of tools (making them stubbier than the sighted did), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, object shape representation reflects the intricate orchestration of vision, touch and language.
Affiliation(s)
- Shuang Tian
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lingjuan Chen
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Guochao Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ze Fu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yufeng Ji
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiahui Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shiguang Shan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
6
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
7
Hauptman M, Elli G, Pant R, Bedny M. Neural specialization for 'visual' concepts emerges in the absence of vision. bioRxiv [Preprint] 2024:2023.08.23.552701. PMID: 37662234; PMCID: PMC10473738; DOI: 10.1101/2023.08.23.552701.
Abstract
Vision provides a key source of information about many concepts, including 'living things' (e.g., tiger) and visual events (e.g., sparkle). According to a prominent theoretical framework, neural specialization for different conceptual categories is shaped by sensory features, e.g., living things are neurally dissociable from navigable places because living things concepts depend more on visual features. We tested this framework by comparing the neural basis of 'visual' concepts across sighted (n=22) and congenitally blind (n=21) adults. Participants judged the similarity of words varying in their reliance on vision while undergoing fMRI. We compared neural responses to living things nouns (birds, mammals) and place nouns (natural, manmade). In addition, we compared visual event verbs (e.g., 'sparkle') to non-visual events (sound emission, hand motion, mouth motion). People born blind exhibited distinctive univariate and multivariate responses to living things in a temporo-parietal semantic network activated by nouns, including the precuneus (PC). To our knowledge, this is the first demonstration that neural selectivity for living things does not require vision. We additionally observed preserved neural signatures of 'visual' light events in the left middle temporal gyrus (LMTG+). Across a wide range of semantic types, neural representations of sensory concepts develop independent of sensory experience.
Affiliation(s)
- Miriam Hauptman
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Giulia Elli
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Rashi Pant
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Department of Biological Psychology & Neuropsychology, Universität Hamburg, Germany
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
8
Almeida J, Fracasso A, Kristensen S, Valério D, Bergström F, Chakravarthi R, Tal Z, Walbrin J. Neural and behavioral signatures of the multidimensionality of manipulable object processing. Commun Biol 2023; 6:940. PMID: 37709924; PMCID: PMC10502059; DOI: 10.1038/s42003-023-05323-x.
Abstract
Understanding how we recognize objects requires unravelling the variables that govern the way we think about objects and the neural organization of object representations. A tenable hypothesis is that the organization of object knowledge follows key object-related dimensions. Here, we explored, behaviorally and neurally, the multidimensionality of object processing. We focused on within-domain object information as a proxy for the decisions we typically make in our daily lives, e.g., identifying a hammer in the context of other tools. We extracted object-related dimensions from subjective human judgments on a set of manipulable objects. We show that the extracted dimensions are cognitively interpretable and relevant (participants are able to consistently label them, and these dimensions can guide object categorization) and are important for the neural organization of knowledge (they predict neural signals elicited by manipulable objects). This shows that multidimensionality is a hallmark of the organization of manipulable object knowledge.
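Extracting interpretable dimensions from subjective judgments, as described above, is commonly done with multidimensional scaling (MDS) or related embedding methods applied to pairwise dissimilarities. The sketch below is not the authors' method; it illustrates the general idea on simulated judgments with a known two-dimensional structure.

```python
# Hedged sketch: recovering low-dimensional object dimensions from
# pairwise dissimilarity judgments via metric MDS. All "judgments"
# here are simulated from a known 2-D latent space plus noise.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_objects, n_latent = 40, 2

# Objects with two latent dimensions; judged dissimilarity approximates
# Euclidean distance in that latent space, plus judgment noise
latent = rng.normal(size=(n_objects, n_latent))
diff = latent[:, None, :] - latent[None, :, :]
dissim = np.linalg.norm(diff, axis=-1)
dissim += rng.normal(scale=0.05, size=dissim.shape)
dissim = (dissim + dissim.T) / 2          # symmetrize the noisy judgments
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=n_latent, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)        # recovered object dimensions
```

Each row of `coords` is an object's position along the recovered dimensions; in a study like this one, those axes would then be labeled by participants and related to neural responses.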
Affiliation(s)
- Jorge Almeida
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Alessio Fracasso
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Stephanie Kristensen
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Daniela Valério
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Fredrik Bergström
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- Zohar Tal
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Jonathan Walbrin
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
9
Dȩbska A, Wójcik M, Chyl K, Dziȩgiel-Fivet G, Jednoróg K. Beyond the Visual Word Form Area - a cognitive characterization of the left ventral occipitotemporal cortex. Front Hum Neurosci 2023; 17:1199366. PMID: 37576470; PMCID: PMC10416454; DOI: 10.3389/fnhum.2023.1199366.
Abstract
The left ventral occipitotemporal cortex has been traditionally viewed as a pathway for visual object recognition, including written letters and words. Its crucial role in reading was strengthened by studies on the functionally localized "Visual Word Form Area" responsible for processing word-like information. However, in the past 20 years, empirical studies have challenged the assumption that this brain region processes exclusively visual, or even orthographic, stimuli. In this review, we trace how understanding of the left ventral occipitotemporal cortex has developed, from a visually based letter area to a modality-independent, symbolic, language-related region. We discuss theoretical and empirical research that includes orthographic, phonological, and semantic properties of language. Existing results show that involvement of the left ventral occipitotemporal cortex is not limited to unimodal activity but also includes multimodal processes. The idea of the integrative nature of this region is supported by its broad functional and structural connectivity with language-related and attentional brain networks. We conclude that although the function of the area is not yet fully understood in human cognition, its role goes beyond visual word form processing. The left ventral occipitotemporal cortex seems to be crucial for combining higher-level language information with abstract forms that convey meaning independently of modality.
Affiliation(s)
- Agnieszka Dȩbska
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Marta Wójcik
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Chyl
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- The Educational Research Institute, Warsaw, Poland
- Gabriela Dziȩgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
10
Coggan DD, Tong F. Spikiness and animacy as potential organizing principles of human ventral visual cortex. Cereb Cortex 2023; 33:8194-8217. PMID: 36958809; PMCID: PMC10321104; DOI: 10.1093/cercor/bhad108.
Abstract
Considerable research has been devoted to understanding the fundamental organizing principles of the ventral visual pathway. A recent study revealed a series of 3-4 topographical maps arranged along the macaque inferotemporal (IT) cortex. The maps articulated a two-dimensional space based on the spikiness and animacy of visual objects, with "inanimate-spiky" and "inanimate-stubby" regions of the maps constituting two previously unidentified cortical networks. The goal of our study was to determine whether a similar functional organization might exist in human IT. To address this question, we presented the same object stimuli and images from "classic" object categories (bodies, faces, houses) to humans while recording fMRI activity at 7 Tesla. Contrasts designed to reveal the spikiness-animacy object space evoked extensive significant activation across human IT. However, unlike the macaque, we did not observe a clear sequence of complete maps, and selectivity for the spikiness-animacy space was deeply and mutually entangled with category-selectivity. Instead, we observed multiple new stimulus preferences in category-selective regions, including functional sub-structure related to object spikiness in scene-selective cortex. Taken together, these findings highlight spikiness as a promising organizing principle of human IT and provide new insights into the role of category-selective regions in visual object processing.
Affiliation(s)
- David D Coggan
- Department of Psychology, Vanderbilt University, 111 21st Ave S, Nashville, TN 37240, United States
- Frank Tong
- Department of Psychology, Vanderbilt University, 111 21st Ave S, Nashville, TN 37240, United States
11
Usami K, Matsumoto R, Korzeniewska A, Shimotake A, Matsuhashi M, Nakae T, Kikuchi T, Yoshida K, Kunieda T, Takahashi R, Crone NE, Ikeda A. The dynamics of cortical interactions in visual recognition of object category: living versus nonliving. Cereb Cortex 2023; 33:5740-5750. PMID: 36408645; PMCID: PMC10152084; DOI: 10.1093/cercor/bhac456.
Abstract
Noninvasive brain imaging studies have shown that higher visual processing of objects occurs in neural populations that are separable along broad semantic categories, particularly living versus nonliving objects. However, because of their limited temporal resolution, these studies have not been able to determine whether broad semantic categories are also reflected in the dynamics of neural interactions within cortical networks. We investigated the time course of neural propagation among cortical areas activated during object naming in 12 patients implanted with subdural electrode grids prior to epilepsy surgery, with a special focus on the visual recognition phase of the task. Analysis of event-related causality revealed significantly stronger neural propagation among sites within ventral temporal lobe (VTL) at early latencies, around 250 ms, for living objects compared to nonliving objects. Differences in other features, including familiarity, visual complexity, and age of acquisition, did not significantly change the patterns of neural propagation. Our findings suggest that the visual processing of living objects relies on stronger causal interactions among sites within VTL, perhaps reflecting greater integration of visual feature processing. In turn, this may help explain the fragility of naming living objects in neurological diseases affecting VTL.
Affiliation(s)
- Kiyohide Usami
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Riki Matsumoto
- Division of Neurology, Kobe University Graduate School of Medicine, Kobe 650-0017, Japan
- Anna Korzeniewska
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Akihiro Shimotake
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Masao Matsuhashi
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Takuro Nakae
- Department of Neurosurgery, Shiga General Hospital, Moriyama 524-8524, Japan
- Takayuki Kikuchi
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Kazumichi Yoshida
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Takeharu Kunieda
- Department of Neurosurgery, Ehime University Graduate School of Medicine, Toon 791-0295, Japan
- Ryosuke Takahashi
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Nathan E Crone
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Akio Ikeda
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
12
Leshinskaya A, Bajaj M, Thompson-Schill SL. Novel objects with causal event schemas elicit selective responses in tool- and hand-selective lateral occipitotemporal cortex. Cereb Cortex 2023; 33:5557-5573. PMID: 36469589; PMCID: PMC10152094; DOI: 10.1093/cercor/bhac442.
Abstract
Tool-selective lateral occipitotemporal cortex (LOTC) responds preferentially to images of tools (hammers, brushes) relative to non-tool objects (clocks, shoes). What drives these responses? Unlike other objects, tools exert effects on their surroundings. We tested whether LOTC responses are influenced by event schemas that denote different temporal relations. Participants learned about novel objects embedded in different event sequences. Causer objects moved prior to the appearance of an environmental event (e.g. stars), while Reactor objects moved after an event. Visual features and motor association were controlled. During functional magnetic resonance imaging, participants viewed still images of the objects. We localized tool-selective LOTC and non-tool-selective parahippocampal cortex (PHC) by contrasting neural responses to images of familiar tools and non-tools. We found that LOTC responded more to Causers than Reactors, while PHC did not. We also measured responses to images of hands, which elicit overlapping responses with tools. Across inferior temporal cortex, voxels' tool and hand selectivity positively predicted a preferential response to Causers. We conclude that an event schema typical of tools is sufficient to drive LOTC and that category-preferential responses across the temporal lobe may reflect relational event structures typical of those domains.
Affiliation(s)
- Anna Leshinskaya
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States
- Center for Neuroscience, University of California, Davis, 1544 Newton Court, Room 209, Davis, CA, United States
- Mira Bajaj
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States
- The Johns Hopkins University School of Medicine, 733 N Broadway, Baltimore, MD 21205, United States
- Sharon L Thompson-Schill
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A Levin Building, Philadelphia, PA 19104, United States
13
Frisby SL, Halai AD, Cox CR, Lambon Ralph MA, Rogers TT. Decoding semantic representations in mind and brain. Trends Cogn Sci 2023; 27:258-281. PMID: 36631371; DOI: 10.1016/j.tics.2022.12.006.
Abstract
A key goal for cognitive neuroscience is to understand the neurocognitive systems that support semantic memory. Recent multivariate analyses of neuroimaging data have contributed greatly to this effort, but the rapid development of these novel approaches has made it difficult to track the diversity of findings and to understand how and why they sometimes lead to contradictory conclusions. We address this challenge by reviewing cognitive theories of semantic representation and their neural instantiation. We then consider contemporary approaches to neural decoding and assess which types of representation each can possibly detect. The analysis suggests why the results are heterogeneous and identifies crucial links between cognitive theory, data collection, and analysis that can help to better connect neuroimaging to mechanistic theories of semantic cognition.
Affiliation(s)
- Saskia L Frisby
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK
- Ajay D Halai
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, LA 70803, USA
- Matthew A Lambon Ralph
- Medical Research Council (MRC) Cognition and Brain Sciences Unit, Chaucer Road, Cambridge CB2 7EF, UK
- Timothy T Rogers
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53706, USA
14
Mamus E, Speed LJ, Rissman L, Majid A, Özyürek A. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People. Cogn Sci 2023; 47:e13228. PMID: 36607157; PMCID: PMC10078191; DOI: 10.1111/cogs.13228.
Abstract
The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in the multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
Affiliation(s)
- Ezgi Mamus
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
- Lilia Rissman
- Department of Psychology, University of Wisconsin - Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
- Aslı Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics; Donders Center for Cognition, Radboud University
15
Sá-Leite AR, Comesaña M, Acuña-Fariña C, Fraga I. A cautionary note on the studies using the picture-word interference paradigm: the unwelcome consequences of the random use of "in/animates". Front Psychol 2023; 14:1145884. PMID: 37213376; PMCID: PMC10196210; DOI: 10.3389/fpsyg.2023.1145884.
Abstract
The picture-word interference (PWI) paradigm allows us to delve into the process of lexical access in language production with great precision. It creates situations of interference between target pictures and superimposed distractor words that participants must consciously ignore to name the pictures. Yet, although the PWI paradigm has offered numerous insights at all levels of lexical representation, in this work we expose an extended lack of control regarding the variable animacy. Animacy has been shown to have a great impact on cognition, especially when it comes to the mechanisms of attention, which are highly biased toward animate entities to the detriment of inanimate objects. Furthermore, animate nouns have been shown to be semantically richer and prioritized during lexical access, with effects observable in multiple psycholinguistic tasks. Indeed, not only does the performance on a PWI task directly depend on the different stages of lexical access to nouns, but also attention has a fundamental role in it, as participants must focus on targets and ignore interfering distractors. We conducted a systematic review with the terms "picture-word interference paradigm" and "animacy" in the databases PsycInfo and Psychology Database. The search revealed that only 12 from a total of 193 PWI studies controlled for animacy, and only one considered it as a factor in the design. The remaining studies included animate and inanimate stimuli in their materials randomly, sometimes in a very disproportionate amount across conditions. We speculate about the possible impact of this uncontrolled variable mixing on many types of effects within the framework of multiple theories, namely the Animate Monitoring Hypothesis, the WEAVER++ model, and the Independent Network Model in an attempt to fuel the theoretical debate on this issue as well as the empirical research to turn speculations into knowledge.
Affiliation(s)
- Ana Rita Sá-Leite
- Cognitive Processes and Behavior Research Group, Department of Social Psychology, Basic Psychology, and Methodology, University of Santiago de Compostela, Santiago de Compostela, Spain
- Institut für Romanische Sprachen und Literaturen, Goethe University Frankfurt, Frankfurt, Germany
- Correspondence: Ana Rita Sá-Leite
- Montserrat Comesaña
- Psycholinguistics Research Line, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Carlos Acuña-Fariña
- Cognitive Processes and Behavior Research Group, Department of English and German, University of Santiago de Compostela, Santiago de Compostela, Spain
- Isabel Fraga
- Cognitive Processes and Behavior Research Group, Department of Social Psychology, Basic Psychology, and Methodology, University of Santiago de Compostela, Santiago de Compostela, Spain
16
Cabral L, Zubiaurre-Elorza L, Wild CJ, Linke A, Cusack R. Anatomical correlates of category-selective visual regions have distinctive signatures of connectivity in neonates. Dev Cogn Neurosci 2022; 58:101179. PMID: 36521345; PMCID: PMC9768242; DOI: 10.1016/j.dcn.2022.101179.
Abstract
The ventral visual stream is shaped during development by innate proto-organization within the visual system, such as the strong input from the fovea to the fusiform face area. In adults, category-selective regions have distinct signatures of connectivity to brain regions beyond the visual system, likely reflecting cross-modal and motoric associations. We tested if this long-range connectivity is part of the innate proto-organization, or if it develops with postnatal experience, by using diffusion-weighted imaging to characterize the connectivity of anatomical correlates of category-selective regions in neonates (N = 445), 1-9 month old infants (N = 11), and adults (N = 14). Using the HCP data we identified face- and place- selective regions and a third intermediate region with a distinct profile of selectivity. Using linear classifiers, these regions were found to have distinctive connectivity at birth, to other regions in the visual system and to those outside of it. The results support an extended proto-organization that includes long-range connectivity that shapes, and is shaped by, experience-dependent development.
Affiliation(s)
- Laura Cabral
- Department of Radiology, University of Pittsburgh, Pittsburgh 15224, PA, USA
- Leire Zubiaurre-Elorza
- Department of Psychology, Faculty of Health Sciences, University of Deusto, Bilbao 48007, Spain
- Conor J Wild
- Western Institute for Neuroscience, Western University, London, ON N6A 3K7, Canada; Department of Physiology and Pharmacology, Western University, London, ON N6A 3K7, Canada
- Annika Linke
- Brain Development Imaging Laboratories, San Diego State University, San Diego 92120, CA, USA
- Rhodri Cusack
- Trinity College Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
17
Wang R, Janini D, Konkle T. Mid-level Feature Differences Support Early Animacy and Object Size Distinctions: Evidence from EEG Decoding. J Cogn Neurosci 2022; 34:1670-1680. PMID: 35704550; DOI: 10.1162/jocn_a_01883.
Abstract
Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts-unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.
18
Rogers TT, Lambon Ralph MA. Semantic tiles or hub-and-spokes? Trends Cogn Sci 2022; 26:189-190. PMID: 35090837; DOI: 10.1016/j.tics.2022.01.002.
Abstract
New results from Popham et al. generate 'semantic maps' from spoken narratives and movies that appear remarkably aligned near visual cortex. We consider whether such findings are consistent with the hub-and-spokes view of semantic representation or whether they require a rethinking of the cortical knowledge system.
Affiliation(s)
- Timothy T Rogers
- University of Wisconsin Madison, Department of Psychology, 1202 W Johnson Street, Madison, WI, USA
- Matthew A Lambon Ralph
- MRC Cognition and Brain Sciences Unit, Cambridge University, 15 Chaucer Road, Cambridge, UK
19
Abstract
Categorization is the basis of thinking and reasoning. Through the analysis of infants’ gaze, we describe the trajectory through which visual object representations in infancy incrementally match categorical object representations as mapped onto adults’ visual cortex. Using a methodological approach that allows for a comparison of findings obtained with behavioral and brain measures in infants and adults, we identify the transition from visual exploration guided by perceptual salience to an organization of objects by categories, which begins with the animate–inanimate distinction in the first months of life and continues with a spurt of biologically relevant categories (human bodies, nonhuman bodies, nonhuman faces, small natural objects) through the second year of life.
Humans make sense of the world by organizing things into categories. When and how does this process begin? We investigated whether real-world object categories that spontaneously emerge in the first months of life match categorical representations of objects in the human visual cortex. Using eye tracking, we measured the differential looking time of 4-, 10-, and 19-mo-olds as they looked at pairs of pictures belonging to eight animate or inanimate categories (human/nonhuman, faces/bodies, real-world size big/small, natural/artificial). Taking infants’ looking times as a measure of similarity, for each age group, we defined a representational space where each object was defined in relation to others of the same or of a different category. This space was compared with hypothesis-based and functional MRI-based models of visual object categorization in the adults’ visual cortex. Analyses across different age groups showed that, as infants grow older, their looking behavior matches neural representations in ever-larger portions of the adult visual cortex, suggesting progressive recruitment and integration of more and more feature spaces distributed over the visual cortex.
Moreover, the results characterize infants’ visual categorization as an incremental process with two milestones. Between 4 and 10 mo, visual exploration guided by saliency gives way to an organization according to the animate–inanimate distinction. Between 10 and 19 mo, a category spurt leads toward a mature organization. We propose that these changes underlie the coupling between seeing and thinking in the developing mind.
20
Cox JA, Cox TW, Aimola Davies AM. Are animates special? Exploring the effects of selective attention and animacy on visual statistical learning. Q J Exp Psychol (Hove) 2022; 75:1746-1762. PMID: 35001729; DOI: 10.1177/17470218221074686.
Abstract
Our visual system is built to extract regularities in how objects within our visual environment appear in relation to each other across time and space ('visual statistical learning'). Existing research indicates that visual statistical learning is modulated by selective attention. Our attentional system prioritises information that enables behaviour; for example, animates are prioritised over inanimates (the 'animacy advantage'). The present study examined the effects of selective attention and animacy on visual statistical learning in young adults (N = 284). We tested visual statistical learning of attended and unattended information across four animacy conditions: (i) living things that can self-initiate movement (animals); (ii) living things that cannot self-initiate movement (fruits and vegetables); (iii) non-living things that can generate movement (vehicles); and (iv) non-living things that cannot generate movement (tools and kitchen utensils). We implemented a four-point confidence-rating scale as an assessment of participants' awareness of the regularities in the visual statistical learning task. There were four key findings. First, selective attention plays a critical role by modulating visual statistical learning. Second, animacy does not play a special role in visual statistical learning. Third, visual statistical learning of attended information cannot be exclusively accounted for by unconscious knowledge. Fourth, performance on the visual statistical learning task is associated with the proportion of stimuli that were named or labelled. Our findings support the notion that visual statistical learning is a powerful mechanism by which our visual system resolves an abundance of sensory input over time.
Affiliation(s)
- Jolene Alexa Cox
- Research School of Psychology, The Australian National University 2219
21
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. Handb Clin Neurol 2022; 187:221-244. PMID: 35964974; DOI: 10.1016/b978-0-12-823493-8.00028-6.
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex-the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
22
OUP accepted manuscript. Cereb Cortex 2022; 32:4913-4933. DOI: 10.1093/cercor/bhab524.
23
Groen IIA, Dekker TM, Knapen T, Silson EH. Visuospatial coding as ubiquitous scaffolding for human cognition. Trends Cogn Sci 2021; 26:81-96. PMID: 34799253; DOI: 10.1016/j.tics.2021.10.011.
Abstract
For more than 100 years we have known that the visual field is mapped onto the surface of visual cortex, imposing an inherently spatial reference frame on visual information processing. Recent studies highlight visuospatial coding not only throughout visual cortex, but also brain areas not typically considered visual. Such widespread access to visuospatial coding raises important questions about its role in wider cognitive functioning. Here, we synthesise these recent developments and propose that visuospatial coding scaffolds human cognition by providing a reference frame through which neural computations interface with environmental statistics and task demands via perception-action loops.
Affiliation(s)
- Iris I A Groen
- Institute for Informatics, University of Amsterdam, Amsterdam, The Netherlands
- Tessa M Dekker
- Institute of Ophthalmology, University College London, London, UK
- Tomas Knapen
- Behavioral and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Spinoza Centre for NeuroImaging, Royal Dutch Academy of Sciences, Amsterdam, The Netherlands
- Edward H Silson
- Department of Psychology, School of Philosophy, Psychology & Language Sciences, University of Edinburgh, Edinburgh, UK
24
Rogers TT, Cox CR, Lu Q, Shimotake A, Kikuchi T, Kunieda T, Miyamoto S, Takahashi R, Ikeda A, Matsumoto R, Lambon Ralph MA. Evidence for a deep, distributed and dynamic code for animacy in human ventral anterior temporal cortex. eLife 2021; 10:e66276. PMID: 34704935; PMCID: PMC8550752; DOI: 10.7554/elife.66276.
Abstract
How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: information about the animacy of a depicted stimulus is distributed across ventral temporal cortex in a dynamic code possessing feature-like elements posteriorly but with elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal ‘hub’ in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.
Affiliation(s)
- Timothy T Rogers
- Department of Psychology, University of Wisconsin-Madison, Madison, United States
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, United States
- Qihong Lu
- Department of Psychology, Princeton University, Princeton, United States
- Akihiro Shimotake
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Takayuki Kikuchi
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Takeharu Kunieda
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Neurosurgery, Ehime University Graduate School of Medicine, Ehime, Japan
- Susumu Miyamoto
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Ryosuke Takahashi
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akio Ikeda
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Riki Matsumoto
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto, Japan; Division of Neurology, Kobe University Graduate School of Medicine, Kusunoki-cho, Kobe, Japan
- Matthew A Lambon Ralph
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
25
Derderian KD, Zhou X, Chen L. Category-specific activations depend on imaging mode, task demand, and stimuli modality: An ALE meta-analysis. Neuropsychologia 2021; 161:108002. PMID: 34450136; DOI: 10.1016/j.neuropsychologia.2021.108002.
Abstract
The cortical organization of the semantic network has been examined extensively in neuropsychological and neuroimaging studies; however, after decades of research, several issues remain controversial. A comprehensive and systematic investigation is needed to characterize the consistent patterns of category-specific activations as well as to examine factors that contribute to the varying findings across numerous neuroimaging studies. In this study, we reviewed 113 published papers that reported category-specific activations for living or nonliving concepts from the past two decades. Using the Activation Likelihood Estimate (ALE) method, we characterized the brain regions associated with living and nonliving concepts and revealed how the observed patterns were heavily influenced by methodological factors including imaging mode, task demand, and stimuli modality. Our findings provided the most comprehensive summary of category-specific activations for living and nonliving concepts and critically revealed that these activation patterns are highly contextually dependent. This work advanced our knowledge about the organization of the cortical semantic network and provided important insights into theoretical accounts and future research directions.
Affiliation(s)
- Xiaojue Zhou
- Department of Cognitive Sciences, University of California at Irvine, United States
- Lang Chen
- Neuroscience Program, Santa Clara University, United States; Department of Psychology, Santa Clara University, United States
26
Seijdel N, Scholte HS, de Haan EHF. Visual features drive the category-specific impairments on categorization tasks in a patient with object agnosia. Neuropsychologia 2021; 161:108017. PMID: 34487736; DOI: 10.1016/j.neuropsychologia.2021.108017.
Abstract
Object and scene recognition both require mapping of incoming sensory information to existing conceptual knowledge about the world. A notable finding in brain-damaged patients is that they may show differentially impaired performance for specific categories, such as for "living exemplars". While numerous patients with category-specific impairments have been reported, the explanations for these deficits remain controversial. In the current study, we investigate the ability of a brain injured patient with a well-established category-specific impairment of semantic memory to perform two categorization experiments: 'natural' vs. 'manmade' scenes (experiment 1) and objects (experiment 2). Our findings show that the pattern of categorical impairment does not respect the natural versus manmade distinction. This suggests that the impairments may be better explained by differences in visual features, rather than by category membership. Using Deep Convolutional Neural Networks (DCNNs) as 'artificial animal models' we further explored this idea. Results indicated that DCNNs with 'lesions' in higher order layers showed similar response patterns, with decreased relative performance for manmade scenes (experiment 1) and natural objects (experiment 2), even though they have no semantic category knowledge, apart from a mapping between pictures and labels. Collectively, these results suggest that the direction of category-effects to a large extent depends, at least in MS' case, on the degree of perceptual differentiation called for, and not semantic knowledge.
Affiliation(s)
- Noor Seijdel
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands
- H Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands
- Edward H F de Haan
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands
27
Perceptual and Semantic Representations at Encoding Contribute to True and False Recognition of Objects. J Neurosci 2021; 41:8375-8389. PMID: 34413205; DOI: 10.1523/jneurosci.0677-21.2021.
Abstract
When encoding new episodic memories, visual and semantic processing is proposed to make distinct contributions to accurate memory and memory distortions. Here, we used fMRI and preregistered representational similarity analysis to uncover the representations that predict true and false recognition of unfamiliar objects. Two semantic models captured coarse-grained taxonomic categories and specific object features, respectively, while two perceptual models embodied low-level visual properties. Twenty-eight female and male participants encoded images of objects during fMRI scanning, and later had to discriminate studied objects from similar lures and novel objects in a recognition memory test. Both perceptual and semantic models predicted true memory. When studied objects were later identified correctly, neural patterns corresponded to low-level visual representations of these object images in the early visual cortex, lingual, and fusiform gyri. In a similar fashion, alignment of neural patterns with fine-grained semantic feature representations in the fusiform gyrus also predicted true recognition. However, emphasis on coarser taxonomic representations predicted forgetting more anteriorly in the anterior ventral temporal cortex, left inferior frontal gyrus and, in an exploratory analysis, left perirhinal cortex. In contrast, false recognition of similar lure objects was associated with weaker visual analysis posteriorly in early visual and left occipitotemporal cortex. The results implicate multiple perceptual and semantic representations in successful memory encoding and suggest that fine-grained semantic as well as visual analysis contributes to accurate later recognition, while processing visual image detail is critical for avoiding false recognition errors.
SIGNIFICANCE STATEMENT: People are able to store detailed memories of many similar objects. We offer new insights into the encoding of these specific memories by combining fMRI with explicit models of how image properties and object knowledge are represented in the brain. When people processed fine-grained visual properties in occipital and posterior temporal cortex, they were more likely to recognize the objects later and less likely to falsely recognize similar objects. In contrast, while object-specific feature representations in fusiform gyrus predicted accurate memory, coarse-grained categorical representations in frontal and temporal regions predicted forgetting. The data provide the first direct tests of theoretical assumptions about encoding true and false memories, suggesting that semantic representations contribute to specific memories as well as errors.
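Representational similarity analysis, the method at the heart of this study, compares the pairwise dissimilarity structure of model features with that of neural patterns. Below is a minimal illustrative NumPy sketch with random stand-in data; the array names and sizes are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle, the vector usually compared in RSA."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

n_stimuli, n_voxels, n_features = 20, 100, 50
neural = rng.normal(size=(n_stimuli, n_voxels))    # e.g., ROI voxel patterns
model = rng.normal(size=(n_stimuli, n_features))   # e.g., semantic feature vectors

# Spearman correlation between the two RDMs' upper triangles
a = upper(rdm(neural))
b = upper(rdm(model))
ra = np.argsort(np.argsort(a)).astype(float)  # ranks (no ties with continuous data)
rb = np.argsort(np.argsort(b)).astype(float)
rsa_score = np.corrcoef(ra, rb)[0, 1]
```

A positive `rsa_score` would indicate that stimuli the model treats as similar also evoke similar neural patterns, which is the sense in which a model "predicts" memory-relevant representations here.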
Collapse
|
28
|
Canessa E, Chaigneau SE, Moreno S. Language Processing Differences Between Blind and Sighted Individuals and the Abstract Versus Concrete Concept Difference. Cogn Sci 2021; 45:e13044. [PMID: 34606124 DOI: 10.1111/cogs.13044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 08/18/2021] [Accepted: 08/22/2021] [Indexed: 11/29/2022]
Abstract
In the property listing task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, "barks" and "is a pet" may be produced). In conceptual property norming (CPN) studies, participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus sighted population. When we apply our mathematical model to a large CPN dataset reporting properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Though we also find that blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-à-vis theories attempting to characterize abstract concepts.
Collapse
Affiliation(s)
- Enrique Canessa
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez; Faculty of Engineering and Science, Universidad Adolfo Ibáñez
| | - Sergio E Chaigneau
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez; Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez
| | | |
Collapse
|
29
|
Henderson SK, Dev SI, Ezzo R, Quimby M, Wong B, Brickhouse M, Hochberg D, Touroutoglou A, Dickerson BC, Cordella C, Collins JA. A category-selective semantic memory deficit for animate objects in semantic variant primary progressive aphasia. Brain Commun 2021; 3:fcab210. [PMID: 34622208 PMCID: PMC8493104 DOI: 10.1093/braincomms/fcab210] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 07/16/2021] [Accepted: 07/26/2021] [Indexed: 11/13/2022] Open
Abstract
Data are mixed on whether patients with semantic variant primary progressive aphasia exhibit a category-selective semantic deficit for animate objects. Moreover, there is little consensus regarding the neural substrates of this category-selective semantic deficit, though prior literature has suggested that the perirhinal cortex and the lateral posterior fusiform gyrus may support semantic memory functions important for processing animate objects. In this study, we investigated whether patients with semantic variant primary progressive aphasia exhibited a category-selective semantic deficit for animate objects in a word-picture matching task, controlling for psycholinguistic features of the stimuli, including frequency, familiarity, typicality and age of acquisition. We investigated the neural bases of this category selectivity by examining its relationship with cortical atrophy in two primary regions of interest: bilateral perirhinal cortex and lateral posterior fusiform gyri. We analysed data from 20 patients with semantic variant primary progressive aphasia (mean age = 64 years, S.D. = 6.94). For each participant, we calculated an animacy index score to denote the magnitude of the category-selective semantic deficit for animate objects. Multivariate regression analysis revealed a main effect of animacy (β = 0.52, t = 4.03, P < 0.001) even after including all psycholinguistic variables in the model, such that animate objects were less likely to be identified correctly relative to inanimate objects. Inspection of each individual patient's data indicated the presence of a disproportionate impairment for animate objects in most patients. A linear regression analysis revealed a relationship between the right perirhinal cortex thickness and animacy index scores (β = -0.57, t = -2.74, P = 0.015) such that patients who were more disproportionately impaired for animate relative to inanimate objects exhibited thinner right perirhinal cortex.
A vertex-wise general linear model analysis restricted to the temporal lobes revealed additional associations between positive animacy index scores (i.e. a disproportionately poorer performance on animate objects) and cortical atrophy in the right perirhinal and entorhinal cortex, superior, middle, and inferior temporal gyri, and the anterior fusiform gyrus, as well as the left anterior fusiform gyrus. Taken together, our results indicate that a category-selective semantic deficit for animate objects is a characteristic feature of semantic variant primary progressive aphasia that is detectable in most individuals. Our imaging findings provide further support for the role of the right perirhinal cortex and other temporal lobe regions in the semantic processing of animate objects.
Collapse
Affiliation(s)
- Shalom K Henderson
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Sheena I Dev
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Rania Ezzo
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Megan Quimby
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Bonnie Wong
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Michael Brickhouse
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Daisy Hochberg
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Alexandra Touroutoglou
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | - Bradford C Dickerson
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | - Claire Cordella
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Jessica A Collins
- Frontotemporal Disorders Unit and Alzheimer’s Disease Research Center, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| |
Collapse
|
30
|
Arcaro MJ, Livingstone MS. On the relationship between maps and domains in inferotemporal cortex. Nat Rev Neurosci 2021; 22:573-583. [PMID: 34345018 PMCID: PMC8865285 DOI: 10.1038/s41583-021-00490-4] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/24/2021] [Indexed: 02/07/2023]
Abstract
How does the brain encode information about the environment? Decades of research have led to the pervasive notion that the object-processing pathway in primate cortex consists of multiple areas that are each specialized to process different object categories (such as faces, bodies, hands, non-face objects and scenes). The anatomical consistency and modularity of these regions have been interpreted as evidence that these regions are innately specialized. Here, we propose that ventral-stream modules do not represent clusters of circuits that each evolved to process some specific object category particularly important for survival, but instead reflect the effects of experience on a domain-general architecture that evolved to be able to adapt, within a lifetime, to its particular environment. Furthermore, we propose that the mechanisms underlying the development of domains are both evolutionarily old and universal across cortex. Topographic maps are fundamental, governing the development of specializations across systems and providing a framework for brain organization.
Collapse
|
31
|
Rabini G, Ubaldi S, Fairhall SL. Combining concepts across categorical domains: a linking role of the precuneus. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2021; 2:354-371. [PMID: 34595480 PMCID: PMC7611750 DOI: 10.1162/nol_a_00039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The human capacity for semantic knowledge entails not only the representation of single concepts but the capacity to combine these concepts into the increasingly complex ideas that underlie human thought. This process involves not only the combination of concepts from within the same semantic category but frequently the conceptual combination across semantic domains. In this fMRI study (N=24) we investigate the cortical mechanisms underlying our ability to combine concepts across different semantic domains. Using five different semantic domains (People, Places, Food, Objects and Animals), we present sentences depicting concepts drawn from a single semantic domain as well as sentences that combine concepts from two of these domains. Contrasting single-category and combined-category sentences reveals that the precuneus is more active when concepts from different domains have to be combined. At the same time, we observe that distributed category selectivity representations persist when higher-order meaning involves the combination of categories and that this category-selective response is captured by the combination of the single categories composing the sentence. Collectively, these results suggest that the combination of concepts across different semantic domains is mediated by the precuneus, which functions to link together category-selective representations distributed across the cortex.
Collapse
Affiliation(s)
- Giuseppe Rabini
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
| | - Silvia Ubaldi
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
| | | |
Collapse
|
32
|
The contribution of object size, manipulability, and stability on neural responses to inanimate objects. Neuroimage 2021; 237:118098. [PMID: 33940141 DOI: 10.1016/j.neuroimage.2021.118098] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 04/09/2021] [Accepted: 04/24/2021] [Indexed: 11/20/2022] Open
Abstract
In human occipitotemporal cortex, brain responses to depicted inanimate objects have a large-scale organization by real-world object size. Critically, the size of objects in the world is systematically related to behaviorally-relevant properties: small objects are often grasped and manipulated (e.g., forks), while large objects tend to be less motor-relevant (e.g., tables), though this relationship does not always have to be true (e.g., picture frames and wheelbarrows). To determine how these two dimensions interact, we measured brain activity with functional magnetic resonance imaging while participants viewed a stimulus set of small and large objects with either low or high motor-relevance. The results revealed that the size organization was evident for objects with both low and high motor-relevance; further, a motor-relevance map was also evident across both large and small objects. Targeted contrasts revealed that typical combinations (small motor-relevant vs. large non-motor-relevant) yielded more robust topographies than the atypical covariance contrast (small non-motor-relevant vs. large motor-relevant). In subsequent exploratory analyses, a factor analysis revealed that the construct of motor-relevance was better explained by two underlying factors: one more related to manipulability, and the other to whether an object moves or is stable. The factor related to manipulability better explained responses in lateral small-object preferring regions, while the factor related to object stability (lack of movement) better explained responses in ventromedial large-object preferring regions. Taken together, these results reveal that the structure of neural responses to objects of different sizes further reflects behavior-relevant properties of manipulability and stability, contributing to a deeper understanding of some of the factors that shape the large-scale organization of object representation in high-level visual cortex.
Collapse
|
33
|
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874 PMCID: PMC7941256 DOI: 10.1093/texcom/tgab002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 12/31/2020] [Accepted: 01/06/2021] [Indexed: 01/23/2023] Open
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
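Activation likelihood estimation (ALE), the method behind these meta-analytic maps, models each reported peak coordinate as a Gaussian probability distribution and combines studies as a probabilistic union. Below is a deliberately simplified 1-D NumPy sketch; real ALE operates on 3-D modeled-activation maps with empirically derived, per-study kernel widths, so the grid, foci, and FWHM here are illustrative assumptions:

```python
import numpy as np

# 1-D stand-in for voxel coordinates (mm)
grid = np.arange(0, 100.0)
foci = [30.0, 32.0, 70.0]   # hypothetical reported peak coordinates from studies
fwhm = 10.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Each focus contributes a Gaussian "modeled activation" map...
ma = np.array([np.exp(-((grid - f) ** 2) / (2 * sigma ** 2)) for f in foci])

# ...and the ALE value at each voxel is the probabilistic union across studies
ale = 1.0 - np.prod(1.0 - ma, axis=0)

peak = grid[np.argmax(ale)]
```

Voxels where several studies report nearby foci (here, around 30-32) accumulate high ALE values, which is what the resulting "hub" maps summarize.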
Collapse
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| |
Collapse
|
34
|
Davis SW, Geib BR, Wing EA, Wang WC, Hovhannisyan M, Monge ZA, Cabeza R. Visual and Semantic Representations Predict Subsequent Memory in Perceptual and Conceptual Memory Tests. Cereb Cortex 2021; 31:974-992. [PMID: 32935833 PMCID: PMC8485078 DOI: 10.1093/cercor/bhaa269] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 07/26/2020] [Accepted: 08/21/2020] [Indexed: 12/18/2022] Open
Abstract
It is generally assumed that the encoding of a single event generates multiple memory representations, which contribute differently to subsequent episodic memory. We used functional magnetic resonance imaging (fMRI) and representational similarity analysis to examine how visual and semantic representations predicted subsequent memory for single item encoding (e.g., seeing an orange). Three levels of visual representations corresponding to early, middle, and late visual processing stages were based on a deep neural network. Three levels of semantic representations were based on normative observed ("is round"), taxonomic ("is a fruit"), and encyclopedic features ("is sweet"). We identified brain regions where each representation type predicted later perceptual memory, conceptual memory, or both (general memory). Participants encoded objects during fMRI, and then completed both a word-based conceptual and picture-based perceptual memory test. Visual representations predicted subsequent perceptual memory in visual cortices, but also facilitated conceptual and general memory in more anterior regions. Semantic representations, in turn, predicted perceptual memory in visual cortex, conceptual memory in the perirhinal and inferior prefrontal cortex, and general memory in the angular gyrus. These results suggest that the contribution of visual and semantic representations to subsequent memory effects depends on a complex interaction between representation, test type, and storage location.
Collapse
Affiliation(s)
- Simon W Davis
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
- Department of Neurology, Duke University School of Medicine, Durham, NC 27708, USA
| | - Benjamin R Geib
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
| | - Erik A Wing
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
| | - Wei-Chun Wang
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
| | - Mariam Hovhannisyan
- Department of Neurology, Duke University School of Medicine, Durham, NC 27708, USA
| | - Zachary A Monge
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
| | - Roberto Cabeza
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
| |
Collapse
|
35
|
Rosenke M, van Hoof R, van den Hurk J, Grill-Spector K, Goebel R. A Probabilistic Functional Atlas of Human Occipito-Temporal Visual Cortex. Cereb Cortex 2021; 31:603-619. [PMID: 32968767 PMCID: PMC7727347 DOI: 10.1093/cercor/bhaa246] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Revised: 07/01/2020] [Accepted: 07/30/2020] [Indexed: 11/12/2022] Open
Abstract
Human visual cortex contains many retinotopic and category-specific regions. These brain regions have been the focus of a large body of functional magnetic resonance imaging research, significantly expanding our understanding of visual processing. As studying these regions requires accurate localization of their cortical location, researchers perform functional localizer scans to identify these regions in each individual. However, it is not always possible to conduct these localizer scans. Here, we developed and validated a functional region of interest (ROI) atlas of early visual and category-selective regions in human ventral and lateral occipito-temporal cortex. Results show that for the majority of functionally defined ROIs, cortex-based alignment results in lower between-subject variability compared to nonlinear volumetric alignment. Furthermore, we demonstrate that 1) the atlas accurately predicts the location of an independent dataset of ventral temporal cortex ROIs and of other atlases of place selectivity, motion selectivity, and retinotopy, and 2) the majority of voxels within our atlas respond mostly to the labeled category in a left-out-subject cross-validation, demonstrating the utility of this atlas. The functional atlas is publicly available (download.brainvoyager.com/data/visfAtlas.zip) and can help identify the location of these regions in healthy subjects as well as populations (e.g., blind people, infants) in which functional localizers cannot be run.
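A probabilistic functional atlas of the kind described here is, at its core, an average of individual-subject ROI masks in a shared space, thresholded into a group ROI. A toy 1-D NumPy sketch with simulated subject masks (subject counts, ROI positions, and the 50% threshold are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate binary ROI masks for several subjects on a 1-D "cortical" grid;
# the ROI's position and extent jitter across subjects, as real ROIs do.
n_subjects, n_vertices = 12, 200
masks = np.zeros((n_subjects, n_vertices), dtype=bool)
for s in range(n_subjects):
    center = 100 + rng.integers(-8, 9)
    width = 15 + rng.integers(-3, 4)
    masks[s, center - width:center + width] = True

# Probabilistic atlas: fraction of subjects whose ROI covers each vertex
prob_map = masks.mean(axis=0)

# Group ROI: vertices present in at least half of the subjects
atlas_roi = prob_map >= 0.5
```

The paper's key alignment comparison amounts to asking which registration method (cortex-based vs. volumetric) makes the individual `masks` overlap most tightly before this averaging step.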
Collapse
Affiliation(s)
- Mona Rosenke
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
| | - Rick van Hoof
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, 6229 EV, The Netherlands
| | - Job van den Hurk
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, 6229 EV, The Netherlands
- Scannexus MRI Center, Maastricht, 6229 EV, The Netherlands
| | - Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA 94305, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, 94305 CA, USA
| | - Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, 6229 EV, The Netherlands
| |
Collapse
|
36
|
Kozunov VV, West TO, Nikolaeva AY, Stroganova TA, Friston KJ. Object recognition is enabled by an experience-dependent appraisal of visual features in the brain's value system. Neuroimage 2020; 221:117143. [PMID: 32650054 PMCID: PMC7762843 DOI: 10.1016/j.neuroimage.2020.117143] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 06/13/2020] [Accepted: 07/02/2020] [Indexed: 01/05/2023] Open
Abstract
This paper addresses perceptual synthesis by comparing responses evoked by visual stimuli before and after they are recognized, depending on prior exposure. Using magnetoencephalography, we analyzed distributed patterns of neuronal activity - evoked by Mooney figures - before and after they were recognized as meaningful objects. Recognition induced changes were first seen at 100-120 ms, for both faces and tools. These early effects - in right inferior and middle occipital regions - were characterized by an increase in power in the absence of any changes in spatial patterns of activity. Within a later 210-230 ms window, a quite different type of recognition effect appeared. Regions of the brain's value system (insula, entorhinal cortex and cingulate of the right hemisphere for faces and right orbitofrontal cortex for tools) evinced a reorganization of their neuronal activity without an overall power increase in the region. Finally, we found that during the perception of disambiguated face stimuli, a face-specific response in the right fusiform gyrus emerged at 240-290 ms, with a much greater latency than the well-known N170m component, and, crucially, followed the recognition effect in the value system regions. These results can clarify one of the most intriguing issues of perceptual synthesis, namely, how a limited set of high-level predictions, which is required to reduce the uncertainty when resolving the ill-posed inverse problem of perception, can be available before category-specific processing in visual cortex. We suggest that a subset of local spatial features serves as partial cues for a fast re-activation of object-specific appraisal by the value system. The ensuing top-down feedback from value system to visual cortex, in particular, the fusiform gyrus enables high levels of processing to form category-specific predictions. 
This descending influence of the value system was more prominent for faces than for tools, a fact that reflects the different dependence of these categories on value-related information.
Collapse
Affiliation(s)
- Vladimir V Kozunov
- MEG Centre, Moscow State University of Psychology and Education, Moscow, 29 Sretenka, Russia.
| | - Timothy O West
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK; Wellcome Trust Centre for Neuroimaging, 12 Queen Square, University College London, London, WC1N 3AR, UK.
| | - Anastasia Y Nikolaeva
- MEG Centre, Moscow State University of Psychology and Education, Moscow, 29 Sretenka, Russia.
| | - Tatiana A Stroganova
- MEG Centre, Moscow State University of Psychology and Education, Moscow, 29 Sretenka, Russia.
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, 12 Queen Square, University College London, London, WC1N 3AR, UK.
| |
Collapse
|
37
|
Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. [PMID: 32839334 PMCID: PMC7502773 DOI: 10.1073/pnas.2004607117] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Collapse
Affiliation(s)
- N Apurva Ratan Murty
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Santani Teng
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - David Beeler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Anna Mynick
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| |
Collapse
|
38
|
Taniguchi K, Tanabe-Ishibashi A, Itakura S. The Categorization of Objects With Uniform Texture at Superordinate and Living/Non-living Levels in Infants: An Exploratory Study. Front Psychol 2020; 11:2009. [PMID: 32849164 PMCID: PMC7424027 DOI: 10.3389/fpsyg.2020.02009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 07/20/2020] [Indexed: 11/13/2022] Open
Affiliation(s)
- Kosuke Taniguchi
- Center for Baby Science, Doshisha University, Kyoto, Japan
- *Correspondence: Kosuke Taniguchi,
| | | | - Shoji Itakura
- Center for Baby Science, Doshisha University, Kyoto, Japan
| |
|
39
|
Vetter P, Bola Ł, Reich L, Bennett M, Muckli L, Amedi A. Decoding Natural Sounds in Early "Visual" Cortex of Congenitally Blind Individuals. Curr Biol 2020; 30:3039-3044.e2. [PMID: 32559449 PMCID: PMC7416107 DOI: 10.1016/j.cub.2020.05.071] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 07/05/2019] [Revised: 02/12/2020] [Accepted: 05/21/2020] [Indexed: 11/28/2022]
Abstract
Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent from visual experience. Here, we show that we can decode natural sounds from activity patterns in early "visual" areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with an increasing decoding accuracy gradient from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3].

Highlights:
- Sounds can be decoded from early visual cortex activity in blind individuals
- Sound decoding accuracy increases from foveal to peripheral early visual regions
- Visual imagery is not necessary for auditory feedback to early visual cortex
- Early visual cortex organization by eccentricity develops without visual experience
Affiliation(s)
- Petra Vetter
- Department of Psychology, Royal Holloway, University of London, Egham Hill, Egham, Surrey TW20 0EX, UK.
| | - Łukasz Bola
- Institute of Psychology, Jagiellonian University, ul. Ingardena 6, 30-060 Kraków, Poland; Department of Psychology, Harvard University, William James Hall, 33 Kirkland Street, Cambridge, MA 02138, USA
| | - Lior Reich
- Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Ein Kerem, PO Box 12271, Jerusalem 91120, Israel
| | - Matthew Bennett
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Lars Muckli
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Amir Amedi
- Department of Medical Neurobiology, Faculty of Medicine, Hebrew University Jerusalem, Ein Kerem, PO Box 12271, Jerusalem 91120, Israel; The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Reichman University, PO Box 167, Herzliya 461010, Israel
| |
|
40
|
What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020; 413:116805. [PMID: 32259708 DOI: 10.1016/j.jns.2020.116805] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/05/2019] [Revised: 03/14/2020] [Accepted: 03/24/2020] [Indexed: 11/24/2022]
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of the dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions lying within either the ventral or dorsal auditory stream of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both the sighted and EB groups, in which the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. These findings indicate that visual experience may not influence this functional dissociation and that the human cortex may be organized according to task-specific, modality-independent principles. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating an interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings delineate the distinct cortical networks that abstractly encode sound identification and localization, and confirm the existence of interaction between the streams from a multivariate perspective. Furthermore, the results suggest that visual experience might not impact the functional specialization of auditory regions.
|
41
|
Popp M, Trumpp NM, Sim EJ, Kiefer M. Brain Activation During Conceptual Processing of Action and Sound Verbs. Adv Cogn Psychol 2020; 15:236-255. [PMID: 32494311 PMCID: PMC7251527 DOI: 10.5709/acp-0272-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Indexed: 11/25/2022] Open
Abstract
Grounded cognition approaches to conceptual representations postulate a close link between conceptual knowledge and the sensorimotor brain systems. The present fMRI study tested whether a feature-specific representation of concepts, as previously demonstrated for nouns, can also be found for action- and sound-related verbs. Participants were presented with action- and sound-related verbs along with pseudoverbs while performing a lexical decision task. Sound-related verbs activated auditory areas in the temporal cortex, whereas action-related verbs activated brain regions in the superior frontal gyrus and the cerebellum, albeit only at a more liberal threshold. This differential brain activation during conceptual verb processing partially overlapped with or was adjacent to brain regions activated during the functional localizers probing sound perception or action execution. Activity in brain areas involved in the processing of action information was parametrically modulated by ratings of action relevance. Comparisons of action- and sound-related verbs with pseudoverbs revealed activation for both verb categories in auditory and motor areas. In contrast to proposals of strong grounded cognition approaches, our study did not demonstrate a considerable overlap of activations for action- and sound-related verbs and for the corresponding functional localizer tasks. However, in line with weaker variants of grounded cognition theories, the differential activation pattern for action- and sound-related verbs was near corresponding sensorimotor brain regions depending on conceptual feature relevance. Possibly, action-sound coupling resulted in a mutual activation of the motor and the auditory system for both action- and sound-related verbs, thereby reducing the effect sizes for the differential contrasts.
Affiliation(s)
- Margot Popp
- Ulm University, Department of Psychiatry, Ulm, Germany
| | | | - Eun-Jin Sim
- Ulm University, Department of Psychiatry, Ulm, Germany
| | - Markus Kiefer
- Ulm University, Department of Psychiatry, Ulm, Germany
| |
|
42
|
Li M, Xu Y, Luo X, Zeng J, Han Z. Linguistic experience acquisition for novel stimuli selectively activates the neural network of the visual word form area. Neuroimage 2020; 215:116838. [PMID: 32298792 DOI: 10.1016/j.neuroimage.2020.116838] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 02/10/2020] [Revised: 03/24/2020] [Accepted: 04/06/2020] [Indexed: 10/24/2022] Open
Abstract
The human ventral visual cortex is functionally organized into different domains that sensitively respond to different categories, such as words and objects. There is heated debate over what principle constrains the locations of those domains. Taking the visual word form area (VWFA) as an example, we tested whether the word preference in this area originates from bottom-up processes related to word shape (the shape hypothesis) or from top-down connectivity of higher-order language regions (the connectivity hypothesis). We trained subjects to associate identical, meaningless, non-word-like figures with high-level features of either words or objects. We found that the word-feature learning for the figures elicited a neural activation change in the VWFA, and learning performance effectively predicted the activation strength of this area after learning. Word-learning effects were also observed in other language areas (i.e., the left posterior superior temporal gyrus, postcentral gyrus, and supplementary motor area), with increased functional connectivity between the VWFA and the language regions. In contrast, object-feature learning was not associated with obvious activation changes in the language regions. These results indicate that high-level language features of stimuli can modulate the activation of the VWFA, providing supportive evidence for the connectivity hypothesis of word processing in the ventral occipitotemporal cortex.
Affiliation(s)
- Mingyang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
| | - Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, 38123, Italy; International School for Advanced Studies (SISSA), Trieste, 34136, Italy
| | - Xiangqi Luo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
| | - Jiahong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
| | - Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China.
| |
|
43
|
Taniguchi K, Kuraguchi K, Takano Y, Itakura S. Object Categorization Processing Differs According to Category Level: Comparing Visual Information Between the Basic and Superordinate Levels. Front Psychol 2020; 11:501. [PMID: 32269541 PMCID: PMC7109334 DOI: 10.3389/fpsyg.2020.00501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/16/2019] [Accepted: 03/02/2020] [Indexed: 11/30/2022] Open
Abstract
Object category levels comprise a crucial concept in the field of object recognition. Specifically, categorization performance differs according to the category level of the target object. This study involved experiments with two types of stimulus sequences (i.e., forward condition: presenting the target name before the line-drawing stimulus; and reverse condition: presenting the target name after the line-drawing stimulus) for both basic- and superordinate-level categorizations. Adult participants were assigned to each level and asked to judge whether briefly presented stimuli included the same object and target name. Here, we investigated how the category level altered the categorization process. We conducted path analyses using a multivariate multiple regression model, and set our variables to investigate whether the predictors affected the categorization process between two types of stimulus sequence. Dependent variables included the measures of performance (i.e., reaction time, accuracy) for each categorization task. The predictors included dimensions and shapes of the line-drawings, such as primary and local shape information, shape complexity, subject estimation, and other shape variables related to object recognition. Results showed that the categorization process differed according to shape properties between conditions only for basic-level categorizations. For the forward condition, the bottom-up processing of primary visual information depended on matches with stored representations for the basic-level category. For the reverse condition at the basic-level category, decisions depended on subjective ratings in terms of object-representation accessibility. Finally, superordinate-level decisions depended on higher levels of visual information in terms of complexity, regardless of the condition. Thus, the given category level altered the processing of visual information for object recognition in relation to shape properties. 
This indicates that decision processing for object recognition is flexible depending on the criteria of the processed objects (e.g., category levels).
Affiliation(s)
| | - Kana Kuraguchi
- Faculty of Psychology, Otemon Gakuin University, Osaka, Japan
| | - Yuji Takano
- Smart-Aging Research Center, Tohoku University, Miyagi, Japan
| | - Shoji Itakura
- Center for Baby Science, Doshisha University, Kyoto, Japan
| |
|
44
|
Behrmann M, Plaut DC. Hemispheric Organization for Visual Object Recognition: A Theoretical Account and Empirical Evidence. Perception 2020; 49:373-404. [PMID: 31980013 PMCID: PMC9944149 DOI: 10.1177/0301006619899049] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Indexed: 11/15/2022]
Abstract
Despite the similarity in structure, the hemispheres of the human brain have somewhat different functions. A traditional view of hemispheric organization asserts that there are independent and largely lateralized domain-specific regions in ventral occipitotemporal cortex (VOTC), specialized for the recognition of distinct classes of objects. Here, we offer an alternative account of the organization of the hemispheres, with a specific focus on face and word recognition. This alternative account relies on three computational principles: distributed representations and knowledge, cooperation and competition between representations, and topography and proximity. The crux is that visual recognition results from a network of regions with graded functional specialization that is distributed across both hemispheres. Specifically, the claim is that face recognition, which is acquired relatively early in life, is processed by VOTC regions in both hemispheres. Once literacy is acquired, word recognition, which is co-lateralized with language areas, primarily engages the left VOTC and, consequently, face recognition is primarily, albeit not exclusively, mediated by the right VOTC. We review psychological and neural evidence from a range of studies conducted with normal and brain-damaged adults and children and consider findings which challenge this account. Last, we offer suggestions for future investigations whose findings may further refine this account.
Affiliation(s)
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - David C. Plaut
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| |
|
45
|
Rogers TT. Neural networks as a critical level of description for cognitive neuroscience. Curr Opin Behav Sci 2020. [DOI: 10.1016/j.cobeha.2020.02.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/24/2022]
|
46
|
Connectivity at the origins of domain specificity in the cortical face and place networks. Proc Natl Acad Sci U S A 2020; 117:6163-6169. [PMID: 32123077 DOI: 10.1073/pnas.1911359117] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Indexed: 12/22/2022] Open
Abstract
It is well established that the adult brain contains a mosaic of domain-specific networks. But how do these domain-specific networks develop? Here we tested the hypothesis that the brain comes prewired with connections that precede the development of domain-specific function. Using resting-state fMRI in the youngest sample of newborn humans tested to date, we indeed found that cortical networks that will later develop strong face selectivity (including the "proto" occipital face area and fusiform face area) and scene selectivity (including the "proto" parahippocampal place area and retrosplenial complex) by adulthood, already show domain-specific patterns of functional connectivity as early as 27 d of age (beginning as early as 6 d of age). Furthermore, we asked how these networks are functionally connected to early visual cortex and found that the proto face network shows biased functional connectivity with foveal V1, while the proto scene network shows biased functional connectivity with peripheral V1. Given that faces are almost always experienced at the fovea, while scenes always extend across the entire periphery, these differential inputs may serve to facilitate domain-specific processing in each network after that function develops, or even guide the development of domain-specific function in each network in the first place. Taken together, these findings reveal domain-specific and eccentricity-biased connectivity in the earliest days of life, placing new constraints on our understanding of the origins of domain-specific cortical networks.
|
47
|
Genetic influence is linked to cortical morphology in category-selective areas of visual cortex. Nat Commun 2020; 11:709. [PMID: 32024844 PMCID: PMC7002610 DOI: 10.1038/s41467-020-14610-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 06/24/2019] [Accepted: 01/22/2020] [Indexed: 01/24/2023] Open
Abstract
Human visual cortex contains discrete areas that respond selectively to specific object categories such as faces, bodies, and places. A long-standing question is whether these areas are shaped by genetic or environmental factors. To address this question, here we analyzed functional MRI data from an unprecedented number (n = 424) of monozygotic (MZ) and dizygotic (DZ) twins. Category-selective maps were more identical in MZ than DZ twins. Within each category-selective area, distinct subregions showed significant genetic influence. Structural MRI analysis revealed that the 'genetic voxels' were predominantly located in regions with higher cortical curvature (gyral crowns in face areas and sulcal fundi in place areas). Moreover, we found that cortex was thicker and more myelinated in genetic voxels of face areas, while it was thinner and less myelinated in genetic voxels of place areas. This double dissociation suggests a differential development of face and place areas in cerebral cortex.

It remains unclear whether the functional organization of the visual cortex is shaped by genetic or environmental factors. Using fMRI in twins (n = 424), these authors show that activation patterns in category-selective areas are heritable, and that the genetic effects in these areas are linked to structural properties of cortical tissue.
|
48
|
Cecchetto C, Fischmeister FPS, Gorkiewicz S, Schuehly W, Bagga D, Parma V, Schöpf V. Human body odor increases familiarity for faces during encoding-retrieval task. Hum Brain Mapp 2020; 41:1904-1919. [PMID: 31904899 PMCID: PMC7268037 DOI: 10.1002/hbm.24920] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 06/04/2019] [Revised: 10/31/2019] [Accepted: 12/29/2019] [Indexed: 01/27/2023] Open
Abstract
Odors can increase memory performance when presented as context during both encoding and retrieval phases. Since information from different sensory modalities is integrated into a unified conceptual knowledge, we hypothesized that the social information from body odors and faces would be integrated during encoding, and that the integration of such social information would enhance retrieval more than when encoding occurs in the context of common odors. To examine this hypothesis and to further explore the underlying neural correlates of this behavior, we conducted a functional magnetic resonance imaging study in which participants performed an encoding-retrieval memory task for faces during the presentation of a common odor, a body odor, or clean air. At the behavioral level, results show that participants were less biased and faster in recognizing faces presented together with the body odor compared to the common odor. At the neural level, the encoding of faces in the body odor condition, compared to the common odor and clean air conditions, showed greater activation in areas related to associative memory (dorsolateral prefrontal cortex) and to odor perception and multisensory integration (orbitofrontal cortex). These results suggest that face and body odor information were integrated and, as a result, participants were faster in recognizing previously presented material.
Collapse
Affiliation(s)
- Cinzia Cecchetto
- Institute of Psychology, University of Graz, Graz, Austria; BioTechMed, Graz, Austria
| | | | | | | | - Deepika Bagga
- Institute of Psychology, University of Graz, Graz, Austria; BioTechMed, Graz, Austria
| | - Valentina Parma
- Department of Psychology, Temple University, Philadelphia, Pennsylvania
| | - Veronika Schöpf
- Institute of Psychology, University of Graz, Graz, Austria; BioTechMed, Graz, Austria; Computational Imaging Research Lab (CIR), Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
| |
|
49
|
Frankland SM, Greene JD. Concepts and Compositionality: In Search of the Brain's Language of Thought. Annu Rev Psychol 2020; 71:273-303. [DOI: 10.1146/annurev-psych-122216-011829] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Indexed: 11/09/2022]
Abstract
Imagine Genghis Khan, Aretha Franklin, and the Cleveland Cavaliers performing an opera on Maui. This silly sentence makes a serious point: As humans, we can flexibly generate and comprehend an unbounded number of complex ideas. Little is known, however, about how our brains accomplish this. Here we assemble clues from disparate areas of cognitive neuroscience, integrating recent research on language, memory, episodic simulation, and computational models of high-level cognition. Our review is framed by Fodor's classic language of thought hypothesis, according to which our minds employ an amodal, language-like system for combining and recombining simple concepts to form more complex thoughts. Here, we highlight emerging work on combinatorial processes in the brain and consider this work's relation to the language of thought. We review evidence for distinct, but complementary, contributions of map-like representations in subregions of the default mode network and sentence-like representations of conceptual relations in regions of the temporal and prefrontal cortex.
Affiliation(s)
- Steven M. Frankland
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey 08544, USA
| | - Joshua D. Greene
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA
| |
|
50
|
Ricciardi E, Bottari D, Ptito M, Röder B, Pietrini P. The sensory-deprived brain as a unique tool to understand brain development and function. Neurosci Biobehav Rev 2020; 108:78-82. [DOI: 10.1016/j.neubiorev.2019.10.017] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Indexed: 11/27/2022]
|