1. Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024; 60:3557-3571. [PMID: 38706370] [DOI: 10.1111/ejn.16356]
Abstract
Extensive research has shown that observers are able to efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles, synchrony and common fate, on the grouping of biological movements. In Experiment 1, we find that brain responses coupled to four point-light figures walking together are enhanced when they move in sync vs. out of sync, but only when they are presented upright. In contrast, we find no effect of movement direction (i.e., common fate). In Experiment 2, we rule out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
2. Hafri A, Bonner MF, Landau B, Firestone C. A Phone in a Basket Looks Like a Knife in a Cup: Role-Filler Independence in Visual Processing. Open Mind (Camb) 2024; 8:766-794. [PMID: 38957507] [PMCID: PMC11219067] [DOI: 10.1162/opmi_a_00146]
Abstract
When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations (e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup). Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not, even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and also controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.
Affiliation(s)
- Alon Hafri
- Department of Linguistics and Cognitive Science, University of Delaware
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Barbara Landau
- Department of Cognitive Science, Johns Hopkins University
- Chaz Firestone
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
3. Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. [PMID: 38124013] [PMCID: PMC10860595] [DOI: 10.1523/jneurosci.0250-23.2023]
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people, encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
4. Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. [PMID: 38989004] [PMCID: PMC7616164] [DOI: 10.1038/s44159-023-00254-0]
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
5. Li C, Ficco L, Trapp S, Rostalski SM, Korn L, Kovács G. The effect of context congruency on fMRI repetition suppression for objects. Neuropsychologia 2023; 188:108603. [PMID: 37270029] [DOI: 10.1016/j.neuropsychologia.2023.108603]
Abstract
The recognition of objects is strongly facilitated when they are presented in the context of other objects (Biederman, 1972). Such contexts facilitate perception and induce expectations of context-congruent objects (Trapp and Bar, 2015). The neural mechanisms underlying these facilitatory effects of context on object processing, however, are not yet fully understood. In the present study, we investigate how context-induced expectations affect subsequent object processing. We used functional magnetic resonance imaging and measured repetition suppression (RS) as a proxy for prediction error processing. Participants viewed pairs of alternating or repeated object images which were preceded by context-congruent, context-incongruent or neutral cues. We found stronger RS following congruent as compared to incongruent or neutral cues in the object-sensitive lateral occipital cortex. Interestingly, this stronger effect was driven by enhanced responses to alternating stimulus pairs in the congruent contexts, rather than by suppressed responses to repeated stimulus pairs, which emphasizes the contribution of surprise-related response enhancement to the context modulation of RS when expectations are violated. In addition, in the congruent condition, we discovered significant functional connectivity between object-responsive and frontal cortical regions, as well as between object-responsive regions and the fusiform gyrus. Our findings indicate that prediction errors, reflected in enhanced brain responses to violated contextual expectations, underlie the facilitating effect of context during object perception.
Affiliation(s)
- Chenglin Li
- School of Psychology, Zhejiang Normal University, China; Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Linda Ficco
- Department of General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany; Department of Linguistics and Cultural Evolution, International Max Planck Research School for the Science of Human History, Jena, Germany
- Sabrina Trapp
- Macromedia University of Applied Sciences, Munich, Germany
- Sophie-Marie Rostalski
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Lukas Korn
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
- Gyula Kovács
- Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Germany
6. Wiesmann SL, Võ MLH. Disentangling diagnostic object properties for human scene categorization. Sci Rep 2023; 13:5912. [PMID: 37041222] [PMCID: PMC10090043] [DOI: 10.1038/s41598-023-32385-y]
Abstract
It usually only takes a single glance to categorize our environment into different scene categories (e.g. a kitchen or a highway). Object information has been suggested to play a crucial role in this process, and some proposals even claim that the recognition of a single object can be sufficient to categorize the scene around it. Here, we tested this claim in four behavioural experiments by having participants categorize real-world scene photographs that were reduced to a single, cut-out object. We show that single objects can indeed be sufficient for correct scene categorization and that scene category information can be extracted within 50 ms of object presentation. Furthermore, we identified object frequency and specificity for the target scene category as the most important object properties for human scene categorization. Interestingly, despite the statistical definition of specificity and frequency, human ratings of these properties were better predictors of scene categorization behaviour than more objective statistics derived from databases of labelled real-world images. Taken together, our findings support a central role of object information during human scene categorization, showing that single objects can be indicative of a scene category if they are assumed to frequently and exclusively occur in a certain environment.
Affiliation(s)
- Sandro L Wiesmann
- Department of Psychology, Johann Wolfgang Goethe-Universität, Theodor-W.-Adorno-Platz 6, 60323 Frankfurt am Main, Germany
- Melissa L-H Võ
- Department of Psychology, Johann Wolfgang Goethe-Universität, Theodor-W.-Adorno-Platz 6, 60323 Frankfurt am Main, Germany
7. Aminoff EM, Durham T. Scene-selective brain regions respond to embedded objects of a scene. Cereb Cortex 2022; 33:5066-5074. [PMID: 36305640] [DOI: 10.1093/cercor/bhac399]
Abstract
Objects are fundamental to scene understanding. Scenes are defined by embedded objects and how we interact with them. Paradoxically, scene processing in the brain is typically discussed in contrast to object processing. Using the BOLD5000 dataset (Chang et al., 2019), we examined whether objects within a scene predicted the neural representation of scenes, as measured by functional magnetic resonance imaging in humans. Stimuli included 1,179 unique scenes across 18 semantic categories. The object composition of scenes was compared across scene exemplars in different semantic scene categories and, separately, in exemplars of the same scene category. Neural representations in scene- and object-preferring brain regions were significantly related to which objects were in a scene, with the effect at times stronger in the scene-preferring regions. The object model accounted for more variance when comparing scenes within the same semantic category than when comparing scenes from different categories. Here, we demonstrate that the function of scene-preferring regions includes the processing of objects. This suggests that visual processing regions may be better characterized by the processes engaged when interacting with a kind of stimulus, such as processing groups of objects in scenes or processing a single object in the foreground, rather than by the stimulus kind itself.
Affiliation(s)
- Elissa M Aminoff
- Department of Psychology, Fordham University, 226 Dealy Hall, 441 E. Fordham Rd, Bronx, NY 10458, United States
- Tess Durham
- Department of Psychology, Fordham University, 226 Dealy Hall, 441 E. Fordham Rd, Bronx, NY 10458, United States
8.
Abstract
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
Affiliation(s)
- Daniel Kaiser
- Justus-Liebig-Universität Gießen, Germany; Philipps-Universität Marburg, Germany; University of York, United Kingdom
- Radoslaw M Cichy
- Freie Universität Berlin, Germany; Humboldt-Universität zu Berlin, Germany; Bernstein Centre for Computational Neuroscience Berlin, Germany
9. Gronau N. To Grasp the World at a Glance: The Role of Attention in Visual and Semantic Associative Processing. J Imaging 2021; 7:191. [PMID: 34564117] [PMCID: PMC8470651] [DOI: 10.3390/jimaging7090191]
Abstract
Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world 'at a glance', the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from 'attended' to 'unattended' regions. Pairs of stimuli (objects, scenes, or a scene and an object) were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.
Affiliation(s)
- Nurit Gronau
- Department of Psychology and Department of Cognitive Science Studies, The Open University of Israel, Raanana 4353701, Israel
10. Çelik E, Keles U, Kiremitçi İ, Gallant JL, Çukur T. Cortical networks of dynamic scene category representation in the human brain. Cortex 2021; 143:127-147. [PMID: 34411847] [DOI: 10.1016/j.cortex.2021.07.008]
Abstract
Humans have an impressive ability to rapidly process global information in natural scenes to infer their category. Yet, it remains unclear whether and how scene categories observed dynamically in the natural world are represented in cerebral cortex beyond a few canonical scene-selective areas. To address this question, here we examined the representation of dynamic visual scenes by recording whole-brain blood oxygenation level-dependent (BOLD) responses while subjects viewed natural movies. We fit voxelwise encoding models to estimate tuning for scene categories that reflect statistical ensembles of objects and actions in the natural world. We find that this scene-category model explains a significant portion of the response variance broadly across cerebral cortex. Cluster analysis of scene-category tuning profiles across cortex reveals nine spatially segregated networks of brain regions consistently across subjects. These networks show heterogeneous tuning for a diverse set of dynamic scene categories related to navigation, human activity, social interaction, civilization, natural environment, non-human animals, motion-energy, and texture, suggesting that the organization of scene category representation is quite complex.
Affiliation(s)
- Emin Çelik
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Umit Keles
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Division of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
- İbrahim Kiremitçi
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Bioengineering, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, CA, USA
- Tolga Çukur
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey
11. Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770] [DOI: 10.1016/j.tics.2021.01.006]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen, revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA
12. Zacharia AA, Ahuja N, Kaur S, Sharma R. Frontal activation as a key for deciphering context congruity and valence during visual perception: An electrical neuroimaging study. Brain Cogn 2021; 150:105711. [PMID: 33774336] [DOI: 10.1016/j.bandc.2021.105711]
Abstract
The object-context associations and the valence are two important stimulus attributes that influence visual perception. The current study investigates the neural sources associated with schema-congruent and schema-incongruent object-context associations within positive, negative, and neutral valence during an intermittent binocular rivalry task with simultaneous high-density EEG recording. Cortical sources were calculated using the sLORETA algorithm in two time windows: 150 ms after stimulus onset (Stim+150) and 400 ms before response (Resp-400). No significant difference in source activity was found between congruent and incongruent associations in any of the valence categories in the Stim+150 window, indicating that immediately after stimulus presentation the basic visual processing remains the same for both. In the Resp-400 window, different frontal regions showed higher activity for incongruent associations depending on valence: the superior frontal gyrus showed significantly higher activation for negative valence, the middle and medial frontal gyri for neutral valence, and the inferior frontal gyrus for positive valence. Besides replicating previous findings of frontal activations in response to context congruity, the current study provides further evidence for the sensitivity of the frontal lobe to the valence associated with incongruent stimuli.
Affiliation(s)
- Angel Anna Zacharia
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Navdeep Ahuja
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Simran Kaur
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Ratna Sharma
- Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
13. Costanzo F, Alfieri P, Caciolo C, Bergonzini P, Perrino F, Zampino G, Leoni C, Menghini D, Digilio MC, Tartaglia M, Vicari S, Carlesimo GA. Recognition Memory in Noonan Syndrome. Brain Sci 2021; 11:169. [PMID: 33572736] [PMCID: PMC7910957] [DOI: 10.3390/brainsci11020169]
Abstract
Noonan syndrome (NS) and the clinically related NS with multiple lentigines (NSML) are genetic conditions characterized by upregulated RAS-mitogen-activated protein kinase (RAS-MAPK) signaling, which is known to impact hippocampus-dependent memory formation and consolidation. The aim of the present study was to provide a detailed characterization of recognition memory in children and adolescents with NS/NSML. We compared 18 children and adolescents affected by NS or NSML with 22 typically developing (TD) children, matched for chronological age and non-verbal Intelligence Quotient (IQ), in two different experimental paradigms designed to assess familiarity and recollection: a Process Dissociation Procedure (PDP) and a Task Dissociation Procedure (TDP). Differences in verbal skills between groups, as well as chronological age, were considered in the analysis. Participants with NS or NSML showed reduced recollection in the PDP and impaired associative recognition in the TDP, compared to controls. These results indicate poor recollection in the recognition memory of participants with NS or NSML, which cannot be explained by intellectual disability or language deficits, and provide evidence of the role of mutations impacting RAS-MAPK signaling in the disruption of hippocampal memory formation and consolidation.
Affiliation(s)
- Floriana Costanzo
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
| | - Paolo Alfieri
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
| | - Cristina Caciolo
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
| | - Paola Bergonzini
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
| | - Francesca Perrino
- Center for Rare Diseases and Birth Defects, Department of Woman and Child Health, Institute of Pediatrics, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Catholic University of the Sacred Heart, 00168 Rome, Italy; (F.P.); (G.Z.); (C.L.)
- Rehabilitation Center UILMD Lazio Onlus, 00167 Rome, Italy
| | - Giuseppe Zampino
- Center for Rare Diseases and Birth Defects, Department of Woman and Child Health, Institute of Pediatrics, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Catholic University of the Sacred Heart, 00168 Rome, Italy; (F.P.); (G.Z.); (C.L.)
| | - Chiara Leoni
- Center for Rare Diseases and Birth Defects, Department of Woman and Child Health, Institute of Pediatrics, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Catholic University of the Sacred Heart, 00168 Rome, Italy; (F.P.); (G.Z.); (C.L.)
| | - Deny Menghini
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
| | - Maria Cristina Digilio
- Genetics and Rare Diseases Research Division, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (M.C.D.); (M.T.)
- Medical Genetics, Academic Department of Pediatrics, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
| | - Marco Tartaglia
- Genetics and Rare Diseases Research Division, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (M.C.D.); (M.T.)
| | - Stefano Vicari
- Child and Adolescent Psychiatric Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy; (P.A.); (C.C.); (P.B.); (D.M.); (S.V.)
- Department of Life Science and Public Health, Catholic University of the Sacred Heart, 00168 Rome, Italy
| | - Giovanni Augusto Carlesimo
- Laboratory of Clinical and Behavioral Neurology, Santa Lucia Foundation, 00179 Rome, Italy;
- Department of Systems Medicine, Tor Vergata University, 00133 Rome, Italy
14
Quek GL, Peelen MV. Contextual and Spatial Associations Between Objects Interactively Modulate Visual Processing. Cereb Cortex 2020; 30:6391-6404. PMID: 32754744; PMCID: PMC7609942; DOI: 10.1093/cercor/bhaa197.
Abstract
Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup-saucer vs. teacup-stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects' contextual association. This high-level influence of spatial configuration on object identity integration arose ~320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ~130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
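The frequency-tagging logic described in this abstract (a 2.5-Hz base stream with a condition of interest appearing as every fourth image, i.e., at 0.625 Hz) can be illustrated with a minimal sketch on simulated data. The signal amplitudes, noise level, recording duration, and the neighbor-based SNR measure below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def snr_at_frequency(signal, fs, f_target, n_neighbors=10):
    """Amplitude at f_target divided by the mean amplitude of nearby
    frequency bins (immediate neighbors excluded), a common SNR
    measure in frequency-tagging EEG."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - f_target)))
    neighbors = np.concatenate([amps[idx - n_neighbors:idx - 1],
                                amps[idx + 2:idx + n_neighbors + 1]])
    return amps[idx] / neighbors.mean()

# 64 s of simulated "EEG": responses at the 2.5-Hz image rate and at the
# 0.625-Hz oddball rate (every fourth image), buried in noise.
rng = np.random.default_rng(0)
fs, dur = 250, 64
t = np.arange(fs * dur) / fs
eeg = (0.5 * np.sin(2 * np.pi * 2.5 * t)
       + 0.2 * np.sin(2 * np.pi * 0.625 * t)
       + rng.normal(0.0, 1.0, t.size))

snr_base = snr_at_frequency(eeg, fs, 2.5)       # general visual response
snr_oddball = snr_at_frequency(eeg, fs, 0.625)  # condition-specific response
print(snr_base, snr_oddball)
```

Note that the 64-s duration is chosen so that both tagged frequencies fall exactly on FFT bins (frequency resolution 1/64 of 1 Hz times fs/n), which keeps the tagged responses from leaking into neighboring bins.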
Affiliation(s)
- Genevieve L Quek: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
- Marius V Peelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
15
Kaiser D, Inciuraite G, Cichy RM. Rapid contextualization of fragmented scene information in the human visual system. Neuroimage 2020; 219:117045. PMID: 32540354; DOI: 10.1016/j.neuroimage.2020.117045.
Abstract
Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
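The core of the representational similarity analysis described here can be sketched in a few lines: compute a neural representational dissimilarity matrix (RDM) at each time point and correlate it with a model RDM coding the fragments' typical vertical location. Everything below is simulated toy data with hypothetical fragment, channel, and sample counts, not the study's recordings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_frag, n_chan, n_time = 20, 32, 50
vertical = np.repeat([-1.0, 1.0], n_frag // 2)  # hypothetical top/bottom labels

# Model RDM: fragments from different vertical locations are dissimilar.
model_rdm = pdist(vertical[:, None], metric="cityblock")

# Simulated channel patterns: a location signal emerges at later samples.
eeg = rng.normal(0.0, 1.0, (n_frag, n_chan, n_time))
eeg[:, :5, 25:] += 2.0 * vertical[:, None, None]

# Correlate the neural RDM with the model RDM at every time point.
rsa = np.array([
    spearmanr(pdist(eeg[:, :, ti], metric="correlation"), model_rdm)[0]
    for ti in range(n_time)
])
print(rsa[:25].mean(), rsa[25:].mean())  # model fit before vs. after the "signal onset"
```

Spearman correlation is used here, as is common for RDM comparisons, because it does not assume a linear relationship between neural and model dissimilarities.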
Affiliation(s)
- Daniel Kaiser: Department of Psychology, University of York, York, UK
- Gabriele Inciuraite: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
16
Kaiser D, Häberle G, Cichy RM. Real-world structure facilitates the rapid emergence of scene category information in visual brain signals. J Neurophysiol 2020; 124:145-151. PMID: 32519577; DOI: 10.1152/jn.00164.2020.
Abstract
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments. NEW & NOTEWORTHY: Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.
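The time-resolved decoding approach used in this study (classifying scene category from multichannel EEG patterns separately at each time point) can be sketched with scikit-learn on simulated data. Trial counts, channel counts, the classifier, and the latency at which the simulated category signal appears are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_chan, n_time = 80, 30, 40
y = np.repeat([0, 1], n_trials // 2)  # e.g., church vs. supermarket

# Simulated trials: category information appears from sample 10 onward.
X = rng.normal(0.0, 1.0, (n_trials, n_chan, n_time))
X[y == 1, :8, 10:] += 0.8

# Decode category separately at every time point (5-fold cross-validation).
acc = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, ti], y, cv=5).mean()
    for ti in range(n_time)
])
print(acc[:10].mean(), acc[10:].mean())  # near chance early, above chance late
```

Comparing such decoding time courses between conditions (e.g., intact vs. jumbled scenes) is the logic behind the facilitation effect reported in the abstract.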
Affiliation(s)
- Daniel Kaiser: Department of Psychology, University of York, York, United Kingdom
- Greta Häberle: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
17
Age effects on the neural processing of object-context associations in briefly flashed natural scenes. Neuropsychologia 2020; 136:107264. DOI: 10.1016/j.neuropsychologia.2019.107264.
18
The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. PMID: 31801812; DOI: 10.1523/jneurosci.1378-19.2019.
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed. SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
19
Kaiser D, Häberle G, Cichy RM. Cortical sensitivity to natural scene structure. Hum Brain Mapp 2019; 41:1286-1295. PMID: 31758632; PMCID: PMC7267931; DOI: 10.1002/hbm.24875.
Abstract
Natural scenes are inherently structured, with meaningful objects appearing in predictable locations. Human vision is tuned to this structure: When scene structure is purposefully jumbled, perception is strongly impaired. Here, we tested how such perceptual effects are reflected in neural sensitivity to scene structure. During separate fMRI and EEG experiments, participants passively viewed scenes whose spatial structure (i.e., the position of scene parts) and categorical structure (i.e., the content of scene parts) could be intact or jumbled. Using multivariate decoding, we show that spatial (but not categorical) scene structure profoundly impacts on cortical processing: Scene‐selective responses in occipital and parahippocampal cortices (fMRI) and after 255 ms (EEG) accurately differentiated between spatially intact and jumbled scenes. Importantly, this differentiation was more pronounced for upright than for inverted scenes, indicating genuine sensitivity to spatial structure rather than sensitivity to low‐level attributes. Our findings suggest that visual scene analysis is tightly linked to the spatial structure of our natural environments. This link between cortical processing and scene structure may be crucial for rapidly parsing naturalistic visual inputs.
Affiliation(s)
- Daniel Kaiser: Department of Psychology, University of York, York, UK; Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Greta Häberle: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität Berlin, Berlin, Germany
20
Strachan JWA, Sebanz N, Knoblich G. The role of emotion in the dyad inversion effect. PLoS One 2019; 14:e0219185. PMID: 31265483; PMCID: PMC6605658; DOI: 10.1371/journal.pone.0219185.
Abstract
When observing two individuals, people are faster and better able to identify them as other people if they are facing each other than if they are facing away from each other. This advantage disappears when the images are inverted, suggesting that the visual system is particularly sensitive to dyads in this upright configuration, and perceptually groups socially engaged dyads into a single holistic unit. This dyadic inversion effect was obtained with images of full bodies. Body information was sufficient to elicit this effect even when information about head orientation was absent. However, it has not been tested whether the dyadic inversion effect occurs with face images and whether the emotions displayed by the faces modulate the effect. In three experiments we obtained robust dyadic inversion with face images. Holistic processing of upright face pairs occurred for neutral, happy, and sad faces but not for angry and fearful face pairs. Thus, perceptual grouping of individuals into pairs appears to depend on the emotional expressions of individual faces and the interpersonal relations they imply.
21
Kaiser D, Quek GL, Cichy RM, Peelen MV. Object Vision in a Structured World. Trends Cogn Sci 2019; 23:672-685. PMID: 31147151; PMCID: PMC7612023; DOI: 10.1016/j.tics.2019.04.013.
Abstract
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
Affiliation(s)
- Daniel Kaiser: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Genevieve L Quek: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Radoslaw M Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Marius V Peelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
22
Faivre N, Dubois J, Schwartz N, Mudrik L. Imaging object-scene relations processing in visible and invisible natural scenes. Sci Rep 2019; 9:4567. PMID: 30872607; PMCID: PMC6418099; DOI: 10.1038/s41598-019-38654-z.
Abstract
Integrating objects with their context is a key step in interpreting complex visual scenes. Here, we used functional Magnetic Resonance Imaging (fMRI) while participants viewed visual scenes depicting a person performing an action with an object that was either congruent or incongruent with the scene. Univariate and multivariate analyses revealed different activity for congruent vs. incongruent scenes in the lateral occipital complex, inferior temporal cortex, parahippocampal cortex, and prefrontal cortex. Importantly, and in contrast to previous studies, these activations could not be explained by task-induced conflict. A secondary goal of this study was to examine whether processing of object-context relations could occur in the absence of awareness. We found no evidence for brain activity differentiating between congruent and incongruent invisible masked scenes, which might reflect a genuine lack of activation, or stem from the limitations of our study. Overall, our results provide novel support for the roles of parahippocampal cortex and frontal areas in conscious processing of object-context relations, which cannot be explained by either low-level differences or task demands. Yet they further suggest that brain activity is decreased by visual masking to the point of becoming undetectable with our fMRI protocol.
Affiliation(s)
- Nathan Faivre: Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Centre d'Economie de la Sorbonne, CNRS UMR 8174, Paris, France
- Julien Dubois: Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA; Department of Neurosurgery, Cedars Sinai Medical Center, Los Angeles, CA, USA
- Naama Schwartz: Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Liad Mudrik: Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
23
Abstract
Inferior temporal cortex (IT) is a key part of the ventral visual pathway implicated in object, face, and scene perception. But how does IT work? Here, I describe an organizational scheme that marries form and function and provides a framework for future research. The scheme consists of a series of stages arranged along the posterior-anterior axis of IT, defined by anatomical connections and functional responses. Each stage comprises a complement of subregions that have a systematic spatial relationship. The organization of each stage is governed by an eccentricity template, and corresponding eccentricity representations across stages are interconnected. Foveal representations take on a role in high-acuity object vision (including face recognition); intermediate representations compute other aspects of object vision such as behavioral valence (using color and surface cues); and peripheral representations encode information about scenes. This multistage, parallel-processing model invokes an innately determined organization refined by visual experience that is consistent with principles of cortical development. The model is also consistent with principles of evolution, which suggest that visual cortex expanded through replication of retinotopic areas. Finally, the model predicts that the most extensively studied network within IT, the face patches, is not unique but rather one manifestation of a canonical set of operations that reveal general principles of how IT works.
Affiliation(s)
- Bevil R Conway: Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, Maryland 20892, USA; National Institute of Mental Health and National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland 20892, USA
24
Attention Effects on Neural Population Representations for Shape and Location Are Stronger in the Ventral than Dorsal Stream. eNeuro 2018; 5:eN-NWR-0371-17. PMID: 29876521; PMCID: PMC5988342; DOI: 10.1523/eneuro.0371-17.2018.
Abstract
We examined how attention causes neural population representations of shape and location to change in ventral stream (AIT) and dorsal stream (LIP). Monkeys performed two identical delayed-match-to-sample (DMTS) tasks, attending either to shape or location. In AIT, shapes were more discriminable when directing attention to shape rather than location, measured by an increase in mean distance between population response vectors. In LIP, attending to location rather than shape did not increase the discriminability of different stimulus locations. Even when factoring out the change in mean vector response distance, multidimensional scaling (MDS) still showed a significant task difference in AIT, but not LIP, indicating that beyond increasing discriminability, attention also causes a nonlinear warping of representation space in AIT. Despite single-cell attentional modulations in both areas, our data show that attentional modulations of population representations are weaker in LIP, likely due to a need to maintain veridical representations for visuomotor control.
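The discriminability measure used in this study, the mean distance between population response vectors, can be sketched on simulated data. The neuron and shape counts, the trial structure, and the idea of modeling attention as a multiplicative gain on shape-selective responses are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n_neurons, n_shapes, n_trials = 100, 6, 30
tuning = rng.normal(0.0, 1.0, (n_shapes, n_neurons))  # shape selectivity

def mean_pairwise_distance(gain):
    """Mean Euclidean distance between trial-averaged, shape-specific
    population vectors, under a multiplicative attentional gain."""
    noise = rng.normal(0.0, 1.0, (n_shapes, n_trials, n_neurons))
    trials = gain * tuning[:, None, :] + noise
    class_means = trials.mean(axis=1)
    return pdist(class_means).mean()

attend_shape = mean_pairwise_distance(1.5)     # attention boosts gain
attend_location = mean_pairwise_distance(1.0)  # attention directed elsewhere
print(attend_shape, attend_location)
```

A larger mean pairwise distance under shape attention corresponds to the greater shape discriminability the abstract reports for AIT.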
25
Ekanayake J, Hutton C, Ridgway G, Scharnowski F, Weiskopf N, Rees G. Real-time decoding of covert attention in higher-order visual areas. Neuroimage 2018; 169:462-472. PMID: 29247807; PMCID: PMC5864512; DOI: 10.1016/j.neuroimage.2017.12.019.
Abstract
Brain-computer interfaces (BCIs) provide a means of using human brain activations to control devices for communication. Until now, this has only been demonstrated in primary motor and sensory brain regions, using surgical implants or non-invasive neuroimaging techniques. Here, we provide proof-of-principle for the use of higher-order brain regions involved in complex cognitive processes such as attention. Using real-time fMRI, we implemented an online 'winner-takes-all' approach with quadrant-specific parameter estimates to achieve single-block classification of brain activations. These were linked to the covert allocation of attention to real-world images presented at four quadrant locations. Accuracies in three target regions were significantly above chance, with individual decoding accuracies reaching up to 70%. By utilising higher-order mental processes, 'cognitive BCIs' access varied and therefore more versatile information, potentially providing a platform for communication in patients who are unable to speak or move due to brain injury.
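The single-block 'winner-takes-all' classification described here reduces to taking the arg-max over quadrant-specific parameter estimates. The estimate values, the quadrant labels, and the size of the attentional boost below are hypothetical, chosen only to make the logic concrete.

```python
import numpy as np

quadrants = ["upper-left", "upper-right", "lower-left", "lower-right"]

# Hypothetical per-block parameter estimates, one per quadrant-specific ROI;
# covert attention to the lower-left quadrant boosts its estimate.
estimates = np.array([0.10, -0.30, 0.05, 0.20])
attended = 2  # lower-left
estimates[attended] += 2.5

# Winner-takes-all: the quadrant with the largest estimate is the decoded
# locus of covert attention for this block.
decoded = quadrants[int(np.argmax(estimates))]
print(decoded)  # → lower-left
```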
Affiliation(s)
- Jinendra Ekanayake: Wellcome Trust Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Chloe Hutton: Siemens Molecular Imaging, Oxford, United Kingdom
- Frank Scharnowski: Psychiatric University Hospital, University of Zürich, Lenggstrasse 31, 8032 Zürich, Switzerland; Neuroscience Center Zürich, University of Zürich and Swiss Federal Institute of Technology, Winterthurerstr. 190, 8057 Zürich, Switzerland; Zürich Center for Integrative Human Physiology (ZIHP), University of Zürich, Winterthurerstr. 190, 8057 Zürich, Switzerland
- Nikolaus Weiskopf: Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Geraint Rees: Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
26
Roux-Sibilon A, Kalénine S, Pichat C, Peyrin C. Dorsal and ventral stream contribution to the paired-object affordance effect. Neuropsychologia 2018. PMID: 29522759; DOI: 10.1016/j.neuropsychologia.2018.03.007.
Abstract
Visual extinction, a parietal syndrome in which patients exhibit perceptual impairments when two objects are simultaneously presented in the visual field, is reduced when objects are correctly positioned for action, indicating that action helps patients' visual attention. Similarly, healthy individuals make faster action decisions on object pairs that appear in left/right standard co-location for actions in comparison to object pairs that appear in a mirror location, a phenomenon called the paired-object affordance effect. However, the neural locus of such effect remains debated and may be related to the activity of ventral or dorsal brain regions. The present fMRI study aimed to determine the neural substrates of the paired-object affordance effect. Fourteen right-handed participants made decisions about semantically related (i.e. thematically related and co-manipulated) and unrelated object pairs. Pairs were either positioned in a standard location for a right-handed action (with the active object - lid - in the right visual hemifield, and the passive object - pan - in the left visual hemifield), or in the reverse location. Behavioral results showed a suppression of the observed cost of correctly positioning related pairs for action when performing action decisions (deciding if the two objects are usually used together), but not when performing contextual decisions (deciding if the two objects are typically found in the kitchen). Anterior regions of the dorsal stream (e.g. supplementary motor area) responded to inadequate object co-positioning for action, but only when the perceptual task required action decisions. In the ventral cortex, the left lateral occipital complex showed increased activation for objects correctly positioned for action in all conditions except when neither task demands nor object relatedness was relevant for action. Thus, fMRI results demonstrated a joint contribution of ventral and dorsal cortical streams to the paired-object affordance effect. They further suggest that this contribution may depend on contextual situations and task demands, in line with flexible views of affordance evocation.
Affiliation(s)
- Solène Kalénine
- Univ. Lille, CNRS, CHU Lille, UMR 9193, SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Cédric Pichat
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Carole Peyrin
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
27
Wang S, Cao L, Xu J, Zhang G, Lou Y, Liu B. Revealing the Semantic Association between Perception of Scenes and Significant Objects by Representational Similarity Analysis. Neuroscience 2018; 372:87-96. [DOI: 10.1016/j.neuroscience.2017.12.043] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 12/19/2017] [Accepted: 12/23/2017] [Indexed: 11/29/2022]
28
Kaiser D, Peelen MV. Transformation from independent to integrative coding of multi-object arrangements in human visual cortex. Neuroimage 2017; 169:334-341. [PMID: 29277645 DOI: 10.1016/j.neuroimage.2017.12.065] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 10/08/2017] [Accepted: 12/20/2017] [Indexed: 10/18/2022] Open
Abstract
To optimize processing, the human visual system utilizes regularities present in naturalistic visual input. One of these regularities is the relative position of objects in a scene (e.g., a sofa in front of a television), with behavioral research showing that regularly positioned objects are easier to perceive and to remember. Here we use fMRI to test how positional regularities are encoded in the visual system. Participants viewed pairs of objects that formed minimalistic two-object scenes (e.g., a "living room" consisting of a sofa and television) presented in their regularly experienced spatial arrangement or in an irregular arrangement (with interchanged positions). Additionally, single objects were presented centrally and in isolation. Multi-voxel activity patterns evoked by the object pairs were modeled as the average of the response patterns evoked by the two single objects forming the pair. In two experiments, this approximation in object-selective cortex was significantly less accurate for the regularly than the irregularly positioned pairs, indicating integration of individual object representations. More detailed analysis revealed a transition from independent to integrative coding along the posterior-anterior axis of the visual cortex, with the independent component (but not the integrative component) being almost perfectly predicted by object selectivity across the visual hierarchy. These results reveal a transitional stage between individual object and multi-object coding in visual cortex, providing a possible neural correlate of efficient processing of regularly positioned objects in natural scenes.
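The linear-averaging model at the heart of this analysis can be sketched in a few lines: a pair's multi-voxel pattern is approximated by the mean of the two single-object patterns, and the fit is scored (here with Pearson correlation). This is only an illustration of the logic, with invented voxel values rather than data from the study.

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equal-length vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 6-voxel response patterns for two single objects
# (stand-ins for, e.g., a sofa and a television).
sofa = [1.0, 0.2, 0.8, 0.1, 0.5, 0.3]
tv = [0.1, 0.9, 0.4, 0.7, 0.2, 0.6]

# The averaging model: the pair's pattern is the mean of the single-object patterns.
predicted_pair = [(a + b) / 2 for a, b in zip(sofa, tv)]

# Invented "measured" pair patterns: the irregular arrangement tracks the
# average closely, while the regular arrangement departs from it, mimicking
# the integrative coding reported for regularly positioned pairs.
irregular_pair = [0.6, 0.5, 0.65, 0.35, 0.4, 0.45]
regular_pair = [0.9, 0.1, 0.2, 0.8, 0.7, 0.15]

r_irregular = pearson(predicted_pair, irregular_pair)
r_regular = pearson(predicted_pair, regular_pair)
print(r_irregular > r_regular)  # the averaging model fits the irregular pair better
```

A weaker fit of the average to the regular pair is what signals integration of the two object representations into one.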
Affiliation(s)
- Daniel Kaiser
- Center for Mind/Brain Sciences, University of Trento, 38068, Rovereto, TN, Italy; Department of Education and Psychology, Freie Universität Berlin, 14195, Berlin-Dahlem, Germany
- Marius V Peelen
- Center for Mind/Brain Sciences, University of Trento, 38068, Rovereto, TN, Italy; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
29
Crafa D, Hawco C, Brodeur MB. Heightened Responses of the Parahippocampal and Retrosplenial Cortices during Contextualized Recognition of Congruent Objects. Front Behav Neurosci 2017; 11:232. [PMID: 29311862 PMCID: PMC5735118 DOI: 10.3389/fnbeh.2017.00232] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 11/08/2017] [Indexed: 12/21/2022] Open
Abstract
Context sometimes helps make objects more recognizable. Previous studies using functional magnetic resonance imaging (fMRI) have examined regional neural activity when objects have strong or weak associations with their contexts. Such studies have demonstrated that activity in the parahippocampal cortex (PHC) generally corresponds with strong associations between objects and their spatial contexts, while retrosplenial cortex (RSC) activity is linked with episodic memory. However, while these studies investigated objects viewed in associated contexts, the direct influence of the scene on the perception of visual objects has not been widely investigated. We hypothesized that the PHC and RSC may be engaged only for congruent contexts, in which the object could typically be found, but not for neutral contexts. While in an fMRI scanner, 15 participants rated the recognizability of 152 photographic images of objects presented within congruent and incongruent contexts. Regions of interest were created to examine PHC and RSC activity using a hypothesis-driven approach. Exploratory analyses were also performed to identify other regional activity. In line with previous studies, PHC and RSC activity emerged when objects were viewed in congruent contexts. Activity in the RSC, the inferior parietal lobe (IPL), and the fusiform gyrus also emerged. These findings indicate that different brain regions are engaged when objects are meaningfully contextualized.
Affiliation(s)
- Daina Crafa
- Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- Colin Hawco
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Mathieu B. Brodeur
- Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Montreal, QC, Canada
30
Margalit E, Biederman I, Tjan BS, Shah MP. What Is Actually Affected by the Scrambling of Objects When Localizing the Lateral Occipital Complex? J Cogn Neurosci 2017; 29:1595-1604. [DOI: 10.1162/jocn_a_01144] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The lateral occipital complex (LOC), the cortical region critical for shape perception, is localized with fMRI by its greater BOLD activity when viewing intact objects compared with their scrambled versions (resembling texture). Despite hundreds of studies investigating LOC, what the LOC localizer accomplishes, beyond distinguishing shape from texture, has never been resolved. When the intact parts of objects were independently scattered, the axis structure defining the relations between parts was no longer defined. This manipulation led to a diminished BOLD response, despite the increase in the number of independent entities (the parts) produced by the scattering, indicating that LOC specifies interpart relations in addition to the shape of the parts themselves. LOC's sensitivity to relations is not confined to those between parts but is also readily apparent between objects, rendering it, and not subsequent "place" areas, the critical region for the representation of scenes. Moreover, that these effects are observed with novel as well as familiar intact objects and scenes suggests that the relations are computed on the fly rather than retrieved from memory.
31
Baldassano C, Beck DM, Fei-Fei L. Human-Object Interactions Are More than the Sum of Their Parts. Cereb Cortex 2017; 27:2276-2288. [PMID: 27073216 DOI: 10.1093/cercor/bhw077] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
Affiliation(s)
- Diane M Beck
- Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Li Fei-Fei
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA
32
Abstract
How does one perceive groups of people? It is known that functionally interacting objects (e.g., a glass and a pitcher tilted as if pouring water into it) are perceptually grouped. Here, we showed that processing of multiple human bodies is also influenced by their relative positioning. In a series of categorization experiments, bodies facing each other (seemingly interacting) were recognized more accurately than bodies facing away from each other (noninteracting). Moreover, recognition of facing body dyads (but not nonfacing body dyads) was strongly impaired when those stimuli were inverted, similar to what has been found for individual bodies. This inversion effect demonstrates sensitivity of the visual system to facing body dyads in their common upright configuration and might imply recruitment of configural processing (i.e., processing of the overall body configuration without prior part-by-part analysis). These findings suggest that facing dyads are represented as one structured unit, which may be the intermediate level of representation between multiple-object (body) perception and representation of social actions.
Affiliation(s)
- Liuba Papeo
- Center for Brain and Cognition, Universitat Pompeu Fabra
- Institut des Sciences Cognitives-Marc Jeannerod, Unité Mixte de Recherche (UMR) 5304, Centre National de la Recherche Scientifique (CNRS), Université de Lyon
- Timo Stein
- Center for Mind/Brain Sciences (CIMeC), University of Trento
- Salvador Soto-Faraco
- Center for Brain and Cognition, Universitat Pompeu Fabra
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
33
Groen IIA, Silson EH, Baker CI. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0102. [PMID: 28044013 DOI: 10.1098/rstb.2016.0102] [Citation(s) in RCA: 87] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/20/2016] [Indexed: 11/12/2022] Open
Abstract
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Iris I A Groen
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Edward H Silson
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
34
Xu S, Humphreys GW, Mevorach C, Heinke D. The involvement of the dorsal stream in processing implied actions between paired objects: A TMS study. Neuropsychologia 2016; 95:240-249. [PMID: 28034601 DOI: 10.1016/j.neuropsychologia.2016.12.021] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 11/03/2016] [Accepted: 12/20/2016] [Indexed: 11/19/2022]
Abstract
Perceiving and selecting the action possibilities (affordances) provided by objects is an important challenge to human vision, and is not limited to single-object scenarios. Xu et al. (2015) identified two effects of implied actions between paired objects on response selection: an inhibitory effect on responses aligned with the passive object in the pair (e.g. a bowl) and an advantage associated with responses aligned with the active objects (e.g. a spoon). The present study investigated the neurocognitive mechanisms behind these effects by examining the involvement of the ventral (vision for perception) and the dorsal (vision for action) visual streams, as defined in Goodale and Milner's (1992) two visual stream theory. Online repetitive transcranial magnetic stimulation (rTMS) applied to the left anterior intraparietal sulcus (aIPS) reduced both the inhibitory effect of implied actions on responses aligned with the passive objects and the advantage of those aligned with the active objects, but only when the active objects were contralateral to the stimulation. rTMS to the left lateral occipital areas (LO) did not significantly alter the influence of implied actions. The results reveal that the dorsal visual stream is crucial not only in single-object affordance processing, but also in responding to implied actions between objects.
Affiliation(s)
- Shan Xu
- School of Psychology, Beijing Normal University, Beijing 100875, China; School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
- Carmel Mevorach
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Dietmar Heinke
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
35
Abstract
Recognizing objects in the environment and understanding our surroundings often depends on context: the presence of other objects and knowledge about their relations with each other. Such contextual information activates a set of medial brain regions, the parahippocampal cortex and the retrosplenial complex. Both regions are more activated by single objects with a unique contextual association than by objects not associated with any specific context. Similarly, they are more activated by spatially coherent arrangements of objects when those arrangements are consistent with the objects' known spatial relations. The current study tested how context in multiple-object displays is represented in these regions in the absence of relevant spatial information. Using an fMRI slow event-related design, we show that the precuneus (a subpart of the retrosplenial complex) is more activated by simultaneously presented contextually related objects than by unrelated objects. This suggests that the representation of context in this region is cumulative, representing integrated information across the objects in the display. We discuss these findings in relation to processing of visual information and relate them to previous findings of contextual effects in perception.
Affiliation(s)
- Tomer Livne
- Harvard Medical School
- Massachusetts General Hospital
- Washington University in St. Louis
- Moshe Bar
- Harvard Medical School
- Massachusetts General Hospital
- Bar Ilan University, Israel
36
Choo H, Walther DB. Contour junctions underlie neural representations of scene categories in high-level human visual cortex. Neuroimage 2016; 135:32-44. [PMID: 27118087 DOI: 10.1016/j.neuroimage.2016.04.021] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2015] [Revised: 03/16/2016] [Accepted: 04/08/2016] [Indexed: 10/21/2022] Open
Abstract
Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite their high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? We here show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA) and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas and contour junctions in high-level scene-selective brain regions.
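The error-pattern correlation described above compares the off-diagonal (confusion) structure of a neural decoding confusion matrix with that of a behavioral one. A minimal sketch in plain Python with invented confusion matrices, not data from the study:

```python
from math import sqrt

# Invented confusion matrices (rows = true scene category, columns = predicted).
# One comes from decoding neural activity patterns, one from behavioral responses.
neural = [
    [0.70, 0.10, 0.20],
    [0.15, 0.60, 0.25],
    [0.05, 0.30, 0.65],
]
behavior = [
    [0.80, 0.08, 0.12],
    [0.10, 0.70, 0.20],
    [0.06, 0.24, 0.70],
]

# Keep only the off-diagonal (error) cells: the question is whether the two
# systems confuse the same category pairs, not whether they are accurate.
n = len(neural)
x = [neural[i][j] for i in range(n) for j in range(n) if i != j]
y = [behavior[i][j] for i in range(n) for j in range(n) if i != j]

mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / (sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y)))
print(r > 0)  # shared error structure yields a positive correlation
```

In the study's logic, this correlation should collapse when contour junctions are disrupted, because the neural confusions then no longer mirror the behavioral ones.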
Affiliation(s)
- Heeyoung Choo
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
- Dirk B Walther
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
37
Abstract
Developmental topographic disorientation (DTD) is a life-long condition in which affected individuals are severely impaired in navigating around their environment. Individuals with DTD have no apparent structural brain damage on conventional imaging, and the neural mechanisms underlying DTD are currently unknown. Using functional and diffusion tensor imaging, we present a comprehensive neuroimaging study of an individual, J.N., with well defined DTD. J.N. has intact scene-selective responses in the parahippocampal place area (PPA), transverse occipital sulcus, and retrosplenial cortex (RSC), key regions associated with scene perception and navigation. However, detailed fMRI studies probing selective tuning properties of these regions, as well as functional connectivity, suggest that J.N.'s RSC has an atypical response profile and an atypical functional coupling to PPA compared with human controls. This deviant functional profile of RSC is not due to compromised structural connectivity. This comprehensive examination suggests that the RSC may play a key role in navigation-related processing and that an alteration of the RSC's functional properties may serve as the neural basis for DTD. SIGNIFICANCE STATEMENT: Individuals with developmental topographic disorientation (DTD) have a life-long impairment in spatial navigation in the absence of brain damage, neurological conditions, or basic perceptual or memory deficits. Although progress has been made in identifying brain regions that subserve normal navigation, the neural basis of DTD is unknown. Using functional and structural neuroimaging and detailed statistical analyses, we investigated the brain regions typically involved in navigation and scene processing in a representative DTD individual, J.N. Although scene-selective regions were identified, closer scrutiny indicated that these areas, specifically the retrosplenial cortex (RSC), were functionally disrupted in J.N. This comprehensive examination of a representative DTD individual provides insight into the neural basis of DTD and the role of the RSC in navigation-related processing.
38
39
Kubilius J, Baeck A, Wagemans J, Op de Beeck HP. Brain-decoding fMRI reveals how wholes relate to the sum of parts. Cortex 2015; 72:5-14. [PMID: 25771992 DOI: 10.1016/j.cortex.2015.01.020] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2014] [Revised: 12/03/2014] [Accepted: 01/27/2015] [Indexed: 11/19/2022]
Abstract
The human brain performs many nonlinear operations to extract relevant information from local inputs. How can we observe and quantify these effects within and across large patches of cortex? In this paper, we discuss the application of multi-voxel pattern analysis (MVPA) in functional magnetic resonance imaging (fMRI) to address this issue. Specifically, we show how MVPA (i) allows us to compare various possibilities for combining parts into wholes, such as taking the mean, the weighted mean, or the maximum of the responses to the parts; (ii) can be used to quantify the parameters of these combinations; and (iii) can be applied in various experimental paradigms. Through these procedures, fMRI helps to provide a computational understanding of how local information is integrated into larger wholes in various cortical regions.
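The candidate combination rules named in point (i) are easy to state concretely. A minimal sketch with invented part-response patterns; the fixed weight w is a hypothetical value, whereas in the MVPA procedure it would be a parameter estimated from the data:

```python
# Three candidate part-to-whole combination rules, applied voxelwise to
# hypothetical response patterns for two parts (invented values).
part_a = [0.9, 0.1, 0.6, 0.3]
part_b = [0.2, 0.8, 0.4, 0.7]

# (i) mean of the part responses
mean_rule = [(a + b) / 2 for a, b in zip(part_a, part_b)]

# (ii) weighted mean; w is a hypothetical fixed weight here, whereas in the
# MVPA procedure it would be estimated from the data
w = 0.7
weighted_rule = [w * a + (1 - w) * b for a, b in zip(part_a, part_b)]

# (iii) maximum of the part responses
max_rule = [max(a, b) for a, b in zip(part_a, part_b)]

print(max_rule)  # the max rule keeps the stronger part response in every voxel
```

Comparing how well each rule's predicted pattern matches the measured whole-object pattern is what adjudicates between these combination schemes.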
Affiliation(s)
- Jonas Kubilius
- Laboratory of Biological Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium; Laboratory of Experimental Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Annelies Baeck
- Laboratory of Biological Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium; Laboratory of Experimental Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Johan Wagemans
- Laboratory of Experimental Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Hans P Op de Beeck
- Laboratory of Biological Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
40
Quadflieg S, Gentile F, Rossion B. The neural basis of perceiving person interactions. Cortex 2015; 70:5-20. [PMID: 25697049 DOI: 10.1016/j.cortex.2014.12.020] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 12/25/2014] [Accepted: 12/30/2014] [Indexed: 11/26/2022]
Abstract
This study examined whether the grouping of people into meaningful social scenes (e.g., two people having a chat) impacts the basic perceptual analysis of each partaking individual. To explore this issue, we measured neural activity using functional magnetic resonance imaging (fMRI) while participants sex-categorized congruent as well as incongruent person dyads (i.e., two people interacting in a plausible or implausible manner). Incongruent person dyads elicited enhanced neural processing in several high-level visual areas dedicated to face and body encoding and in the posterior middle temporal gyrus compared to congruent person dyads. Incongruent and congruent person scenes were also successfully differentiated by a linear multivariate pattern classifier in the right fusiform body area and the left extrastriate body area. Finally, increases in the person scenes' meaningfulness, as judged by independent observers, were accompanied by enhanced activity in the bilateral posterior insula. These findings demonstrate that the processing of person scenes goes beyond a mere stimulus-bound encoding of their partaking agents, suggesting that changes in relations between agents affect their representation in category-selective regions of the visual cortex and beyond.
Affiliation(s)
- Susanne Quadflieg
- School of Experimental Psychology, University of Bristol, UK; Division of Psychology, New York University Abu Dhabi, UAE
- Francesco Gentile
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium; Department of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Bruno Rossion
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium
41
Rubin DC, Umanath S. Event memory: A theory of memory for laboratory, autobiographical, and fictional events. Psychol Rev 2015; 122:1-23. [PMID: 25330330 PMCID: PMC4295926 DOI: 10.1037/a0037907] [Citation(s) in RCA: 190] [Impact Index Per Article: 21.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
An event memory is a mental construction of a scene recalled as a single occurrence. It therefore requires the hippocampus and ventral visual stream needed for all scene construction. The construction need not come with a sense of reliving or be made by a participant in the event, and it can be a summary of occurrences from more than one encoding. The mental construction, or physical rendering, of any scene must be done from a specific location and time; this introduces a "self" located in space and time, which is a necessary, but need not be a sufficient, condition for a sense of reliving. We base our theory on scene construction rather than reliving because this allows the integration of many literatures and because there is more accumulated knowledge about scene construction's phenomenology, behavior, and neural basis. Event memory differs from episodic memory in that it does not conflate the independent dimensions of whether or not a memory is relived, is about the self, is recalled voluntarily, or is based on a single encoding with whether it is recalled as a single occurrence of a scene. Thus, we argue that event memory provides a clearer contrast to semantic memory, which also can be about the self, be recalled voluntarily, and be from a unique encoding; allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real-world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.
Affiliation(s)
- David C Rubin
- Department of Psychology and Neuroscience, Duke University
- Sharda Umanath
- Department of Psychology and Neuroscience, Duke University
42
Gagne CR, MacEvoy SP. Do simultaneously viewed objects influence scene recognition individually or as groups? Two perceptual studies. PLoS One 2014; 9:e102819. [PMID: 25119715 PMCID: PMC4138008 DOI: 10.1371/journal.pone.0102819] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2013] [Accepted: 06/24/2014] [Indexed: 11/18/2022] Open
Abstract
The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs, which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. 
More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process by which objects interfere with scene recognition is separate from the one through which they inform it.
Affiliation(s)
- Christopher R. Gagne
- Department of Psychology, Boston College, Chestnut Hill, Massachusetts, United States of America
- Sean P. MacEvoy
- Department of Psychology, Boston College, Chestnut Hill, Massachusetts, United States of America
43
Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex. Proc Natl Acad Sci U S A 2014; 111:11217-22. [PMID: 25024190 DOI: 10.1073/pnas.1400559111] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception.
44
Abstract
Fifteen years ago, an intriguing area was found in human visual cortex. This area (the parahippocampal place area [PPA]) was initially interpreted as responding selectively to images of places. However, subsequent studies reported that PPA also responds strongly to a much wider range of image categories, including inanimate objects, tools, spatial context, landmarks, objectively large objects, indoor scenes, and/or isolated buildings. Here, we hypothesized that PPA responds selectively to a lower-level stimulus property (the presence of rectilinear features) that is common to many of the above higher-order categories. Using a novel wavelet image filter, we first demonstrated that rectangular features are common in these diverse stimulus categories. Then we tested whether PPA is selectively activated by rectangular features in six independent fMRI experiments using progressively simplified stimuli, from complex real-world images, through 3D/2D computer-generated shapes, through simple line stimuli. We found that PPA was consistently activated by rectilinear features, compared with curved and nonrectangular features. This rectilinear preference was (1) comparable in amplitude and selectivity, relative to the preference for category (scenes vs faces), (2) independent of known biases for specific orientations and spatial frequency, and (3) not predictable from V1 activity. Two additional scene-responsive areas were sensitive to a subset of rectilinear features. Thus, rectilinear selectivity may serve as a crucial building block for category-selective responses in PPA and functionally related areas.
45
Walther DB, Shen D. Nonaccidental properties underlie human categorization of complex natural scenes. Psychol Sci 2014; 25:851-60. [PMID: 24474725 DOI: 10.1177/0956797613512662] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Humans can categorize complex natural scenes quickly and accurately. Which scene properties enable people to do this with such apparent ease? We extracted structural properties of contours (orientation, length, curvature) and contour junctions (types and angles) from line drawings of natural scenes. All of these properties contain information about scene categories that can be exploited computationally. However, when we compared error patterns from computational scene categorization with those from a six-alternative forced-choice scene-categorization experiment, we found that only junctions and curvature made significant contributions to human behavior. To further test the critical role of these properties, we perturbed junctions in line drawings by randomly shifting contours and found a significant decrease in human categorization accuracy. We conclude that scene categorization by humans relies on curvature as well as the same nonaccidental junction properties used for object recognition. These properties correspond to the visual features represented in area V2.
47
Rémy F, Vayssière N, Pins D, Boucart M, Fabre-Thorpe M. Incongruent object/context relationships in visual scenes: where are they processed in the brain? Brain Cogn 2013; 84:34-43. [PMID: 24280445 DOI: 10.1016/j.bandc.2013.10.008] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2013] [Revised: 10/25/2013] [Accepted: 10/28/2013] [Indexed: 11/18/2022]
Abstract
Rapid visual categorization of objects in briefly flashed natural scenes is influenced by the surrounding context. The neural correlates underlying reduced categorization performance in response to incongruent object/context associations remain unclear and were investigated in the present study using fMRI. Participants were instructed to categorize objects in briefly presented scenes (exposure duration = 100 ms). Half of the scenes consisted of objects pasted in an expected (congruent) context, whereas for the other half, objects were embedded in incongruent contexts. Object categorization was more accurate and faster in congruent relative to incongruent scenes. Moreover, we found that the two types of scenes elicited different patterns of cerebral activation. In particular, the processing of incongruent scenes induced increased activations in the parahippocampal cortex, as well as in the right frontal cortex. This higher activity may indicate additional neural processing of the novel (non-experienced) contextual associations that were inherent to the incongruent scenes. Moreover, our results suggest that the locus of object categorization impairment due to contextual incongruence is in the right anterior parahippocampal cortex. Indeed, in this region, activity was correlated with the reaction-time increase observed with incongruent scenes. Representations for associations between objects and their usual context of appearance might be encoded in the right anterior parahippocampal cortex.
Affiliation(s)
- Florence Rémy
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France; CNRS, CerCo, Toulouse, France.
| | - Nathalie Vayssière
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France; CNRS, CerCo, Toulouse, France
| | - Delphine Pins
- Université Lille Nord de France, UDSL, Laboratoire Neurosciences Fonctionnelles et Pathologies, CHU Lille, F-59000 Lille, France; CNRS, F-59000 Lille, France
| | - Muriel Boucart
- Université Lille Nord de France, UDSL, Laboratoire Neurosciences Fonctionnelles et Pathologies, CHU Lille, F-59000 Lille, France; CNRS, F-59000 Lille, France
| | - Michèle Fabre-Thorpe
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, France; CNRS, CerCo, Toulouse, France
| |
48
Abstract
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information.
Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
49
Humphreys GW, Kumar S, Yoon EY, Wulff M, Roberts KL, Riddoch MJ. Attending to the possibilities of action. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130059. [PMID: 24018721 PMCID: PMC3758202 DOI: 10.1098/rstb.2013.0059] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Actions taking place in the environment are critical for our survival. We review evidence on attention to action, drawing on sets of converging evidence from neuropsychological patients through to studies of the time course and neural locus of action-based cueing of attention in normal observers. We show that the presence of action relations between stimuli helps reduce visual extinction in patients with limited attention to the contralesional side of space, while the first saccades made by normal observers and early perceptual and attentional responses measured using electroencephalography/event-related potentials are modulated by preparation of action and by seeing objects being grasped correctly or incorrectly for action. With both normal observers and patients, there is evidence for two components to these effects, based on both visual perceptual and motor-based responses. While the perceptual responses reflect factors such as the visual familiarity of the action-related information, the motor response component is determined by factors such as the alignment of the objects with the observer's effectors and not by the visual familiarity of the stimuli. In addition, we suggest that action relations between stimuli can be coded pre-attentively, in the absence of attention to the stimulus, and that action relations cue perceptual and motor responses rapidly and automatically. At present, formal theories of visual attention are not set up to account for these action-related effects; we suggest ways in which such theories could be adapted to incorporate them.
Affiliation(s)
- Glyn W. Humphreys
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
| | - Sanjay Kumar
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
| | - Eun Young Yoon
- Korean NeuroTraining Center, Apsan-soonhwan Road 736, Nam-gu, Daegu, South Korea
| | - Melanie Wulff
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
| | | | - M. Jane Riddoch
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
| |
50
Roberts KL, Humphreys GW. Distinguishing the effects of action relations and scene context on object perception. Visual Cognition 2013. [DOI: 10.1080/13506285.2013.851755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]