1. Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024; 60:3557-3571. PMID: 38706370. DOI: 10.1111/ejn.16356. Received 10/18/2023; revised 03/16/2024; accepted 04/05/2024.
Abstract
Extensive research has shown that observers are able to efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles - synchrony and common fate - on the grouping of biological movements. In Experiment 1, we find that brain responses coupled to four point-light figures walking together are enhanced when they move in sync vs. out of sync, but only when they are presented upright. In contrast, we find no effect of movement direction (i.e., common fate). In Experiment 2, we rule out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
2. Katyal S, Abdoun O, Mounier H, Lutz A. Reduced processing of afforded actions while observing mental content as ongoing mental phenomena. Sci Rep 2024; 14:10130. PMID: 38698150. PMCID: PMC11065984. DOI: 10.1038/s41598-024-60934-6. Received 10/06/2023; accepted 04/29/2024. Open access.
Abstract
While consciousness is typically considered equivalent to mental contents, certain meditation practices-including open monitoring (OM)-are said to enable a unique conscious state where meditators can experience mental content from a de-reified perspective as "ongoing phenomena." Phenomenologically, such a state is considered a reduction of intentionality, the mental act upon mental content. We hypothesised that this de-reified state would be characterised by reduced mental actional processing of affording objects. We recruited two groups of participants: meditators with long-term experience in cultivating a de-reified state, and demographically-matched novice meditators. Participants performed a task with images in two configurations-where objects did (high-affordance) or did not (low-affordance) imply actions-following both a baseline state and an OM-induced de-reified state, while EEG was recorded. While long-term meditators exhibited preferential processing of high-affordance images compared to low-affordance images during baseline, such an effect was abolished during the OM state, as hypothesised. For novices, however, the high-affordance configuration was preferred over the low-affordance one both during baseline and OM. Perceptual durations of objects across conditions positively correlated with the degree of µ-rhythm desynchronization, indicating that neural processing of affordance impacted perceptual awareness. Our results indicate that OM styles of meditation may help in mentally decoupling otherwise automatic cognitive processing of mental actions by affording objects.
Affiliation(s)
- Sucharit Katyal
- EDUWELL Team, Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France.
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark.
- Oussama Abdoun
- EDUWELL Team, Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France
- Hugues Mounier
- L2S - Laboratoire des signaux et systemes, Université Paris-Saclay, CentraleSupélec, CNRS, Gif Sur Yvette, France
- Antoine Lutz
- EDUWELL Team, Lyon Neuroscience Research Centre, INSERM U1028, CNRS UMR5292, Lyon 1 University, Lyon, France.
3. Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. PMID: 38124013. PMCID: PMC10860595. DOI: 10.1523/jneurosci.0250-23.2023. Received 02/10/2023; revised 09/29/2023; accepted 10/14/2023. Open access.
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people, encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42) in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
4. Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004. PMCID: PMC7616164. DOI: 10.1038/s44159-023-00254-0. Accepted 10/25/2023.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
5. Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. PMID: 37279640. DOI: 10.1016/j.cortex.2023.04.013. Received 09/22/2022; revised 03/31/2023; accepted 04/30/2023.
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France.
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de La Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France.
6. The visual encoding of graspable unfamiliar objects. Psychol Res 2023; 87:452-461. PMID: 35322276. DOI: 10.1007/s00426-022-01673-z. Received 07/30/2021; accepted 03/08/2022.
Abstract
We explored by eye-tracking the visual encoding modalities of participants (N = 20) involved in a free-observation task in which three repetitions of ten unfamiliar graspable objects were administered. Then, we analysed the temporal allocation (t = 1500 ms) of visual-spatial attention to objects' manipulation areas (i.e., the part aimed at grasping the object) and functional areas (i.e., the part aimed at recognizing the function and identity of the object). Within the first 750 ms, participants tended to shift their gaze on the functional areas while decreasing their attention on the manipulation areas. Then, participants reversed this trend, decreasing their visual-spatial attention to the functional areas while fixating the manipulation areas relatively more. Crucially, the global amount of visual-spatial attention for objects' functional areas significantly decreased as an effect of stimuli repetition while remaining stable for the manipulation areas, thus indicating stimulus familiarity effects. These findings support the action reappraisal theoretical approach, which considers object/tool processing as abilities emerging from semantic, technical/mechanical, and sensorimotor knowledge integration.
7. Chen L, Zhu S, Feng B, Zhang X, Jiang Y. Altered effective connectivity between lateral occipital cortex and superior parietal lobule contributes to manipulability-related modulation of the Ebbinghaus illusion. Cortex 2022; 147:194-205. DOI: 10.1016/j.cortex.2021.11.019. Received 06/09/2021; revised 08/30/2021; accepted 11/30/2021.
8. The contributions of the ventral and the dorsal visual streams to the automatic processing of action relations of familiar and unfamiliar object pairs. Neuroimage 2021. DOI: 10.1016/j.neuroimage.2021.118629. Open access.
9. Gronau N. To Grasp the World at a Glance: The Role of Attention in Visual and Semantic Associative Processing. J Imaging 2021; 7:191. PMID: 34564117. PMCID: PMC8470651. DOI: 10.3390/jimaging7090191. Received 05/30/2021; revised 08/30/2021; accepted 09/15/2021. Open access.
Abstract
Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. 
Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.
Affiliation(s)
- Nurit Gronau
- Department of Psychology and Department of Cognitive Science Studies, The Open University of Israel, Raanana 4353701, Israel
10. Kumar S, Riddoch MJ, Humphreys GW. Handgrip Based Action Information Modulates Attentional Selection: An ERP Study. Front Hum Neurosci 2021; 15:634359. PMID: 33746725. PMCID: PMC7969504. DOI: 10.3389/fnhum.2021.634359. Received 11/27/2020; accepted 02/08/2021. Open access.
Abstract
Prior work shows that the possibility of action to an object (visual affordance) facilitates attentional deployment. We sought to investigate the neural mechanisms underlying this modulation of attention by examining ERPs to target objects that were either congruently or incongruently gripped for their use in the presence of a congruently or incongruently gripped distractor. Participants responded to the presence or absence of a target object matching a preceding action word with a distractor object presented in the opposite location. Participants were faster in responding to congruently gripped targets compared to incongruently gripped targets. There was a reduced N2pc potential when the target was congruently gripped, and the distractor was incongruently gripped compared to the conditions where targets were incongruently gripped or when the distractor, as well as target, was congruently gripped. The N2pc results indicate that target selection is easier when action information is congruent with an object’s use.
Affiliation(s)
- Sanjay Kumar
- Department of Psychology, Oxford Brookes University, Oxford, United Kingdom
- M Jane Riddoch
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
11. Quek GL, Peelen MV. Contextual and Spatial Associations Between Objects Interactively Modulate Visual Processing. Cereb Cortex 2020; 30:6391-6404. PMID: 32754744. PMCID: PMC7609942. DOI: 10.1093/cercor/bhaa197. Received 01/24/2020; revised 06/29/2020; accepted 06/29/2020. Open access.
Abstract
Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup-saucer vs. teacup-stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects' contextual association. This high-level influence of spatial configuration on object identity integration arose ~ 320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ~ 130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
Affiliation(s)
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
12.
13. Kaiser D, Inciuraite G, Cichy RM. Rapid contextualization of fragmented scene information in the human visual system. Neuroimage 2020; 219:117045. PMID: 32540354. DOI: 10.1016/j.neuroimage.2020.117045. Received 01/06/2020; revised 04/24/2020; accepted 06/09/2020. Open access.
Abstract
Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, UK.
- Gabriele Inciuraite
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
14. Kaiser D, Häberle G, Cichy RM. Real-world structure facilitates the rapid emergence of scene category information in visual brain signals. J Neurophysiol 2020; 124:145-151. PMID: 32519577. DOI: 10.1152/jn.00164.2020. Open access.
Abstract
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments. NEW & NOTEWORTHY: Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, United Kingdom
- Greta Häberle
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
15. Federico G, Brandimonte MA. Looking to recognise: the pre-eminence of semantic over sensorimotor processing in human tool use. Sci Rep 2020; 10:6157. PMID: 32273576. PMCID: PMC7145874. DOI: 10.1038/s41598-020-63045-0. Received 12/16/2019; accepted 03/24/2020. Open access.
Abstract
Alongside language and bipedal locomotion, tool use is a characterizing activity of human beings. Current theories in the field embrace two contrasting approaches: "manipulation-based" theories, which are anchored in the embodied-cognition view, explain tool use as deriving from past sensorimotor experiences, whereas "reasoning-based" theories suggest that people reason about object properties to solve everyday-life problems. Here, we present results from two eye-tracking experiments in which we manipulated the visuo-perceptual context (thematically consistent vs. inconsistent object-tool pairs) and the goal of the task (free observation or looking to recognise). We found that participants exhibited reversed tools' visual-exploration patterns, focusing on the tool's manipulation area under thematically consistent conditions and on its functional area under thematically inconsistent conditions. Crucially, looking at the tools with the aim of recognising them produced longer fixations on the tools' functional areas irrespective of thematic consistency. In addition, tools (but not objects) were recognised faster in the thematically consistent conditions. These results strongly support reasoning-based theories of tool use, as they indicate that people primarily process semantic rather than sensorimotor information to interact with the environment in an agent's consistent-with-goal way. Such a pre-eminence of semantic processing challenges the mainstream embodied-cognition view of human tool use.
Affiliation(s)
- Giovanni Federico
- Suor Orsola Benincasa University, Laboratory of Experimental Psychology, Naples, Italy.
- Maria A Brandimonte
- Suor Orsola Benincasa University, Laboratory of Experimental Psychology, Naples, Italy
16. The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. PMID: 31801812. DOI: 10.1523/jneurosci.1378-19.2019. Received 06/13/2019; revised 11/21/2019; accepted 11/27/2019. Open access.
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed. SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
|
17
|
Kaiser D, Häberle G, Cichy RM. Cortical sensitivity to natural scene structure. Hum Brain Mapp 2019; 41:1286-1295. [PMID: 31758632 PMCID: PMC7267931 DOI: 10.1002/hbm.24875] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 11/07/2019] [Accepted: 11/07/2019] [Indexed: 11/23/2022] Open
Abstract
Natural scenes are inherently structured, with meaningful objects appearing in predictable locations. Human vision is tuned to this structure: When scene structure is purposefully jumbled, perception is strongly impaired. Here, we tested how such perceptual effects are reflected in neural sensitivity to scene structure. During separate fMRI and EEG experiments, participants passively viewed scenes whose spatial structure (i.e., the position of scene parts) and categorical structure (i.e., the content of scene parts) could be intact or jumbled. Using multivariate decoding, we show that spatial (but not categorical) scene structure profoundly impacts on cortical processing: Scene‐selective responses in occipital and parahippocampal cortices (fMRI) and after 255 ms (EEG) accurately differentiated between spatially intact and jumbled scenes. Importantly, this differentiation was more pronounced for upright than for inverted scenes, indicating genuine sensitivity to spatial structure rather than sensitivity to low‐level attributes. Our findings suggest that visual scene analysis is tightly linked to the spatial structure of our natural environments. This link between cortical processing and scene structure may be crucial for rapidly parsing naturalistic visual inputs.
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, UK; Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Greta Häberle
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität Berlin, Berlin, Germany
| |
|
18
|
Strachan JWA, Sebanz N, Knoblich G. The role of emotion in the dyad inversion effect. PLoS One 2019; 14:e0219185. [PMID: 31265483 PMCID: PMC6605658 DOI: 10.1371/journal.pone.0219185] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 06/18/2019] [Indexed: 12/05/2022] Open
Abstract
When observing two individuals, people are faster and better able to identify them as people if they are facing each other than if they are facing away from each other. This advantage disappears when the images are inverted, suggesting that the visual system is particularly sensitive to dyads in this upright configuration and perceptually groups socially engaged dyads into a single holistic unit. This dyadic inversion effect was previously obtained with images of full bodies: body information was sufficient to elicit the effect even when information about head orientation was absent. However, it has not been tested whether the dyadic inversion effect occurs with face images, or whether the emotions displayed by the faces modulate the effect. In three experiments we obtained robust dyadic inversion effects with face images. Holistic processing of upright face pairs occurred for neutral, happy, and sad faces but not for angry and fearful face pairs. Thus, perceptual grouping of individuals into pairs appears to depend on the emotional expressions of individual faces and the interpersonal relations they imply.
|
19
|
Kaiser D, Quek GL, Cichy RM, Peelen MV. Object Vision in a Structured World. Trends Cogn Sci 2019; 23:672-685. [PMID: 31147151 PMCID: PMC7612023 DOI: 10.1016/j.tics.2019.04.013] [Citation(s) in RCA: 66] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 04/15/2019] [Accepted: 04/30/2019] [Indexed: 01/02/2023]
Abstract
In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.
Affiliation(s)
- Daniel Kaiser
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.
| | - Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
| |
|
20
|
Walbrin J, Koldewyn K. Dyadic interaction processing in the posterior temporal cortex. Neuroimage 2019; 198:296-302. [PMID: 31100434 PMCID: PMC6610332 DOI: 10.1016/j.neuroimage.2019.05.027] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Revised: 04/04/2019] [Accepted: 05/10/2019] [Indexed: 11/21/2022] Open
Abstract
Recent behavioural evidence shows that visual displays of two individuals interacting are not simply encoded as separate individuals, but as an interactive unit that is 'more than the sum of its parts'. Recent functional magnetic resonance imaging (fMRI) evidence shows the importance of the posterior superior temporal sulcus (pSTS) in processing human social interactions, and suggests that it may represent human-object interactions as qualitatively 'greater' than the average of their constituent parts. The current study aimed to investigate whether the pSTS or other posterior temporal lobe region(s): 1) demonstrated evidence of a dyadic information effect, that is, qualitatively different responses to an interacting dyad than to averaged responses of the same two interactors presented in isolation; and 2) significantly differentiated between different types of social interactions. Multivoxel pattern analysis was performed in which a classifier was trained to differentiate between qualitatively different types of dyadic interactions. Above-chance classification of interactions was observed in 'interaction selective' pSTS-I and extrastriate body area (EBA), but not in other regions of interest (i.e. face-selective STS and mentalizing-selective temporo-parietal junction). A dyadic information effect was not observed in the pSTS-I, but instead was shown in the EBA; that is, classification of dyadic interactions did not fully generalise to averaged responses to the isolated interactors, indicating that dyadic representations in the EBA contain unique information that cannot be recovered from the interactors presented in isolation. These findings complement previous observations of congruent grouping of human bodies and objects in the broader lateral occipital temporal cortex area. Highlights: pSTS and EBA classify between different dynamic interactions; the EBA is sensitive to (uniquely) dyadic interaction information; these findings support previous evidence for grouping of interacting people/objects in LOTC.
Affiliation(s)
- Jon Walbrin
- School of Psychology, Bangor University, Wales, UK.
| | | |
|
21
|
Faivre N, Dubois J, Schwartz N, Mudrik L. Imaging object-scene relations processing in visible and invisible natural scenes. Sci Rep 2019; 9:4567. [PMID: 30872607 PMCID: PMC6418099 DOI: 10.1038/s41598-019-38654-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2018] [Accepted: 12/13/2018] [Indexed: 11/17/2022] Open
Abstract
Integrating objects with their context is a key step in interpreting complex visual scenes. Here, we used functional Magnetic Resonance Imaging (fMRI) while participants viewed visual scenes depicting a person performing an action with an object that was either congruent or incongruent with the scene. Univariate and multivariate analyses revealed different activity for congruent vs. incongruent scenes in the lateral occipital complex, inferior temporal cortex, parahippocampal cortex, and prefrontal cortex. Importantly, and in contrast to previous studies, these activations could not be explained by task-induced conflict. A secondary goal of this study was to examine whether processing of object-context relations could occur in the absence of awareness. We found no evidence for brain activity differentiating between congruent and incongruent invisible masked scenes, which might reflect a genuine lack of activation, or stem from the limitations of our study. Overall, our results provide novel support for the roles of parahippocampal cortex and frontal areas in conscious processing of object-context relations, which cannot be explained by either low-level differences or task demands. Yet they further suggest that brain activity is decreased by visual masking to the point of becoming undetectable with our fMRI protocol.
Affiliation(s)
- Nathan Faivre
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Centre d'Economie de la Sorbonne, CNRS UMR 8174, Paris, France
| | - Julien Dubois
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA; Department of Neurosurgery, Cedars Sinai Medical Center, Los Angeles, CA, USA
| | - Naama Schwartz
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
| | - Liad Mudrik
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| |
|
22
|
Langner R, Eickhoff SB, Bilalić M. A network view on brain regions involved in experts' object and pattern recognition: Implications for the neural mechanisms of skilled visual perception. Brain Cogn 2018; 131:74-86. [PMID: 30290974 DOI: 10.1016/j.bandc.2018.09.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2018] [Revised: 08/01/2018] [Accepted: 09/25/2018] [Indexed: 01/12/2023]
Abstract
Skilled visual object and pattern recognition form the basis of many everyday behaviours. The game of chess has often been used as a model case for studying how long-term experience aids in perceiving objects and their spatio-functional interrelations. Earlier research revealed two brain regions, posterior middle temporal gyrus (pMTG) and collateral sulcus (CoS), to be linked to chess experts' superior object and pattern recognition, respectively. Here we elucidated the brain networks these two expertise-related regions are embedded in, employing resting-state functional connectivity analysis and meta-analytic connectivity modelling with the BrainMap database. pMTG was preferentially connected with dorsal visual stream areas and a parieto-prefrontal network for action planning, while CoS was preferentially connected with posterior medial cortex and hippocampus, linked to scene perception, perspective-taking and navigation. Functional profiling using BrainMap meta-data revealed that pMTG was linked to semantic processing as well as inhibition and attention, while CoS was linked to face and shape perception as well as passive viewing. Our findings suggest that pMTG subserves skilled object recognition by mediating the link between object identity and object affordances, while CoS subserves skilled pattern recognition by linking the position of individual objects with typical spatio-functional layouts of their environment stored in memory.
Affiliation(s)
- Robert Langner
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7: Brain and Behaviour), Research Centre Jülich, Jülich, Germany.
| | - Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7: Brain and Behaviour), Research Centre Jülich, Jülich, Germany
| | - Merim Bilalić
- Department of Psychology, University of Northumbria at Newcastle, Newcastle, England, United Kingdom; Department of Neuroradiology, University of Tübingen, Tübingen, Germany
| |
|
23
|
Roux-Sibilon A, Kalénine S, Pichat C, Peyrin C. Dorsal and ventral stream contribution to the paired-object affordance effect. Neuropsychologia 2018. [PMID: 29522759 DOI: 10.1016/j.neuropsychologia.2018.03.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Visual extinction, a parietal syndrome in which patients exhibit perceptual impairments when two objects are simultaneously presented in the visual field, is reduced when objects are correctly positioned for action, indicating that action aids patients' visual attention. Similarly, healthy individuals make faster action decisions on object pairs that appear in left/right standard co-location for action than on object pairs that appear in a mirrored location, a phenomenon called the paired-object affordance effect. However, the neural locus of this effect remains debated and may be related to the activity of ventral or dorsal brain regions. The present fMRI study aimed at determining the neural substrates of the paired-object affordance effect. Fourteen right-handed participants made decisions about semantically related (i.e. thematically related and co-manipulated) and unrelated object pairs. Pairs were either positioned in a standard location for a right-handed action (with the active object - lid - in the right visual hemifield, and the passive object - pan - in the left visual hemifield), or in the reverse location. Behavioral results showed a suppression of the observed cost of correctly positioning related pairs for action when performing action decisions (deciding if the two objects are usually used together), but not when performing contextual decisions (deciding if the two objects are typically found in the kitchen). Anterior regions of the dorsal stream (e.g. supplementary motor area) responded to inadequate object co-positioning for action, but only when the perceptual task required action decisions. In the ventral cortex, the left lateral occipital complex showed increased activation for objects correctly positioned for action in all conditions except when neither task demands nor object relatedness was relevant for action. Thus, fMRI results demonstrated a joint contribution of ventral and dorsal cortical streams to the paired-object affordance effect.
They further suggest that this contribution may depend on contextual situations and task demands, in line with flexible views of affordance evocation.
Affiliation(s)
| | - Solène Kalénine
- Univ. Lille, CNRS, CHU Lille, UMR 9193, SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
| | - Cédric Pichat
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
| | - Carole Peyrin
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
| |
|
24
|
Kaiser D, Peelen MV. Transformation from independent to integrative coding of multi-object arrangements in human visual cortex. Neuroimage 2017; 169:334-341. [PMID: 29277645 DOI: 10.1016/j.neuroimage.2017.12.065] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 10/08/2017] [Accepted: 12/20/2017] [Indexed: 10/18/2022] Open
Abstract
To optimize processing, the human visual system utilizes regularities present in naturalistic visual input. One of these regularities is the relative position of objects in a scene (e.g., a sofa in front of a television), with behavioral research showing that regularly positioned objects are easier to perceive and to remember. Here we use fMRI to test how positional regularities are encoded in the visual system. Participants viewed pairs of objects that formed minimalistic two-object scenes (e.g., a "living room" consisting of a sofa and television) presented in their regularly experienced spatial arrangement or in an irregular arrangement (with interchanged positions). Additionally, single objects were presented centrally and in isolation. Multi-voxel activity patterns evoked by the object pairs were modeled as the average of the response patterns evoked by the two single objects forming the pair. In two experiments, this approximation in object-selective cortex was significantly less accurate for the regularly than the irregularly positioned pairs, indicating integration of individual object representations. More detailed analysis revealed a transition from independent to integrative coding along the posterior-anterior axis of the visual cortex, with the independent component (but not the integrative component) being almost perfectly predicted by object selectivity across the visual hierarchy. These results reveal a transitional stage between individual object and multi-object coding in visual cortex, providing a possible neural correlate of efficient processing of regularly positioned objects in natural scenes.
Affiliation(s)
- Daniel Kaiser
- Center for Mind/Brain Sciences, University of Trento, 38068, Rovereto, TN, Italy; Department of Education and Psychology, Freie Universität Berlin, 14195, Berlin-Dahlem, Germany.
| | - Marius V Peelen
- Center for Mind/Brain Sciences, University of Trento, 38068, Rovereto, TN, Italy; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
|
25
|
Abstract
The opportunity an object presents for action is known as an affordance. A basic assumption in previous research was that images of objects, which do not afford physical action, elicit effects on attention and behavior comparable with those of real-world tangible objects. Using a flanker task, we compared interference effects between real graspable objects and matched 2-D or 3-D images of the items. Compared with both 2-D and 3-D images, real objects yielded slower response times overall and elicited greater flanker interference effects. When the real objects were positioned out of reach or behind a transparent barrier, the pattern of response times and interference effects was comparable with that for 2-D images. Graspable objects exert a more powerful influence on attention and manual responses than images because of the affordances they offer for manual interaction. These results raise questions about whether images are suitable proxies for real objects in psychological research.
Affiliation(s)
| | - Rafal M Skiba
- Department of Psychology, University of Nevada, Reno; Department of Neuroscience, University of Geneva
| | | |
|
26
|
Xu S, Heinke D. Implied between-object actions affect response selection without knowledge about object functionality. Vis Cogn 2017. [DOI: 10.1080/13506285.2017.1330792] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Affiliation(s)
- Shan Xu
- Faculty of Psychology, Beijing Normal University, Beijing, China
- School of Psychology, University of Birmingham, Birmingham, UK
| | - Dietmar Heinke
- School of Psychology, University of Birmingham, Birmingham, UK
| |
|
27
|
Baldassano C, Beck DM, Fei-Fei L. Human-Object Interactions Are More than the Sum of Their Parts. Cereb Cortex 2017; 27:2276-2288. [PMID: 27073216 DOI: 10.1093/cercor/bhw077] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2) are well predicted by a simple linear combination of the response to object and pose information. Other regions, however, especially pSTS, exhibit representations for human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
Affiliation(s)
| | - Diane M Beck
- Department of Psychology and Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
| | - Li Fei-Fei
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA
| |
|
28
|
Gomez MA, Snow JC. Action properties of object images facilitate visual search. J Exp Psychol Hum Percept Perform 2017; 43:1115-1124. [PMID: 28263627 DOI: 10.1037/xhp0000390] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
There is mounting evidence that constraints from action can influence the early stages of object selection, even in the absence of any explicit preparation for action. Here, we examined whether action properties of images can influence visual search, and whether such effects are modulated by hand preference. Observers searched for an oddball target among 3 distractors. The search arrays consisted either of images of graspable "handles" ("action-related" stimuli), or images that were otherwise identical to the handles but in which the semicircular fulcrum element was reoriented so that the stimuli no longer looked like graspable objects ("non-action-related" stimuli). In Experiment 1, right-handed observers, who have been shown previously to prefer to use the right hand over the left for manual tasks, were faster to detect targets in action-related versus non-action-related arrays, and showed a response time (RT) advantage for rightward- versus leftward-oriented action-related handles. In Experiment 2, left-handed observers, who have been shown to use the left and right hands relatively equally in manual tasks, were also faster to detect targets in the action-related versus non-action-related arrays, but RTs were equally fast for rightward- and leftward-oriented handle targets. Together, our results suggest that action properties in images, and constraints for action imposed by preferences for manual interaction with objects, can influence attentional selection in the context of visual search.
Affiliation(s)
- Michael A Gomez
- Department of Psychology, Program in Cognitive and Brain Sciences, The University of Nevada
| | - Jacqueline C Snow
- Department of Psychology, Program in Cognitive and Brain Sciences, The University of Nevada
| |
|
29
|
Abstract
How does one perceive groups of people? It is known that functionally interacting objects (e.g., a glass and a pitcher tilted as if pouring water into it) are perceptually grouped. Here, we showed that processing of multiple human bodies is also influenced by their relative positioning. In a series of categorization experiments, bodies facing each other (seemingly interacting) were recognized more accurately than bodies facing away from each other (noninteracting). Moreover, recognition of facing body dyads (but not nonfacing body dyads) was strongly impaired when those stimuli were inverted, similar to what has been found for individual bodies. This inversion effect demonstrates sensitivity of the visual system to facing body dyads in their common upright configuration and might imply recruitment of configural processing (i.e., processing of the overall body configuration without prior part-by-part analysis). These findings suggest that facing dyads are represented as one structured unit, which may be the intermediate level of representation between multiple-object (body) perception and representation of social actions.
Affiliation(s)
- Liuba Papeo
- Center for Brain and Cognition, Universitat Pompeu Fabra
- Institut des Sciences Cognitives—Marc Jeannerod, Unité Mixte de Recherche (UMR) 5304, Centre National de la Recherche Scientifique (CNRS), Université de Lyon
| | - Timo Stein
- Center for Mind/Brain Sciences (CIMeC), University of Trento
| | - Salvador Soto-Faraco
- Center for Brain and Cognition, Universitat Pompeu Fabra
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| |
|
30
|
Xu S, Humphreys GW, Mevorach C, Heinke D. The involvement of the dorsal stream in processing implied actions between paired objects: A TMS study. Neuropsychologia 2016; 95:240-249. [PMID: 28034601 DOI: 10.1016/j.neuropsychologia.2016.12.021] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 11/03/2016] [Accepted: 12/20/2016] [Indexed: 11/19/2022]
Abstract
Perceiving and selecting the action possibilities (affordances) provided by objects is an important challenge to human vision, and is not limited to single-object scenarios. Xu et al. (2015) identified two effects of implied actions between paired objects on response selection: an inhibitory effect on responses aligned with the passive object in the pair (e.g. a bowl) and an advantage associated with responses aligned with the active objects (e.g. a spoon). The present study investigated the neurocognitive mechanisms behind these effects by examining the involvement of the ventral (vision for perception) and the dorsal (vision for action) visual streams, as defined in Goodale and Milner's (1992) two visual stream theory. Online repetitive transcranial magnetic stimulation (rTMS) applied to the left anterior intraparietal sulcus (aIPS) reduced both the inhibitory effect of implied actions on responses aligned with the passive objects and the advantage of those aligned with the active objects, but only when the active objects were contralateral to the stimulation. rTMS to the left lateral occipital areas (LO) did not significantly alter the influence of implied actions. The results reveal that the dorsal visual stream is crucial not only in single-object affordance processing, but also in responding to implied actions between objects.
Affiliation(s)
- Shan Xu
- School of Psychology, Beijing Normal University, Beijing 100875, China; School of Psychology, University of Birmingham, Birmingham B15 2TT, UK.
| | - Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
| | - Carmel Mevorach
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
| | - Dietmar Heinke
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
| |
|
31
|
Humphreys GW. Feature Confirmation in Object Perception: Feature Integration Theory 26 Years on from the Treisman Bartlett Lecture. Q J Exp Psychol (Hove) 2016; 69:1910-40. [DOI: 10.1080/17470218.2014.988736] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.
Affiliation(s)
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford, UK
32
Ishibashi R, Pobric G, Saito S, Lambon Ralph MA. The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts. Cogn Neuropsychol 2016; 33:241-56. [PMID: 27362967] [PMCID: PMC4989859] [DOI: 10.1080/02643294.2016.1188798]
Abstract
The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies.
Affiliation(s)
- Ryo Ishibashi
- Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, Manchester, UK; Human Brain Research Center, School of Medicine, Kyoto University, Kyoto, Japan
- Gorana Pobric
- Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, Manchester, UK
- Satoru Saito
- Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, Manchester, UK; Department of Cognitive Psychology in Education, Kyoto University, Kyoto, Japan
- Matthew A Lambon Ralph
- Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester, Manchester, UK
33
34
Quadflieg S, Gentile F, Rossion B. The neural basis of perceiving person interactions. Cortex 2015; 70:5-20. [PMID: 25697049] [DOI: 10.1016/j.cortex.2014.12.020]
Abstract
This study examined whether the grouping of people into meaningful social scenes (e.g., two people having a chat) impacts the basic perceptual analysis of each partaking individual. To explore this issue, we measured neural activity using functional magnetic resonance imaging (fMRI) while participants sex-categorized congruent as well as incongruent person dyads (i.e., two people interacting in a plausible or implausible manner). Incongruent person dyads elicited enhanced neural processing in several high-level visual areas dedicated to face and body encoding and in the posterior middle temporal gyrus compared to congruent person dyads. Incongruent and congruent person scenes were also successfully differentiated by a linear multivariate pattern classifier in the right fusiform body area and the left extrastriate body area. Finally, increases in the person scenes' meaningfulness as judged by independent observers were accompanied by enhanced activity in the bilateral posterior insula. These findings demonstrate that the processing of person scenes goes beyond a mere stimulus-bound encoding of their partaking agents, suggesting that changes in relations between agents affect their representation in category-selective regions of the visual cortex and beyond.
Affiliation(s)
- Susanne Quadflieg
- School of Experimental Psychology, University of Bristol, UK; Division of Psychology, New York University Abu Dhabi, UAE
- Francesco Gentile
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium; Department of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Bruno Rossion
- Psychological Sciences Research Institute and Institute of Neuroscience, University of Louvain, Louvain-la-Neuve, Belgium
35
Zhao L, Bai Y, Wang Y. Action representations activated by task-irrelevant information: is it really irrelevant? Scand J Psychol 2014; 56:18-27. [PMID: 25405292] [DOI: 10.1111/sjop.12178]
Abstract
Accessing action knowledge is believed to rely on the activation of action representations through the retrieval of functional, manipulative, and spatial information associated with objects. However, it remains unclear whether action representations can be activated in this way when the object information is irrelevant to the current judgment. The present study investigated this question by independently manipulating the correctness of three types of action-related information: the functional relation between the two objects, the grip applied to the objects, and the orientation of the objects. In each of three tasks in Experiment 1, participants evaluated the correctness of only one of the three information types (function, grip or orientation). Similar results were achieved with all three tasks: "correct" judgments were facilitated when the other dimensions were correct; however, "incorrect" judgments were facilitated when the other two dimensions were both correct and also when they were both incorrect. In Experiment 2, when participants attended to an action-irrelevant feature (object color), there was no interaction between function, grip, and orientation. These results clearly indicate that action representations can be activated by retrieval of functional, manipulative, and spatial knowledge about objects, even though this is task-irrelevant information.
Affiliation(s)
- Liang Zhao
- School of Psychology, Shaanxi Normal University, Xi'an, China; Shaanxi Provincial Key Laboratory of Behavior & Cognitive Neuroscience, Xi'an, China
36
Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex. Proc Natl Acad Sci U S A 2014; 111:11217-22. [PMID: 25024190] [DOI: 10.1073/pnas.1400559111]
Abstract
In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception.
37
38
Humphreys GW, Kumar S, Yoon EY, Wulff M, Roberts KL, Riddoch MJ. Attending to the possibilities of action. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130059. [PMID: 24018721] [PMCID: PMC3758202] [DOI: 10.1098/rstb.2013.0059]
Abstract
Actions taking place in the environment are critical for our survival. We review evidence on attention to action, drawing on sets of converging evidence from neuropsychological patients through to studies of the time course and neural locus of action-based cueing of attention in normal observers. We show that the presence of action relations between stimuli helps reduce visual extinction in patients with limited attention to the contralesional side of space, while the first saccades made by normal observers and early perceptual and attentional responses measured using electroencephalography/event-related potentials are modulated by preparation of action and by seeing objects being grasped correctly or incorrectly for action. With both normal observers and patients, there is evidence for two components to these effects based on both visual perceptual and motor-based responses. While the perceptual responses reflect factors such as the visual familiarity of the action-related information, the motor response component is determined by factors such as the alignment of the objects with the observer's effectors and not by the visual familiarity of the stimuli. In addition to this, we suggest that action relations between stimuli can be coded pre-attentively, in the absence of attention to the stimulus, and action relations cue perceptual and motor responses rapidly and automatically. At present, formal theories of visual attention are not set up to account for these action-related effects; we suggest ways that theories could be expected to enable action effects to be incorporated.
Affiliation(s)
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
- Sanjay Kumar
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
- Eun Young Yoon
- Korean NeuroTraining Center, Apsan-soonhwan Road 736, Nam-gu, Daegu, South Korea
- Melanie Wulff
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- M Jane Riddoch
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
39
Roberts KL, Humphreys GW. Distinguishing the effects of action relations and scene context on object perception. Visual Cognition 2013. [DOI: 10.1080/13506285.2013.851755]
40
Kumar S, Riddoch MJ, Humphreys G. Mu rhythm desynchronization reveals motoric influences of hand action on object recognition. Front Hum Neurosci 2013; 7:66. [PMID: 23471236] [PMCID: PMC3590458] [DOI: 10.3389/fnhum.2013.00066]
Abstract
We examined the effect of hand grip on object recognition by studying the modulation of the mu rhythm when participants made object decisions to objects and non-objects shown with congruent or incongruent hand-grip actions. Despite the grip responses being irrelevant to the task, mu rhythm activity on the scalp over motor and pre-motor cortex was sensitive to the congruency of the hand grip; in particular, the event-related desynchronization of the mu rhythm was more pronounced for familiar objects grasped with an appropriate grip than for objects given an inappropriate grasp. The power of mu activity also correlated with reaction times (RTs) to congruently gripped objects. The results suggest that familiar motor responses evoked by the appropriateness of a hand grip facilitate recognition responses to objects.
Affiliation(s)
- Sanjay Kumar
- Department of Experimental Psychology, Oxford University, Oxford, UK
41
Wulff M, Humphreys GW. Visual responses to action between unfamiliar object pairs modulate extinction. Neuropsychologia 2013; 51:622-32. [DOI: 10.1016/j.neuropsychologia.2013.01.004]
42
Baeck A, Wagemans J, Op de Beeck HP. The distributed representation of random and meaningful object pairs in human occipitotemporal cortex: the weighted average as a general rule. Neuroimage 2012; 70:37-47. [PMID: 23266747] [DOI: 10.1016/j.neuroimage.2012.12.023]
Abstract
Natural scenes typically contain multiple visual objects, often in interaction, such as when a bottle is used to fill a glass. Previous studies disagree about how multiple objects are represented and about the role of object position, and they did not pinpoint the effect of potential interactions between the objects. In an fMRI study, we presented four single objects in two different positions, along with object pairs consisting of all possible combinations of the single objects. Object pairs could form either a meaningful action configuration, in which the objects interact with each other, or a non-meaningful configuration. We found that for single objects and object pairs both identity and position were represented in multi-voxel activity patterns in LOC. The response patterns of object pairs were best predicted by a weighted average of the response patterns of the constituent objects, with the strongest single-object response (the max response) weighted more than the min response. The difference in weight between the max and the min object was larger for familiar action pairs than for other pairs when participants attended to the configuration. A weighted average thus relates the response patterns of object pairs to the response patterns of single objects, even when the objects interact.
Affiliation(s)
- Annelies Baeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Tiensestraat 102, 3000 Leuven, Belgium
43
Yoon EY, Humphreys GW, Kumar S, Rotshtein P. The Neural Selection and Integration of Actions and Objects: An fMRI Study. J Cogn Neurosci 2012; 24:2268-79. [DOI: 10.1162/jocn_a_00256]
Abstract
There is considerable evidence that there are anatomically and functionally distinct pathways for action and object recognition. However, little is known about how information about action and objects is integrated. This study provides fMRI evidence for task-based selection of brain regions associated with action and object processing, and on how the congruency between the action and the object modulates neural response. Participants viewed videos of objects used in congruent or incongruent actions and attended either to the action or the object in a one-back procedure. Attending to the action led to increased responses in a fronto-parietal action-associated network. Attending to the object activated regions within a fronto-inferior temporal network. Stronger responses for congruent action–object clips occurred in bilateral parietal, inferior temporal, and putamen. Distinct cortical and thalamic regions were modulated by congruency in the different tasks. The results suggest that (i) selective attention to action and object information is mediated through separate networks, (ii) object–action congruency evokes responses in action planning regions, and (iii) the selective activation of nuclei within the thalamus provides a mechanism to integrate task goals in relation to the congruency of the perceptual information presented to the observer.
44
Kumar S, Yoon EY, Humphreys GW. Perceptual and motor-based responses to hand actions on objects: evidence from ERPs. Exp Brain Res 2012; 220:153-64. [PMID: 22644235] [DOI: 10.1007/s00221-012-3126-4]
Abstract
We carried out a study examining the electrophysiological responses when participants made object decisions to objects and non-objects subject to congruent and incongruent hand-grip actions. Despite the grip responses being irrelevant to the task, event-related potentials were sensitive to the handgrip. There were effects of grip congruency on both P1 and N1 components, over both posterior and motor cortices, with the effects emerging most strongly for familiar objects. In addition, enhanced lateralized readiness potentials were observed for incongruent grips. The results suggest that there are increased perceptual and motor-based responses to objects and object-like stimuli that are grasped correctly, even when the grip is irrelevant to the task. This is consistent with the automatic coding of potential appropriate actions based on visual information from objects in the environment.
Affiliation(s)
- Sanjay Kumar
- Department of Experimental Psychology, Oxford University, South Parks Road, Oxford OX1 3UD, UK
45
The benefit of object interactions arises in the lateral occipital cortex independent of attentional modulation from the intraparietal sulcus: a transcranial magnetic stimulation study. J Neurosci 2011; 31:8320-4. [PMID: 21632952] [DOI: 10.1523/jneurosci.6450-10.2011]
Abstract
Our visual experience is generally not of isolated objects, but of scenes, where multiple objects are interacting. Such interactions (e.g., a watering can positioned to pour water toward a plant) have been shown to facilitate object identification compared with when the objects are depicted as not interacting (e.g., a watering can positioned away from the plant) (Green and Hummel, 2004, 2006). What is the neural basis for this advantage? Recent fMRI studies have identified the lateral occipital cortex (LO) as a potential neural origin of this behavioral benefit, as LO showed greater responses to object pairs depicted as interacting compared with when they are not (Kim and Biederman, 2010; Roberts and Humphreys, 2010). However, it is possible that LO was modulated by an attention-sensitive region, the intraparietal sulcus (IPS), which sometimes showed a similar pattern of responses as that of LO in the Kim and Biederman (2010) investigation. To test this hypothesis, we delivered transcranial magnetic stimulation (TMS) to human subjects' LO and IPS while they detected a target object that was or was not interacting with another object to form a scene. TMS delivered to LO but not IPS abolished the facilitation in identifying interacting objects compared with noninteracting depictions observed in the absence of TMS, suggesting that it is LO and not IPS that is critical for the coding of object interactions.
46
Abstract
Regions tuned to individual visual categories, such as faces and objects, have been discovered in the later stages of the ventral visual pathway in the cortex. But most visual experience is composed of scenes, where multiple objects are interacting. Such interactions are readily described by prepositions or verb forms, for example, a bird perched on a birdhouse. At what stage in the pathway does sensitivity to such interactions arise? Here we report that object pairs shown as interacting, compared with their side-by-side depiction (e.g., a bird besides a birdhouse), elicit greater activity in the lateral occipital complex, the earliest cortical region where shape is distinguished from texture. Novelty of the interactions magnified this gain, an effect that was absent in the side-by-side depictions. Scene-like relations are thus likely achieved simultaneously with the specification of object shape.
Affiliation(s)
- Jiye G Kim
- Department of Psychology, University of Southern California, Los Angeles, CA 90089-1061, USA
47
Kassuba T, Klinge C, Hölig C, Menz MM, Ptito M, Röder B, Siebner HR. The left fusiform gyrus hosts trisensory representations of manipulable objects. Neuroimage 2011; 56:1566-77. [DOI: 10.1016/j.neuroimage.2011.02.032]
48
Mizelle JC, Wheaton LA. The Neuroscience of Storing and Molding Tool Action Concepts: How "Plastic" is Grounded Cognition? Front Psychol 2010; 1:195. [PMID: 21833254] [PMCID: PMC3153804] [DOI: 10.3389/fpsyg.2010.00195]
Abstract
Choosing how to use tools to accomplish a task is a natural and seemingly trivial aspect of our lives, yet engages complex neural mechanisms. Recently, work in healthy populations has led to the idea that tool knowledge is grounded to allow for appropriate recall based on some level of personal history. This grounding has presumed neural loci for tool use, centered on parieto-temporo-frontal areas to fuse perception and action representations into one dynamic system. A challenge for this idea is related to one of its great benefits. For such a system to exist, it must be very plastic, to allow for the introduction of novel tools or concepts of tool use and modification of existing ones. Thus, learning new tool usage (familiar tools in new situations and new tools in familiar situations) must involve mapping into this grounded network while maintaining existing rules for tool usage. This plasticity may present a challenging breadth of encoding that needs to be optimally stored and accessed. The aim of this work is to explore the challenges of plasticity related to changing or incorporating representations of tool action within the theory of grounded cognition and propose a modular model of tool–object goal related accomplishment. While considering the neuroscience evidence for this approach, we will focus on the requisite plasticity for this system. Further, we will highlight challenges for flexibility and organization of already grounded tool actions and provide thoughts on future research to better evaluate mechanisms of encoding in the theory of grounded cognition.
Affiliation(s)
- J C Mizelle
- School of Applied Physiology, Georgia Institute of Technology, Atlanta, GA, USA
49
Action relations facilitate the identification of briefly-presented objects. Atten Percept Psychophys 2010; 73:597-612. [DOI: 10.3758/s13414-010-0043-0]