1. Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024;60:3557-3571. PMID: 38706370. DOI: 10.1111/ejn.16356.
Abstract
Extensive research has shown that observers are able to efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles - synchrony and common fate - on the grouping of biological movements. In Experiment 1, we find that brain responses coupled to four point-light figures walking together are enhanced when they move in sync vs. out of sync, but only when they are presented upright. In contrast, we found no effect of movement direction (i.e., common fate). In Experiment 2, we rule out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.
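The frequency-tagging logic described here can be illustrated with a toy computation (a minimal numpy sketch, not the authors' pipeline; the sampling rate, tagging frequency, amplitudes, and epoch length are all invented for illustration): a brain response coupled to a periodic stimulus shows up as a peak in the EEG amplitude spectrum at the stimulation frequency, so conditions can be compared by the spectral amplitude at that frequency.

```python
import numpy as np

def tagged_amplitude(signal, fs, tag_freq):
    """Single-sided amplitude of the signal's spectrum at the tagging frequency."""
    n = len(signal)
    spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Take the frequency bin closest to the tagging frequency
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

fs, tag_freq = 500, 1.2                 # Hz; illustrative values
t = np.arange(0, 10, 1.0 / fs)          # one 10 s epoch
rng = np.random.default_rng(0)
# Simulated EEG: noise plus a tagged response that is strong in one
# condition ("in sync") and weak in the other ("out of sync")
sync = 2.0 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)
nonsync = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1, t.size)

amp_sync = tagged_amplitude(sync, fs, tag_freq)
amp_nonsync = tagged_amplitude(nonsync, fs, tag_freq)
assert amp_sync > amp_nonsync  # enhanced response in the synchronous condition
```

The comparison at a single, experimenter-controlled frequency is what makes the technique robust to broadband noise.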
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
2. Hafri A, Bonner MF, Landau B, Firestone C. A Phone in a Basket Looks Like a Knife in a Cup: Role-Filler Independence in Visual Processing. Open Mind (Camb) 2024;8:766-794. PMID: 38957507. PMCID: PMC11219067. DOI: 10.1162/opmi_a_00146.
Abstract
When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations - e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup. Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not - even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and also controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.
Affiliation(s)
- Alon Hafri
- Department of Linguistics and Cognitive Science, University of Delaware
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Barbara Landau
- Department of Cognitive Science, Johns Hopkins University
- Chaz Firestone
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
3. Bunce C, Gehdu BK, Press C, Gray KLH, Cook R. Autistic adults exhibit typical sensitivity to changes in interpersonal distance. Autism Res 2024. PMID: 38828663. DOI: 10.1002/aur.3164.
Abstract
The visual processing differences seen in autism often impede individuals' visual perception of the social world. In particular, many autistic people exhibit poor face recognition. Here, we sought to determine whether autistic adults also show impaired perception of dyadic social interactions - a class of stimulus thought to engage face-like visual processing. Our focus was the perception of interpersonal distance. Participants completed distance change detection tasks, in which they had to make perceptual decisions about the distance between two actors. On half of the trials, participants judged whether the actors moved closer together; on the other half, whether they moved further apart. In a nonsocial control task, participants made similar judgments about two grandfather clocks. We also assessed participants' face recognition ability using standardized measures. The autistic and nonautistic observers showed similar levels of perceptual sensitivity to changes in interpersonal distance when viewing social interactions. As expected, however, the autistic observers showed clear signs of impaired face recognition. Despite putative similarities between the visual processing of faces and dyadic social interactions, our results suggest that these two facets of social vision may dissociate.
Affiliation(s)
- Carl Bunce
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- School of Psychology, University of Leeds, Leeds, UK
- Bayparvah Kaur Gehdu
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Clare Press
- Department of Experimental Psychology, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- School of Psychology, University of Leeds, Leeds, UK
4. Goupil N, Rayson H, Serraille É, Massera A, Ferrari PF, Hochmann JR, Papeo L. Visual Preference for Socially Relevant Spatial Relations in Humans and Monkeys. Psychol Sci 2024;35:681-693. PMID: 38683657. DOI: 10.1177/09567976241242995.
Abstract
As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect and why. In a preferential-looking time paradigm, human adults showed spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human development timescale, this preference emerged by 5 years, although young infants by 7 months of age already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads - shared by human adults, young children, and macaques - can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Holly Rayson
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Émilie Serraille
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Alice Massera
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Pier Francesco Ferrari
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Liuba Papeo
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
5. Tsantani M, Yon D, Cook R. Neural Representations of Observed Interpersonal Synchrony/Asynchrony in the Social Perception Network. J Neurosci 2024;44:e2009222024. PMID: 38527811. PMCID: PMC11097257. DOI: 10.1523/jneurosci.2009-22.2024.
Abstract
The visual perception of individuals is thought to be mediated by a network of regions in the occipitotemporal cortex that supports specialized processing of faces, bodies, and actions. In comparison, we know relatively little about the neural mechanisms that support the perception of multiple individuals and the interactions between them. The present study sought to elucidate the visual processing of social interactions by identifying which regions of the social perception network represent interpersonal synchrony. In an fMRI study with 32 human participants (26 female, 6 male), we used multivoxel pattern analysis to investigate whether activity in face-selective, body-selective, and interaction-sensitive regions across the social perception network supports the decoding of synchronous versus asynchronous head-nodding and head-shaking. Several regions were found to support significant decoding of synchrony/asynchrony, including extrastriate body area (EBA), face-selective and interaction-sensitive mid/posterior right superior temporal sulcus, and occipital face area. We also saw robust cross-classification across actions in the EBA, suggestive of movement-invariant representations of synchrony/asynchrony. Exploratory whole-brain analyses also identified a region of the right fusiform cortex that responded more strongly to synchronous than to asynchronous motion. Critically, perceiving interpersonal synchrony/asynchrony requires the simultaneous extraction and integration of dynamic information from more than one person. Hence, the representation of synchrony/asynchrony cannot be attributed to augmented or additive processing of individual actors. Our findings therefore provide important new evidence that social interactions recruit dedicated visual processing within the social perception network that extends beyond that engaged by the faces and bodies of the constituent individuals.
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Daniel Yon
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Richard Cook
- School of Psychology, University of Leeds, Leeds LS2 9JU, United Kingdom
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
6. Papeo L. What is abstract about seeing social interactions? Trends Cogn Sci 2024;28:390-391. PMID: 38632008. DOI: 10.1016/j.tics.2024.02.004.
Affiliation(s)
- Liuba Papeo
- Institute of Cognitive Sciences Marc Jeannerod -UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, France.
7. Zanon M, Lemaire BS, Papeo L, Vallortigara G. Innate sensitivity to face-to-face biological motion. iScience 2024;27:108793. PMID: 38299110. PMCID: PMC10828802. DOI: 10.1016/j.isci.2024.108793.
Abstract
Sensitivity to face-to-face stimuli configurations, which likely indicates interaction, seems to appear early in infants' development, and recently a preference for face-to-face (vs. other spatial configurations) has been shown to occur in macaque monkeys. It is unknown, however, whether such a preference is acquired through experience or as an evolutionary-given biological predisposition. Here, we exploited a precocial social animal, the domestic chick, as a model system to address this question. Visually naive chicks were tested for their spontaneous preferences for face-to-face vs. back-to-back hen dyads of point-light displays depicting biological motion. We found that female chicks have a spontaneous preference for the facing interactive configuration. Males showed no preference, as expected due to the well-known low social motivation of males in this highly polygynous species. These findings support the idea of an innate and sex-dependent predisposition toward social and interacting stimuli in a vertebrate brain such as that of chicks.
Affiliation(s)
- Mirko Zanon
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Liuba Papeo
- Institut des Sciences Cognitives - Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, France
8. Kabulska Z, Zhuang T, Lingnau A. Overlapping representations of observed actions and action-related features. Hum Brain Mapp 2024;45:e26605. PMID: 38379447. PMCID: PMC10879913. DOI: 10.1002/hbm.26605.
Abstract
The lateral occipitotemporal cortex (LOTC) has been shown to capture the representational structure of a smaller range of actions. In the current study, we carried out an fMRI experiment in which we presented human participants with images depicting 100 different actions and used representational similarity analysis (RSA) to determine which brain regions capture the semantic action space established using judgments of action similarity. Moreover, to determine the contribution of a wide range of action-related features to the neural representation of the semantic action space we constructed an action feature model on the basis of ratings of 44 different features. We found that the semantic action space model and the action feature model are best captured by overlapping activation patterns in bilateral LOTC and ventral occipitotemporal cortex (VOTC). An RSA on eight dimensions resulting from principal component analysis carried out on the action feature model revealed partly overlapping representations within bilateral LOTC, VOTC, and the parietal lobe. Our results suggest spatially overlapping representations of the semantic action space of a wide range of actions and the corresponding action-related features. Together, our results add to our understanding of the kind of representations along the LOTC that support action understanding.
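The core RSA step - correlating a model dissimilarity matrix with a neural one - can be sketched as follows (a toy numpy example with invented dimensions; the study used 100 actions, 44 rated features, and behavioral similarity judgments rather than these simulated values):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Off-diagonal upper triangle of a square matrix, as a vector."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman correlation computed as Pearson correlation of ranks."""
    ranks = lambda x: np.argsort(np.argsort(x))
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

# Simulated data: 10 "actions" x 30 voxels. The neural patterns are a
# noisy copy of the model features, so the two RDMs should correlate.
rng = np.random.default_rng(2)
model_features = rng.normal(0, 1, (10, 30))
neural_patterns = model_features + rng.normal(0, 0.3, (10, 30))

rsa_score = spearman(upper(rdm(model_features)), upper(rdm(neural_patterns)))
assert rsa_score > 0.5  # the neural geometry tracks the model geometry
```

Running this comparison in a searchlight or per region is what localizes where the semantic action space is captured.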
Affiliation(s)
- Zuzanna Kabulska
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Tonghe Zhuang
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Angelika Lingnau
- Faculty of Human Sciences, Institute of Psychology, Chair of Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
9. Liu H, Tang E, Guan C, Li J, Zheng J, Zhou D, Shen M, Chen H. Not socially blind: Unimpaired perception of social interaction in schizophrenia. Schizophr Res 2024;264:448-450. PMID: 38262311. DOI: 10.1016/j.schres.2023.12.027.
Affiliation(s)
- Huiying Liu
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Enze Tang
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Chenxiao Guan
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Jian Li
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Jiewei Zheng
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, China
10. Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024;44:e0250232023. PMID: 38124013. PMCID: PMC10860595. DOI: 10.1523/jneurosci.0250-23.2023.
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
11. Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024;34:343-351.e5. PMID: 38181794. DOI: 10.1016/j.cub.2023.12.009.
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA, thus, causally supports the efficient perception of social interactions.
Affiliation(s)
- Marco Gandolfo
- Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands; Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK.
- Etienne Abassi
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Eva Balgova
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK; Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK.
12. McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023;33:5035-5047.e8. PMID: 37918399. PMCID: PMC10841461. DOI: 10.1016/j.cub.2023.10.015.
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content - consistent with hierarchical organization - along the lateral visual stream and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
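The within-subject encoding-model approach - predicting each voxel's response to held-out clips from the labeled features via regularized regression - can be sketched like this (a minimal ridge-regression illustration on simulated data; the feature count, split sizes, and noise level are arbitrary stand-ins, not the study's values):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(3)
n_clips, n_feat = 250, 12
X = rng.normal(0, 1, (n_clips, n_feat))        # feature annotations per clip
true_w = rng.normal(0, 1, n_feat)              # a voxel's true feature tuning
y = X @ true_w + rng.normal(0, 0.5, n_clips)   # simulated voxel response

# Fit on the first 200 clips; evaluate prediction on the held-out 50
w = ridge_fit(X[:200], y[:200], lam=1.0)
pred = X[200:] @ w
r = np.corrcoef(pred, y[200:])[0, 1]
assert r > 0.5  # the encoding model predicts held-out responses
```

Held-out prediction accuracy, mapped voxel by voxel, is what reveals which feature sets are represented where; unique variance for a feature set is then assessed by comparing models with and without it.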
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA.
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
13. McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023;27:1165-1179. PMID: 37805385. PMCID: PMC10841760. DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually-selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA.
14. Malik M, Isik L. Relational visual representations underlie human social interaction recognition. Nat Commun 2023;14:7317. PMID: 37951960. PMCID: PMC10640586. DOI: 10.1038/s41467-023-43156-8.
Abstract
Humans effortlessly recognize social interactions from visual input. Attempts to model this ability have typically relied on generative inverse planning models, which make predictions by inverting a generative model of agents' interactions based on their inferred goals, suggesting humans use a similar process of mental inference to recognize interactions. However, growing behavioral and neuroscience evidence suggests that recognizing social interactions is a visual process, separate from complex mental state inference. Yet despite their success in other domains, visual neural network models have been unable to reproduce human-like interaction recognition. We hypothesize that humans rely on relational visual information in particular, and develop a relational, graph neural network model, SocialGNN. Unlike prior models, SocialGNN accurately predicts human interaction judgments across both animated and natural videos. These results suggest that humans can make complex social interaction judgments without an explicit model of the social and physical world, and that structured, relational visual representations are key to this behavior.
Affiliation(s)
- Manasi Malik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, USA.
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, USA.
15. Nudnou I, Post A, Saville A, Balas B. Putting people in context: ERP responses to bodies in natural scenes. PLoS One 2023;18:e0283673. PMID: 37883414. PMCID: PMC10602242. DOI: 10.1371/journal.pone.0283673.
Abstract
The N190 is a body-sensitive ERP component that responds to images of human bodies in different poses. In natural settings, bodies vary in posture and appear within complex, cluttered environments, frequently with other people. In many studies, however, such variability is absent. How does the N190 response change when observers see images that incorporate these sources of variability? In two experiments (N = 16 each), we varied the natural appearance of upright and inverted bodies to examine how the N190 amplitude, latency, and the Body-Inversion Effect (BIE) were affected by natural variability. In Experiment 1, we varied the number of people present in upright and inverted naturalistic scenes such that only one body, a subitizable number of bodies, or a "crowd" was present. In Experiment 2, we varied the natural body appearance by presenting bodies either as silhouettes or with photographic detail. Further, we varied the natural background appearance by either removing it or presenting individual bodies within a rich environment. Using component-based analyses of the N190, we found that the number of bodies in a scene reduced the N190 amplitude, but didn't affect the BIE (Experiment 1). Naturalistic body and background appearance (Experiment 2) also affected the N190, such that component amplitude was dramatically reduced by naturalistic appearance. To complement this analysis, we examined the contribution of spatiotemporal features (i.e., electrode × time point amplitude) via SVM decoding. This technique allows us to examine which timepoints across the entire waveform contribute the most to successful decoding of body orientation in each condition. This analysis revealed that later timepoints (after 300ms) contribute most to successful orientation decoding. These results demonstrate that natural appearance variability affects body processing at the N190 and that later ERP components may make important contributions to body processing in natural scenes.
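The decoding analysis - classifying body orientation from electrode × time point amplitudes and asking which timepoints contribute - can be sketched with a nearest-class-mean classifier standing in for the SVM (simulated data; trial, electrode, and timepoint counts are invented, and the signal is placed late in the epoch to mimic the reported result):

```python
import numpy as np

def timepoint_decoding(epochs, labels, n_train):
    """Decode condition at each timepoint, using all electrodes as features.

    epochs: (trials, electrodes, timepoints). Fits a nearest-class-mean
    classifier on the first n_train trials; returns accuracy per timepoint."""
    n_trials, n_elec, n_time = epochs.shape
    acc = np.zeros(n_time)
    for t in range(n_time):
        Xtr, Xte = epochs[:n_train, :, t], epochs[n_train:, :, t]
        ytr, yte = labels[:n_train], labels[n_train:]
        m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
        pred = (np.linalg.norm(Xte - m1, axis=1) <
                np.linalg.norm(Xte - m0, axis=1)).astype(int)
        acc[t] = np.mean(pred == yte)
    return acc

# 40 trials x 32 electrodes x 100 timepoints; orientation information
# only appears late in the epoch (after sample 60)
rng = np.random.default_rng(4)
labels = np.tile([0, 1], 20)                  # interleaved upright/inverted
epochs = rng.normal(0, 1, (40, 32, 100))
effect = rng.normal(0, 1, 32)
epochs[labels == 1, :, 60:] += effect[:, None] * 1.5

acc = timepoint_decoding(epochs, labels, n_train=20)
assert acc[60:].mean() > acc[:60].mean()  # late timepoints decode best
```

Plotting `acc` against time is the standard way to visualize when orientation information becomes available in the waveform.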
Affiliation(s)
- Ilya Nudnou: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Abigail Post: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Alyson Saville: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
- Benjamin Balas: Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States of America
16
Barzy M, Morgan R, Cook R, Gray KLH. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness. Q J Exp Psychol (Hove) 2023; 76:2293-2302. PMID: 36847458; PMCID: PMC10503233; DOI: 10.1177/17470218231161044.
Abstract
In change detection paradigms, changes to social or animate aspects of a scene are detected better and faster compared with non-social or inanimate aspects. While previous studies have focused on how changes to individual faces/bodies are detected, it is possible that individuals presented within a social interaction may be further prioritised, as the accurate interpretation of social interactions may convey a competitive advantage. Over three experiments, we explored change detection in complex real-world scenes, in which changes occurred through the removal of (a) an individual on their own, (b) an individual who was interacting with others, or (c) an object. In Experiment 1 (N = 50), we measured change detection for non-interacting individuals versus objects. In Experiment 2 (N = 49), we measured change detection for interacting individuals versus objects. Finally, in Experiment 3 (N = 85), we measured change detection for non-interacting versus interacting individuals. We also ran an inverted version of each task to determine whether differences were driven by low-level visual features. In Experiments 1 and 2, we found that changes to non-interacting and interacting individuals were detected better and more quickly than changes to objects. We also found inversion effects for both non-interaction and interaction changes, whereby they were detected more quickly when upright compared with inverted. No such inversion effect was seen for objects. This suggests that the high-level, social content of the images was driving the faster change detection for social versus object targets. Finally, we found that changes to individuals in non-interactions were detected faster than those presented within an interaction. Our results replicate the social advantage often found in change detection paradigms. However, we find that changes to individuals presented within social interaction configurations do not appear to be more quickly and easily detected than those in non-interacting configurations.
Affiliation(s)
- Mahsa Barzy: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Rachel Morgan: School of Mathematics and Statistics, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
17
Hochmann JR. Incomplete language-of-thought in infancy. Behav Brain Sci 2023; 46:e278. PMID: 37766647; DOI: 10.1017/s0140525x23001826.
Abstract
The view that infants possess a full-fledged propositional language-of-thought (LoT) is appealing, providing a unifying account for infants' precocious reasoning skills in many domains. However, careful appraisal of empirical evidence suggests that there is still no convincing evidence that infants possess discrete representations of abstract relations, suggesting that infants' LoT remains incomplete. Parallel arguments hold for perception.
Affiliation(s)
- Jean-Rémy Hochmann: CNRS UMR5229 - Institut des Sciences Cognitives Marc Jeannerod, Bron, France (https://sites.google.com/site/jrhochmann/); Université Lyon 1 Claude Bernard, Lyon, France
18
Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023; 113:103556. PMID: 37541010; DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed is unclear. Through four experiments, here we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social grouping) relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support the unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Yingying Wang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
19
Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? Imaging Neuroscience (Cambridge, Mass.) 2023; 1:1-20. PMID: 37719835; PMCID: PMC10503480; DOI: 10.1162/imag_a_00003.
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel: Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn: Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
20
Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. PMID: 37279640; DOI: 10.1016/j.cortex.2023.04.013.
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
Affiliation(s)
- Nicolas Goupil: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
21
Moreau Q, Parrotta E, Pesci UG, Era V, Candidi M. Early categorization of social affordances during the visual encoding of bodily stimuli. Neuroimage 2023; 274:120151. PMID: 37191657; DOI: 10.1016/j.neuroimage.2023.120151.
Abstract
Interpersonal interactions rely on various communication channels, both verbal and non-verbal, through which information regarding one's intentions and emotions are perceived. Here, we investigated the neural correlates underlying the visual processing of hand postures conveying social affordances (i.e., hand-shaking), compared to control stimuli such as hands performing non-social actions (i.e., grasping) or showing no movement at all. Combining univariate and multivariate analysis on electroencephalography (EEG) data, our results indicate that occipito-temporal electrodes show early differential processing of stimuli conveying social information compared to non-social ones. First, the amplitude of the Early Posterior Negativity (EPN, an Event-Related Potential related to the perception of body parts) is modulated differently during the perception of social and non-social content carried by hands. Moreover, our multivariate classification analysis (MultiVariate Pattern Analysis - MVPA) expanded the univariate results by revealing early (<200 ms) categorization of social affordances over occipito-parietal sites. In conclusion, we provide new evidence suggesting that the encoding of socially relevant hand gestures is categorized in the early stages of visual processing.
Affiliation(s)
- Q Moreau: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- E Parrotta: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- U G Pesci: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- V Era: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- M Candidi: Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
22
Kabulska Z, Lingnau A. The cognitive structure underlying the organization of observed actions. Behav Res Methods 2023; 55:1890-1906. PMID: 35788973; PMCID: PMC10250259; DOI: 10.3758/s13428-022-01894-5.
Abstract
In daily life, we frequently encounter actions performed by other people. Here we aimed to examine the key categories and features underlying the organization of a wide range of actions in three behavioral experiments (N = 378 participants). In Experiment 1, we used a multi-arrangement task of 100 different actions. Inverse multidimensional scaling and hierarchical clustering revealed 11 action categories, including Locomotion, Communication, and Aggressive actions. In Experiment 2, we used a feature-listing paradigm to obtain a wide range of action features that were subsequently reduced to 59 key features and used in a rating study (Experiment 3). A direct comparison of the feature ratings obtained in Experiment 3 between actions belonging to the categories identified in Experiment 1 revealed a number of features that appear to be critical for the distinction between these categories, e.g., the features Harm and Noise for the category Aggressive actions, and the features Targeting a person and Contact with others for the category Interaction. Finally, we found that a part of the category-based organization is explained by a combination of weighted features, whereas a significant proportion of variability remained unexplained, suggesting that there are additional sources of information that contribute to the categorization of observed actions. The characterization of action categories and their associated features serves as an important extension of previous studies examining the cognitive structure of actions. Moreover, our results may serve as the basis for future behavioral, neuroimaging and computational modeling studies.
Affiliation(s)
- Zuzanna Kabulska: Department of Psychology, Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Angelika Lingnau: Department of Psychology, Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
23
Betti S, Zani G, Guerra S, Granziol U, Castiello U, Begliomini C, Sartori L. When Corticospinal Inhibition Favors an Efficient Motor Response. Biology 2023; 12:332. PMID: 36829607; PMCID: PMC9953307; DOI: 10.3390/biology12020332.
Abstract
Many daily activities involve responding to the actions of other people. However, the functional relationship between the motor preparation and execution phases still needs to be clarified. With the combination of different and complementary experimental techniques (i.e., motor excitability measures, reaction times, electromyography, and dyadic 3-D kinematics), we investigated the behavioral and neurophysiological signatures characterizing different stages of a motor response in contexts calling for an interactive action. Participants were requested to perform an action (i.e., stirring coffee or lifting a coffee cup) following a co-experimenter's request gesture. Another condition, in which a non-interactive gesture was used, was also included. Greater corticospinal inhibition was found when participants prepared their motor response after observing an interactive request, compared to a non-interactive gesture. This, in turn, was associated with faster and more efficient action execution in kinematic terms (i.e., a social motor priming effect). Our results provide new insights on the inhibitory and facilitatory drives guiding social motor response generation. Altogether, the integration of behavioral and neurophysiological indexes allowed us to demonstrate that a more efficient action execution followed a greater corticospinal inhibition. These indexes provide a full picture of motor activity at both planning and execution stages.
Affiliation(s)
- Sonia Betti (corresponding author): Department of Psychology, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Rasi e Spinelli 176, 47521 Cesena, Italy; Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Giovanni Zani: School of Psychology, Victoria University of Wellington, Kelburn Parade 20, Wellington 6012, New Zealand
- Silvia Guerra: Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Umberto Granziol: Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Umberto Castiello: Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy; Padua Center for Network Medicine, University of Padova, Via Francesco Marzolo 8, 35131 Padova, Italy
- Chiara Begliomini: Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy; Padova Neuroscience Center, University of Padova, Via Giuseppe Orus 2, 35131 Padova, Italy
- Luisa Sartori: Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy; Padova Neuroscience Center, University of Padova, Via Giuseppe Orus 2, 35131 Padova, Italy
24
Dapor C, Sperandio I, Meconi F. Fading boundaries between the physical and the social world: Insights and novel techniques from the intersection of these two fields. Front Psychol 2023; 13:1028150. PMID: 36861005; PMCID: PMC9969107; DOI: 10.3389/fpsyg.2022.1028150.
Abstract
This review focuses on the subtle interactions between sensory input and social cognition in visual perception. We suggest that body indices, such as gait and posture, can mediate such interactions. Recent trends in cognitive research are trying to overcome approaches that define perception as stimulus-centered and are pointing toward a more embodied agent-dependent perspective. According to this view, perception is a constructive process in which sensory inputs and motivational systems contribute to building an image of the external world. A key notion emerging from new theories on perception is that the body plays a critical role in shaping our perception. Depending on our arm's length, height and capacity of movement, we create our own image of the world based on a continuous compromise between sensory inputs and expected behavior. We use our bodies as natural "rulers" to measure both the physical and the social world around us. We point out the necessity of an integrative approach in cognitive research that takes into account the interplay between social and perceptual dimensions. To this end, we review long-established and novel techniques aimed at measuring bodily states and movements, and their perception, with the assumption that only by combining the study of visual perception and social cognition can we deepen our understanding of both fields.
Affiliation(s)
- Cecilia Dapor: Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
25
Varrier RS, Finn ES. Seeing Social: A Neural Signature for Conscious Perception of Social Interactions. J Neurosci 2022; 42:9211-9226. PMID: 36280263; PMCID: PMC9761685; DOI: 10.1523/jneurosci.0859-22.2022.
Abstract
Social information is some of the most ambiguous content we encounter in our daily lives, yet in experimental contexts, percepts of social interactions-that is, whether an interaction is present and if so, the nature of that interaction-are often dichotomized as correct or incorrect based on experimenter-assigned labels. Here, we investigated the behavioral and neural correlates of subjective (or conscious) social perception using data from the Human Connectome Project in which participants (n = 1049; 486 men, 562 women) viewed animations of geometric shapes during fMRI and indicated whether they perceived a social interaction or random motion. Critically, rather than experimenter-assigned labels, we used observers' own reports of "Social" or "Non-social" to classify percepts and characterize brain activity, including leveraging a particularly ambiguous animation perceived as "Social" by some but "Non-social" by others to control for visual input. Behaviorally, observers were biased toward perceiving information as social (vs non-social); and neurally, observer reports (compared with experimenter labels) explained more variance in activity across much of the brain. Using "Unsure" reports, we identified several regions that responded parametrically to perceived socialness. Neural responses to social versus non-social content diverged early in time and in the cortical hierarchy. Finally, individuals with higher internalizing trait scores showed both a higher response bias toward "Social" and an inverse relationship with activity in default mode and visual association areas while scanning for social information. Findings underscore the subjective nature of social perception and the importance of using observer reports to study percepts of social interactions.
Significance Statement: Simple animations involving two or more geometric shapes have been used as a gold standard to understand social cognition and impairments therein. Yet, experimenter-assigned labels of what is social versus non-social are frequently used as a ground truth, despite the fact that percepts of such ambiguous social stimuli are highly subjective. Here, we used behavioral and fMRI data from a large sample of neurotypical individuals to show that participants' responses reveal subtle behavioral biases, help us study neural responses to social content more precisely, and covary with internalizing trait scores. Our findings underscore the subjective nature of social perception and the importance of considering observer reports in studying behavioral and neural dynamics of social perception.
Affiliation(s)
- Rekha S Varrier: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Emily S Finn: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
26
Yin J, Csibra G, Tatone D. Structural asymmetries in the representation of giving and taking events. Cognition 2022; 229:105248. PMID: 35961163; DOI: 10.1016/j.cognition.2022.105248.
Abstract
Across languages, GIVE and TAKE verbs have different syntactic requirements: GIVE mandates a patient argument to be made explicit in the clause structure, whereas TAKE does not. Experimental evidence suggests that this asymmetry is rooted in prelinguistic assumptions about the minimal number of event participants that each action entails. The present study provides corroborating evidence for this proposal by investigating whether the observation of giving and taking actions modulates the inclusion of patients in the represented event. Participants were shown events featuring an agent (A) transferring an object to, or collecting it from, an animate target (B) or an inanimate target (a rock), and their sensitivity to changes in pair composition (AB vs. AC) and action role (AB vs. BA) was measured. Change sensitivity was affected by the type of target approached when the agent transferred the object (Experiment 1), but not when she collected it (Experiment 2), or when an outside force carried out the transfer (Experiment 3). Although these object-displacing actions could be equally interpreted as interactive (i.e., directed towards B), this construal was adopted only when B could be perceived as putative patient of a giving action. This evidence buttresses the proposal that structural asymmetries in giving and taking, as reflected in their syntactic requirements, may originate from prelinguistic assumptions about the minimal event participants required for each action to be teleologically well-formed.
Affiliation(s)
- Jun Yin: Department of Psychology, Ningbo University, Ningbo, PR China
- Gergely Csibra: Department of Cognitive Science, Central European University, Vienna, Austria; Department of Psychological Sciences, Birkbeck, University of London, UK
- Denis Tatone: Department of Cognitive Science, Central European University, Vienna, Austria
27
Landsiedel J, Daughters K, Downing PE, Koldewyn K. The role of motion in the neural representation of social interactions in the posterior temporal cortex. Neuroimage 2022; 262:119533. PMID: 35931309; DOI: 10.1016/j.neuroimage.2022.119533.
Abstract
Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), among others, as an important region for processing social interaction. This research, however, has presented images or videos, and thus the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely with static cues. Regions in the social brain, including SI-pSTS and extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than is EBA, both bilateral SI-pSTS and EBA showed a greater response to social interactions compared to non-interactions and both regions responded more strongly to videos than static images. Indeed, both regions showed higher responses to interactions than independent actions in videos and intact sequences, but not in other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations, and selectivity for static images of bodies, make positive and independent contributions to this effect across the LOTC region. Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
Affiliation(s)
- Paul E Downing: School of Human and Behavioural Sciences, Bangor University
- Kami Koldewyn: School of Human and Behavioural Sciences, Bangor University
28
Abassi E, Papeo L. Behavioral and neural markers of visual configural processing in social scene perception. Neuroimage 2022; 260:119506. PMID: 35878724; DOI: 10.1016/j.neuroimage.2022.119506.
Abstract
Research on face perception has revealed highly specialized visual mechanisms such as configural processing, and provided markers of interindividual differences (including disease risks and alterations) in visuo-perceptual abilities that traffic in social cognition. Is face perception unique in degree or kind of mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing (i.e., a stronger response to facing vs. non-facing dyads, and greater susceptibility to stimulus inversion for facing vs. non-facing dyads), and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that the individual performance in body-dyad perception is reliable, stable over time and correlated with the individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms (and brain areas), analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic (visual) processes that lay the foundations for understanding others, their relationships and interactions.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France.
| | - Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France.
| |
|
29
|
Dima DC, Tomita TM, Honey CJ, Isik L. Social-affective features drive human representations of observed actions. eLife 2022; 11:75027. [PMID: 35608254 PMCID: PMC9159752 DOI: 10.7554/elife.75027] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 05/24/2022] [Indexed: 11/13/2022] Open
Abstract
Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
Affiliation(s)
- Diana C Dima
- Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
| | - Tyler M Tomita
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
| | - Christopher J Honey
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
| | - Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, United States
| |
|
30
|
Affiliation(s)
- Ilenia Paparella
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
| | - Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
| |
|
31
|
Grichtchouk O, Oliveira JM, Campagnoli RR, Franklin C, Correa MF, Pereira MG, Vargas CD, David IA, Souza GGL, Gleiser S, Keil A, Rocha-Rego V, Volchan E. Visuo-Motor Affective Interplay: Bonding Scenes Promote Implicit Motor Pre-dispositions Associated With Social Grooming-A Pilot Study. Front Psychol 2022; 13:817699. [PMID: 35465505 PMCID: PMC9022038 DOI: 10.3389/fpsyg.2022.817699] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Accepted: 03/11/2022] [Indexed: 12/02/2022] Open
Abstract
Proximity and interpersonal contact are prominent components of social connection. Giving affective touch to others is fundamental for human bonding. This brief report presents preliminary results from a pilot study. It explores whether exposure to bonding scenes impacts the activity of specific muscles related to physical interaction. Finger flexion is a very important component of most actions of affectionate contact. We explored the visuo-motor affective interplay by priming participants with bonding scenes and assessing the electromyographic activity of the finger flexor muscle, in the absence of any overt movements. Photographs of dyads in social interaction and of the same dyads not interacting were employed. We examined the effects on the electromyographic activity: (i) during passive exposure to the pictures, and (ii) at picture offset, while expecting the signal to perform a finger flexion task. Interacting dyads, compared to matched non-interacting dyads, increased electromyographic activity of the finger flexor muscle in both contexts. Specific capture of visual bonding cues at the level of the visual cortex had been described in the literature. Here we showed that the neural processing of visual bonding cues reaches the finger flexor muscle. Moreover, previous visualization of bonding cues enhanced background electromyographic activity during motor preparation to perform the finger flexion task, which might reflect a sustained leakage of central motor activity downstream, leading to increased firing of the respective motor neurons. These data suggest, at the effector level, an implicit visuo-motor connection in which social interaction cues evoke intrinsic dispositions toward affectionate social behavior.
Affiliation(s)
- Olga Grichtchouk
- Instituto de Biofísica Carlos Chagas Filho, Avenida Carlos Chagas Filho, Centro de Ciências da Saúde, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Jose M Oliveira
- Instituto de Biofísica Carlos Chagas Filho, Avenida Carlos Chagas Filho, Centro de Ciências da Saúde, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Rafaela R Campagnoli
- Instituto Biomédico, Universidade Federal Fluminense, Niterói, Brazil.,Instituto de Biologia, Universidade Federal Fluminense, Niterói, Brazil
| | - Camila Franklin
- Instituto de Psiquiatria, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Monica F Correa
- Instituto de Psiquiatria, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Mirtes G Pereira
- Instituto Biomédico, Universidade Federal Fluminense, Niterói, Brazil
| | - Claudia D Vargas
- Instituto de Biofísica Carlos Chagas Filho, Avenida Carlos Chagas Filho, Centro de Ciências da Saúde, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Isabel A David
- Instituto Biomédico, Universidade Federal Fluminense, Niterói, Brazil.,Instituto de Biologia, Universidade Federal Fluminense, Niterói, Brazil
| | - Gabriela G L Souza
- Departamento de Ciências Biológicas, Universidade Federal de Ouro Preto, Ouro Preto, Brazil
| | - Sonia Gleiser
- Instituto de Psiquiatria, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Andreas Keil
- Department of Psychology, Center for the Study of Emotion and Attention, University of Florida, Gainesville, FL, United States
| | - Vanessa Rocha-Rego
- Instituto de Biofísica Carlos Chagas Filho, Avenida Carlos Chagas Filho, Centro de Ciências da Saúde, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| | - Eliane Volchan
- Instituto de Biofísica Carlos Chagas Filho, Avenida Carlos Chagas Filho, Centro de Ciências da Saúde, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
| |
|
32
|
Rosén J, Kastrati G, Kuja-Halkola R, Larsson H, Åhs F. A neuroimaging study of interpersonal distance in identical and fraternal twins. Hum Brain Mapp 2022; 43:3508-3523. [PMID: 35417056 PMCID: PMC9248319 DOI: 10.1002/hbm.25864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 03/18/2022] [Accepted: 03/23/2022] [Indexed: 11/25/2022] Open
Abstract
Keeping appropriate interpersonal distance is an evolutionarily conserved behavior that can be adapted based on learning. Detailed knowledge on how interpersonal space is represented in the brain, and whether such representation is genetically influenced, is lacking. We measured brain function using functional magnetic resonance imaging in 294 twins (71 monozygotic, 76 dizygotic pairs) performing a distance task where neural responses to human figures were compared to cylindrical blocks. Proximal viewing distance of human figures, compared to cylinders, facilitated responses in the occipital face area (OFA) and the superficial part of the amygdala, which is consistent with these areas playing a role in monitoring interpersonal distance. Using the classic twin method, we observed a genetic influence on interpersonal distance-related activation in the OFA, but not in the amygdala. Results suggest that genetic factors may influence interpersonal distance monitoring via the OFA, whereas the amygdala may play a role in experience-dependent adjustments of interpersonal distance.
Affiliation(s)
- Jörgen Rosén
- Department of Psychology and Social Work, Mid Sweden University, Östersund, Sweden
| | - Granit Kastrati
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Ralf Kuja-Halkola
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
| | - Henrik Larsson
- Department of Medical Sciences, Örebro University, Örebro, Sweden
| | - Fredrik Åhs
- Department of Psychology and Social Work, Mid Sweden University, Östersund, Sweden
| |
|
33
|
Pesquita A, Bernardet U, Richards BE, Jensen O, Shapiro K. Isolating Action Prediction from Action Integration in the Perception of Social Interactions. Brain Sci 2022; 12:432. [PMID: 35447965 PMCID: PMC9031105 DOI: 10.3390/brainsci12040432] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 03/08/2022] [Accepted: 03/21/2022] [Indexed: 02/01/2023] Open
Abstract
Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners' actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action-reaction interaction between two actors. At the start of each action-reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor's reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates by 150 ms or 300 ms. Fifty percent of the action-reaction trials were semantically congruent, and the remaining were incongruent; e.g., one actor offers to shake hands, and the other reciprocally shakes their hand (congruent action-reaction) versus one actor offers to shake hands, and the other leans down (incongruent action-reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action-reaction sequences is aided by temporal predictions. The findings supported this hypothesis: linear speed-accuracy scores showed that congruency judgments were facilitated by temporally aligned occlusion durations and reaction frames, compared to 300 ms deviations, suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between participants with higher autistic traits and their sensitivity to temporal deviations.
Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions in isolation from action integration confounds.
Affiliation(s)
- Ana Pesquita
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Ulysses Bernardet
- Aston Institute of Urban Technology and the Environment (ASTUTE), Aston University, Birmingham B4 7ET, UK;
| | - Bethany E. Richards
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| | - Kimron Shapiro
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK; (B.E.R.); (O.J.); (K.S.)
| |
|
34
|
Goupil N, Papeo L, Hochmann J. Visual perception grounding of social cognition in preverbal infants. Infancy 2022; 27:210-231. [DOI: 10.1111/infa.12453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 11/22/2021] [Accepted: 01/02/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
| | - Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
| | - Jean-Rémy Hochmann
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
| |
|
35
|
Lin J, Huangliang J, He Y, Duan J, Yin J. The recognition of social intentions based on the information of minimizing costs: EEG and behavioral evidences. Acta Psychologica Sinica 2022. [DOI: 10.3724/sp.j.1041.2022.00012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
36
|
Abstract
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
Affiliation(s)
- Daniel Kaiser
- Justus-Liebig-Universität Gießen, Germany.,Philipps-Universität Marburg, Germany.,University of York, United Kingdom
| | - Radoslaw M Cichy
- Freie Universität Berlin, Germany.,Humboldt-Universität zu Berlin, Germany.,Bernstein Centre for Computational Neuroscience Berlin, Germany
| |
|
37
|
Vestner T, Over H, Gray KLH, Tipper SP, Cook R. Searching for people: Non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs. Cognition 2021; 214:104737. [PMID: 33901835 PMCID: PMC8346951 DOI: 10.1016/j.cognition.2021.104737] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 03/05/2021] [Accepted: 04/12/2021] [Indexed: 11/24/2022]
Abstract
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and types of common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention due to a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation when distractor pairs are arranged back-to-back.
Affiliation(s)
- Tim Vestner
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
| | - Harriet Over
- Department of Psychology, University of York, York, UK
| | - Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | | | - Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK.
| |
|
38
|
Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021; 31:2670-2685. [PMID: 33401307 DOI: 10.1093/cercor/bhaa382] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 11/05/2020] [Accepted: 11/25/2020] [Indexed: 11/12/2022] Open
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
| | - Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
| | - Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
| |
|
39
|
Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770 DOI: 10.1016/j.tics.2021.01.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 01/05/2021] [Accepted: 01/18/2021] [Indexed: 11/16/2022]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen, revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA.
| | - Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA.
| |
|
40
|
Bunce C, Gray KLH, Cook R. The perception of interpersonal distance is distorted by the Müller-Lyer illusion. Sci Rep 2021; 11:494. [PMID: 33436801 PMCID: PMC7803751 DOI: 10.1038/s41598-020-80073-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 12/14/2020] [Indexed: 11/10/2022] Open
Abstract
There is growing interest in how human observers perceive social scenes containing multiple people. Interpersonal distance is a critical feature when appraising these scenes; proxemic cues are used by observers to infer whether two people are interacting, the nature of their relationship, and the valence of their current interaction. Presently, however, remarkably little is known about how interpersonal distance is encoded within the human visual system. Here we show that the perception of interpersonal distance is distorted by the Müller-Lyer illusion. Participants perceived the distance between two target points to be compressed or expanded depending on whether face pairs were positioned inside or outside the to-be-judged interval. This illusory bias was found to be unaffected by manipulations of face direction. These findings aid our understanding of how human observers perceive interpersonal distance and may inform theoretical accounts of the Müller-Lyer illusion.
Affiliation(s)
- Carl Bunce
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E7HX, UK
| | - Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E7HX, UK.
| |
|
41
|
Vestner T, Gray KLH, Cook R. Visual search for facing and non-facing people: The effect of actor inversion. Cognition 2020; 208:104550. [PMID: 33360076 DOI: 10.1016/j.cognition.2020.104550] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 12/08/2020] [Accepted: 12/11/2020] [Indexed: 10/22/2022]
Abstract
In recent years, there has been growing interest in how human observers perceive, attend to, and recall social interactions viewed from third-person perspectives. One of the interesting findings to emerge from this new literature is the search advantage for facing dyads. When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Interestingly, the search advantage for facing dyads appears to be sensitive to the orientation of the people depicted. While front-to-front target pairs are found faster than back-to-back targets when target and distractor pairings are shown upright, front-to-front and back-to-back targets are found equally quickly when pairings are shown upside-down. In the present study, we sought to better understand why the search advantage for facing dyads is sensitive to the orientation of the people depicted. To begin, we show that the orientation sensitivity of the search advantage is seen with dyads constructed from faces only, and from bodies with the head and face occluded. We replicate these effects using two different visual search paradigms. We go on to show that individual faces and bodies, viewed in profile, produce strong attentional cueing effects when shown upright, but not when presented upside-down. Together with recent evidence that arrows arranged front-to-front also produce the search advantage for facing dyads, these findings support the view that the search advantage is a by-product of the ability of constituent elements to direct observers' visuo-spatial attention.
Affiliation(s)
- Tim Vestner
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
| | - Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK.
| |
|
42
|
Schweinberger SR, Dobel C. Why twos in human visual perception? A possible role of prediction from dynamic synchronization in interaction. Cortex 2020; 135:355-357. [PMID: 33234236 DOI: 10.1016/j.cortex.2020.09.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 09/23/2020] [Indexed: 12/01/2022]
Affiliation(s)
- Stefan R Schweinberger
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Switzerland. http://www.allgpsy.uni-jena.de
| | - Christian Dobel
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Friedrich Schiller University of Jena, Germany
| |
|