1. Huang L, Du F, Huang W, Ren H, Qiu W, Zhang J, Wang Y. Three-stage Dynamic Brain-cognitive Model of Understanding Action Intention Displayed by Human Body Movements. Brain Topogr 2024; 37:1055-1067. PMID: 38874853. DOI: 10.1007/s10548-024-01061-3.
Abstract
The ability to comprehend the intention conveyed through human body movements is crucial for effective interpersonal interactions. If people cannot understand the intention behind other individuals' isolated or interactive actions, those actions become meaningless to them. Psychologists have investigated the cognitive processes and neural representations involved in understanding action intention, yet a cohesive theoretical explanation remains elusive. Hence, we review existing literature on the neural correlates of action intention and propose a putative Three-stage Dynamic Brain-cognitive Model of understanding action intention, comprising body perception, action identification, and intention understanding. Specifically, at the first stage, body parts and shapes are processed by brain regions such as the extrastriate and fusiform body areas; during the second stage, differentiating observed actions relies on configuring the relationships between body parts, facilitated by activation of the Mirror Neuron System; the last stage involves identifying various intention categories, recruiting the Mentalizing System, with different activation patterns depending on the nature of the intentions participants are dealing with. Finally, we delve into clinical practice, such as model-based intervention training for individuals with autism spectrum disorders who encounter difficulties in interpersonal communication.
Affiliation(s)
- Liang Huang
  - Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
  - Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Fangyuan Du
  - Fuzhou University of International Studies and Trade, Fuzhou, China
- Wenxin Huang
  - Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
  - School of Management, Zhejiang University of Technology, Hangzhou, China
- Hanlin Ren
  - Third People's Hospital of Zhongshan, Zhongshan, China
- Wenzhen Qiu
  - Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
- Jiayi Zhang
  - Fujian Key Laboratory of Applied Cognition and Personality, Minnan Normal University, Zhangzhou, China
- Yiwen Wang
  - The School of Economics and Management, Fuzhou University, Fuzhou, China
2. Bierlich AM, Scheel NT, Traiger LS, Keeser D, Tepest R, Georgescu AL, Koehler JC, Plank IS, Falter-Wagner CM. Neural Mechanisms of Social Interaction Perception: Observing Interpersonal Synchrony Modulates Action Observation Network Activation and Is Spared in Autism. Hum Brain Mapp 2024; 45:e70052. PMID: 39449147. PMCID: PMC11502411. DOI: 10.1002/hbm.70052.
Abstract
How the temporal dynamics of social interactions are perceived arguably plays an important role in how one engages in social interactions and in how difficulties establishing smooth social interactions may arise. One aspect of these temporal dynamics is the mutual coordination of individuals' behaviors during social interaction, otherwise known as behavioral interpersonal synchrony (IPS). Behavioral IPS has been studied increasingly in various contexts, for example as a feature of the social interaction difficulties inherent to autism. To fully understand the temporal dynamics of social interactions, or reductions thereof in autism, the neural basis of IPS perception needs to be established. Thus, the current study's aim was twofold: to establish the basic neuro-perceptual processing of IPS in social interactions for typical observers, and to test whether it differs for autistic individuals. In a task-based fMRI paradigm, participants viewed short, silent video vignettes of humans during social interactions featuring variation in behavioral IPS. The results show that observing behavioral IPS modulates the Action Observation Network (AON). Interestingly, autistic participants showed neural activation patterns similar to those of non-autistic participants, likewise modulated by the behavioral IPS observed in the videos, suggesting that the perception of the temporal dynamics of social interactions is spared and may not underlie the reduced behavioral IPS often observed in autism. Nevertheless, a general difference in processing social interactions was found in autistic observers, characterized by decreased neural activation in the right middle frontal gyrus, angular gyrus, and superior temporal areas.
These findings demonstrate that although the autistic and non-autistic groups differed in the neural processing of social interaction perception, the temporal dynamics of these interactions were not the source of those differences. Hence, spared recruitment of the AON for processing the temporal dynamics of social interactions does not account for the widely reported attenuation of IPS in autism, nor for the presently observed group differences in social interaction perception.
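Behavioral IPS in video material of this kind is commonly quantified as the peak cross-correlation between two actors' movement time series. The sketch below illustrates that general idea with synthetic data; the sampling rate, lag, and noise level are invented for illustration and are not taken from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical head-movement energy of two actors, sampled at 25 Hz for 10 s;
# actor B loosely follows actor A with a 200 ms lag.
fs, lag_s = 25, 0.2
a = rng.normal(size=250)
b = np.roll(a, int(lag_s * fs)) + 0.5 * rng.normal(size=250)

def xcorr_peak(x, y, max_lag):
    """Peak normalized cross-correlation of y relative to x within +/- max_lag samples."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = range(-max_lag, max_lag + 1)
    r = [np.mean(x * np.roll(y, -k)) for k in lags]
    best = int(np.argmax(r))
    return r[best], list(lags)[best]

peak_r, peak_lag = xcorr_peak(a, b, max_lag=fs)  # search within +/- 1 s
print(f"peak r = {peak_r:.2f} at lag {peak_lag / fs:.2f} s")
```

A high peak at a small positive lag would indicate that B's movements track A's; uncoordinated actors would yield a flat, near-zero cross-correlation profile.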
Affiliation(s)
- Afton M. Bierlich
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Nanja T. Scheel
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Leora S. Traiger
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Daniel Keeser
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
  - NeuroImaging Core Unit Munich (NICUM), LMU University Hospital, LMU Munich, Munich, Germany
- Ralf Tepest
  - Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Alexandra L. Georgescu
  - Thymia Limited, London, UK
  - Department of Psychology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Jana C. Koehler
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- Irene Sophia Plank
  - Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
3. Cross ES, Darda KM, Moffat R, Muñoz L, Humphries S, Kirsch LP. Mutual gaze and movement synchrony boost observers' enjoyment and perception of togetherness when watching dance duets. Sci Rep 2024; 14:24004. PMID: 39402066. PMCID: PMC11473960. DOI: 10.1038/s41598-024-72659-7.
Abstract
As social beings, we are adept at coordinating our body movements and gaze with others. Often, when coordinating with another person, we orient ourselves to face them, as mutual gaze provides valuable cues pertaining to attention and intentions. Moreover, movement synchrony and mutual gaze are associated with prosocial outcomes, yet the perceptual consequences of these forms of coordination remain poorly understood. Across two experiments, we assessed how movement synchrony and gaze direction influence observers' perceptions of dyads. Observers' behavioural responses indicated that dyads are perceived as more socially connected and are more enjoyable to watch when moving synchronously and facing each other. Neuroimaging results showed modulation of the Action Observation and Theory of Mind networks by movement synchrony and mutual gaze, with more robust brain activity when evaluating togetherness (i.e., active and intentional collaboration) than aesthetic value (i.e., enjoyment). A fuller understanding of the consequences of movement synchrony and mutual gaze from the observer's viewpoint holds important implications for social perception, in terms of how observers intuit social relationships within dyads, and the aesthetic value derived from watching individuals moving in these ways.
Affiliation(s)
- Emily S Cross
  - Professorship for Social Brain Sciences, ETH Zürich, Zurich, Switzerland
- Kohinoor M Darda
  - ARISA (Advancement and Research in the Sciences and Arts) Foundation, Pune, India
- Ryssa Moffat
  - Professorship for Social Brain Sciences, ETH Zürich, Zurich, Switzerland
- Lina Muñoz
  - Goldsmiths, University of London, London, UK
- Louise P Kirsch
  - Integrative Neuroscience and Cognition Center, UMR 8002, CNRS, Université Paris Cité, Paris, France
4. Jastrzab LE, Chaudhury B, Ashley SA, Koldewyn K, Cross ES. Beyond human-likeness: Socialness is more influential when attributing mental states to robots. iScience 2024; 27:110070. PMID: 38947497. PMCID: PMC11214418. DOI: 10.1016/j.isci.2024.110070.
Abstract
We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and socially engage with it. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as important for mind attribution, if not more so. Our findings suggest that top-down knowledge cues may be equally or even more influential than bottom-up stimulus cues when exploring mind attribution in non-human agents. While further work is required to test this hypothesis directly, these preliminary findings hold important implications for robotic design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.
Affiliation(s)
- Laura E. Jastrzab
  - Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
  - Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Bishakha Chaudhury
  - Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Sarah A. Ashley
  - Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
  - Division of Psychiatry, Institute of Mental Health, University College London, London, UK
- Kami Koldewyn
  - Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Emily S. Cross
  - Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
  - Chair for Social Brain Sciences, Department of Humanities, Social and Political Sciences, ETHZ, Zürich, Switzerland
5. Goupil N, Rayson H, Serraille É, Massera A, Ferrari PF, Hochmann JR, Papeo L. Visual Preference for Socially Relevant Spatial Relations in Humans and Monkeys. Psychol Sci 2024; 35:681-693. PMID: 38683657. DOI: 10.1177/09567976241242995.
Abstract
As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect, and why. In a preferential-looking time paradigm, human adults showed a spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human developmental timescale, this preference emerged by 5 years of age, although young infants, by 7 months of age, already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads, shared by human adults, young children, and macaques, can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.
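The preferential-looking measure reduces to a simple proportion of looking time toward the facing configuration, compared against a chance level of 0.5. A minimal sketch with invented per-participant looking times (not the study's data):

```python
# Hypothetical looking times (seconds) toward facing vs. non-facing dyads
# for five participants; the preference index is time_facing / total time.
facing = [6.2, 5.8, 7.1, 6.5, 5.9]
away = [4.1, 4.8, 3.9, 4.4, 4.6]

prefs = [f / (f + a) for f, a in zip(facing, away)]
mean_pref = sum(prefs) / len(prefs)

# A mean index above 0.5 indicates a preference for facing dyads
print(f"mean preference for facing dyads: {mean_pref:.2f}")
```

In practice such indices are tested against 0.5 with an appropriate group-level statistic; the sketch shows only the measure itself.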
Affiliation(s)
- Nicolas Goupil
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Holly Rayson
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Émilie Serraille
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Alice Massera
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Pier Francesco Ferrari
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Jean-Rémy Hochmann
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
- Liuba Papeo
  - Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
6. Lee Masson H, Chang L, Isik L. Multidimensional neural representations of social features during movie viewing. Soc Cogn Affect Neurosci 2024; 19:nsae030. PMID: 38722755. PMCID: PMC11130526. DOI: 10.1093/scan/nsae030.
Abstract
The social world is dynamic and contextually embedded. Yet, most studies utilize simple stimuli that do not capture the complexity of everyday social episodes. To address this, we implemented a movie viewing paradigm and investigated how everyday social episodes are processed in the brain. Participants watched one of two movies during an MRI scan. Neural patterns from brain regions involved in social perception, mentalization, action observation and sensory processing were extracted. Representational similarity analysis results revealed that several labeled social features (including social interaction, mentalization, the actions of others, characters talking about themselves, talking about others and talking about objects) were represented in the superior temporal gyrus (STG) and middle temporal gyrus (MTG). The mentalization feature was also represented throughout the theory of mind network, and characters talking about others engaged the temporoparietal junction (TPJ), suggesting that listeners may spontaneously infer the mental state of those being talked about. In contrast, we did not observe action representations in the frontoparietal regions of the action observation network. The current findings indicate that STG and MTG serve as key regions for social processing, and that listening to characters talk about others elicits spontaneous mental state inference in TPJ during natural movie viewing.
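Representational similarity analysis of the kind used here compares a neural representational dissimilarity matrix (RDM) against a model RDM built from the labeled feature. The toy sketch below, with made-up patterns and a single binary "social interaction" label, is an illustration of the general technique rather than the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 movie events labeled with a binary social feature
# (1 = social interaction present, 0 = absent), and a 50-voxel pattern per event.
labels = np.array([1, 1, 1, 0, 0, 0])
patterns = rng.normal(size=(6, 50))
social = rng.normal(size=50)
patterns[labels == 1] += 1.5 * social  # social events share a pattern component

def rdm(x):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(x)

neural_rdm = rdm(patterns)
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

# Compare the upper triangles (each off-diagonal pair counted once)
iu = np.triu_indices(6, k=1)
r = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
print(f"model-neural RDM correlation: {r:.2f}")
```

A positive model-neural correlation indicates that the feature's similarity structure is reflected in the region's patterns; real analyses typically use rank correlation and permutation tests.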
Affiliation(s)
- Lucy Chang
  - Department of Cognitive Science, Johns Hopkins University, Baltimore 21218, USA
- Leyla Isik
  - Department of Cognitive Science, Johns Hopkins University, Baltimore 21218, USA
7. Tsantani M, Yon D, Cook R. Neural Representations of Observed Interpersonal Synchrony/Asynchrony in the Social Perception Network. J Neurosci 2024; 44:e2009222024. PMID: 38527811. PMCID: PMC11097257. DOI: 10.1523/jneurosci.2009-22.2024.
Abstract
The visual perception of individuals is thought to be mediated by a network of regions in the occipitotemporal cortex that supports specialized processing of faces, bodies, and actions. In comparison, we know relatively little about the neural mechanisms that support the perception of multiple individuals and the interactions between them. The present study sought to elucidate the visual processing of social interactions by identifying which regions of the social perception network represent interpersonal synchrony. In an fMRI study with 32 human participants (26 female, 6 male), we used multivoxel pattern analysis to investigate whether activity in face-selective, body-selective, and interaction-sensitive regions across the social perception network supports the decoding of synchronous versus asynchronous head-nodding and head-shaking. Several regions were found to support significant decoding of synchrony/asynchrony, including extrastriate body area (EBA), face-selective and interaction-sensitive mid/posterior right superior temporal sulcus, and occipital face area. We also saw robust cross-classification across actions in the EBA, suggestive of movement-invariant representations of synchrony/asynchrony. Exploratory whole-brain analyses also identified a region of the right fusiform cortex that responded more strongly to synchronous than to asynchronous motion. Critically, perceiving interpersonal synchrony/asynchrony requires the simultaneous extraction and integration of dynamic information from more than one person. Hence, the representation of synchrony/asynchrony cannot be attributed to augmented or additive processing of individual actors. Our findings therefore provide important new evidence that social interactions recruit dedicated visual processing within the social perception network that extends beyond that engaged by the faces and bodies of the constituent individuals.
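Cross-classification of the kind reported for the EBA (train a decoder on one action, test it on the other) can be sketched with synthetic patterns. Everything below is illustrative: the signal model, the amplitudes, and the correlation-based nearest-centroid decoder are assumptions, not the study's actual MVPA pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical (trials x voxels) patterns for two conditions (sync / async),
# with a synchrony signal assumed to be invariant across action type.
n_vox = 40
sync_signal = rng.normal(size=n_vox)

def simulate(n_trials, is_sync):
    noise = rng.normal(size=(n_trials, n_vox))
    return noise + (1.0 if is_sync else -1.0) * sync_signal

train = {"sync": simulate(20, True), "async": simulate(20, False)}  # "nodding" runs
test_X = np.vstack([simulate(10, True), simulate(10, False)])       # "shaking" runs
test_y = np.array([1] * 10 + [0] * 10)

# Correlation-based nearest-centroid classifier; testing on the other action
# is the cross-classification step.
centroids = np.vstack([train["async"].mean(0), train["sync"].mean(0)])

def predict(x):
    r = [np.corrcoef(x, c)[0, 1] for c in centroids]
    return int(np.argmax(r))  # 0 = async, 1 = sync

acc = np.mean([predict(x) == y for x, y in zip(test_X, test_y)])
print(f"cross-classification accuracy: {acc:.2f}")
```

Above-chance accuracy on the held-out action is the logic behind the "movement-invariant representations" claim: the decoder only generalizes if the synchrony code is shared across actions.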
Affiliation(s)
- Maria Tsantani
  - Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Daniel Yon
  - Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, United Kingdom
- Richard Cook
  - School of Psychology, University of Leeds, Leeds LS2 9JU, United Kingdom
  - Department of Psychology, University of York, York YO10 5DD, United Kingdom
8. Papeo L. What is abstract about seeing social interactions? Trends Cogn Sci 2024; 28:390-391. PMID: 38632008. DOI: 10.1016/j.tics.2024.02.004.
Affiliation(s)
- Liuba Papeo
  - Institute of Cognitive Sciences Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, France
9. McMahon E, Isik L. Abstract social interaction representations along the lateral pathway. Trends Cogn Sci 2024; 28:392-393. PMID: 38632007. DOI: 10.1016/j.tics.2024.03.007.
Affiliation(s)
- Emalie McMahon
  - Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
  - Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
10. Puce A. From Motion to Emotion: Visual Pathways and Potential Interconnections. J Cogn Neurosci 2024:1-24. PMID: 38527078. PMCID: PMC11416577. DOI: 10.1162/jocn_a_02141.
Abstract
The two visual pathway description of [Ungerleider, L. G., & Mishkin, M. Two cortical visual systems. In D. J. Dingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549-586). Cambridge, MA: MIT, 1982] changed the course of late 20th century systems and cognitive neuroscience. Here, I try to reexamine our laboratory's work through the lens of the [Pitcher, D., & Ungerleider, L. G. Evidence for a third visual pathway specialized for social perception. Trends in Cognitive Sciences, 25, 100-110, 2021] new third visual pathway. I also briefly review the literature related to brain responses to static and dynamic visual displays, visual stimulation involving multiple individuals, and compare existing models of social information processing for the face and body. In this context, I examine how the posterior STS might generate unique social information relative to other brain regions that also respond to social stimuli. I discuss some of the existing challenges we face with assessing how information flow progresses between structures in the proposed functional pathways and how some stimulus types and experimental designs may have complicated our data interpretation and model generation. I also note a series of outstanding questions for the field. Finally, I examine the idea of a potential expansion of the third visual pathway, to include aspects of previously proposed "lateral" visual pathways. Doing this would yield a more general entity for processing motion/action (i.e., "[inter]action") that deals with interactions between people, as well as people and objects. In this framework, a brief discussion of potential hemispheric biases for function, and different forms of neuropsychological impairments created by focal lesions in the posterior brain is highlighted to help situate various brain regions into an expanded [inter]action pathway.
11. Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. PMID: 38124013. PMCID: PMC10860595. DOI: 10.1523/jneurosci.0250-23.2023.
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
  - Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
  - Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
12. Yu X, Li J, Zhu H, Tian X, Lau E. Electrophysiological hallmarks for event relations and event roles in working memory. Front Neurosci 2024; 17:1282869. PMID: 38328555. PMCID: PMC10847304. DOI: 10.3389/fnins.2023.1282869.
Abstract
The ability to maintain events (i.e., interactions between/among objects) in working memory is crucial for our everyday cognition, yet the format of this representation is poorly understood. The current ERP study was designed to answer two questions: How is maintaining events (e.g., the tiger hit the lion) neurally different from maintaining item coordinations (e.g., the tiger and the lion)? That is, how is the event relation (present in events but not coordinations) represented? And how is the agent, or initiator of the event, encoded differently from the patient, or receiver of the event, during maintenance? We used a novel picture-sentence match-across-delay approach in which the working memory representation was "pinged" during the delay, replicated across two ERP experiments with Chinese and English materials. We found that maintenance of events elicited a long-lasting late sustained difference in posterior-occipital electrodes relative to non-events. This effect resembled the negative slow wave reported in previous studies of working memory, suggesting that the maintenance of events in working memory may impose a higher cost compared to coordinations. Although we did not observe significant ERP differences associated with pinging the agent vs. the patient during the delay, we did find that the ping appeared to dampen the ongoing sustained difference, suggesting a shift from sustained activity to activity-silent mechanisms. These results suggest a new method by which ERPs can be used to elucidate the format of neural representation for events in working memory.
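The sustained-difference measure described here amounts to contrasting mean amplitudes between conditions in a late time window. A toy sketch with simulated single-channel data; the sampling rate, window, and effect size are invented, not taken from this study:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)  # 2 s maintenance delay

# Simulated single-channel ERPs (microvolts): "event" trials carry a sustained
# negative shift relative to "coordination" trials from 500 ms onward.
def erp(sustained_uv):
    shift = np.where(t >= 0.5, sustained_uv, 0.0)
    return shift + rng.normal(scale=0.2, size=t.size)

event_erp = erp(-1.5)
coord_erp = erp(0.0)

# Mean-amplitude measure in a late window (800-1800 ms)
win = (t >= 0.8) & (t <= 1.8)
diff = event_erp[win].mean() - coord_erp[win].mean()
print(f"event minus coordination mean amplitude: {diff:.2f} uV")
```

A reliably negative difference in such a window across participants is the pattern a negative slow wave would produce.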
Affiliation(s)
- Xinchi Yu
  - Program of Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
  - Department of Linguistics, University of Maryland, College Park, MD, United States
- Jialu Li
  - Division of Arts and Sciences, New York University Shanghai, Shanghai, China
  - Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
  - NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Hao Zhu
  - Division of Arts and Sciences, New York University Shanghai, Shanghai, China
  - Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
  - NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Xing Tian
  - Division of Arts and Sciences, New York University Shanghai, Shanghai, China
  - Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
  - NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Ellen Lau
  - Program of Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
  - Department of Linguistics, University of Maryland, College Park, MD, United States
13. Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024; 34:343-351.e5. PMID: 38181794. DOI: 10.1016/j.cub.2023.12.009.
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA, thus, causally supports the efficient perception of social interactions.
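The behavioral marker described above is a difference of differences: the inversion cost for facing dyads exceeds that for non-facing dyads. A toy computation with invented accuracies (not the paper's data) makes the two-way interaction concrete:

```python
# Hypothetical categorization accuracy (proportion correct) in the four cells
# of the design; the signature of interest is a larger inversion cost for
# facing than for non-facing dyads.
acc = {
    ("facing", "upright"): 0.95,
    ("facing", "inverted"): 0.78,
    ("non-facing", "upright"): 0.93,
    ("non-facing", "inverted"): 0.87,
}

def inversion_cost(dyad):
    return acc[(dyad, "upright")] - acc[(dyad, "inverted")]

# The configuration x orientation interaction term
selective_effect = inversion_cost("facing") - inversion_cost("non-facing")
print(f"facing cost {inversion_cost('facing'):.2f}, "
      f"non-facing cost {inversion_cost('non-facing'):.2f}, "
      f"interaction {selective_effect:.2f}")
```

On this logic, TMS to left EBA "abolishing the selective inversion effect" means driving this interaction term toward zero while leaving overall performance intact.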
Affiliation(s)
- Marco Gandolfo
  - Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands
  - Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Etienne Abassi
  - Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Eva Balgova
  - Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
  - Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
  - Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
  - Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
  - Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
14
Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiology of Language 2023; 4:575-610. [PMID: 38144236] [PMCID: PMC10745132] [DOI: 10.1162/nol_a_00123]
Abstract
Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults were scanned while passively observing audiovisual conversations using functional magnetic resonance imaging. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
15
McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. [PMID: 37918399] [PMCID: PMC10841461] [DOI: 10.1016/j.cub.2023.10.015]
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
16
McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. [PMID: 37805385] [PMCID: PMC10841760] [DOI: 10.1016/j.tics.2023.09.001]
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually-selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
17
Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. [PMID: 37279640] [DOI: 10.1016/j.cortex.2023.04.013]
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
18
Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? Imaging Neuroscience 2023; 1:1-20. [PMID: 37719835] [PMCID: PMC10503480] [DOI: 10.1162/imag_a_00003]
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions-of-interest (ROI). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
19
Walbrin J, Almeida J, Koldewyn K. Alternative Brain Connectivity Underscores Age-Related Differences in the Processing of Interactive Biological Motion. J Neurosci 2023; 43:3666-3674. [PMID: 36963845] [PMCID: PMC10198447] [DOI: 10.1523/jneurosci.2109-22.2023]
Abstract
Rapidly recognizing and understanding others' social interactions is an important ability that relies on deciphering multiple sources of information, for example, perceiving body information and inferring others' intentions. Despite recent advances in characterizing the brain basis of this ability in adults, its developmental underpinnings are virtually unknown. Here, we used fMRI to investigate which sources of social information support superior temporal sulcus responses to interactive biological motion (i.e., 2 interacting point-light human figures) at different developmental intervals in human participants (of either sex): Children show supportive functional connectivity with key nodes of the mentalizing network, while adults show stronger reliance on regions associated with body- and dynamic social interaction/biological motion processing. We suggest that adults use efficient action-intention understanding via body and biological motion information, while children show a stronger reliance on hidden mental state inferences as a potential means of learning to better understand others' interactive behavior.
Significance Statement: Recognizing others' interactive behavior is a critical human skill that depends on different sources of social information (e.g., observable body-action information, inferring others' hidden mental states, etc.). Understanding the brain basis of this ability and characterizing how it emerges across development are important goals in social neuroscience. Here, we used fMRI to investigate which sources of social information support interactive biological motion processing in children (6-12 years) and adults. These results reveal a striking developmental difference in terms of how wider-brain connectivity shapes functional responses to interactive biological motion that suggests a reliance on distinct neuro-cognitive strategies in service of interaction understanding (i.e., children and adults show a greater reliance on explicit and implicit intentional inference, respectively).
Affiliation(s)
- Jon Walbrin
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3000-481, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3000-481, Portugal
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3000-481, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3000-481, Portugal
- Kami Koldewyn
- School of Human and Behavioural Sciences, Bangor University, Bangor, Wales
20
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
21
Friedrich EVC, Zillekens IC, Biel AL, O'Leary D, Singer J, Seegenschmiedt EV, Sauseng P, Schilbach L. Spatio-temporal dynamics of oscillatory brain activity during the observation of actions and interactions between point-light agents. Eur J Neurosci 2023; 57:657-679. [PMID: 36539944] [DOI: 10.1111/ejn.15903]
Abstract
Predicting actions from non-verbal cues and using them to optimise one's response behaviour (i.e. interpersonal predictive coding) is essential in everyday social interactions. We aimed to investigate the neural correlates of different cognitive processes evolving over time during interpersonal predictive coding. Thirty-nine participants watched two agents depicted by moving point-light stimuli while an electroencephalogram (EEG) was recorded. One well-recognizable agent performed either a 'communicative' or an 'individual' action. The second agent either was blended into a cluster of noise dots (i.e. present) or was entirely replaced by noise dots (i.e. absent), which participants had to differentiate. EEG amplitude and coherence analyses for theta, alpha and beta frequency bands revealed a dynamic pattern unfolding over time: Watching communicative actions was associated with enhanced coupling within medial anterior regions involved in social and mentalising processes and with dorsolateral prefrontal activation indicating a higher deployment of cognitive resources. Trying to detect the agent in the cluster of noise dots without having seen communicative cues was related to enhanced coupling in posterior regions for social perception and visual processing. Observing an expected outcome was modulated by motor system activation. Finally, when the agent was detected correctly, activation in posterior areas for visual processing of socially relevant features was increased. Taken together, our results demonstrate that it is crucial to consider the temporal dynamics of social interactions and of their neural correlates to better understand interpersonal predictive coding. This could lead to optimised treatment approaches for individuals with problems in social interactions.
Affiliation(s)
- Elisabeth V C Friedrich
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Imme C Zillekens
- Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry, Munich, Germany
- Anna Lena Biel
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Department of Psychology, Research Unit Experimental Psychology, Münster University, Münster, Germany
- Dariusz O'Leary
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Johannes Singer
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Eva Victoria Seegenschmiedt
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Paul Sauseng
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
- Leonhard Schilbach
- Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; International Max Planck Research School for Translational Psychiatry, Munich, Germany; Medical Faculty, Ludwig-Maximilians-Universität München, Munich, Germany
22
Krol MA, Jellema T. Sensorimotor representation of observed dyadic actions with varying agent involvement: an EEG mu study. Cogn Neurosci 2023; 14:25-35. [PMID: 35699606] [DOI: 10.1080/17588928.2022.2084605]
Abstract
Observation of others' actions activates motor representations in sensorimotor cortex. Although action observation in the real world often involves multiple agents displaying varying degrees of involvement, most lab studies of action observation have examined individual actions. We recorded EEG mu suppression over sensorimotor cortex to investigate how the multi-agent nature of observed hand/arm actions is incorporated into sensorimotor action representations. To this end, we manipulated the extent of agent involvement in dyadic interactions presented in videos. In all clips two agents were present, of which agent-1 always performed the same action, while the involvement of agent-2 differed along three levels: (1) passive and uninvolved, (2) passively involved, (3) actively involved. Additionally, a no-action condition was presented. The occurrence of these four conditions was predictable thanks to cues at the start of each trial, which allowed us to study possible mu anticipation effects. Dyadic interactions in which agent-2 was actively involved resulted in greater suppression of mu-rhythm power than dyadic interactions in which agent-2 was passively involved; the latter did not differ from actions in which agent-2 was present but not involved. No anticipation effects were found. The results suggest that the sensorimotor representation of a dyadic interaction takes into account the simultaneously performed bodily articulations of both agents, but no evidence was found for incorporation of their static articulated postures.
Affiliation(s)
- Manon A Krol
- Donders Institute, Radboud University, Nijmegen, The Netherlands
23
Varrier RS, Finn ES. Seeing Social: A Neural Signature for Conscious Perception of Social Interactions. J Neurosci 2022; 42:9211-9226. [PMID: 36280263] [PMCID: PMC9761685] [DOI: 10.1523/jneurosci.0859-22.2022]
Abstract
Social information is some of the most ambiguous content we encounter in our daily lives, yet in experimental contexts, percepts of social interactions (that is, whether an interaction is present and, if so, the nature of that interaction) are often dichotomized as correct or incorrect based on experimenter-assigned labels. Here, we investigated the behavioral and neural correlates of subjective (or conscious) social perception using data from the Human Connectome Project in which participants (n = 1049; 486 men, 562 women) viewed animations of geometric shapes during fMRI and indicated whether they perceived a social interaction or random motion. Critically, rather than experimenter-assigned labels, we used observers' own reports of "Social" or "Non-social" to classify percepts and characterize brain activity, including leveraging a particularly ambiguous animation perceived as "Social" by some but "Non-social" by others to control for visual input. Behaviorally, observers were biased toward perceiving information as social (vs non-social); and neurally, observer reports (compared with experimenter labels) explained more variance in activity across much of the brain. Using "Unsure" reports, we identified several regions that responded parametrically to perceived socialness. Neural responses to social versus non-social content diverged early in time and in the cortical hierarchy. Finally, individuals with higher internalizing trait scores showed both a higher response bias toward "Social" and an inverse relationship with activity in default mode and visual association areas while scanning for social information. Findings underscore the subjective nature of social perception and the importance of using observer reports to study percepts of social interactions.
Significance Statement: Simple animations involving two or more geometric shapes have been used as a gold standard to understand social cognition and impairments therein. Yet, experimenter-assigned labels of what is social versus non-social are frequently used as a ground truth, despite the fact that percepts of such ambiguous social stimuli are highly subjective. Here, we used behavioral and fMRI data from a large sample of neurotypical individuals to show that participants' responses reveal subtle behavioral biases, help us study neural responses to social content more precisely, and covary with internalizing trait scores. Our findings underscore the subjective nature of social perception and the importance of considering observer reports in studying behavioral and neural dynamics of social perception.
Affiliation(s)
- Rekha S Varrier
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Emily S Finn
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
24
Landsiedel J, Daughters K, Downing PE, Koldewyn K. The role of motion in the neural representation of social interactions in the posterior temporal cortex. Neuroimage 2022; 262:119533. [PMID: 35931309] [PMCID: PMC9485464] [DOI: 10.1016/j.neuroimage.2022.119533]
Abstract
Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), amongst others, as an important region for processing social interaction. This research, however, has presented images or videos, and thus the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely with static cues. Regions in the social brain, including SI-pSTS and extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than is EBA, both bilateral SI-pSTS and EBA showed a greater response to social interactions compared to non-interactions and both regions responded more strongly to videos than static images. Indeed, both regions showed higher responses to interactions than independent actions in videos and intact sequences, but not in other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations, and selectivity for static images of bodies, make positive and independent contributions to this effect across the LOTC region. Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
Affiliation(s)
- Paul E Downing
- School of Human and Behavioural Sciences, Bangor University
- Kami Koldewyn
- School of Human and Behavioural Sciences, Bangor University
25
Shahdloo M, Çelik E, Urgen BA, Gallant JL, Çukur T. Task-Dependent Warping of Semantic Representations during Search for Visual Action Categories. J Neurosci 2022; 42:6782-6799. [PMID: 35863889] [PMCID: PMC9436022] [DOI: 10.1523/jneurosci.1372-21.2022]
Abstract
Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (one female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity toward target actions and that tuning shifts are a general feature of conceptual representations in the brain.
Significance Statement: The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans that relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here, we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.
Affiliation(s)
- Mo Shahdloo
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford OX3 9DU, United Kingdom
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Departments of Electrical and Electronics Engineering and
- Emin Çelik
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Burcu A Urgen
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Psychology, Bilkent University, 06800 Ankara, Turkey
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
- Tolga Çukur
- National Magnetic Resonance Research Centre, Bilkent University, 06800 Ankara, Turkey
- Departments of Electrical and Electronics Engineering and
- Neuroscience Program, Aysel Sabuncu Brain Research Centre, Bilkent University, 06800 Ankara, Turkey
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California 94720
26
Abassi E, Papeo L. Behavioral and neural markers of visual configural processing in social scene perception. Neuroimage 2022; 260:119506. [PMID: 35878724 DOI: 10.1016/j.neuroimage.2022.119506] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 07/18/2022] [Accepted: 07/21/2022] [Indexed: 11/19/2022] Open
Abstract
Research on face perception has revealed highly specialized visual mechanisms such as configural processing, and provided markers of interindividual differences (including disease risks and alterations) in visuo-perceptual abilities that traffic in social cognition. Is face perception unique in degree or kind of mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing (i.e., stronger response to facing vs. non-facing dyads, and greater susceptibility to stimulus inversion for facing vs. non-facing dyads), and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that individual performance in body-dyad perception is reliable, stable over time, and correlated with individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms (and brain areas), analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic (visual) processes that lay the foundations for understanding others, their relationships and interactions.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France
27
Friedrich EVC, Zillekens IC, Biel AL, O'Leary D, Seegenschmiedt EV, Singer J, Schilbach L, Sauseng P. Seeing a Bayesian ghost: Sensorimotor activation leads to an illusory social perception. iScience 2022; 25:104068. [PMID: 35355523 PMCID: PMC8958323 DOI: 10.1016/j.isci.2022.104068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 12/21/2021] [Accepted: 03/10/2022] [Indexed: 11/05/2022] Open
Abstract
Based on our prior experiences we form social expectations and anticipate another person's response. Under certain conditions, these expectations can be so strong that they lead to illusory perception of another person who is actually not there (i.e., seeing a Bayesian ghost). We used EEG to investigate the neural correlates of such illusory social perception. Our results showed that activation of the premotor cortex predicted the occurrence of the Bayesian ghost, whereas its actual appearance was later accompanied by activation in sensorimotor and adjacent parietal regions. These findings confirm that our perception of others is so strongly affected by prior expectations that they can prompt illusory social perceptions, associated with activity change in brain regions relevant for action perception. They also contribute to a better understanding of social interaction in healthy individuals as well as in persons with mental illnesses, which can be characterized by illusory perception and social interaction difficulties.
- Expecting a response to a social action can lead to an illusion of another person
- The brain does not merely respond to social signals but anticipates social behavior
- Sensorimotor activity indicates top-down predictions that outweigh sensory input
- Illusory social perception is associated with sensorimotor and parietal activity
Affiliation(s)
- Elisabeth V C Friedrich
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
- Imme C Zillekens
- Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, 80804 Munich, Germany
- International Max Planck Research School for Translational Psychiatry, 80804 Munich, Germany
- Anna Lena Biel
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
- Dariusz O'Leary
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
- Eva Victoria Seegenschmiedt
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
- Johannes Singer
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
- Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany
- Leonhard Schilbach
- Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, 80804 Munich, Germany
- International Max Planck Research School for Translational Psychiatry, 80804 Munich, Germany
- Medical Faculty, Ludwig-Maximilians-University Munich, 80336 Munich, Germany
- Paul Sauseng
- Department of Psychology, Research Unit Biological Psychology, Ludwig-Maximilians-University Munich, 80802 Munich, Germany
28
Pesquita A, Bernardet U, Richards BE, Jensen O, Shapiro K. Isolating Action Prediction from Action Integration in the Perception of Social Interactions. Brain Sci 2022; 12:432. [PMID: 35447965 PMCID: PMC9031105 DOI: 10.3390/brainsci12040432] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 03/08/2022] [Accepted: 03/21/2022] [Indexed: 02/01/2023] Open
Abstract
Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners' actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action-reaction interaction between two actors. At the start of each action-reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor's reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates from it by 150 ms or 300 ms. Fifty percent of the action-reaction trials were semantically congruent, and the remaining were incongruent; e.g., one actor offers to shake hands and the other reciprocally shakes their hand (congruent action-reaction), versus one actor offers to shake hands and the other leans down (incongruent action-reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action-reaction sequences is aided by temporal predictions. The findings supported this hypothesis: linear speed-accuracy scores showed that congruency judgments were facilitated when the reaction frame was temporally aligned with the occlusion duration, compared with 300 ms deviations, suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between autistic traits and sensitivity to these temporal deviations. Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions, in isolation from action integration confounds.
Affiliation(s)
- Ana Pesquita
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Ulysses Bernardet
- Aston Institute of Urban Technology and the Environment (ASTUTE), Aston University, Birmingham B4 7ET, UK
- Bethany E. Richards
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Kimron Shapiro
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
29
From words to phrases: neural basis of social event semantic composition. Brain Struct Funct 2022; 227:1683-1695. [PMID: 35184222 PMCID: PMC9098591 DOI: 10.1007/s00429-022-02465-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 01/25/2022] [Indexed: 11/13/2022]
Abstract
Events are typically composed of at least actions and entities. Both actions and entities have been shown to be represented by neural structures respecting domain organizations in the brain, including those of social/animate (face and body; person-directed action) versus inanimate (man-made object or tool; object-directed action) concepts. It is unclear whether the brain combines actions and entities into events in a (relatively) domain-specific fashion or via domain-general mechanisms in regions that have been shown to support semantic and syntactic composition. We tested these hypotheses in a functional magnetic resonance imaging experiment in which two domains of verb-noun event phrases (social-person versus manipulation-artifact, e.g., "hug mother" versus "fold napkin") and their component words were contrasted. We found a series of brain regions supporting social-composition effects more strongly than manipulation-phrase composition (the bilateral inferior occipital gyrus (IOG), inferior temporal gyrus (ITG) and anterior temporal lobe (ATL)), which either showed stronger activation strength tested by univariate contrast, stronger content representation tested by representational similarity analysis, or a stronger relationship between the neural activation patterns of phrases and the synthesis (additive and multiplicative) of the neural activity patterns of the word constituents. No regions showed evidence of phrase composition for both domains or stronger effects of manipulation phrases. These findings highlight the roles of the visual cortex and ATL in social event composition, suggesting a domain-preferring, rather than domain-general, mechanism of verbal event composition.
30
Goupil N, Papeo L, Hochmann J. Visual perception grounding of social cognition in preverbal infants. INFANCY 2022; 27:210-231. [DOI: 10.1111/infa.12453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 11/22/2021] [Accepted: 01/02/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, Bron, France
31
Functional selectivity for social interaction perception in the human superior temporal sulcus during natural viewing. Neuroimage 2021; 245:118741. [PMID: 34800663 DOI: 10.1016/j.neuroimage.2021.118741] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 09/15/2021] [Accepted: 11/16/2021] [Indexed: 11/22/2022] Open
Abstract
Recognizing others' social interactions is a crucial human ability. Using simple stimuli, previous studies have shown that social interactions are selectively processed in the superior temporal sulcus (STS), but prior work with movies has suggested that social interactions are processed in the medial prefrontal cortex (mPFC), part of the theory of mind network. It remains unknown to what extent social interaction selectivity is observed in real world stimuli when controlling for other covarying perceptual and social information, such as faces, voices, and theory of mind. The current study utilizes a functional magnetic resonance imaging (fMRI) movie paradigm and advanced machine learning methods to uncover the brain mechanisms uniquely underlying naturalistic social interaction perception. We analyzed two publicly available fMRI datasets, collected while both male and female human participants (n = 17 and 18) watched two different commercial movies in the MRI scanner. By performing voxel-wise encoding and variance partitioning analyses, we found that broad social-affective features predict neural responses in social brain regions, including the STS and mPFC. However, only the STS showed robust and unique selectivity specifically to social interactions, independent from other covarying features. This selectivity was observed across two separate fMRI datasets. These findings suggest that naturalistic social interaction perception recruits dedicated neural circuitry in the STS, separate from the theory of mind network, and is a critical dimension of human social understanding.
32
The neural coding of face and body orientation in occipitotemporal cortex. Neuroimage 2021; 246:118783. [PMID: 34879251 DOI: 10.1016/j.neuroimage.2021.118783] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 11/09/2021] [Accepted: 12/04/2021] [Indexed: 11/20/2022] Open
Abstract
Face and body orientation convey important information for us to understand other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations, while attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
33
Arioli M, Cattaneo Z, Ricciardi E, Canessa N. Overlapping and specific neural correlates for empathizing, affective mentalizing, and cognitive mentalizing: A coordinate-based meta-analytic study. Hum Brain Mapp 2021; 42:4777-4804. [PMID: 34322943 PMCID: PMC8410528 DOI: 10.1002/hbm.25570] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 05/10/2021] [Accepted: 06/15/2021] [Indexed: 01/10/2023] Open
Abstract
While the discussion on the foundations of social understanding mainly revolves around the notions of empathy, affective mentalizing, and cognitive mentalizing, their degree of overlap versus specificity is still unclear. We took a meta-analytic approach to unveil the neural bases of cognitive mentalizing, affective mentalizing, and empathy, both in healthy individuals and in pathological conditions characterized by social deficits such as schizophrenia and autism. We observed partially overlapping networks for cognitive and affective mentalizing in the medial prefrontal, posterior cingulate, and lateral temporal cortex, while empathy mainly engaged fronto-insular, somatosensory, and anterior cingulate cortex. Adjacent process-specific regions in the posterior lateral temporal, ventrolateral, and dorsomedial prefrontal cortex might underpin a transition from abstract representations of cognitive mental states detached from sensory facets to emotionally charged representations of affective mental states. Altered mentalizing-related activity involved distinct sectors of the posterior lateral temporal cortex in schizophrenia and autism, while only the latter group displayed abnormal empathy-related activity in the amygdala. These data might inform the design of rehabilitative treatments for social cognitive deficits.
Affiliation(s)
- Maria Arioli
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- IRCCS Mondino Foundation, Pavia, Italy
- Nicola Canessa
- ICoN center, Scuola Universitaria Superiore IUSS, Pavia, Italy
- Istituti Clinici Scientifici Maugeri IRCCS, Cognitive Neuroscience Laboratory of Pavia Institute, Pavia, Italy
34
Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021; 31:2670-2685. [PMID: 33401307 DOI: 10.1093/cercor/bhaa382] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 11/05/2020] [Accepted: 11/25/2020] [Indexed: 11/12/2022] Open
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 69675 Bron, France
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 69675 Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 69675 Bron, France
35
FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. [PMID: 33452225 DOI: 10.1523/jneurosci.1449-20.2020] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 12/18/2020] [Accepted: 12/22/2020] [Indexed: 01/11/2023] Open
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, social traits, gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.

SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
36
Schweinberger SR, Dobel C. Why twos in human visual perception? A possible role of prediction from dynamic synchronization in interaction. Cortex 2020; 135:355-357. [PMID: 33234236 DOI: 10.1016/j.cortex.2020.09.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 09/23/2020] [Indexed: 12/01/2022]
Affiliation(s)
- Stefan R Schweinberger
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Switzerland. http://www.allgpsy.uni-jena.de
- Christian Dobel
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Friedrich Schiller University of Jena, Germany
37
Social Cognition in the Age of Human–Robot Interaction. Trends Neurosci 2020; 43:373-384. [DOI: 10.1016/j.tins.2020.03.013] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Revised: 03/04/2020] [Accepted: 03/26/2020] [Indexed: 11/22/2022]
38
Walbrin J, Mihai I, Landsiedel J, Koldewyn K. Developmental changes in visual responses to social interactions. Dev Cogn Neurosci 2020; 42:100774. [PMID: 32452460 PMCID: PMC7075793 DOI: 10.1016/j.dcn.2020.100774] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 02/03/2020] [Accepted: 03/02/2020] [Indexed: 11/09/2022] Open
Abstract
- Children show less interaction selectivity in the pSTS than adults.
- Adults show bilateral pSTS selectivity, while children are more right-lateralized.
- Exploratory findings suggest interaction selectivity in pSTS is more focally tuned in adults.
Recent evidence demonstrates that a region of the posterior superior temporal sulcus (pSTS) is selective to visually observed social interactions in adults. In contrast, little is known about neural responses to social interactions in children. Here, we used fMRI to ask whether the pSTS is 'tuned' to social interactions in children at all, and if so, how selectivity might differ from adults. This was investigated in the pSTS, along with several other socially-tuned regions in neighbouring temporal cortex: extrastriate body area, face-selective STS, fusiform face area, and mentalizing-selective temporo-parietal junction. Both children and adults showed selectivity to social interaction within right pSTS, while only adults showed selectivity on the left. Adults also showed both more focal and greater selectivity than children (6–12 years) bilaterally. Exploratory sub-group analyses showed that younger children (6–8), but not older children (9–12), are less selective than adults on the right, while there was a continuous developmental trend (adults > older > younger) in left pSTS. These results suggest that, over development, the neural response to social interactions is characterized by increasingly more selective, focal, and bilateral pSTS responses, a process that likely continues into adolescence.
Affiliation(s)
- Jon Walbrin
- School of Psychology, Bangor University, Wales, United Kingdom
- Ioana Mihai
- School of Psychology, Bangor University, Wales, United Kingdom
- Kami Koldewyn
- School of Psychology, Bangor University, Wales, United Kingdom
39
The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. [PMID: 31801812 DOI: 10.1523/jneurosci.1378-19.2019] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 11/21/2019] [Accepted: 11/27/2019] [Indexed: 11/21/2022] Open
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed.

SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.