1
Tanaka H, Jiang P. P1, N170, and N250 Event-related Potential Components Reflect Temporal Perception Processing in Face and Body Personal Identification. J Cogn Neurosci 2024; 36:1265-1281. [PMID: 38652104 DOI: 10.1162/jocn_a_02167]
Abstract
Human faces and bodies convey various socially important signals. Although adults encounter numerous new people in daily life, they can recognize hundreds to thousands of different individuals. However, the neural mechanisms that differentiate one person from another are unclear. This study aimed to clarify the temporal dynamics of the cognitive processes underlying face and body personal identification using face-sensitive ERP components (P1, N170, and N250). The study comprised three blocks (face-face, face-body, and body-body) of an ERP adaptation paradigm. Within each block, ERP components were compared across three conditions (same person, different person of the same sex, and different person of the opposite sex). The results showed that the P1 amplitude for the face-face block was significantly greater than that for the body-body block; that the N170 amplitude for the different-person-same-sex condition was greater than that for the same-person condition in the right hemisphere only; and that the N250 amplitude gradually increased as the face and body sex-social categorization grew closer (i.e., same person > different person of the same sex > different person of the opposite sex). These results suggest that faces and bodies are processed separately at early stages, but collaboratively during structural encoding and personal identification.
Affiliation(s)
- Peilun Jiang
- Kanazawa University Graduate School, Kanazawa City, Japan

2
Abo Foul Y, Arkadir D, Demikhovskaya A, Noyman Y, Linetsky E, Abu Snineh M, Aviezer H, Eitan R. Perception of emotionally incongruent cues: evidence for overreliance on body vs. face expressions in Parkinson's disease. Front Psychol 2024; 15:1287952. [PMID: 38770252 PMCID: PMC11103677 DOI: 10.3389/fpsyg.2024.1287952]
Abstract
Individuals with Parkinson's disease (PD) may exhibit impaired emotion perception. However, research demonstrating this decline has been based almost entirely on the recognition of isolated emotional cues. In real life, emotional cues such as expressive faces are typically encountered alongside expressive bodies. The current study investigated emotion perception in individuals with PD (n = 37) using emotionally incongruent composite displays of facial and body expressions, as well as isolated face and body expressions, with congruent composite displays as a baseline. In addition to a group of healthy controls (HC) (n = 50), we included control individuals with schizophrenia (SZ) (n = 30), who, like those with PD, display motor symptomatology and decreased emotion perception abilities. The results show that individuals with PD tended to categorize incongruent face-body combinations in line with the body emotion, whereas HC participants tended to classify them in line with the facial emotion. No consistent pattern for prioritizing the face or body was found in individuals with SZ. These results were not explained by emotion recognition of the isolated cues, cognitive status, depression, or motor symptoms of individuals with PD and SZ. As real-life expressions may include inconsistent cues in the body and face, these findings may have implications for the way individuals with PD and SZ interpret the emotions of others.
Affiliation(s)
- Yasmin Abo Foul
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- David Arkadir
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Anastasia Demikhovskaya
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Yehuda Noyman
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Eduard Linetsky
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Muneer Abu Snineh
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Hillel Aviezer
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Renana Eitan
- Brain Division, Hadassah-Hebrew University Medical Center, Jerusalem, Israel
- Neuropsychiatry Unit, Jerusalem Mental Health Center, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Medical Neurobiology (Physiology), Institute for Medical Research Israel-Canada, Hebrew University-Hadassah Medical School, Jerusalem, Israel
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States

3
Wu YT, Baillet S, Lamontagne A. Brain mechanisms involved in the perception of emotional gait: A combined magnetoencephalography and virtual reality study. PLoS One 2024; 19:e0299103. [PMID: 38551903 PMCID: PMC10980214 DOI: 10.1371/journal.pone.0299103]
Abstract
Brain processes associated with emotion perception from biological motion have largely been investigated using point-light displays, which are devoid of pictorial information and not representative of everyday life. In this study, we investigated the brain signals evoked when perceiving emotions arising from the body movements of virtual pedestrians walking in a community environment. Magnetoencephalography was used to record brain activation in 21 healthy young adults discriminating the emotional gaits (neutral, angry, happy) of virtual male/female pedestrians. Event-related responses in the posterior superior temporal sulcus (pSTS), fusiform body area (FBA), extrastriate body area (EBA), amygdala (AMG), and lateral occipital cortex (Occ) were examined. Brain signals were characterized by an early positive peak (P1; ~200 ms) and a late positive potential component (LPP) comprising an early (400-600 ms), middle (600-1000 ms), and late phase (1000-1500 ms). Generalized estimating equations revealed that P1 amplitude was unaffected by the emotion and gender of pedestrians. LPP amplitude showed a significant emotion × phase interaction in all regions of interest, revealing (i) an emotion-dependent modulation starting in pSTS and Occ, followed by AMG, FBA, and EBA, and (ii) generally enhanced responses to angry vs. other gait stimuli in the middle LPP phase. LPP also showed a gender × phase interaction in pSTS and Occ, as gender affected the time course of the response to emotional gait. These findings show that brain activation within areas associated with biological motion, form, and emotion processing is modulated by emotional gait stimuli rendered by virtual simulations representative of everyday life.
Affiliation(s)
- Yu-Tzu Wu
- School of Physical and Occupational Therapy, McGill University, Montreal, Quebec, Canada
- Feil and Oberfeld Research Centre, Jewish Rehabilitation Hospital–Centre Intégré de Santé et de Services Sociaux de Laval, Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, Quebec, Canada
- Sylvain Baillet
- McConnell Brain Imaging Centre, Montreal Neurological Institute-Hospital, Montreal, Quebec, Canada
- Anouk Lamontagne
- School of Physical and Occupational Therapy, McGill University, Montreal, Quebec, Canada
- Feil and Oberfeld Research Centre, Jewish Rehabilitation Hospital–Centre Intégré de Santé et de Services Sociaux de Laval, Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, Quebec, Canada

4
Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. [PMID: 38124013 PMCID: PMC10860595 DOI: 10.1523/jneurosci.0250-23.2023]
Abstract
Understanding social interaction requires processing social agents and their relationships. Recent results show that much of this process is solved visually: visual areas can represent multiple people, encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas to face-to-face (seemingly interacting) people relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42) in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed generalization of relational information across body, face, and nonsocial-object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships, and it does so in a category-selective fashion, respecting a general organizing principle of representation in high-level vision. Visual areas encoding relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France

5
Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024; 34:343-351.e5. [PMID: 38181794 DOI: 10.1016/j.cub.2023.12.009]
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to perceive dyadic interactions efficiently. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopted a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left-hemisphere extrastriate body area (EBA) responds more to face-to-face than to non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than categorization of non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not of a nearby control region, abolishes this selective inversion effect. Activity in left EBA thus causally supports the efficient perception of social interactions.
Affiliation(s)
- Marco Gandolfo
- Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands; Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Etienne Abassi
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Eva Balgova
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK; Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
- Institut des Sciences Cognitives, Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK

6
Hwang J, Lee Y, Kim SH. The Relative Contribution of Facial and Body Information to the Perception of Cuteness. Behav Sci (Basel) 2024; 14:68. [PMID: 38275351 PMCID: PMC10813407 DOI: 10.3390/bs14010068]
Abstract
Faces and bodies both provide cues to age and cuteness, but little work has explored their interaction in cuteness perception. This study examines the interplay of facial and bodily cues in the perception of cuteness, particularly when these cues convey conflicting age information. Participants rated the cuteness of face-body composites that combined either a child or adult face with an age-congruent or incongruent body alongside manipulations of the head-to-body height ratio (HBR). The findings from two experiments indicated that child-like facial features enhanced the perceived cuteness of adult bodies, while child-like bodily features generally had negative impacts. Furthermore, the results showed that an increased head size significantly boosted the perceived cuteness for child faces more than for adult faces. Lastly, the influence of the HBR was more pronounced when the outline of a body's silhouette was the only available information compared to when detailed facial and bodily features were presented. This study suggests that body proportion information, derived from the body's outline, and facial and bodily features, derived from the interior surface, are integrated to form a unitary representation of a whole person in cuteness perception. Our findings highlight the dominance of facial features over bodily information in cuteness perception, with facial attributes serving as key references for evaluating face-body relationships and body proportions. This research offers significant insights into social cognition and character design, particularly in how people perceive entities with mixed features of different social categories, underlining the importance of congruency in perceptual elements.
Affiliation(s)
- Sung-Ho Kim
- Department of Psychology, Ewha Womans University, 52 Ewhayeodae-gil, Seodaemun-gu, Seoul 03760, Republic of Korea

7
Zafirova Y, Bognár A, Vogels R. Configuration-sensitive face-body interactions in primate visual cortex. Prog Neurobiol 2024; 232:102545. [PMID: 38042248 PMCID: PMC10788614 DOI: 10.1016/j.pneurobio.2023.102545]
Abstract
Traditionally, the neural processing of faces and bodies is studied separately, although they are encountered together as parts of an agent. Despite its social importance, it is poorly understood how faces and bodies interact, particularly at the single-neuron level. Here, we examined the interaction between faces and bodies in the macaque inferior temporal (IT) cortex, targeting an fMRI-defined patch. We recorded responses of neurons to monkey images in which the face was in its natural location (natural face-body configuration) or mislocated with respect to the upper body (unnatural face-body configuration). On average, the neurons did not respond more strongly to the natural face-body configurations than predicted by the summed responses to their faces and bodies presented in isolation. However, the neurons responded more strongly to the natural than to the unnatural face-body configurations. This configuration effect was present for face- and monkey-centered images, did not depend on local feature differences between configurations, and was present when the face was replaced by a small object. The face-body interaction rules differed between natural and unnatural configurations. In sum, we show for the first time that single IT neurons process faces and bodies in a configuration-specific manner, preferring natural face-body configurations.
Affiliation(s)
- Yordanka Zafirova
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
- Anna Bognár
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium

8
Abstract
The judgment of female body appearance has been reported to be affected by a range of internal (e.g., viewers' sexual cognition) and external factors (e.g., viewed clothing type and colour). This eye-tracking study aimed to complement previous research by examining the effect of facial expression on female body perception and associated body-viewing gaze behaviour. We presented female body images of Caucasian avatars in a continuum of common dress sizes posing seven basic facial expressions (neutral, happiness, sadness, anger, fear, surprise, and disgust), and asked both male and female participants to rate the perceived body attractiveness and body size. The analysis revealed an evident modulatory role of avatar facial expressions on body attractiveness and body size ratings, but not on the amount of viewing time directed at individual body features. Specifically, happy and angry avatars attracted the highest and lowest body attractiveness ratings, respectively, and fearful and surprised avatars tended to be rated slimmer. Interestingly, the impact of facial expression on female body assessment was not further influenced by viewers' gender, suggesting a 'universal' role of common facial expressions in modifying the perception of female body appearance.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, LN6 7TS, UK

9
Hu Y, O'Toole AJ. First impressions: Integrating faces and bodies in personality trait perception. Cognition 2023; 231:105309. [PMID: 36347653 DOI: 10.1016/j.cognition.2022.105309]
Abstract
Faces and bodies spontaneously elicit personality trait judgments (e.g., trustworthy, dominant, lazy). We examined how trait information from the face and body combines to form first impressions of the whole person, and whether trait judgments from the face and body are affected by seeing the whole person. Consistent with the trait-dependence hypothesis, Experiment 1 showed that the relative contribution of the face and body to whole-person perception varied with the trait judged. Agreeableness traits (e.g., warm, aggressive, sympathetic, trustworthy) were inferred primarily from the face, conscientiousness traits (e.g., dependable, careless) from the body, and extraversion traits (e.g., dominant, quiet, confident) from the whole person. A control experiment showed that both clothing and body shape contributed to whole-person judgments. In Experiment 2, we found that a face (or body) rated within the whole person elicited a different rating than when it was rated in isolation. Specifically, when trait ratings differed for an isolated face and body of the same identity, the whole-person context biased in-context ratings of the faces and bodies toward the ratings of the context. These results show that face and body trait perception interact more than previously assumed. We combine current and established findings to propose a novel framework to account for face-body integration in trait perception. This framework incorporates basic elements such as perceptual determinants, nonperceptual determinants, trait formation, and integration, as well as predictive factors such as the rater, the person rated, and the situation.
Affiliation(s)
- Ying Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China

10
Zafirova Y, Cui D, Raman R, Vogels R. Keep the head in the right place: Face-body interactions in inferior temporal cortex. Neuroimage 2022; 264:119676. [PMID: 36216293 DOI: 10.1016/j.neuroimage.2022.119676]
Abstract
In primates, faces and bodies activate distinct regions in the inferior temporal (IT) cortex and are typically studied separately. Yet, primates interact with whole agents and not with random concatenations of faces and bodies. Despite its social importance, it is still poorly understood how faces and bodies interact in IT. Here, we addressed this gap by measuring fMRI activations to whole agents and to unnatural face-body configurations in which the head was mislocated with respect to the body, and examined how these relate to the sum of the activations to their corresponding faces and bodies. First, we mapped patches in the IT of awake macaques that were activated more by images of whole monkeys compared to objects and found that these mostly overlapped with body and face patches. In a second fMRI experiment, we obtained no evidence for superadditive responses in these "monkey patches", with the activation to the monkeys being less than or equal to the summed face-body activations. However, monkey patches in the anterior IT were activated more by natural compared to unnatural configurations. The stronger activations to natural configurations could not be explained by the summed face-body activations. These univariate results were supported by regression analyses in which we modeled the activations to both configurations as a weighted linear combination of the activations to the faces and bodies, showing higher regression coefficients for the natural compared to the unnatural configurations. Deeper layers of trained convolutional neural networks also contained units that responded more to natural compared to unnatural monkey configurations. Unlike the monkey fMRI patches, these units showed substantial superadditive responses to the natural configurations. Our monkey fMRI data suggest configuration-sensitive face-body interactions in anterior IT, adding to the evidence for integrated face-body processing in the primate ventral visual stream, and open the way for mechanistic studies using single-unit recordings in these patches.
Affiliation(s)
- Yordanka Zafirova
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
- Ding Cui
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
- Rajani Raman
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium

11
Albohn DN, Brandenburg JC, Kveraga K, Adams RB. The shared signal hypothesis: Facial and bodily expressions of emotion mutually inform one another. Atten Percept Psychophys 2022; 84:2271-2280. [PMID: 36045309 PMCID: PMC9509690 DOI: 10.3758/s13414-022-02548-6]
Abstract
Decades of research show that contextual information from the body, visual scene, and voices can facilitate judgments of facial expressions of emotion. To date, most research suggests that bodily expressions of emotion offer context for interpreting facial expressions, but not vice versa. The present research aimed to investigate the conditions under which mutual processing of facial and bodily displays of emotion facilitate and/or interfere with emotion recognition. In the current two studies, we examined whether body and face emotion recognition are enhanced through integration of shared emotion cues, and/or hindered through mixed signals (i.e., interference). We tested whether faces and bodies facilitate or interfere with emotion processing by pairing briefly presented (33 ms), backward-masked presentations of faces with supraliminally presented bodies (Experiment 1) and vice versa (Experiment 2). Both studies revealed strong support for integration effects, but not interference. Integration effects are most pronounced for low-emotional clarity facial and bodily expressions, suggesting that when more information is needed in one channel, the other channel is recruited to disentangle any ambiguity. That this occurs for briefly presented, backward-masked presentations reveals low-level visual integration of shared emotional signal value.
Affiliation(s)
- Daniel N Albohn
- Booth School of Business, The University of Chicago, Chicago, IL, USA
- Joseph C Brandenburg
- Department of School Psychology, The Pennsylvania State University, University Park, PA, USA
- Kestutis Kveraga
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA

12
Abassi E, Papeo L. Behavioral and neural markers of visual configural processing in social scene perception. Neuroimage 2022; 260:119506. [PMID: 35878724 DOI: 10.1016/j.neuroimage.2022.119506]
Abstract
Research on face perception has revealed highly specialized visual mechanisms, such as configural processing, and has provided markers of interindividual differences (including disease risks and alterations) in the visuo-perceptual abilities that support social cognition. Is face perception unique in the degree or kind of its mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face, as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing (i.e., stronger response to facing vs. non-facing dyads, and greater susceptibility to stimulus inversion for facing vs. non-facing dyads) and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that individual performance in body-dyad perception is reliable, stable over time, and correlated with individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms (and brain areas), analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic visual processes that lay the foundations for understanding others, their relationships, and their interactions.
Affiliation(s)
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) and Université Claude Bernard Lyon 1, 67 Bd. Pinel, 69675 Bron, France

13
Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022; 25:104462. [PMID: 35707718 PMCID: PMC9189121 DOI: 10.1016/j.isci.2022.104462]
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents compared to robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, results suggest that human-like robotic actions may be processed differently from humans' and objects' behavior. These findings inform our understanding of the relevance of an object's physical features in triggering mentalizing abilities and its relevance for human-robot interaction.
Highlights:
- People differently ascribe mental content to human-like and non-human-like agents
- A human-like shape may automatically engage mentalizing processes
- Human actions are interpreted faster than non-human actions
14
Abstract
Visual representations of bodies, in addition to those of faces, contribute to the recognition of con- and heterospecifics, to action recognition, and to nonverbal communication. Despite its importance, the neural basis of the visual analysis of bodies has been less studied than that of faces. In this article, I review what is known about the neural processing of bodies, focusing on the macaque temporal visual cortex. Early single-unit recording work suggested that the temporal visual cortex contains representations of body parts and bodies, with the dorsal bank of the superior temporal sulcus representing bodily actions. Subsequent functional magnetic resonance imaging studies in both humans and monkeys showed several temporal cortical regions that are strongly activated by bodies. Single-unit recordings in the macaque body patches suggest that these represent mainly body shape features. More anterior patches show a greater viewpoint-tolerant selectivity for body features, which may reflect a processing principle shared with other object categories, including faces. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Belgium
- Leuven Brain Institute, KU Leuven, Belgium
15
Foster C. A Distributed Model of Face and Body Integration. Neurosci Insights 2022; 17:26331055221119221. [PMID: 35991808 PMCID: PMC9386443 DOI: 10.1177/26331055221119221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 07/26/2022] [Indexed: 11/17/2022] Open
Abstract
Separate face- and body-responsive brain networks have been identified that show strong responses when observers view faces and bodies. It has been proposed that face and body processing may be initially separated in the lateral occipitotemporal cortex and then combined into a whole-person representation in the anterior temporal cortex, or elsewhere in the brain. However, in contrast to this proposal, our recent study identified a common coding of face and body orientation (i.e., facing direction) in the lateral occipitotemporal cortex, demonstrating an integration of face and body information at an early stage of face and body processing. These results, in combination with findings that show integration of face and body identity in the lateral occipitotemporal, parahippocampal, and superior parietal cortex, and face and body emotional expression in the posterior superior temporal sulcus and medial prefrontal cortex, suggest that face and body integration may be more distributed than previously considered. I propose a new model of face and body integration, where areas at the intersection of face- and body-responsive regions play a role in integrating specific properties of faces and bodies, and distributed regions across the brain contribute to high-level, abstract integration of shared face and body properties.
Affiliation(s)
- Celia Foster
- Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
16
Kobayashi M, Kanazawa S, Yamaguchi MK, O'Toole AJ. Cortical processing of dynamic bodies in the superior occipito-temporal regions of the infants' brain: Difference from dynamic faces and inversion effect. Neuroimage 2021; 244:118598. [PMID: 34587515 DOI: 10.1016/j.neuroimage.2021.118598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 11/17/2022] Open
Abstract
Previous functional neuroimaging studies imply a crucial role of the superior temporal regions (e.g., superior temporal sulcus: STS) for processing of dynamic faces and bodies. However, little is known about the cortical processing of moving faces and bodies in infancy. The current study used functional near-infrared spectroscopy (fNIRS) to directly compare cortical hemodynamic responses to dynamic faces (videos of approaching people with blurred bodies) and dynamic bodies (videos of approaching people with blurred faces) in the infant brain. We also examined the body-inversion effect in 5- to 8-month-old infants using hemodynamic responses as a measure. We found significant brain activity for the dynamic faces and bodies in the superior area of bilateral temporal cortices in both 5- to 6-month-old and 7- to 8-month-old infants. The hemodynamic responses to dynamic faces occurred across a broader area of cortex in 7- to 8-month-olds than in 5- to 6-month-olds, but we did not find a developmental change for dynamic bodies. There was no significant activation when the stimuli were presented upside down, indicating that these activation patterns did not result from the low-level visual properties of dynamic faces and bodies. Additionally, we found that the superior temporal regions showed a body-inversion effect in infants aged over 5 months: the upright dynamic body stimuli induced stronger activation compared to the inverted stimuli. The most important contribution of the present study is that we identified cortical areas responsive to dynamic bodies and faces in two groups of infants (5-6 and 7-8 months of age) and found different developmental trends for the processing of bodies and faces.
Affiliation(s)
- Megumi Kobayashi
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Japan
- Alice J O'Toole
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
17
Mills E, Guo K. Impact of Face Masks on Female Body Perception is Modulated by Facial Expressions. Perception 2021; 51:51-59. [PMID: 34821177 PMCID: PMC8771895 DOI: 10.1177/03010066211061092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
People routinely wear face masks during the pandemic, but little is known about their impact on body perception. In this online study, we presented female body images of Caucasian avatars in common dress sizes displaying happy, angry, and neutral facial expressions with and without face masks, and asked women to rate perceived body attractiveness and body size. Compared with the mask-off condition, the mask-on condition decreased body attractiveness ratings for happy avatars but did not affect ratings for neutral avatars, irrespective of avatar dress size. For avatars displaying angry expressions, the mask-on condition increased body attractiveness ratings for slimmer avatars but did not affect ratings for larger avatars. Body size estimation, on the other hand, was not systematically affected by face masks and facial expressions. It appears that face masks mainly show an expression-dependent influence on body attractiveness judgement, possibly through suppressing the perceived facial expressions.
Affiliation(s)
- Eleanor Mills
- School of Psychology, University of Lincoln, Lincoln, UK
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, UK
18
One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021; 227:1423-1438. [PMID: 34792643 DOI: 10.1007/s00429-021-02420-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/22/2021] [Indexed: 10/19/2022]
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct parts of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object, and their presence inevitably covaries in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
19
Linear Integration of Sensory Evidence over Space and Time Underlies Face Categorization. J Neurosci 2021; 41:7876-7893. [PMID: 34326145 DOI: 10.1523/jneurosci.3055-20.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 07/08/2021] [Accepted: 07/21/2021] [Indexed: 11/21/2022] Open
Abstract
Visual object recognition relies on elaborate sensory processes that transform retinal inputs to object representations, but it also requires decision-making processes that read out object representations and function over prolonged time scales. The computational properties of these decision-making processes remain underexplored for object recognition. Here, we study these computations by developing a stochastic multifeature face categorization task. Using quantitative models and tight control of spatiotemporal visual information, we demonstrate that human subjects (five males, eight females) categorize faces through an integration process that first linearly adds the evidence conferred by task-relevant features over space to create aggregated momentary evidence and then linearly integrates it over time with minimum information loss. Discrimination of stimuli along different category boundaries (e.g., identity or expression of a face) is implemented by adjusting feature weights of spatial integration. This linear but flexible integration process over space and time bridges past studies on simple perceptual decisions to complex object recognition behavior.

SIGNIFICANCE STATEMENT Although simple perceptual decision-making such as discrimination of random dot motion has been successfully explained as accumulation of sensory evidence, we lack rigorous experimental paradigms to study the mechanisms underlying complex perceptual decision-making such as discrimination of naturalistic faces. We develop a stochastic multifeature face categorization task as a systematic approach to quantify the properties and potential limitations of the decision-making processes during object recognition. We show that human face categorization could be modeled as a linear integration of sensory evidence over space and time. Our framework to study object recognition as a spatiotemporal integration process is broadly applicable to other object categories and bridges past studies of object recognition and perceptual decision-making.
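The two-stage model described in this abstract (linear spatial pooling of feature evidence, then lossless temporal accumulation) can be sketched in a few lines. The feature set, weights, and stimulus values below are illustrative assumptions, not the authors' actual task parameters or fitted readout weights.

```python
import numpy as np

def momentary_evidence(features, weights):
    """Linear spatial integration: weighted sum of task-relevant feature evidence."""
    return float(np.dot(weights, features))

def decision_variable(feature_frames, weights):
    """Lossless temporal integration: sum momentary evidence across stimulus frames."""
    return sum(momentary_evidence(f, weights) for f in feature_frames)

# Three hypothetical stimulus frames, each carrying evidence from three
# informative facial features (e.g., eyes, nose, mouth regions).
frames = np.array([
    [0.2, 0.1, 0.3],
    [0.1, 0.0, 0.2],
    [0.3, 0.2, 0.1],
])

# Readout weights for one category boundary (e.g., identity); discriminating
# along a different boundary (e.g., expression) would only change these weights.
identity_weights = np.array([0.5, 0.2, 0.3])

dv = decision_variable(frames, identity_weights)
choice = "face A" if dv > 0 else "face B"
```

Switching the task from identity to expression categorization in this sketch amounts to swapping in a different weight vector while keeping the same linear spatial-then-temporal integration machinery, which is the flexibility the abstract describes.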
20
Ritchie JB, Zeman AA, Bosmans J, Sun S, Verhaegen K, Op de Beeck HP. Untangling the Animacy Organization of Occipitotemporal Cortex. J Neurosci 2021; 41:7103-7119. [PMID: 34230104 PMCID: PMC8372013 DOI: 10.1523/jneurosci.2628-20.2021] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2020] [Revised: 04/20/2021] [Accepted: 05/20/2021] [Indexed: 11/21/2022] Open
Abstract
Some of the most impressive functional specializations in the human brain are found in the occipitotemporal cortex (OTC), where several areas exhibit selectivity for a small number of visual categories, such as faces and bodies, and spatially cluster based on stimulus animacy. Previous studies suggest this animacy organization reflects the representation of an intuitive taxonomic hierarchy, distinct from the presence of face- and body-selective areas in OTC. Using human functional magnetic resonance imaging, we investigated the independent contribution of these two factors (the face-body division and the taxonomic hierarchy) in accounting for the animacy organization of OTC and whether they might also be reflected in the architecture of several deep neural networks that have not been explicitly trained to differentiate taxonomic relations. We found that graded visual selectivity, based on animal resemblance to human faces and bodies, masquerades as an apparent animacy continuum, which suggests that taxonomy is not a separate factor underlying the organization of the ventral visual pathway.

SIGNIFICANCE STATEMENT Portions of the visual cortex are specialized to determine whether types of objects are animate in the sense of being capable of self-movement. Two factors have been proposed as accounting for this animacy organization: representations of faces and bodies and an intuitive taxonomic continuum of humans and animals. We performed an experiment to assess the independent contribution of both of these factors. We found that graded visual representations, based on animal resemblance to human faces and bodies, masquerade as an apparent animacy continuum, suggesting that taxonomy is not a separate factor underlying the organization of areas in the visual cortex.
Affiliation(s)
- J Brendan Ritchie
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Astrid A Zeman
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Joyce Bosmans
- Faculty of Medicine and Health Sciences, University of Antwerp, 2000 Antwerp, Belgium
- Shuo Sun
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Kirsten Verhaegen
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Hans P Op de Beeck
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
21
Bratch A, Chen Y, Engel SA, Kersten DJ. Visual adaptation selective for individual limbs reveals hierarchical human body representation. J Vis 2021; 21:18. [PMID: 34007989 PMCID: PMC8142707 DOI: 10.1167/jov.21.5.18] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 02/24/2021] [Indexed: 11/24/2022] Open
Abstract
The spatial relationships between body parts are a rich source of information for person perception, with even simple pairs of parts providing highly valuable information. Computation of these relationships would benefit from a hierarchical representation, where body parts are represented individually. We hypothesized that the human visual system makes use of such representations. To test this hypothesis, we used adaptation to determine whether observers were sensitive to changes in the length of one body part relative to another. Observers viewed forearm/upper arm pairs where the forearm had been either lengthened or shortened, judging the perceived length of the forearm. Observers then adapted to a variety of different stimuli (e.g., arms, objects, etc.) in different orientations and visual field locations. We found that following adaptation to distorted limbs, but not non-limb objects, observers experienced a shift in perceived forearm length. Furthermore, this effect partially transferred across different orientations and visual field locations. Taken together, these results suggest the effect arises in high level mechanisms specialized for specific body parts, providing evidence for a representation of bodies based on parts and their relationships.
Affiliation(s)
- Alexander Bratch
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, USA
- Yixiong Chen
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Stephen A Engel
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Daniel J Kersten
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
22
Abstract
The accurate perception of human crowds is integral to social understanding and interaction. Previous studies have shown that observers are sensitive to several crowd characteristics such as average facial expression, gender, identity, joint attention, and heading direction. In two experiments, we examined ensemble perception of crowd speed using standard point-light walkers (PLW). Participants were asked to estimate the average speed of a crowd consisting of 12 figures moving at different speeds. In Experiment 1, trials of intact PLWs alternated with trials of scrambled PLWs with a viewing duration of 3 seconds. We found that ensemble processing of crowd speed could rely on local motion alone, although a globally intact configuration enhanced performance. In Experiment 2, observers estimated the average speed of intact-PLW crowds that were displayed at reduced viewing durations across five blocks of trials (between 2500 ms and 500 ms). Estimation of fast crowds was precise and accurate regardless of viewing duration, and we estimated that three to four walkers could still be integrated at 500 ms. For slow crowds, we found a systematic deterioration in performance as viewing time reduced, and performance at 500 ms could not be distinguished from a single-walker response strategy. Overall, our results suggest that rapid and accurate ensemble perception of crowd speed is possible, although sensitive to the precise speed range examined.
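The subset-integration logic suggested by these results (estimating a crowd's mean speed from only a few sampled walkers) can be illustrated with a minimal simulation. The speed range, subset sizes, and trial counts below are assumptions chosen for illustration, not the study's stimulus parameters.

```python
import numpy as np

# A crowd of 12 walkers moving at different speeds (arbitrary units).
rng = np.random.default_rng(1)
speeds = rng.uniform(0.8, 1.6, 12)
true_mean = speeds.mean()

def subset_estimate_error(k, n_trials=5000):
    """RMS error of estimating the crowd mean from k randomly sampled walkers.

    Models an observer who integrates only k of the 12 walkers and reports
    their average as the crowd speed.
    """
    errs = [speeds[rng.choice(12, k, replace=False)].mean() - true_mean
            for _ in range(n_trials)]
    return float(np.sqrt(np.mean(np.square(errs))))

err_1 = subset_estimate_error(1)    # single-walker strategy: largest error
err_4 = subset_estimate_error(4)    # three-to-four-walker integration
err_12 = subset_estimate_error(12)  # full integration: error vanishes
```

In this toy model, integrating three to four walkers already cuts estimation error substantially relative to a single-walker strategy, which is one way to interpret the contrast the abstract draws between short-duration performance and a single-walker response strategy.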
23
Independent contributions of the face, body, and gait to the representation of the whole person. Atten Percept Psychophys 2020; 83:199-214. [PMID: 33083987 DOI: 10.3758/s13414-020-02110-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Most studies on person perception have primarily investigated static images of faces. However, real-life person perception also involves the body and often the gait of the whole person. Whereas some studies indicated that the face dominates the representation of the whole person, others have emphasized the additional contribution of the body and gait. Here, we compared models of whole-person perception by asking whether a model that includes the body for static whole-person stimuli and also the gait for dynamic whole-person stimuli accounts better for the representation of the whole person than a model that takes into account the face alone. Participants rated the distinctiveness of static or dynamic displays of different people based on either the whole person, face, body, or gait. By fitting a linear regression model to the representation of the whole person based on the face, body, and gait, we revealed that the face and body contribute uniquely and independently to the representation of the static whole person, and that gait further contributes to the representation of the dynamic person. A complementary analysis examined whether these components are also valid dimensions of a whole-person representational space. This analysis further confirmed that the body in addition to the face as well as the gait are valid dimensions of the static and dynamic whole-person representations, respectively. These data clearly show that whole-person perception goes beyond the face and is significantly influenced by the body and gait.
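The regression analysis described here (modeling whole-person distinctiveness ratings as a linear combination of face, body, and gait ratings) can be sketched with synthetic data. The mixing weights, noise level, and sample size below are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 40

# Synthetic distinctiveness ratings (1-7 scale) for each component.
face = rng.uniform(1, 7, n_people)
body = rng.uniform(1, 7, n_people)
gait = rng.uniform(1, 7, n_people)

# Generate synthetic whole-person ratings from a known linear mix plus noise
# (the "ground truth" exists only because this is simulated data).
whole = 0.6 * face + 0.3 * body + 0.1 * gait + rng.normal(0, 0.1, n_people)

# Least-squares fit: whole ≈ intercept + b_face*face + b_body*body + b_gait*gait.
X = np.column_stack([np.ones(n_people), face, body, gait])
coefs, *_ = np.linalg.lstsq(X, whole, rcond=None)
intercept, b_face, b_body, b_gait = coefs
```

The fitted coefficients recover the generative weights, and in the study's logic a reliably nonzero body or gait coefficient is what demonstrates a unique contribution beyond the face.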
24
Gandolfo M, Downing PE. Asymmetric visual representation of sex from human body shape. Cognition 2020; 205:104436. [PMID: 32919115 DOI: 10.1016/j.cognition.2020.104436] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 08/05/2020] [Accepted: 08/07/2020] [Indexed: 01/21/2023]
Abstract
We efficiently infer others' states and traits from their appearance, and these inferences powerfully shape our social behaviour. One key trait is sex, which is strongly cued by the appearance of the body. What are the visual representations that link body shape to sex? Previous studies of visual sex judgment tasks find observers have a bias to report "male", particularly for ambiguous stimuli. This finding implies a representational asymmetry - that for the processes that generate a sex percept, the default output is "male", and "female" is determined by the presence of additional perceptual evidence. That is, female body shapes are positively coded by reference to a male default shape. This perspective makes a novel prediction in line with Treisman's studies of visual search asymmetries: female body targets should be more readily detected amongst male distractors than vice versa. Across 10 experiments (N = 32 each) we confirmed this prediction and ruled out alternative low-level explanations. The asymmetry was found with profile and frontal body silhouettes, frontal photographs, and schematised icons. Low-level confounds were controlled by balancing silhouette images for size and homogeneity, and by matching physical properties of photographs. The female advantage was nulled for inverted icons, but intact for inverted photographs, suggesting reliance on distinct cues to sex for different body depictions. Together, these findings demonstrate a principle of the perceptual coding that links bodily appearance with a significant social trait: the female body shape is coded as an extension of a male default. We conclude by offering a visual experience account of how these asymmetric representations arise in the first place.