1. Duan Y, Zhan J, Gross J, Ince RAA, Schyns PG. Pre-frontal cortex guides dimension-reducing transformations in the occipito-ventral pathway for categorization behaviors. Curr Biol 2024:S0960-9822(24)00834-0. PMID: 39029470. DOI: 10.1016/j.cub.2024.06.050.
Abstract
To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e., manifolds) in support of multiple categorization behaviors. Here, we tested this hypothesis by analyzing these transformations reflected in dynamic MEG source activity while individual participants actively categorized the same stimuli according to different tasks: face expression, face gender, pedestrian gender, and vehicle type. Results reveal three transformation stages guided by the pre-frontal cortex. At stage 1 (high-dimensional, 50-120 ms), occipital sources represent both task-relevant and task-irrelevant stimulus features; task-relevant features advance into higher ventral/dorsal regions, whereas task-irrelevant features halt at the occipital-temporal junction. At stage 2 (121-150 ms), stimulus feature representations reduce to lower-dimensional manifolds, which then transform into the task-relevant features underlying categorization behavior over stage 3 (161-350 ms). Our findings shed light on how the brain's network mechanisms transform high-dimensional inputs into specific feature manifolds that support multiple categorization behaviors.
Affiliation(s)
- Yaocong Duan: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Jiayu Zhan: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Joachim Gross: Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, Münster 48149, Germany
- Robin A A Ince: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
2. Wang A, Quinn BPA, Gofton H, Andrews TJ. No evidence for an other-race effect in dominance and trustworthy judgements from faces. Perception 2024:3010066241258204. PMID: 38881389. DOI: 10.1177/03010066241258204.
Abstract
A variety of evidence shows that social categorization of people based on their race can lead to stereotypical judgements and prejudicial behaviour. Here, we explore the extent to which trait judgements of faces are influenced by race. To address this issue, we measured the reliability of first impressions for own-race and other-race faces in Asian and White participants. Participants viewed pairs of faces and were asked to indicate which of the two faces was more dominant or which of the two faces was more trustworthy. We measured the consistency (or reliability) of these judgements across participants for own-race and other-race faces. We found that judgements of dominance or trustworthiness showed similar levels of reliability for own-race and other-race faces. Moreover, an item analysis showed that the judgements on individual trials were very similar across participants from different races. Next, participants made overall ratings of dominance and trustworthiness from own-race and other-race faces. Again, we found no evidence for an other-race effect (ORE). Together, these results provide a new approach to measuring trait judgements of faces and show that in these conditions there is no ORE for the perception of dominance and trustworthiness.
3. La Malva P, Di Crosta A, Prete G, Ceccato I, Gatti M, D'Intino E, Tommasi L, Mammarella N, Palumbo R, Di Domenico A. The effects of prefrontal tDCS and hf-tRNS on the processing of positive and negative emotions evoked by video clips in first- and third-person. Sci Rep 2024; 14:8064. PMID: 38580697. PMCID: PMC10997595. DOI: 10.1038/s41598-024-58702-7.
Abstract
The causal role of the cerebral hemispheres in positive and negative emotion processing remains uncertain. The Right Hemisphere Hypothesis proposes right-hemispheric superiority for all emotions, while the Valence Hypothesis suggests that the left and right hemispheres are primarily involved in positive and negative emotions, respectively. To address this, emotional video clips were presented during dorsolateral prefrontal cortex (DLPFC) electrical stimulation, comparing transcranial direct current stimulation (tDCS) with high-frequency transcranial random noise stimulation (hf-tRNS) and manipulating perspective-taking (first-person vs. third-person point of view, POV). Four stimulation conditions were applied while participants rated the valence of the emotional videos: anodal/cathodal tDCS to the left/right DLPFC, the reverse configuration (anodal/cathodal on the right/left DLPFC), bilateral hf-tRNS, and sham (control condition). Results revealed significant interactions between stimulation setup, emotional valence, and POV, implicating the DLPFC in emotions and perspective-taking. The right hemisphere played a crucial role in both positive and negative valence, supporting the Right Hemisphere Hypothesis. However, the complex interactions between the hemispheres and valence also supported the Valence Hypothesis. Both stimulation techniques (tDCS and hf-tRNS) significantly modulated the results. These findings support both hypotheses regarding hemispheric involvement in emotions, underscore the utility of video stimuli, and emphasize the importance of perspective-taking in this field, which is often overlooked.
Affiliation(s)
- Pasquale La Malva: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Adolfo Di Crosta: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Giulia Prete: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Irene Ceccato: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Matteo Gatti: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Eleonora D'Intino: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Luca Tommasi: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Nicola Mammarella: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Rocco Palumbo: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
- Alberto Di Domenico: Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, 31, Via dei Vestini, 66100, Chieti, Italy
4. Malatesta G, D'Anselmo A, Prete G, Lucafò C, Faieta L, Tommasi L. The Predictive Role of the Posterior Cerebellum in the Processing of Dynamic Emotions. Cerebellum 2024; 23:545-553. PMID: 37285048. PMCID: PMC10951036. DOI: 10.1007/s12311-023-01574-w.
Abstract
Recent studies have bolstered the important role of the cerebellum in high-level socio-affective functions. In particular, neuroscientific evidence shows that the posterior cerebellum is involved in social cognition and emotion processing, presumably through its involvement in temporal processing and in predicting the outcomes of social sequences. We used cerebellar transcranial random noise stimulation (ctRNS) targeting the posterior cerebellum to affect the performance of 32 healthy participants during an emotion discrimination task, including both static and dynamic facial expressions (i.e., transitioning from a static neutral image to a happy/sad emotion). Compared to the sham condition, ctRNS significantly reduced participants' accuracy in discriminating static sad facial expressions but increased their accuracy in discriminating dynamic sad facial expressions. No effects emerged with happy faces. These findings may suggest the existence of two different circuits in the posterior cerebellum for the processing of negative emotional stimuli: a first, time-independent mechanism, which can be selectively disrupted by ctRNS, and a second, time-dependent mechanism of predictive "sequence detection", which can be selectively enhanced by ctRNS. This latter mechanism might be included among the cerebellar operational models constantly engaged in the rapid adjustment of social predictions based on dynamic behavioral information inherent to others' actions. We speculate that it might be one of the basic principles underlying the understanding of other individuals' social and emotional behaviors during interactions.
Affiliation(s)
- Gianluca Malatesta: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
- Anita D'Anselmo: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
- Giulia Prete: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
- Chiara Lucafò: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
- Letizia Faieta: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
- Luca Tommasi: Department of Psychological, Health and Territorial Sciences, University "G. d'Annunzio" of Chieti-Pescara, Chieti, Italy
5. Bott A, Steer HC, Faße JL, Lincoln TM. Visualizing threat and trustworthiness prior beliefs in face perception in high versus low paranoia. Schizophrenia (Heidelberg) 2024; 10:40. PMID: 38509135. PMCID: PMC10954723. DOI: 10.1038/s41537-024-00459-z.
Abstract
Predictive processing accounts of psychosis conceptualize delusions as overly strong learned expectations (prior beliefs) that shape cognition and perception. Paranoia, the most prevalent form of delusions, involves threat prior beliefs that are inherently social. Here, we investigated whether paranoia is related to overly strong threat prior beliefs in face perception. Participants with subclinical levels of high (n = 109) versus low (n = 111) paranoia viewed face stimuli paired with written descriptions of threatening versus trustworthy behaviors, thereby activating their threat versus trustworthiness prior beliefs. Subsequently, they completed an established social-psychological reverse correlation image classification (RCIC) paradigm. This paradigm used participants' responses to randomly varying face stimuli to generate individual classification images (ICIs) intended to visualize the respective facial prior belief (threat vs. trust). An independent sample (n = 76) rated these ICIs as more threatening in the threat compared to the trust condition, validating the causal effect of prior beliefs on face perception. Contrary to expectations derived from predictive processing accounts, there was no evidence for a main effect of paranoia. This finding suggests that paranoia was not related to stronger threat prior beliefs that directly affected face perception, challenging the assumption that paranoid beliefs operate on a perceptual level.
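The core of such a reverse correlation paradigm is simple: the classification image is the average of the noise patterns a participant selects. The toy simulation below illustrates the idea with a simulated observer whose internal template drives the choices; image size, trial count, and the observer model are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reverse correlation image classification (RCIC) simulation.
# A simulated observer holds an internal "threat" template; on each trial
# they see base+noise vs. base-noise and pick the side closer to it.
SIZE = 32                                     # hypothetical image size
template = rng.standard_normal((SIZE, SIZE))  # stand-in internal prior

def run_rcic(n_trials=2000):
    selected = []
    for _ in range(n_trials):
        noise = rng.standard_normal((SIZE, SIZE))
        # Pick +noise when it correlates positively with the template.
        sign = 1.0 if np.vdot(noise, template) > 0 else -1.0
        selected.append(sign * noise)
    # Classification image (CI): average of the selected noise patterns.
    return np.mean(selected, axis=0)

ci = run_rcic()
# The CI recovers a noisy version of the observer's internal template.
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
```

Added to a base face, such a classification image visualizes the prior belief; in the study above, an independent sample then rated these images.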
Affiliation(s)
- Antonia Bott: Clinical Psychology and Psychotherapy, Faculty of Psychology and Human Movement Science, Universität Hamburg, Hamburg, Germany
- Hanna C Steer: Clinical Psychology and Psychotherapy, Faculty of Psychology and Human Movement Science, Universität Hamburg, Hamburg, Germany
- Julian L Faße: Clinical Psychology and Psychotherapy, Faculty of Psychology and Human Movement Science, Universität Hamburg, Hamburg, Germany
- Tania M Lincoln: Clinical Psychology and Psychotherapy, Faculty of Psychology and Human Movement Science, Universität Hamburg, Hamburg, Germany
6. Zhao S, Cao R, Lin C, Wang S, Yu H. Differences in the link between social trait judgment and socio-emotional experience in neurotypical and autistic individuals. Sci Rep 2024; 14:5400. PMID: 38443486. PMCID: PMC10915137. DOI: 10.1038/s41598-024-56005-5.
Abstract
Neurotypical (NT) individuals and individuals with autism spectrum disorder (ASD) make different judgments of social traits from others' faces; they also exhibit different social emotional responses in social interactions. A common hypothesis is that these differences in face perception in ASD compared with NT are related to distinct social behaviors. To test this hypothesis, we combined a face trait judgment task with a novel interpersonal transgression task that induces and measures social emotions and behaviors. ASD and neurotypical participants viewed a large set of naturalistic facial stimuli while judging them on a comprehensive set of social traits (e.g., warm, charismatic, critical). They also completed an interpersonal transgression task in which their responsibility for causing an unpleasant outcome to a social partner was manipulated. The purpose of the latter task was to measure participants' emotional (e.g., guilt) and behavioral (e.g., compensation) responses to interpersonal transgression. We found that, compared with neurotypical participants, ASD participants' self-reported guilt and compensation tendency were less sensitive to our responsibility manipulation. Importantly, ASD and neurotypical participants showed distinct associations between self-reported guilt and judgments of criticalness from others' faces. These findings reveal a novel link between perception of social traits and social emotional responses in ASD.
Affiliation(s)
- Shangcheng Zhao: Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, 93106, USA
- Runnan Cao: Department of Radiology, Washington University in St. Louis, St. Louis, MO, 63110, USA
- Chujun Lin: Department of Psychology, University of California San Diego, San Diego, CA, 92093, USA
- Shuo Wang: Department of Radiology, Washington University in St. Louis, St. Louis, MO, 63110, USA
- Hongbo Yu: Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, 93106, USA
7. Schmitz M, Vanbeneden A, Yzerbyt V. The many faces of compensation: The similarities and differences between social and facial models of perception. PLoS One 2024; 19:e0297887. PMID: 38394248. PMCID: PMC10890726. DOI: 10.1371/journal.pone.0297887.
Abstract
Previous research shows that stereotypes can distort the visual representation of groups in a top-down fashion. In the present endeavor, we tested whether the compensation effect (the negative relationship that emerges between the social dimensions of warmth and competence when judging two social targets) would bias the visual representations of these targets in a compensatory way. We captured participants' near-spontaneous facial prototypes of social targets by means of an unconstrained technique, namely reverse correlation. We relied on a large multi-phase study (N = 869) and found that the expectations of the facial content of two novel groups that differed on one of the two social dimensions are biased in a compensatory manner on the facial dimensions of trustworthiness, warmth, and dominance, but not competence. The present research opens new avenues by showing that compensation not only manifests itself in abstract ratings but also orients the visual representations of social targets.
Affiliation(s)
- Mathias Schmitz: Université Catholique de Louvain, Institute for Research in the Psychological Sciences, Louvain-la-Neuve, Belgium
- Antoine Vanbeneden: Université Catholique de Louvain, Institute for Research in the Psychological Sciences, Louvain-la-Neuve, Belgium
- Vincent Yzerbyt: Université Catholique de Louvain, Institute for Research in the Psychological Sciences, Louvain-la-Neuve, Belgium
8. Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, Jack RE. Cultural facial expressions dynamically convey emotion category and intensity information. Curr Biol 2024; 34:213-223.e5. PMID: 38141619. PMCID: PMC10831323. DOI: 10.1016/j.cub.2023.12.001.
Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
Affiliation(s)
- Chaona Chen: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Daniel S Messinger: Departments of Psychology, Pediatrics, and Electrical & Computer Engineering, University of Miami, 5665 Ponce De Leon Blvd, Coral Gables, FL 33146, USA
- Cheng Chen: Foreign Language Department, Teaching Centre for General Courses, Chengdu Medical College, 601 Tianhui Street, Chengdu 610083, China
- Hongmei Yan: The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, North Jianshe Road, Chengdu 611731, China
- Yaocong Duan: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A A Ince: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G B Garrod: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G Schyns: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Rachael E Jack: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
9. Yan Y, Zhan J, Garrod O, Cui X, Ince RAA, Schyns PG. Strength of predicted information content in the brain biases decision behavior. Curr Biol 2023; 33:5505-5514.e6. PMID: 38065096. DOI: 10.1016/j.cub.2023.10.042.
Abstract
Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization.1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories18,19,20,21,22,23,24-e.g., faces versus cars, which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
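Multivariate decoding from dynamic MEG responses, as described here, is typically implemented by training a separate classifier at each time point of a trials × sensors × time data array. The sketch below runs such time-resolved decoding on synthetic data; the array shapes, the placement of the injected class signal, and the classifier choice are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for MEG data: trials x sensors x time points.
n_trials, n_sensors, n_times = 200, 30, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)       # two feature classes (hypothetical)

# Inject a class-dependent signal on 10 sensors in a late window only, so
# decoding accuracy should rise above chance there and stay near chance earlier.
X[y == 1, :10, 30:] += 0.8

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Train and cross-validate a classifier at each time point separately.
acc = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
early, late = acc[:30].mean(), acc[30:].mean()
```

The resulting accuracy time course shows when the decodable information emerges; the study's analyses additionally localize it in source space and relate pre-stimulus reactivation strength to behavior.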
Affiliation(s)
- Yuening Yan: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Jiayu Zhan: School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Oliver Garrod: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Xuan Cui: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A A Ince: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns: School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
10. Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affective Science 2023; 4:550-562. PMID: 37744976. PMCID: PMC10514024. DOI: 10.1007/s42761-023-00215-z.
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Affiliation(s)
- Chujun Lin: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Landry S. Bulls: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Lindsey J. Tepfer: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Amisha D. Vyas: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mark A. Thornton: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
11. Viktorsson C, Valtakari NV, Falck-Ytter T, Hooge ITC, Rudling M, Hessels RS. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. PMID: 37553414. PMCID: PMC10409748. DOI: 10.1038/s41598-023-40017-8.
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
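The abstract does not spell out how the eye-mouth-index is computed; a common operationalization, assumed here purely for illustration, is the proportion of eye-region dwell time out of total eye-plus-mouth dwell time:

```python
# Hypothetical eye-mouth-index (EMI): the formula below is an assumed,
# common operationalization, not necessarily the one used in the paper.
def eye_mouth_index(samples):
    """samples: per-sample gaze labels, e.g. 'eyes', 'mouth', or 'other'.

    Returns eye dwell / (eye + mouth dwell): 1.0 = only eyes, 0.0 = only mouth.
    """
    eyes = sum(1 for s in samples if s == "eyes")
    mouth = sum(1 for s in samples if s == "mouth")
    if eyes + mouth == 0:
        return float("nan")  # never looked at either region
    return eyes / (eyes + mouth)

gaze = ["eyes", "eyes", "mouth", "other", "eyes", "mouth"]
emi = eye_mouth_index(gaze)  # 3 eye vs. 2 mouth samples -> 0.6
```

On such a scale, a higher EMI during the instruction phase than during the speech task would correspond to the pattern reported above.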
Affiliation(s)
- Charlotte Viktorsson: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Niilo V Valtakari: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Terje Falck-Ytter: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden; Center of Neurodevelopmental Disorders (KIND), Division of Neuropsychiatry, Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Maja Rudling: Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
12. Lampi AJ, Brewer R, Bird G, Jaswal VK. Non-autistic adults can recognize posed autistic facial expressions: Implications for internal representations of emotion. Autism Res 2023; 16:1321-1334. PMID: 37172211. DOI: 10.1002/aur.2938.
Abstract
Autistic people report that their emotional expressions are sometimes misunderstood by non-autistic people. One explanation for these misunderstandings could be that the two neurotypes have different internal representations of emotion: Perhaps they have different expectations about what a facial expression showing a particular emotion looks like. In three well-powered studies with non-autistic college students in the United States (total N = 632), we investigated this possibility. In Study 1, participants recognized most facial expressions posed by autistic individuals more accurately than those posed by non-autistic individuals. Study 2 showed that one reason the autistic expressions were recognized more accurately was because they were better and more intense examples of the intended expressions than the non-autistic expressions. In Study 3, we used a set of expressions created by autistic and non-autistic individuals who could see their faces as they made the expressions, which could allow them to explicitly match the expression they produced with their internal representation of that emotional expression. Here, neither autistic expressions nor non-autistic expressions were consistently recognized more accurately. In short, these findings suggest that differences in internal representations of what emotional expressions look like are unlikely to play a major role in explaining why non-autistic people sometimes misunderstand the emotions autistic people are experiencing.
Collapse
Affiliation(s)
- Andrew J Lampi
- Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
| | - Rebecca Brewer
- Department of Psychology, Royal Holloway University of London, Egham, UK
| | - Geoffrey Bird
- Department of Experimental Psychology, Brasenose College, University of Oxford, Oxford, UK
| | - Vikram K Jaswal
- Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
| |
Collapse
|
13
|
Snoek L, Jack RE, Schyns PG, Garrod OG, Mittenbühler M, Chen C, Oosterwijk S, Scholte HS. Testing, explaining, and exploring models of facial expressions of emotions. SCIENCE ADVANCES 2023; 9:eabq8421. [PMID: 36763663 PMCID: PMC9916981 DOI: 10.1126/sciadv.abq8421] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 01/09/2023] [Indexed: 06/18/2023]
Abstract
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
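The noise ceiling this abstract mentions can be estimated with a common split-half, Spearman-Brown approach; the sketch below is one standard way to compute such a ceiling, not the authors' procedure, and the participant count, stimulus count, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical categorization data: 40 participants x 100 stimuli, each
# value a rating of how strongly a facial animation signals an emotion.
# (Sizes and noise level are assumptions, not the study's data.)
true_signal = rng.normal(size=100)
responses = true_signal + 0.8 * rng.normal(size=(40, 100))

# Split-half estimate: correlate the mean responses of two random halves.
order = rng.permutation(40)
half_a = responses[order[:20]].mean(axis=0)
half_b = responses[order[20:]].mean(axis=0)
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction gives the reliability of the full-sample mean:
# no model can be expected to correlate with the data above this ceiling.
ceiling = 2 * r_half / (1 + r_half)
```

A model whose prediction-data correlation approaches `ceiling` has explained essentially all the explainable (non-noise) variance.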
Collapse
Affiliation(s)
- Lukas Snoek
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Rachael E. Jack
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Philippe G. Schyns
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | | | - Maximilian Mittenbühler
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Department of Computer Science, University of Tübingen, Tübingen, Germany
| | - Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Suzanne Oosterwijk
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
| | - H. Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
14
|
Schyns PG, Snoek L, Daube C. Degrees of algorithmic equivalence between the brain and its DNN models. Trends Cogn Sci 2022; 26:1090-1102. [PMID: 36216674 DOI: 10.1016/j.tics.2022.09.003] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 09/01/2022] [Accepted: 09/02/2022] [Indexed: 11/11/2022]
Abstract
Deep neural networks (DNNs) have become powerful and increasingly ubiquitous tools to model human cognition, and often produce similar behaviors. For example, with their hierarchical, brain-inspired organization of computations, DNNs apparently categorize real-world images in the same way as humans do. Does this imply that their categorization algorithms are also similar? We have framed the question with three embedded degrees that progressively constrain algorithmic similarity evaluations: equivalence of (i) behavioral/brain responses, which is current practice, (ii) the stimulus features that are processed to produce these outcomes, which is more constraining, and (iii) the algorithms that process these shared features, the ultimate goal. To improve DNNs as models of cognition, we develop for each degree an increasingly constrained benchmark that specifies the epistemological conditions for the considered equivalence.
Collapse
Affiliation(s)
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QB, UK.
| | - Lukas Snoek
- School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QB, UK
| | - Christoph Daube
- School of Psychology and Neuroscience, University of Glasgow, Glasgow G12 8QB, UK
| |
Collapse
|
15
|
Boutet I, Guay J, Chamberland J, Cousineau D, Collin C. Emojis that work! Incorporating visual cues from facial expressions in emojis can reduce ambiguous interpretations. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2022. [DOI: 10.1016/j.chbr.2022.100251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
|
16
|
Barrett LF. Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. AMERICAN PSYCHOLOGIST 2022; 77:894-920. [PMID: 36409120 PMCID: PMC9683522 DOI: 10.1037/amp0001054] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
This article considers the status and study of "context" in psychological science through the lens of research on emotional expressions. The article begins by updating three well-trod methodological debates on the role of context in emotional expressions to reconsider several fundamental assumptions lurking within the field's dominant methodological tradition: namely, that certain expressive movements have biologically prepared, inherent emotional meanings that issue from singular, universal processes which are independent of but interact with contextual influences. The second part of this article considers the scientific opportunities that await if we set aside this traditional understanding of "context" as a moderator of signals with inherent psychological meaning and instead consider the possibility that psychological events emerge in ecosystems of signal ensembles, such that the psychological meaning of any individual signal is entirely relational. Such a fundamental shift has radical implications not only for the science of emotion but for psychological science more generally. It offers opportunities to improve the validity and trustworthiness of psychological science beyond what can be achieved with improvements to methodological rigor alone.
Collapse
|
17
|
Tzschaschel E, Brooks KR, Stephen ID. The valence-dominance model applies to body perception. ROYAL SOCIETY OPEN SCIENCE 2022; 9:220594. [PMID: 36133152 PMCID: PMC9449465 DOI: 10.1098/rsos.220594] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 08/22/2022] [Indexed: 05/25/2023]
Abstract
First impressions of a person, including social judgements, are often based on appearance. The widely accepted valence-dominance model of face perception (Oosterhof and Todorov 2008 Proc. Natl Acad. Sci. USA 105, 11 087-11 092 (doi:10.1073/pnas.0805664105)) posits that social judgements of faces fall along two orthogonal dimensions: trustworthiness (valence) and dominance. The current study aimed to establish the principal components of social judgements based on the perception of bodies, hypothesizing that these would follow the same dimensions as face perception. Stimuli were black and white photographs showing bodies dressed in grey clothing, standing in their natural posture, in left profile. Raters (N = 237) judged the stimuli on the 14 traits used in Oosterhof and Todorov's original study (Oosterhof and Todorov 2008 Proc. Natl Acad. Sci. USA 105, 11 087-11 092 (doi:10.1073/pnas.0805664105)). Data were analysed using principal component analysis (PCA), as in the original study, with an additional exploratory factor analysis (EFA) using oblique rotation. While PCA produced a third dimension in line with several replications of the original study, results from the EFA produced two dimensions, representing trustworthiness and dominance, providing support for the hypothesis that social perceptions of bodies can be summarized using the valence-dominance model. These two factors could represent universal perceptions we have about people. Future research could explore social judgements of humans based on other stimuli, such as voices or body odour, to evaluate whether the trustworthiness and dominance dimensions are consistent across modalities.
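The PCA step described above can be sketched in a few lines of NumPy; the ratings below are simulated from two latent dimensions purely for illustration (the stimulus count, loadings, and noise level are assumptions, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ratings: 50 body stimuli x the 14 traits used by
# Oosterhof & Todorov (2008), generated here from two latent
# dimensions (standing in for valence and dominance) plus noise.
latent = rng.normal(size=(50, 2))
loadings = rng.normal(size=(2, 14))
ratings = latent @ loadings + 0.1 * rng.normal(size=(50, 14))

# PCA: centre the columns, then SVD; squared singular values give
# the proportion of variance explained per component.
centred = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
scores = centred @ Vt.T  # component scores per stimulus
```

With two genuine latent dimensions, the first two components should absorb nearly all the variance, which is the pattern the valence-dominance model predicts for real trait ratings.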
Collapse
Affiliation(s)
- Eva Tzschaschel
- School of Psychological Sciences, Macquarie University, 4 First Walk, North Ryde, New South Wales 2109, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
| | - Kevin R Brooks
- School of Psychological Sciences, Macquarie University, 4 First Walk, North Ryde, New South Wales 2109, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Body Image and Ingestion Group, Macquarie University, Sydney, Australia
| | - Ian D Stephen
- School of Social Sciences, Nottingham Trent University, Nottingham, UK
| |
Collapse
19
|
The spatio-temporal features of perceived-as-genuine and deliberate expressions. PLoS One 2022; 17:e0271047. [PMID: 35839208 PMCID: PMC9286247 DOI: 10.1371/journal.pone.0271047] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 06/22/2022] [Indexed: 11/24/2022] Open
Abstract
Reading the genuineness of facial expressions is important for increasing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In this experiment, 89 Japanese participants were asked to judge the perceived genuineness of faces in videos showing happiness or anger expressions. To identify diagnostic facial cues to the perceived genuineness of the facial expressions, we analyzed a total of 128 face videos using an automated facial action detection system; thereby, moment-to-moment activations in facial action units were annotated, and nonnegative matrix factorization extracted sparse and meaningful components from all action unit data. The results showed that judgments of genuineness decreased when more spatial patterns were observed in facial expressions. As for the temporal features, the perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than the perceived-as-genuine expressions of happiness. Moreover, opening the mouth contributed negatively to the perceived-as-genuine expressions, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
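The nonnegative matrix factorization step in this pipeline can be sketched with a plain multiplicative-update implementation (Lee & Seung); the AU count and component number below are illustrative assumptions, not the study's settings, and the random matrix stands in for real detector output.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical AU activations: 128 videos x 17 action units, nonnegative
# intensities (real values would come from an automated AU detector).
V = rng.random((128, 17))

def nmf(V, k, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF: find nonnegative W, H with V ~ W @ H."""
    r = np.random.default_rng(0)
    W = r.random((V.shape[0], k)) + eps
    H = r.random((k, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update components
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update per-video weights
    return W, H

W, H = nmf(V, k=4)  # 4 AU components; rows of H are the "meaningful" parts
reconstruction_error = np.linalg.norm(V - W @ H)
```

The nonnegativity constraint is what makes the extracted components parts-like: each row of H is a bundle of AUs that tend to activate together.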
Collapse
|
20
|
Namba S, Sato W, Matsui H. Spatio-Temporal Properties of Amused, Embarrassed, and Pained Smiles. JOURNAL OF NONVERBAL BEHAVIOR 2022. [DOI: 10.1007/s10919-022-00404-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Smiles are universal but nuanced facial expressions that are most frequently used in face-to-face communications, typically indicating amusement but sometimes conveying negative emotions such as embarrassment and pain. Although previous studies have suggested that spatial and temporal properties could differ among these various types of smiles, no study has thoroughly analyzed these properties. This study aimed to clarify the spatiotemporal properties of smiles conveying amusement, embarrassment, and pain using a spontaneous facial behavior database. The results regarding spatial patterns revealed that pained smiles showed less eye constriction and more overall facial tension than amused smiles; no spatial differences were identified between embarrassed and amused smiles. Regarding temporal properties, embarrassed and pained smiles remained in a state of higher facial tension than amused smiles. Moreover, embarrassed smiles showed a more gradual change from tension states to the smile state than amused smiles, and pained smiles had lower probabilities of staying in or transitioning to the smile state compared to amused smiles. By comparing the spatiotemporal properties of these three smile types, this study revealed that the probability of transitioning between discrete states could help distinguish amused, embarrassed, and pained smiles.
Collapse
|
21
|
Distinct neurocognitive bases for social trait judgments of faces in autism spectrum disorder. Transl Psychiatry 2022; 12:104. [PMID: 35292617 PMCID: PMC8924227 DOI: 10.1038/s41398-022-01870-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 02/18/2022] [Accepted: 02/24/2022] [Indexed: 11/08/2022] Open
Abstract
Autism spectrum disorder (ASD) is characterized by difficulties in social processes, interactions, and communication. Yet, the neurocognitive bases underlying these difficulties are unclear. Here, we triangulated the 'trans-diagnostic' approach to personality, social trait judgments of faces, and neurophysiology to investigate (1) the relative position of autistic traits in a comprehensive social-affective personality space, and (2) the distinct associations between the social-affective personality dimensions and social trait judgment from faces in individuals with ASD and neurotypical individuals. We collected personality and facial judgment data from a large sample of online participants (N = 89 self-identified ASD; N = 307 neurotypical controls). Factor analysis with 33 subscales of 10 social-affective personality questionnaires identified a 4-dimensional personality space. This analysis revealed that ASD and control participants did not differ significantly along the personality dimensions of empathy and prosociality, antisociality, or social agreeableness. However, the ASD participants exhibited a weaker association between prosocial personality dimensions and judgments of facial trustworthiness and warmth than the control participants. Neurophysiological data also indicated that ASD participants had a weaker association with neuronal representations for trustworthiness and warmth from faces. These results suggest that the atypical association between social-affective personality and social trait judgment from faces may contribute to the social and affective difficulties associated with ASD.
Collapse
|
22
|
Lin T, Zhang X, Fields EC, Sekuler R, Gutchess A. Spatial frequency impacts perceptual and attentional ERP components across cultures. Brain Cogn 2022; 157:105834. [PMID: 34999289 PMCID: PMC8792318 DOI: 10.1016/j.bandc.2021.105834] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 12/21/2021] [Accepted: 12/21/2021] [Indexed: 11/22/2022]
Abstract
Culture impacts visual perception in several ways. To identify stages of perceptual processing that differ between cultures, we used electroencephalography measures of perceptual and attentional responses to simple visual stimuli. Gabor patches of higher or lower spatial frequency were presented at high contrast to 25 American and 31 East Asian participants while they were watching for the onset of an infrequent, oddball stimulus. Region of interest and mass univariate analyses assessed how cultural background and stimulus spatial frequency affected the visual evoked response potentials. Across both groups, the Gabor of lower spatial frequency produced stronger evoked response potentials in the anterior N1 and P3 than did the higher-frequency Gabor. The mass univariate analyses also revealed effects of spatial frequency, including a frontal negativity around 150 ms and a widespread posterior positivity around 300 ms. The effects of spatial frequency generally differed little across cultures; although there was some evidence for cultural differences in the P3 response to different frequencies at the Pz electrode, this effect did not emerge in the mass univariate analyses. We discuss these results in relation to those from previous studies, and explore the potential advantages of mass univariate analyses for cultural neuroscience.
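A mass univariate contrast of the kind reported here can be sketched with a permutation max-|t| threshold, one common way to control for the thousands of electrode-by-time comparisons; everything below (trial counts, electrode/time grid, injected effect) is simulated for illustration and is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical ERP data: trials x electrodes x timepoints for low- and
# high-spatial-frequency Gabors (values are simulated, not the study's).
n_trials, n_elec, n_time = 60, 32, 200
low = rng.normal(size=(n_trials, n_elec, n_time))
high = rng.normal(size=(n_trials, n_elec, n_time))
high[:, 10:15, 60:90] += 1.5  # injected "effect" at a few electrodes/times

def tmap(a, b):
    """Two-sample t-statistic at every (electrode, timepoint)."""
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    se = np.sqrt(va / len(a) + vb / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / se

observed = tmap(high, low)

# Permutation null: shuffle condition labels, record the max |t| across
# the whole map; the 95th percentile is a family-wise threshold.
pooled = np.concatenate([high, low])
max_t = []
for _ in range(200):
    perm = rng.permutation(len(pooled))
    max_t.append(np.abs(tmap(pooled[perm[:n_trials]], pooled[perm[n_trials:]])).max())
threshold = np.quantile(max_t, 0.95)
significant = np.abs(observed) > threshold
```

Cells in `significant` mark electrode-time points where the contrast survives the whole-map correction.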
Collapse
Affiliation(s)
- Tong Lin
- Brandeis University, United States
| | | | - Eric C Fields
- Brandeis University, United States; Boston College, United States; Westminster College, United States
| | | | | |
Collapse
|
23
|
Liu M, Duan Y, Ince RAA, Chen C, Garrod OGB, Schyns PG, Jack RE. Facial expressions elicit multiplexed perceptions of emotion categories and dimensions. Curr Biol 2022; 32:200-209.e6. [PMID: 34767768 PMCID: PMC8751635 DOI: 10.1016/j.cub.2021.10.035] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 09/07/2021] [Accepted: 10/14/2021] [Indexed: 11/22/2022]
Abstract
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.e., specific categories and broader dimensions-via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent-i.e., multiplex-categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results-based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms-show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
Collapse
Affiliation(s)
- Meng Liu
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Yaocong Duan
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Robin A A Ince
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Chaona Chen
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Oliver G B Garrod
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Philippe G Schyns
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
| | - Rachael E Jack
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK.
| |
Collapse
|
24
|
The spatial distance compression effect is due to social interaction and not mere configuration. Psychon Bull Rev 2021; 29:828-836. [PMID: 34918281 DOI: 10.3758/s13423-021-02045-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/29/2021] [Indexed: 11/08/2022]
Abstract
In recent years, there has been a surge of interest in perception, evaluation, and memory for social interactions from a third-person perspective. One intriguing finding is a spatial distance compression effect when target dyads are facing each other. Specifically, face-to-face dyads are remembered as being spatially closer than back-to-back dyads. There is a vibrant debate about the mechanism behind this effect, and two hypotheses have been proposed. According to the social interaction hypothesis, face-to-face dyads engage a binding process that represents them as a social unit, which compresses the perceived distance between them. In contrast, the configuration hypothesis holds that the effect is produced by the front-to-front configuration of the two visual targets. In the present research we sought to test these accounts. In Experiment 1 we successfully replicated the distance compression effect with two upright faces that were facing each other, but not with inverted faces. In contrast, we found no distance compression effect with three types of nonsocial stimuli: arrows (Experiment 2a), fans (Experiment 2b), and cars (Experiment 3). In Experiment 4, we replicated this effect with another type of social stimulus: upright bodies. Taken together, these results provide strong support for the social interaction hypothesis.
Collapse
|
25
|
Holleman GA, Hooge ITC, Huijding J, Deković M, Kemner C, Hessels RS. Gaze and speech behavior in parent–child interactions: The role of conflict and cooperation. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-02532-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
A primary mode of human social behavior is face-to-face interaction. In this study, we investigated the characteristics of gaze and its relation to speech behavior during video-mediated face-to-face interactions between parents and their preadolescent children. 81 parent–child dyads engaged in conversations about cooperative and conflictive family topics. We used a dual-eye tracking setup that is capable of concurrently recording eye movements, frontal video, and audio from two conversational partners. Our results show that children spoke more in the cooperation-scenario whereas parents spoke more in the conflict-scenario. Parents gazed slightly more at the eyes of their children in the conflict-scenario compared to the cooperation-scenario. Both parents and children looked more at the other's mouth region while listening compared to while speaking. Results are discussed in terms of the role that parents and children take during cooperative and conflictive interactions and how gaze behavior may support and coordinate such interactions.
Collapse
|
26
|
Abstract
Understanding facial signals in humans and other species is crucial for understanding the evolution, complexity, and function of the face as a communication tool. The Facial Action Coding System (FACS) enables researchers to measure facial movements accurately, but we currently lack tools to reliably analyse data and efficiently communicate results. Network analysis can provide a way to use the information encoded in FACS datasets: by treating individual AUs (the smallest units of facial movements) as nodes in a network and their co-occurrence as connections, we can analyse and visualise differences in the use of combinations of AUs in different conditions. Here, we present ‘NetFACS’, a statistical package that uses occurrence probabilities and resampling methods to answer questions about the use of AUs, AU combinations, and the facial communication system as a whole in humans and non-human animals. Using highly stereotyped facial signals as an example, we illustrate some of the current functionalities of NetFACS. We show that very few AUs are specific to certain stereotypical contexts; that AUs are not used independently from each other; that graph-level properties of stereotypical signals differ; and that clusters of AUs allow us to reconstruct facial signals, even when blind to the underlying conditions. The flexibility and widespread use of network analysis allows us to move away from studying facial signals as stereotyped expressions, and towards a dynamic and differentiated approach to facial communication.
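The node/edge construction that this network approach builds on can be sketched with standard-library Python; the observations and AU numbers below are made-up illustrations, not data or code from the NetFACS package.

```python
from itertools import combinations
from collections import Counter

# Hypothetical FACS codings: each observation is the set of action units
# (AUs) active in one facial event. AU numbers follow FACS conventions
# (6 = cheek raiser, 12 = lip corner puller) but the data are illustrative.
observations = [
    {6, 12},       # smile-like event
    {6, 12, 25},
    {4, 7, 9},     # disgust-like event
    {4, 9},
    {6, 12},
]

# Nodes: individual AUs with their occurrence probabilities.
n = len(observations)
node_count = Counter(au for obs in observations for au in obs)
node_prob = {au: c / n for au, c in node_count.items()}

# Edges: how often each AU pair co-occurs within an observation.
edge_count = Counter()
for obs in observations:
    edge_count.update(combinations(sorted(obs), 2))
```

Resampling these counts against a null (e.g., shuffled AU labels) is what lets a package like NetFACS ask whether an AU combination appears more often in one context than chance predicts.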
Collapse
|
27
|
Daube C, Xu T, Zhan J, Webb A, Ince RA, Garrod OG, Schyns PG. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity. PATTERNS (NEW YORK, N.Y.) 2021; 2:100348. [PMID: 34693374 PMCID: PMC8515012 DOI: 10.1016/j.patter.2021.100348] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 11/30/2020] [Accepted: 08/20/2021] [Indexed: 01/24/2023]
Abstract
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
Collapse
Affiliation(s)
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| | - Tian Xu
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, England, UK
| | - Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| | - Andrew Webb
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| | - Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| | - Oliver G.B. Garrod
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| | - Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
| |
Collapse
|
28
|
Chen PHA, Qu Y. Taking a Computational Cultural Neuroscience Approach to Study Parent-Child Similarities in Diverse Cultural Contexts. Front Hum Neurosci 2021; 15:703999. [PMID: 34512293 PMCID: PMC8426574 DOI: 10.3389/fnhum.2021.703999] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Accepted: 07/19/2021] [Indexed: 12/03/2022] Open
Abstract
Parent-child similarities and discrepancies at multiple levels provide a window to understand the cultural transmission process. Although prior research has examined parent-child similarities at the belief, behavioral, and physiological levels across cultures, little is known about parent-child similarities at the neural level. The current review introduces an interdisciplinary computational cultural neuroscience approach, which uses computational methods to understand the neural and psychological processes involved in parent-child interactions at the intra- and inter-personal levels. This review provides three examples, including the application of intersubject representational similarity analysis to analyze naturalistic neuroimaging data, the use of computer vision to capture non-verbal social signals during parent-child interactions, and unraveling the psychological complexities involved during real-time parent-child interactions based on their simultaneously recorded brain response patterns. We hope that this computational cultural neuroscience approach can provide researchers with an alternative way to examine parent-child similarities and discrepancies across different cultural contexts and gain a better understanding of cultural transmission processes.
Affiliation(s)
- Pin-Hao A. Chen
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
- Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan
- Yang Qu
- School of Education and Social Policy, Northwestern University, Evanston, IL, United States
29
Le Mau T, Hoemann K, Lyons SH, Fugate JMB, Brown EN, Gendron M, Barrett LF. Professional actors demonstrate variability, not stereotypical expressions, when portraying emotional states in photographs. Nat Commun 2021; 12:5037. [PMID: 34413313 PMCID: PMC8376986 DOI: 10.1038/s41467-021-25352-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 08/02/2021] [Indexed: 02/07/2023] Open
Abstract
It has long been hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and the actors' facial poses were coded for their specific movements. Both unsupervised and supervised machine learning find that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated both when presented alone and when presented with their associated scenarios, revealing that participants' emotion inferences also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and by studying a broader range of cultural contexts.
Affiliation(s)
- Tuan Le Mau
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for High Performance Computing, Social and Cognitive Computing, Connexis North, Singapore
- Katie Hoemann
- Department of Psychology, Katholieke Universiteit Leuven, Leuven, Belgium
- Sam H Lyons
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer M B Fugate
- Department of Psychology, University of Massachusetts at Dartmouth, Dartmouth, MA, 02747, USA
- Emery N Brown
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Maria Gendron
- Department of Psychology, Yale University, New Haven, CT, USA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, USA
- Massachusetts General Hospital/Martinos Center for Biomedical Imaging, Charlestown, MA, USA
30
Huang W, Yan H, Cheng K, Wang Y, Wang C, Li J, Li C, Li C, Zuo Z, Chen H. A dual-channel language decoding from brain activity with progressive transfer training. Hum Brain Mapp 2021; 42:5089-5100. [PMID: 34314088 PMCID: PMC8449118 DOI: 10.1002/hbm.25603] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 06/24/2021] [Accepted: 07/13/2021] [Indexed: 01/03/2023] Open
Abstract
When we view a scene, the visual cortex extracts and processes its visual information through various kinds of neural activity. Previous studies have decoded this neural activity into single or multiple semantic category tags, which can caption the scene to some extent. However, such tags are isolated words with no grammatical structure and convey only part of what a scene contains. It is well known that textual language (sentences or phrases) is superior to single words in disclosing the meaning of images and in reflecting people's real understanding of them. Here, based on artificial intelligence technologies, we built a dual-channel language decoding model (DC-LDM) to decode the neural activities evoked by images into language (phrases or short sentences). The DC-LDM consists of five modules: Image-Extractor, Image-Encoder, Nerve-Extractor, Nerve-Encoder, and Language-Decoder. In addition, we employed a progressive transfer strategy to train the DC-LDM and improve its language decoding performance. The results showed that texts decoded by the DC-LDM could describe natural image stimuli accurately and vividly. We adopted six indexes to quantitatively evaluate the difference between the decoded texts and the annotated texts of the corresponding visual images, and found that Word2vec-Cosine similarity (WCS) was the best indicator of the similarity between decoded and annotated texts. In addition, across visual cortices, text decoded from the higher visual cortex was more consistent with the description of the natural image than text decoded from the lower visual cortex. Our decoding model may inform language-based brain-computer interface explorations.
Affiliation(s)
- Wei Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Hongmei Yan
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Kaiwen Cheng
- School of Language Intelligence, Sichuan International Studies University, Chongqing, China
- Yuting Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chong Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Jiyi Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Chen Li
- Department of Medical Information Engineering, Sichuan University, Chengdu, China
- Chaorong Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Beijing MR Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Huafu Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
31
Alais D, Xu Y, Wardle SG, Taubert J. A shared mechanism for facial expression in human faces and face pareidolia. Proc Biol Sci 2021; 288:20210966. [PMID: 34229489 PMCID: PMC8261219 DOI: 10.1098/rspb.2021.0966] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
Facial expressions are vital for social communication, yet the underlying mechanisms are still being discovered. Illusory faces perceived in objects (face pareidolia) are errors of face detection that share some neural mechanisms with human face processing. However, it is unknown whether expression in illusory faces engages the same mechanisms as human faces. Here, using a serial dependence paradigm, we investigated whether illusory and human faces share a common expression mechanism. First, we found that images of face pareidolia are reliably rated for expression, within and between observers, despite varying greatly in visual features. Second, they exhibit positive serial dependence for perceived facial expression, meaning an illusory face (happy or angry) is perceived as more similar in expression to the preceding one, just as seen for human faces. This suggests illusory and human faces engage similar mechanisms of temporal continuity. Third, we found robust cross-domain serial dependence of perceived expression between illusory and human faces when they were interleaved, with serial effects larger when illusory faces preceded human faces than the reverse. Together, the results support a shared mechanism for facial expression between human faces and illusory faces and suggest that expression processing is not tightly bound to human facial features.
Affiliation(s)
- David Alais
- School of Psychology, The University of Sydney, Sydney, New South Wales, Australia
- Yiben Xu
- School of Psychology, The University of Sydney, Sydney, New South Wales, Australia
- Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Jessica Taubert
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
32
Marcolin F, Vezzetti E, Monaci M. Face perception foundations for pattern recognition algorithms. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.02.074] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
33
Yi J, Pärnamets P, Olsson A. The face value of feedback: facial behaviour is shaped by goals and punishments during interaction with dynamic faces. ROYAL SOCIETY OPEN SCIENCE 2021; 8:202159. [PMID: 34295516 PMCID: PMC8278067 DOI: 10.1098/rsos.202159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 06/21/2021] [Indexed: 06/13/2023]
Abstract
Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated the method, showing that participants learned to optimize their facial behaviour, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials with smiling, compared with frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation of our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
Affiliation(s)
- Jonathan Yi
- Department of Clinical Neuroscience, Division of Psychology, Karolinska Institutet, Solna, Sweden
- Philip Pärnamets
- Department of Clinical Neuroscience, Division of Psychology, Karolinska Institutet, Solna, Sweden
- Department of Psychology, New York University, New York, NY, USA
- Andreas Olsson
- Department of Clinical Neuroscience, Division of Psychology, Karolinska Institutet, Solna, Sweden
34
Scheel AM, Tiokhin L, Isager PM, Lakens D. Why Hypothesis Testers Should Spend Less Time Testing Hypotheses. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2021; 16:744-755. [PMID: 33326363 PMCID: PMC8273364 DOI: 10.1177/1745691620966795] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable. In response to the replication crisis, reforms in psychology have focused on formalizing procedures for testing hypotheses. These reforms were necessary and influential. However, as an unexpected consequence, psychological scientists have begun to realize that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound "derivation chain" between test and theory is counterproductive. Instead, various nonconfirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative. Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and the functional form of those relationships, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognized and incentivized as a crucial goal in itself. In this article, we discuss how shifting the focus to nonconfirmatory research can tie together many loose ends of psychology's reform movement and help us to develop strong, testable theories, as Paul Meehl urged.
Affiliation(s)
- Anne M. Scheel
- Human-Technology Interaction Group, Eindhoven University of Technology
- Leonid Tiokhin
- Human-Technology Interaction Group, Eindhoven University of Technology
- Peder M. Isager
- Human-Technology Interaction Group, Eindhoven University of Technology
- Daniël Lakens
- Human-Technology Interaction Group, Eindhoven University of Technology
35
Taubert N, Stettler M, Siebert R, Spadacenta S, Sting L, Dicke P, Thier P, Giese MA. Shape-invariant encoding of dynamic primate facial expressions in human perception. eLife 2021; 10:61197. [PMID: 34115584 PMCID: PMC8195610 DOI: 10.7554/elife.61197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 04/22/2021] [Indexed: 11/30/2022] Open
Abstract
Dynamic facial expressions are crucial for communication in primates. Because it is difficult to control the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, and that face dynamics were represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, while challenging appearance-based neural network theories of dynamic expression recognition.
Affiliation(s)
- Nick Taubert
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Michael Stettler
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- International Max Planck Research School for Intelligent Systems (IMPRS-IS), Tübingen, Germany
- Ramona Siebert
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Silvia Spadacenta
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Louisa Sting
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Peter Dicke
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Peter Thier
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Martin A Giese
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
36
Brejcha J, Tureček P, Kleisner K. Perception-driven dynamics of mimicry based on attractor field model. Interface Focus 2021; 11:20200052. [PMID: 34055303 PMCID: PMC8086919 DOI: 10.1098/rsfs.2020.0052] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2021] [Indexed: 01/02/2023] Open
Abstract
We provide a formal account of an interface that bridges two different levels of dynamic processes manifested by mimicry: prey-prey interactions and predators' perception. Mimicry is a coevolutionary process between an animate selective agent and at least two similar organisms selected by the agent's perception-driven actions. The attractor field model explains perceived similarity of forms by noting that, in both human and animal cognition, morphologically intermediate forms are more likely to be perceived as belonging to rare rather than abundant forms. We formalize this model in terms of deformations of the predators' perception space using numerical simulations, and argue that the probability of confusion between similar species creates pressure on the perception space, which in turn inflates regions of perception space with a high density of species representations. Such inflation increases a predator's discrimination between species, which implies that adaptive mimicry could initially emerge more easily among atypical species, because they do not need the same level of similarity to the model. We provide a theoretical instrument for conceptualizing the interdependence between objectively measurable matrices and perceived matrices of the same external reality. We believe that our framework leads to a more precise understanding of the evolution of mimicry.
Affiliation(s)
- Jindřich Brejcha
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Viničná 7, Praha 2 128 00, Czech Republic
- Petr Tureček
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Viničná 7, Praha 2 128 00, Czech Republic
- Center for Theoretical Study, Charles University and Czech Academy of Sciences, Jilská 1, Prague 1 110 00, Czech Republic
- Karel Kleisner
- Department of Philosophy and History of Science, Faculty of Science, Charles University, Viničná 7, Praha 2 128 00, Czech Republic
37
Kalsum T, Mehmood Z, Kulsoom F, Chaudhry HN, Khan AR, Rashid M, Saba T. Localization and classification of human facial emotions using local intensity order pattern and shape-based texture features. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-201799] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
A facial emotion recognition system (FERS) recognizes a person's emotions through various image processing stages, with feature extraction as one of the major processing steps. In this study, we presented a hybrid approach to recognizing facial expressions that performs feature-level fusion of a local and a global feature descriptor, classified by a support vector machine (SVM). The histogram of oriented gradients (HoG) is selected for the extraction of global facial features, and the local intensity order pattern (LIOP) for the extraction of local features. As HoG is a shape-based descriptor, it can use edge information to capture the deformations of facial muscles caused by changing emotions. LIOP, by contrast, works on pixel intensity order and is invariant to changes in image viewpoint, illumination conditions, JPEG compression, and image blurring. Both descriptors thus proved useful for recognizing emotions in images captured in both constrained and realistic scenarios. The performance of the proposed model is evaluated on the lab-constrained datasets CK+, TFEID, and JAFFE, as well as on the realistic datasets SFEW, RaF, and FER-2013. Optimal recognition accuracies of 99.8%, 98.2%, 93.5%, 78.1%, 63.0%, and 56.0% were achieved for the CK+, JAFFE, TFEID, RaF, FER-2013, and SFEW datasets, respectively.
Affiliation(s)
- Tehmina Kalsum
- Department of Software Engineering, University of Engineering and Technology Taxila, Taxila, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology Taxila, Taxila, Pakistan
- Farzana Kulsoom
- Department of Electrical, Computer, and Biomedical Engineering, University of Pavia, Pavia, Italy
- Hassan Nazeer Chaudhry
- Department of Electrical, Information, and Bio Engineering, Politecnico di Milano, Milan, Italy
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Muhammad Rashid
- Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
38
Thorstenson CA, Pazda AD. Facial coloration influences social approach-avoidance through social perception. Cogn Emot 2021; 35:970-985. [PMID: 33855931 DOI: 10.1080/02699931.2021.1914554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Perceptions of others' social characteristics are essential for guiding social behaviour and decision making. Recent research has demonstrated that increased facial redness facilitates both positive (e.g. health, attractiveness, happiness) and negative (e.g. dominance, anger) social evaluations. Given that similar facial colouration can lead to diverging evaluations, it is unclear how people integrate these cues to inform social decisions (e.g. approach-avoidance). We suggest that the influence of facial redness on social perceptions and decisions depends on contextual information, including facial-muscular emotion expressions. We test this hypothesis across two studies in which participants view faces either increasing or decreasing in redness, evaluate them on a range of social characteristics (i.e. aggressiveness, attractiveness, health, friendliness, dominance), and decide whether to approach or avoid them. Increased facial redness facilitated, and decreased redness impeded (to a greater extent), perceptions of each social characteristic. However, the extent of this influence was moderated by the muscular expression (i.e. neutral, happy, angry). Further, we found that the influence of facial redness on approach-avoidance was largely mediated by evaluations of attractiveness and health. Altogether, the current work provides nuanced insights into facial colouration's role as a social signal that informs social perception and decision making.
Affiliation(s)
- Christopher A Thorstenson
- Department of Psychology and Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, WI, USA
- Adam D Pazda
- Department of Psychology, University of South Carolina-Aiken, Aiken, SC, USA
39
Zhan J, Liu M, Garrod OGB, Daube C, Ince RAA, Jack RE, Schyns PG. Modeling individual preferences reveals that face beauty is not universally perceived across cultures. Curr Biol 2021; 31:2243-2252.e6. [PMID: 33798430 PMCID: PMC8162177 DOI: 10.1016/j.cub.2021.03.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 01/15/2021] [Accepted: 03/03/2021] [Indexed: 12/15/2022]
Abstract
Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5, 6, 7 including representing the diversity of beauty preferences within and across cultures.8, 9, 10, 11, 12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents. 
- We modeled individual preferences for attractive faces in two cultures
- Attractive face features differ from the face average and sexual dimorphism
- Instead, culture and individual preferences shape attractive face features
- Attractive face features from a culture are used to judge other-ethnicity faces
Affiliation(s)
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Meng Liu
- School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Oliver G B Garrod
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Rachael E Jack
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK; School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK; School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
40
Goupil L, Ponsot E, Richardson D, Reyes G, Aucouturier JJ. Listeners' perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature. Nat Commun 2021; 12:861. [PMID: 33558510 PMCID: PMC7870677 DOI: 10.1038/s41467-020-20649-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2019] [Accepted: 11/20/2020] [Indexed: 02/07/2023] Open
Abstract
The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners' perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature. Using a data-driven method, we separately decode the prosodic features driving listeners' perceptions of a speaker's certainty and honesty across pitch, duration and loudness. We find that these two kinds of judgments rely on a common prosodic signature that is perceived independently from individuals' conceptual knowledge and native language. Finally, we show that listeners extract this prosodic signature automatically, and that this impacts the way they memorize spoken words. These findings shed light on a unique auditory adaptation that enables human listeners to quickly detect and react to unreliability during linguistic interactions.
Affiliation(s)
- Louise Goupil
- STMS UMR 9912 (CNRS/IRCAM/SU), Paris, France.
- University of East London, London, UK.
- Emmanuel Ponsot
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, CNRS, Paris, France
- Hearing Technology - WAVES, Department of Information Technology, Ghent University, Ghent, Belgium
- Jean-Julien Aucouturier
- STMS UMR 9912 (CNRS/IRCAM/SU), Paris, France
- FEMTO-ST, UMR 6174, CNRS/UBFC/ENSMM/UTBM, Besançon, France
41
Improving reverse correlation analysis of faces: Diagnostics of order effects, runs, rater agreement, and image pairs. Behav Res Methods 2021; 53:1609-1647. [PMID: 33409986] [DOI: 10.3758/s13428-020-01499-w]
Abstract
Examinations of the reliability and validity of classification images (CIs) of faces obtained with the reverse correlation approach remain rare. In the present paper, we focus on order effects of trials, compliance, and reliability effects, as well as the degree of contextual contrast of image pairs. We present different diagnostic methods to examine these three aspects using data from 12 reverse correlation studies conducted both in-lab and online with diverse samples (i.e., from Burkina Faso, China, the Netherlands, the U.S., and an international sample) using five different base faces (i.e., female black, female Asian, female and gender-neutral white, and black/white/female/male morphed composite). For each of the 12 studies, we compare the individual CIs of subgroups of likely non-compliant respondents and trials with non-contrastful image pairs to the individual CIs of likely compliant respondents and contrastful image pairs. In an appendix, we also examine the effects of filtering out data from individual participants and trials on the signal-to-noise ratio of group CIs. R scripts are publicly available for easy implementation of our suggestions in related research.
42
To which world regions does the valence-dominance model of social perception apply? Nat Hum Behav 2021; 5:159-169. [PMID: 33398150] [DOI: 10.1038/s41562-020-01007-2]
Abstract
Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1 .
43
Vikkelsø S, Hoang TH, Carrara F, Hansen KD, Dinesen B. The telepresence avatar robot OriHime as a communication tool for adults with acquired brain injury: an ethnographic case study. Intell Serv Robot 2020. [DOI: 10.1007/s11370-020-00335-6]
44
Vision: Face-Centered Representations in the Brain. Curr Biol 2020; 30:R1277-R1278. [PMID: 33080203] [DOI: 10.1016/j.cub.2020.07.086]
Abstract
A longstanding debate in the face recognition field concerns the format of face representations in the brain. New face research clarifies some of this mystery by revealing a face-centered format in a patient with a left splenium lesion of the corpus callosum who perceives the right side of faces as 'melted'.
45
Abstract
Gaze-where one looks, how long, and when-plays an essential part in human social behavior. While many aspects of social gaze have been reviewed, there is no comprehensive review or theoretical framework that describes how gaze to faces supports face-to-face interaction. In this review, I address the following questions: (1) When does gaze need to be allocated to a particular region of a face in order to provide the relevant information for successful interaction; (2) How do humans look at other people, and faces in particular, regardless of whether gaze needs to be directed at a particular region to acquire the relevant visual information; (3) How does gaze support the regulation of interaction? The work reviewed spans psychophysical research, observational research, and eye-tracking research in both lab-based and interactive contexts. Based on the literature overview, I sketch a framework for future research based on dynamic systems theory. The framework holds that gaze should be investigated in relation to sub-states of the interaction, encompassing sub-states of the interactors, the content of the interaction as well as the interactive context. The relevant sub-states for understanding gaze in interaction vary over different timescales from microgenesis to ontogenesis and phylogenesis. The framework has important implications for vision science, psychopathology, developmental science, and social robotics.
Affiliation(s)
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands.
- Developmental Psychology, Heidelberglaan 1, 3584CS, Utrecht, The Netherlands.
46
Nestor A, Lee ACH, Plaut DC, Behrmann M. The Face of Image Reconstruction: Progress, Pitfalls, Prospects. Trends Cogn Sci 2020; 24:747-759. [PMID: 32674958] [PMCID: PMC7429291] [DOI: 10.1016/j.tics.2020.06.006]
Abstract
Recent research has demonstrated that neural and behavioral data acquired in response to viewing face images can be used to reconstruct the images themselves. However, the theoretical implications, promises, and challenges of this direction of research remain unclear. We evaluate the potential of this research for elucidating the visual representations underlying face recognition. Specifically, we outline complementary and converging accounts of the visual content, the representational structure, and the neural dynamics of face processing. We illustrate how this research addresses fundamental questions in the study of normal and impaired face recognition, and how image reconstruction provides a powerful framework for uncovering face representations, for unifying multiple types of empirical data, and for facilitating both theoretical and methodological progress.
Affiliation(s)
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada.
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- David C Plaut
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
47
Burt AL, Crewther DP. The 4D Space-Time Dimensions of Facial Perception. Front Psychol 2020; 11:1842. [PMID: 32849084] [PMCID: PMC7399249] [DOI: 10.3389/fpsyg.2020.01842]
Abstract
Facial information is a powerful channel for human-to-human communication. Faces can be characterized as biological objects that form four-dimensional (4D) patterns: they have both a spatial structure and surface and temporal dynamics. The spatial characteristics of facial objects comprise a volume and surface in three dimensions (3D), namely breadth, height and, importantly, depth. The temporal properties of facial objects are defined by how a 3D facial structure and surface evolve dynamically over time, with time as the fourth dimension. Our entire perception of another's face, whether social, affective or cognitive, is therefore built on a combination of 3D and 4D visual cues. Counterintuitively, over the past few decades of experimental research in psychology, facial stimuli have largely been captured, reproduced and presented to participants in two dimensions (2D), and have remained largely static. This review aims to update facial researchers on the recent revolution in computer-generated, realistic 4D facial models produced from real-life human subjects. We summarize in depth recent studies that have utilized facial stimuli possessing 3D structural and surface cues (geometry, surface and depth) and 4D temporal cues (3D structure plus dynamic viewpoint and movement). In sum, we find that higher-order perceptions such as identity, gender, ethnicity, emotion and personality are critically influenced by 4D characteristics. We recommend that future facial stimuli incorporate the 4D space-time perspective using the proposed time-resolved methods.
Affiliation(s)
- Adelaide L Burt
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- David P Crewther
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
48
Arias P, Rachman L, Liuni M, Aucouturier JJ. Beyond Correlation: Acoustic Transformation Methods for the Experimental Study of Emotional Voice and Speech. Emotion Review 2020. [DOI: 10.1177/1754073920934544]
Abstract
While acoustic analysis methods have become a commodity in voice emotion research, experiments that attempt not only to describe but to computationally manipulate expressive cues in emotional voice and speech have remained relatively rare. We give here a nontechnical overview of voice-transformation techniques from the audio signal-processing community that we believe are ripe for adoption in this context. We provide sound examples of what they can achieve, examples of experimental questions for which they can be used, and links to open-source implementations. We point at a number of methodological properties of these algorithms, such as being specific, parametric, exhaustive, and real-time, and describe the new possibilities that these open for the experimental study of the emotional voice.
Affiliation(s)
- Pablo Arias
- STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
- Laura Rachman
- STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
- Marco Liuni
- STMS UMR9912, IRCAM/CNRS/Sorbonne Université, France
49
Han S, Liu S, Li Y, Li W, Wang X, Gan Y, Xu Q, Zhang L. Why do you attract me but not others? Retrieval of person knowledge and its generalization bring diverse judgments of facial attractiveness. Soc Neurosci 2020; 15:505-515. [PMID: 32602802] [DOI: 10.1080/17470919.2020.1787223]
Abstract
Judgments of facial attractiveness play an important role in social interactions, yet it remains unclear why these judgments are malleable. The present study aimed to determine whether the retrieval of person knowledge leads to different judgments of the attractiveness of the same face. Event-related potentials and learning-recognition tasks were used to investigate the effects of person knowledge on facial attractiveness. The results showed that, compared with familiar faces matched with negative person knowledge, those matched with positive person knowledge were evaluated as more attractive and evoked a larger early posterior negativity (EPN) and late positive complex (LPC). Additionally, faces similar to positive familiar faces showed the same behavioral results and evoked a larger LPC, whereas unfamiliar faces showed no significant effects. These results indicate that the effect of person knowledge on facial attractiveness extends from early to late stages of facial attractiveness processing, and that this effect can be generalized on the basis of similarity of face structure, which occurred at the late stage. This mechanism may explain why individuals form different judgments of facial attractiveness.
Affiliation(s)
- Shangfeng Han
- Department and Institute of Psychology, Ningbo University, Ningbo, China; Shenzhen Key Laboratory of Affective and Social Neuroscience, Shenzhen University, Shenzhen, China; Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China; Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
- Shen Liu
- School of Humanities and Social Sciences, University of Science and Technology of China, Hefei, China
- Yue Li
- Department and Institute of Psychology, Ningbo University, Ningbo, China; KunMing Health Vocational College, KunMing, China
- Wanyue Li
- Department and Institute of Psychology, Ningbo University, Ningbo, China
- Xiujuan Wang
- Department and Institute of Psychology, Ningbo University, Ningbo, China
- Yetong Gan
- Department and Institute of Psychology, Ningbo University, Ningbo, China
- Qiang Xu
- Department and Institute of Psychology, Ningbo University, Ningbo, China
- Lin Zhang
- Department and Institute of Psychology, Ningbo University, Ningbo, China
50
Skiba RM, Vuilleumier P. Brain Networks Processing Temporal Information in Dynamic Facial Expressions. Cereb Cortex 2020; 30:6021-6038. [DOI: 10.1093/cercor/bhaa176]
Abstract
This fMRI study examines the role of local and global motion information in facial movements during exposure to novel dynamic face stimuli. We found that synchronous expressions distinctively engaged medial prefrontal areas in the rostral and caudal sectors of the anterior cingulate cortex (r/cACC), extending to inferior supplementary motor areas, as well as motor cortex and bilateral superior frontal gyrus (global temporal-spatial processing). Asynchronous expressions, in which one part of the face unfolded before the other, activated the right superior temporal sulcus (STS) and inferior frontal gyrus more strongly (local temporal-spatial processing). These differences in temporal dynamics had no effect on visual face-responsive areas. Dynamic causal modeling analysis further showed that processing of asynchronous expression features was associated with a differential information flow, centered on the STS, which received direct input from occipital cortex and projected to the amygdala. Moreover, the STS and amygdala displayed selective interactions with the cACC, where the integration of both local and global motion cues could take place. These results provide new evidence for a role of local and global temporal dynamics in emotional expressions, extracted in partly separate brain pathways. Importantly, we show that dynamic expressions with synchronous movement cues may distinctively engage brain areas responsible for the motor execution of expressions.
Collapse
Affiliation(s)
- Rafal M Skiba
- Laboratory for Behavioural Neurology and Imaging of Cognition, Department of Basic Neuroscience, University of Geneva, 1211 Geneva, Switzerland
- Swiss Center for Affective Science, University of Geneva, Campus Biotech, 1202 Geneva, Switzerland
- Patrik Vuilleumier
- Laboratory for Behavioural Neurology and Imaging of Cognition, Department of Basic Neuroscience, University of Geneva, 1211 Geneva, Switzerland
- Swiss Center for Affective Science, University of Geneva, Campus Biotech, 1202 Geneva, Switzerland