1
Straulino E, Scarpazza C, Spoto A, Betti S, Chozas Barrientos B, Sartori L. The Spatiotemporal Dynamics of Facial Movements Reveals the Left Side of a Posed Smile. Biology 2023; 12:1160. [PMID: 37759560; PMCID: PMC10525663; DOI: 10.3390/biology12091160]
Abstract
Humans can recombine thousands of different facial expressions. This variability is due to the ability to voluntarily or involuntarily modulate emotional expressions, which, in turn, depends on the existence of two anatomically separate pathways. The Voluntary (VP) and Involuntary (IP) pathways mediate the production of posed and spontaneous facial expressions, respectively, and might also affect the left and right sides of the face differently. This is a neglected aspect in the literature on emotion, where posed expressions rather than genuine expressions are often used as stimuli. Two experiments with different induction methods were specifically designed to investigate the unfolding of spontaneous and posed facial expressions of happiness along the facial vertical axis (left, right) with a high-definition 3-D optoelectronic system. The results showed that spontaneous expressions could be distinguished from posed facial movements by reliable key kinematic patterns of space and speed in both experiments. Moreover, VP activation produced a lateralization effect: compared with the felt smile, the posed smile involved an initial acceleration of the left corner of the mouth, while an early deceleration of the right corner occurred in the second phase of the movement, after the velocity peak.
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Translational Neuroimaging and Cognitive Lab, IRCCS San Camillo Hospital, Via Alberoni 70, 30126 Venice, Italy
- Andrea Spoto
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Sonia Betti
- Department of Psychology, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Viale Rasi e Spinelli 176, 47521 Cesena, Italy
- Beatriz Chozas Barrientos
- Department of Chiropractic Medicine, University of Zurich, Balgrist University Hospital, Forchstrasse 340, 8008 Zürich, Switzerland
- Luisa Sartori
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Padova Neuroscience Center, University of Padova, Via Giuseppe Orus 2, 35131 Padova, Italy
2
Mielke M, Reisch LM, Mehlmann A, Schindler S, Bien CG, Kissler J. Right medial temporal lobe structures particularly impact early stages of affective picture processing. Hum Brain Mapp 2022; 43:787-798. [PMID: 34687490; PMCID: PMC8720182; DOI: 10.1002/hbm.25687]
Abstract
Human vision prioritizes emotional stimuli. This is reflected in stronger electrocortical activation in response to emotional than neutral stimuli, measurable on the surface of the head. Feedback projections from brain structures deep within the medial temporal lobes (mTLs), in particular the amygdala, are thought to give rise to this phenomenon, although causal evidence is rare. Given the many pathways involved in visual processing, the influence of mTL structures could be restricted to specific time windows. Therefore, we delineate the temporal dynamics of the impact of right mTL structures on affective picture processing, investigating event-related potentials (ERPs) in 19 patients (10 female) with right mTL resections and 19 individually matched healthy participants, while they viewed negative and neutral scenes. Groups differed significantly at early- and mid-latency processing stages. Patients with right mTL resection, unlike controls, showed no (P1: 90-140 ms) or marginal (N1: 170-220 ms) emotion modulation. At mid-latency (early posterior negativity: 220-370 ms), emotion modulation over the ipsi-resectional right hemisphere was smaller in patients than in controls, but groups did not differ over the left hemisphere. During late parietal positivities (400-650 ms and 650-900 ms), both groups had similar emotion modulation. Our results demonstrate that right mTL structures attenuate particularly early processing of affectively negative scenes. This is theoretically consistent with an initial amygdala-dependent feedforward sweep in visual emotion processing whose absence is successively compensated. Findings specify the impact of right mTL structures on emotional picture processing and highlight the value of time-resolved measures in affective neuroscience.
Affiliation(s)
- Malena Mielke
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Lea Marie Reisch
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Medical School, Department of Epileptology (Krankenhaus Mara), Bielefeld University, Bielefeld, Germany
- Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Münster, Münster, Germany
- Christian G. Bien
- Medical School, Department of Epileptology (Krankenhaus Mara), Bielefeld University, Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, Bielefeld University, Bielefeld, Germany
3
Ross ED. Differential Hemispheric Lateralization of Emotions and Related Display Behaviors: Emotion-Type Hypothesis. Brain Sci 2021; 11:1034. [PMID: 34439653; PMCID: PMC8393469; DOI: 10.3390/brainsci11081034]
Abstract
There are two well-known hypotheses regarding hemispheric lateralization of emotions. The Right Hemisphere Hypothesis (RHH) postulates that emotions and associated display behaviors are a dominant and lateralized function of the right hemisphere. The Valence Hypothesis (VH) posits that negative emotions and related display behaviors are modulated by the right hemisphere and positive emotions and related display behaviors are modulated by the left hemisphere. Although both the RHH and VH are supported by extensive research data, they are mutually exclusive, suggesting that a missing factor may be in play that could provide a more accurate description of how emotions are lateralized in the brain. Evidence will be presented that provides a much broader perspective of emotions by embracing the concept that emotions can be classified into primary and social types and that hemispheric lateralization is better explained by the Emotion-type Hypothesis (ETH). The ETH posits that primary emotions and related display behaviors are modulated by the right hemisphere and social emotions and related display behaviors are modulated by the left hemisphere.
Affiliation(s)
- Elliott D. Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Department of Neurology, University of Colorado School of Medicine, Aurora, CO 80045, USA
4
Changes in Computer-Analyzed Facial Expressions with Age. Sensors 2021; 21:4858. [PMID: 34300600; PMCID: PMC8309819; DOI: 10.3390/s21144858]
Abstract
Facial expressions are well known to change with age, but the quantitative properties of facial aging remain unclear. In the present study, we investigated the differences in the intensity of facial expressions between older (n = 56) and younger adults (n = 113). In laboratory experiments, the posed facial expressions of the participants were obtained based on six basic emotions and neutral facial expression stimuli, and the intensity of their facial expressions was analyzed using a computer vision tool, the OpenFace software. Our results showed that the older adults produced stronger expressions for some negative emotions and for neutral faces. Furthermore, when making facial expressions, older adults used more facial muscles than younger adults across the emotions. These results may help in understanding the characteristics of facial expressions in aging and can provide empirical evidence for other fields regarding facial recognition.
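OpenFace reports per-frame action-unit intensity columns (named like `AU12_r`, on a 0-5 scale). The group comparison described in this abstract can be sketched as follows; this is a minimal illustration with hypothetical data, not the authors' analysis pipeline:

```python
# Hypothetical per-frame records, as would be parsed from an OpenFace
# output CSV (AU intensity columns are named like "AU12_r", scale 0-5).

def mean_au_intensity(frames, au="AU12_r"):
    """Average one action unit's intensity over a clip's frames."""
    values = [f[au] for f in frames if au in f]
    return sum(values) / len(values) if values else 0.0

def group_difference(group_a, group_b, au="AU12_r"):
    """Difference in mean per-clip AU intensity between two groups."""
    a = [mean_au_intensity(clip, au) for clip in group_a]
    b = [mean_au_intensity(clip, au) for clip in group_b]
    return sum(a) / len(a) - sum(b) / len(b)

# Example: two tiny hypothetical clips, one per age group
older = [[{"AU12_r": 2.0}, {"AU12_r": 3.0}]]
younger = [[{"AU12_r": 1.0}, {"AU12_r": 2.0}]]
print(group_difference(older, younger))  # positive → stronger in the first group
```

A real analysis would of course aggregate many clips per participant and test the group difference statistically rather than just computing the raw mean gap.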
5
Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021; 11:3362. [PMID: 33564091; PMCID: PMC7873236; DOI: 10.1038/s41598-021-83077-4]
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were mainly composed of eyebrow-raising and eyelid-raising movements. Genuine displays included horizontal movement in the left part of the face, but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movement, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise and the importance of the production method employed in research.
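The movement coupling this abstract refers to can be illustrated at the signal level. A sketch under the assumption that coupling is quantified as the correlation between the frame-to-frame displacements of two tracked facial parts (the paper's exact metric may differ):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def movement_coupling(part_a, part_b):
    """Coupling of two facial-part trajectories (e.g. eyebrow and eyelid
    vertical positions per frame), taken here as the correlation of
    their frame-to-frame displacements (velocity coupling)."""
    da = [b - a for a, b in zip(part_a, part_a[1:])]
    db = [b - a for a, b in zip(part_b, part_b[1:])]
    return pearson(da, db)
```

Strongly coupled parts (as in the Improvised condition) would yield values near 1, while weak coupling (as in Genuine displays) would sit closer to 0.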
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto, 6190288, Japan
- Hiroshi Matsui
- Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 0600808, Japan
- Mircea Zloteanu
- Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames, KT1 2EE, UK
6
Kawulok M, Nalepa J, Kawulok J, Smolka B. Dynamics of facial actions for assessing smile genuineness. PLoS One 2021; 16:e0244647. [PMID: 33400708; PMCID: PMC7785114; DOI: 10.1371/journal.pone.0244647]
Abstract
Applying computer vision techniques to distinguish between spontaneous and posed smiles is an active research topic of affective computing. Although many works have been published addressing this problem and a couple of excellent benchmark databases have been created, the existing state-of-the-art approaches do not exploit the action units defined within the Facial Action Coding System that has become a standard in facial expression analysis. In this work, we explore the possibilities of extracting discriminative features directly from the dynamics of facial action units to differentiate between genuine and posed smiles. We report the results of our experimental study, which shows that the proposed features offer competitive performance to those based on facial landmark analysis and on textural descriptors extracted from spatial-temporal blocks. We make these features publicly available for the UvA-NEMO and BBC databases, which will allow other researchers to further improve the classification scores, while preserving the interpretation capabilities attributed to the use of facial action units. Moreover, we have developed a new technique for identifying the smile phases, which is robust against noise and allows for continuous analysis of facial videos.
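A noise-robust smile-phase segmentation in the spirit described above might look like the following sketch: smooth the intensity signal, then label as apex every frame near the clip maximum, with onset before and offset after. This is a hypothetical simplification, not the authors' technique; the threshold `apex_frac` is an illustrative parameter:

```python
def smooth(signal, k=3):
    """Moving-average smoothing to suppress tracking noise."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def smile_phases(intensity, apex_frac=0.9):
    """Split a per-frame smile intensity curve into onset / apex / offset.

    The apex spans every frame whose smoothed intensity is within
    `apex_frac` of the clip maximum; onset precedes the first apex
    frame and offset follows the last one. Returns (start, end) frame
    indices per phase.
    """
    s = smooth(intensity)
    peak = max(s)
    apex = [i for i, v in enumerate(s) if v >= apex_frac * peak]
    return {
        "onset": (0, apex[0]),
        "apex": (apex[0], apex[-1]),
        "offset": (apex[-1], len(s) - 1),
    }
```

Smoothing before thresholding is what buys the noise robustness: a single spurious tracking spike no longer splits the apex into fragments.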
Affiliation(s)
- Michal Kawulok
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
- Jakub Nalepa
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
- Jolanta Kawulok
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
- Bogdan Smolka
- Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
7
Lee K, Lee EC. Siamese Architecture-Based 3D DenseNet with Person-Specific Normalization Using Neutral Expression for Spontaneous and Posed Smile Classification. Sensors 2020; 20:7184. [PMID: 33333873; PMCID: PMC7765265; DOI: 10.3390/s20247184]
Abstract
Clinical studies have demonstrated that spontaneous and posed smiles have spatiotemporal differences in facial muscle movements, such as laterally asymmetric movements, which use different facial muscles. In this study, a model was developed in which video classification of the two types of smile was performed using a 3D convolutional neural network (CNN) applying a Siamese network, and using a neutral expression as reference input. The proposed model makes the following contributions. First, the developed model solves the problem caused by the differences in appearance between individuals, because it learns the spatiotemporal differences between the neutral expression of an individual and spontaneous and posed smiles. Second, using a neutral expression as an anchor improves the model accuracy, when compared to that of the conventional method using genuine and imposter pairs. Third, by using a neutral expression as an anchor image, it is possible to develop a fully automated classification system for spontaneous and posed smiles. In addition, visualizations were designed for the Siamese architecture-based 3D CNN to analyze the accuracy improvement, and to compare the proposed and conventional methods through feature analysis, using principal component analysis (PCA).
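The neutral-anchor idea can be sketched at the embedding level: subtract the same person's neutral-expression features from the smile features before comparing against class prototypes, so person-specific appearance is factored out. This is a deliberately simplified stand-in for the Siamese 3D CNN; all names and vectors are illustrative:

```python
def l2(v, w):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(v, w)) ** 0.5

def classify_smile(smile_emb, neutral_emb, spont_proto, posed_proto):
    """Classify a smile embedding after subtracting the same person's
    neutral-expression embedding (the Siamese 'anchor'), then picking
    the nearer of two hypothetical class prototypes."""
    normalized = [s - n for s, n in zip(smile_emb, neutral_emb)]
    d_spont = l2(normalized, spont_proto)
    d_posed = l2(normalized, posed_proto)
    return "spontaneous" if d_spont < d_posed else "posed"
```

The design point the abstract makes survives even in this toy form: because `neutral_emb` comes from the same person, inter-individual appearance differences cancel out, and only the expression-related change is compared.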
Affiliation(s)
- Kunyoung Lee
- Department of Computer Science, Graduate School, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Korea
- Eui Chul Lee
- Department of Human-Centered Artificial Intelligence, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Korea
- Correspondence: Tel. +82-2-781-7553
8
Wang S, Zheng Z, Yin S, Yang J, Ji Q. A Novel Dynamic Model Capturing Spatial and Temporal Patterns for Facial Expression Analysis. IEEE Trans Pattern Anal Mach Intell 2020; 42:2082-2095. [PMID: 30998459; DOI: 10.1109/tpami.2019.2911937]
Abstract
Facial expression analysis could be greatly improved by incorporating spatial and temporal patterns present in facial behavior, but these patterns have not yet been utilized to their full advantage. We remedy this via a novel dynamic model, an interval temporal restricted Boltzmann machine (IT-RBM), that is able to capture both universal spatial patterns and complicated temporal patterns in facial behavior for facial expression analysis. We regard a facial expression as a multifarious activity composed of sequential or overlapping primitive facial events. Allen's interval algebra is implemented to portray these complicated temporal patterns via a two-layer Bayesian network. The nodes in the upper-most layer represent the primitive facial events, and the nodes in the lower layer depict the temporal relationships between those events. Our model also captures inherent universal spatial patterns via a multi-value restricted Boltzmann machine in which the visible nodes are facial events, and the connections between hidden and visible nodes model intrinsic spatial patterns. Efficient learning and inference algorithms are proposed. Experiments on posed and spontaneous expression distinction and expression recognition demonstrate that our proposed IT-RBM achieves superior performance compared to state-of-the-art research due to its ability to incorporate these facial behavior patterns.
9
Ioannucci S, George N, Friedrich P, Cerliani L, Thiebaut de Schotten M. White matter correlates of hemi-face dominance in happy and sad expression. Brain Struct Funct 2020; 225:1379-1388. [PMID: 32055980; DOI: 10.1007/s00429-020-02040-7]
Abstract
The neural underpinnings of human emotional expression are thought to be unevenly distributed between the two brain hemispheres. However, little is known about the anatomy supporting this claim, particularly in the cerebral white matter. Here, we explored the relationship between hemi-face dominance in emotional expression and cerebral white matter asymmetries in 33 healthy participants. Measures of emotional expression were derived from pictures of the participants' faces in 'happy smiling' and 'sad frowning' conditions. Chimeric faces were constructed by mirroring right and left hemi-faces, as done in previous studies, resulting in a left-mirrored and a right-mirrored chimeric face per picture. To gain measures of hemi-face dominance per participant, a jury of 20 additional participants rated which chimeric face showed the higher intensity of emotional expressivity, by marking a 155 mm line between the two versions. Measures of the asymmetry of the uncinate, the cingulum and the three branches of the superior longitudinal fasciculi were derived from diffusion-weighted imaging tractography dissections. Group effect analyses indicated that the degree of asymmetry in emotional expression was not as prominent as reported in the literature and showed a large inter-individual variability. The degree of asymmetry in emotional expression was, however, significantly associated with the asymmetries in connective properties of the fronto-temporal and fronto-parietal tracts, specifically the uncinate fasciculus and the first branch of the superior longitudinal fasciculus. This result therefore raises novel hypotheses on the relationship between specific white matter tracts and emotional expression, especially their role in mood disorders.
Affiliation(s)
- Stefano Ioannucci
- Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France
- Department of Neuroscience, University of Padova, Padua, Italy
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, CNRS, University of Bordeaux, Bordeaux, France
- Nathalie George
- Institut du Cerveau et de la Moelle Épinière, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Experimental Neurosurgery Team and CENIR, Centre MEG-EEG, 75013 Paris, France
- Patrick Friedrich
- Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
- Leonardo Cerliani
- Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France
- Faculty of Social and Behavioural Sciences, Universiteit van Amsterdam, Amsterdam, The Netherlands
- Michel Thiebaut de Schotten
- Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
10
Plouffe-Demers MP, Fiset D, Saumure C, Duncan J, Blais C. Strategy Shift Toward Lower Spatial Frequencies in the Recognition of Dynamic Facial Expressions of Basic Emotions: When It Moves It Is Different. Front Psychol 2019; 10:1563. [PMID: 31379648; PMCID: PMC6650765; DOI: 10.3389/fpsyg.2019.01563]
Abstract
Facial expressions of emotion play a key role in social interactions. While their dynamic and transient nature in everyday life calls for fast processing of the visual information they contain, a majority of studies investigating the visual processes underlying their recognition have focused on their static display. The present study aimed to gain a better understanding of these processes using more ecological dynamic facial expressions. In two experiments, we directly compared the spatial frequency (SF) tuning during the recognition of static and dynamic facial expressions. Experiment 1 revealed a shift toward lower SFs for dynamic expressions in comparison to static ones. Experiment 2 was designed to verify whether changes in SF tuning curves were specific to the presence of emotional information in motion by comparing the SF tuning profiles for static, dynamic, and shuffled dynamic expressions. Results showed a similar shift toward lower SFs for shuffled expressions, suggesting that the difference found between dynamic and static expressions might not be linked to informative motion per se but to the presence of motion regardless of its nature.
Affiliation(s)
- Marie-Pier Plouffe-Demers
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Département de Psychologie, Université du Québec à Montréal, Montreal, QC, Canada
- Daniel Fiset
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Camille Saumure
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Justin Duncan
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Département de Psychologie, Université du Québec à Montréal, Montreal, QC, Canada
- Caroline Blais
- Département de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
11
Shangguan C, Wang X, Li X, Wang Y, Lu J, Li Z. Inhibition and Production of Anger Cost More: Evidence From an ERP Study on the Production and Switch of Voluntary Facial Emotional Expression. Front Psychol 2019; 10:1276. [PMID: 31214083; PMCID: PMC6554556; DOI: 10.3389/fpsyg.2019.01276]
Abstract
Humans need to flexibly produce or switch between different facial emotional expressions to meet social communication needs. However, little is known about the control of voluntary facial emotional expression. We investigated the production and switching of facial expressions of happiness and anger in 23 Chinese female university students using a response-priming task while recording electroencephalographic (EEG) signals. Results revealed that a frontal-central P2 component demonstrated greater positivity in the invalidly cued condition compared with the validly cued condition. Comparing the two facial emotional expressions, data from the contingent negative variation (CNV) component revealed that happiness and anger did not differ in the motor preparation phase. Data from N2 and P3 showed that switching from anger to happiness elicited larger N2 amplitudes than switching from happiness to anger, whereas switching from happiness to anger elicited larger P3 amplitudes than switching from anger to happiness. These results revealed that in the invalidly cued condition, the inhibition (N2) and reprogramming (P3) cost of anger was greater than that of happiness. The findings indicate that during the switching process, both the inhibition and the reprogramming of anger consume more processing resources than those of happiness.
Affiliation(s)
- Chenyu Shangguan
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Xia Wang
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Xu Li
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Yali Wang
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Jiamei Lu
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Zhizhuan Li
- Department of Psychology, Shanghai Normal University, Shanghai, China
12
Saumure C, Plouffe-Demers MP, Estéphan A, Fiset D, Blais C. The use of visual information in the recognition of posed and spontaneous facial expressions. J Vis 2019; 18:21. [PMID: 30372755; DOI: 10.1167/18.9.21]
Abstract
Recognizing facial expressions is crucial for the success of social interactions, and the visual processes underlying this ability have been the subject of many studies in the field of face perception. Nevertheless, the stimuli used in the majority of these studies consist of facial expressions that were produced on request rather than spontaneously induced. In the present study, we directly compared the visual strategies underlying the recognition of posed and spontaneous expressions of happiness, disgust, surprise, and sadness. We used the Bubbles method with pictures of the same individuals spontaneously expressing an emotion or posing with an expression on request. Two key findings were obtained: Visual strategies were less systematic with spontaneous than with posed expressions, suggesting a higher heterogeneity in the useful facial cues across identities; and with spontaneous expressions, the relative reliance on the mouth and eyes areas was more evenly distributed, contrasting with the higher reliance on the mouth compared to the eyes area observed with posed expressions.
Affiliation(s)
- Camille Saumure
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Marie-Pier Plouffe-Demers
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Amanda Estéphan
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Daniel Fiset
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Caroline Blais
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, Québec, Canada
13
Abstract
In social contexts individuals frequently act as social chameleons, synchronizing their responses with those of others. Such synchrony is believed to play an important role in promoting mutual emotional and social states. However, synchrony in facial signals, which serve as the main communicative channel between people, has not been systematically studied. To address this gap, we investigated the social spread of smiling dynamics in a naturalistic social setting and assessed its affiliative function. We also studied whether smiling synchrony between people is linked with convergence in their autonomic and emotional responses. To that aim we measured moment-by-moment changes in zygomatic electromyography and cardiovascular activity in dyads of previously unacquainted participants, who co-viewed and subsequently rated emotional movies. We found a robust, dyad-specific zygomatic synchrony in co-viewing participants. During the positive movie, such zygomatic synchrony co-varied with cardiovascular synchrony and with convergence in positive feelings. No such links were found for the negative movie. Centrally, zygomatic synchrony in both emotional contexts predicted the subsequently reported affiliative feelings of dyad members. These results demonstrate that naturally unfolding smiling behavior is highly contagious. They further suggest that zygomatic synchrony functions as a social facilitator, eliciting affiliation towards previously unknown others.
14
Neurophysiology of spontaneous facial expressions: II. Motor control of the right and left face is partially independent in adults. Cortex 2018; 111:164-182. [PMID: 30502646; DOI: 10.1016/j.cortex.2018.10.027]
Abstract
Facial expressions are described traditionally as monolithic or unitary entities. However, humans have the capacity to produce facial blends of emotion in which the upper and lower face simultaneously display different expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face in each hemisphere that, presumably, also occur in humans. Using high-speed videography, we began measuring the movement dynamics of spontaneous facial expressions, including facial blends, to develop a more complete understanding of the neurophysiology underlying facial expressions. In our part 1 publication in Cortex (2016), we found that hemispheric motor control of the upper and lower face is overwhelmingly independent; 242 (99%) of the expressions were classified as demonstrating independent hemispheric motor control whereas only 3 (1%) were classified as demonstrating dependent hemispheric motor control. In this companion paper (part 2), 251 unitary facial expressions that occurred on either the upper or lower face were analyzed. 164 (65%) expressions demonstrated dependent hemispheric motor control whereas 87 (35%) expressions demonstrated independent or dual hemispheric motor control, indicating that some expressions represent facial blends of emotion that occur across the vertical facial axis. These findings also support the concepts that 1) spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical facial axis and 2) facial expressions are complex, multi-component, motoric events. Based on the Emotion-type hypothesis of cerebral lateralization, we propose that facial expressions modulated by a primary-emotional response to an environmental event are initiated by the right hemisphere on the left side of the face, whereas facial expressions modulated by a social-emotional response to an environmental event are initiated by the left hemisphere on the right side of the face.
Collapse
|
15
|
Volk GF, Steinerstauch A, Lorenz A, Modersohn L, Mothes O, Denzler J, Klingner CM, Hamzei F, Guntinas-Lichius O. Facial motor and non-motor disabilities in patients with central facial paresis: a prospective cohort study. J Neurol 2018; 266:46-56. [PMID: 30367260 DOI: 10.1007/s00415-018-9099-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2018] [Revised: 10/14/2018] [Accepted: 10/15/2018] [Indexed: 11/26/2022]
Abstract
Although central facial paresis (CFP) is a major symptom of stroke, there is a lack of studies on the motor and non-motor disabilities of stroke patients. A prospective cohort study of 112 patients with CFP (44% female, median age: 64 years, median Barthel Index: 70) was performed at admission to and discharge from post-stroke inpatient rehabilitation. Motor function was evaluated using House-Brackmann grading, Sunnybrook grading, and the Stennert Index. Automated action unit (AU) analysis was performed to analyze facial movement in detail. Non-motor function was assessed using the Facial Disability Index (FDI) and the Facial Clinimetric Evaluation (FaCE). The median interval from stroke to rehabilitation was 21 days; rehabilitation lasted 20 days. House-Brackmann grading was ≥ grade III for 79% of patients at admission. AU activation in the lower face was significantly lower in patients with right hemispheric infarction than in those with left hemispheric infarction (all p < 0.05). Median total FDI and FaCE scores were 46.5 and 69, respectively. Facial grading and FDI/FaCE scores improved during inpatient rehabilitation (all p < 0.05). There was a significant increase in the activation of AU12 (Zygomaticus major muscle), AU13 (Levator anguli oris muscle), and AU24 (Orbicularis oris muscle) during inpatient rehabilitation (all p < 0.05). Multivariate analysis revealed that activation of AU10 (Levator labii superioris), AU12, AU17 (Depressor labii), and AU38 (Nasalis) were independent predictors of better quality of life. These results demonstrate that CFP has a significant impact on patients' quality of life. Therapy of CFP with a focus on specific AUs should be part of post-stroke rehabilitation.
Collapse
Affiliation(s)
- Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747, Jena, Germany
- Facial Nerve Center Jena, Jena University Hospital, Jena, Germany
| | - Anika Steinerstauch
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747, Jena, Germany
| | - Annegret Lorenz
- Department of Neurology, Moritz Klinik Bad Klosterlausnitz, Bad Klosterlausnitz, Germany
| | - Luise Modersohn
- Department of Computer Science, Friedrich-Schiller-University Jena, Jena, Germany
| | - Oliver Mothes
- Department of Computer Science, Friedrich-Schiller-University Jena, Jena, Germany
| | - Joachim Denzler
- Department of Computer Science, Friedrich-Schiller-University Jena, Jena, Germany
| | - Carsten M Klingner
- Facial Nerve Center Jena, Jena University Hospital, Jena, Germany
- Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
| | - Farsin Hamzei
- Department of Neurology, Moritz Klinik Bad Klosterlausnitz, Bad Klosterlausnitz, Germany
- Section of Neurological Rehabilitation, Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
| | - Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747, Jena, Germany.
- Facial Nerve Center Jena, Jena University Hospital, Jena, Germany.
| |
Collapse
|
16
|
Recio G, Sommer W. Copycat of dynamic facial expressions: Superior volitional motor control for expressions of disgust. Neuropsychologia 2018; 119:512-523. [PMID: 30176302 DOI: 10.1016/j.neuropsychologia.2018.08.027] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 07/31/2018] [Accepted: 08/29/2018] [Indexed: 10/28/2022]
Abstract
In social situations facial expressions are often strategically employed. Despite the extensive research on motor control of limb movements, little is known about the control of facial expressions. Using a response-priming task, we investigated motor control over three facial expressions: smiles, disgust expressions, and emotionally neutral jaw drops. Prime stimuli consisted of videos of a facial expression to be prepared or, as a neutral prime, an abstract symbol superimposed on a scrambled face. In valid trials an equal symbol (=) indicated that the primed expression should be produced. In invalid trials, an unequal symbol (≠) prompted participants to produce an alternative, unprimed expression. We examined the impact of emotion on preparing and revoking a prepared expression, and possible facilitation for dynamic facial expressions relative to symbolic primes. Participants' facial responses were scored using automated software analysis of facial expressions. The underlying neurocognitive processes were tracked with event-related potentials (ERPs). Reprogramming costs, in the form of longer reaction times (RTs) in trials where participants had prepared an invalidly primed expression and had to switch quickly to the correct one, were more pronounced for smiles and jaw drops than for disgust, possibly indicating the need to be fast when showing disgust. Data from the P3 component related the behavioral effect to a more efficient updating of the correct response in brain systems responsible for motor control. Priming participants with dynamic facial expressions as examples for imitation improved performance accuracy compared with the symbolic abstract stimuli, but did not affect RTs. Priming with dynamic videos also resulted in larger validity effects on the P3 component when disgust was the target response, indicating that the perceptual system might trigger automatic emotional responses, at least for negative affect.
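The reprogramming cost described above is simply the RT difference between invalid and valid trials per expression. A minimal sketch, using hypothetical RT values (the variable names and numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical per-trial reaction times (ms) by (expression, prime validity).
rts = {
    ("smile", "valid"): [412, 398, 425],
    ("smile", "invalid"): [521, 540, 515],
    ("disgust", "valid"): [405, 418, 399],
    ("disgust", "invalid"): [462, 471, 458],
}

def reprogramming_cost(rts, expression):
    """Validity effect: mean RT on invalid trials minus mean RT on valid trials."""
    return (np.mean(rts[(expression, "invalid")])
            - np.mean(rts[(expression, "valid")]))

smile_cost = reprogramming_cost(rts, "smile")
disgust_cost = reprogramming_cost(rts, "disgust")
```

With these toy numbers, the smile cost exceeds the disgust cost, mirroring the pattern the abstract reports.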
Collapse
Affiliation(s)
- Guillermo Recio
- Differential Psychology and Psychological Assessment, Universität Hamburg, Von-Melle-Park 5, R4020b, D-20146 Hamburg, Germany.
| | - Werner Sommer
- Department of Psychology, Humboldt-Universität zu Berlin, Unter den Linden 6, D-10099 Berlin, Germany.
| |
Collapse
|
17
|
Abstract
Models advanced to explain hemispheric asymmetries in the representation of emotions will be discussed following their historical progression. First, the clinical observations that have suggested a general dominance of the right hemisphere for all kinds of emotions will be reviewed. Then the experimental investigations that have led to the proposal of a different hemispheric specialization for positive versus negative emotions (valence hypothesis) or, alternatively, for approach versus avoidance tendencies (motivational hypothesis) will be surveyed. The discussion of these general models will be followed by a review of recent studies which have documented laterality effects within specific brain structures known to play a critical role in different components of emotions, namely the amygdala in the computation of emotionally laden stimuli, the ventromedial prefrontal cortex in the integration of cognition and emotion and in the control of impulsive reactions, and the anterior insula in the conscious experience of emotion. Results of these recent investigations support and provide an updated, integrated version of early models assuming a general right hemisphere dominance for all kinds of emotions.
Collapse
Affiliation(s)
- Guido Gainotti
- Institute of Neurology, Università Cattolica del Sacro Cuore, Rome, Italy
- IRCCS Fondazione Santa Lucia, Department of Clinical and Behavioral Neurology, Rome, Italy
| |
Collapse
|
18
|
Abstract
The smile is a frequently expressed facial expression that typically conveys a positive emotional state and friendly intent. However, human beings have also learned how to fake smiles, typically by controlling the mouth to provide a genuine-looking expression. This is often accompanied by inaccuracies that can allow others to determine that the smile is false. Mouth movement is one of the most striking features of the smile, yet our understanding of its dynamic elements is still limited. The present study analyzes the dynamic features of lip corners, and considers how they differ between genuine and posed smiles. Employing computer vision techniques, we investigated elements such as the duration, intensity, speed, and symmetry of the lip corners, and certain irregularities in genuine and posed smiles obtained from the UvA-NEMO Smile Database. After utilizing the facial analysis tool OpenFace, we further propose a new approach to segmenting the onset, apex, and offset phases of smiles, as well as a means of measuring irregularities and symmetry in facial expressions. We extracted these features from 2D and 3D coordinates and conducted an analysis. The results reveal that genuine smiles have higher values than posed smiles for onset, offset, apex, and total durations, as well as for offset displacement and a variable we termed Irregularity-b (the SD of the apex phase). Conversely, genuine smiles tended to have lower values for onset and offset speeds, Irregularity-a (the rate of peaks), Symmetry-a (the correlation between left and right facial movements), and Symmetry-d (differences in onset frame numbers between the left and right sides of the face). The findings from the present study are compared to those of previous research, and certain speculations are made.
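Two of the features named above are straightforward to compute from landmark time series of the kind OpenFace exports: Symmetry-a as the left-right correlation, and Irregularity-b as the SD of intensity during the apex. A minimal sketch with synthetic trajectories; the helper names and toy signals are my own illustration, not the paper's code:

```python
import numpy as np

def symmetry_a(left, right):
    """Pearson correlation between left and right lip-corner displacement
    time series (closer to 1 = more symmetric movement)."""
    return float(np.corrcoef(left, right)[0, 1])

def irregularity_b(intensity, apex_start, apex_end):
    """Standard deviation of smile intensity during the apex phase."""
    return float(np.std(intensity[apex_start:apex_end]))

# Toy trajectories: a symmetric smile vs. one whose right corner lags,
# as might happen when a smile is posed rather than felt.
t = np.linspace(0, np.pi, 60)
left = np.sin(t)
right_sym = np.sin(t)
right_lag = np.sin(np.clip(t - 0.6, 0.0, np.pi))

sym_high = symmetry_a(left, right_sym)   # near-perfect symmetry
sym_low = symmetry_a(left, right_lag)    # reduced by the lag
apex_sd = irregularity_b(left, 20, 40)
```

The lagged trajectory yields a lower Symmetry-a, the direction the abstract associates with genuine smiles.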
Collapse
Affiliation(s)
- Hui Guo
- Wenzhou 7th People's Hospital, Wenzhou, China
| | - Xiao-Hui Zhang
- Institute of Psychology and Behavior Sciences, Wenzhou University, Wenzhou, China
| | - Jun Liang
- Institute of Psychology and Behavior Sciences, Wenzhou University, Wenzhou, China
| | - Wen-Jing Yan
- Institute of Psychology and Behavior Sciences, Wenzhou University, Wenzhou, China
| |
Collapse
|
19
|
Lindell A. Lateralization of the expression of facial emotion in humans. PROGRESS IN BRAIN RESEARCH 2018; 238:249-270. [DOI: 10.1016/bs.pbr.2018.06.005] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
20
|
Namba S, Kabir RS, Miyatani M, Nakao T. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach. Front Psychol 2017; 8:633. [PMID: 28522979 PMCID: PMC5415601 DOI: 10.3389/fpsyg.2017.00633] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 04/09/2017] [Indexed: 11/20/2022] Open
Abstract
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies have emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which covertly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We related each facial action to the emotional experiences by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways that cohere with their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provides new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser.
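The stepwise approach above amounts to greedily adding the AU predictors that best explain the emotion ratings. A minimal forward-selection sketch on synthetic data; the AU labels, coefficients, and stopping threshold are illustrative assumptions, not the study's actual model:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with intercept."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def forward_select(X, y, names, threshold=0.01):
    """Greedy forward stepwise regression: repeatedly add the predictor
    that most improves R^2, stopping when the gain falls below threshold."""
    chosen, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        score, j = max((r_squared(X[:, chosen + [k]], y), k) for k in remaining)
        if score - best < threshold:
            break
        chosen.append(j)
        remaining.remove(j)
        best = score
    return [names[j] for j in chosen], best

# Toy data: amusement ratings driven mainly by AU12 (lip corner puller).
rng = np.random.default_rng(0)
aus = rng.integers(0, 2, size=(80, 3)).astype(float)  # AU presence (0/1)
names = ["AU6", "AU12", "AU25"]
ratings = 2.0 * aus[:, 1] + 0.5 * aus[:, 0] + rng.normal(0, 0.3, 80)

selected, r2 = forward_select(aus, ratings, names)
```

With this setup, AU12 is selected first because it carries most of the variance in the ratings.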
Collapse
Affiliation(s)
- Shushi Namba
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Russell S Kabir
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Makoto Miyatani
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| | - Takashi Nakao
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| |
Collapse
|
21
|
Abstract
Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to those of posed stimulus sets.
Collapse
Affiliation(s)
- Disa A Sauter
- Department of Social Psychology, University of Amsterdam, Amsterdam, Netherlands
| | - Agneta H Fischer
- Department of Social Psychology, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
22
|
Boutsen FA, Dvorak JD, Pulusu VK, Ross ED. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions. Vision Res 2017; 133:150-160. [PMID: 28279711 DOI: 10.1016/j.visres.2016.07.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Revised: 05/16/2016] [Accepted: 07/09/2016] [Indexed: 10/20/2022]
Abstract
Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning.
Collapse
Affiliation(s)
- Frank A Boutsen
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
| | - Justin D Dvorak
- Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA
| | - Vinay K Pulusu
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA
| | - Elliott D Ross
- Department of Neurology, University of Oklahoma Health Sciences Center, and the VA Medical Center (127), 921 NE 13th Street, Oklahoma City, OK 73104, USA; Department of Communication Sciences and Disorders, University of Oklahoma Health Sciences, 1200 North Stonewall Ave., Oklahoma City, OK 73117, USA.
| |
Collapse
|
23
|
Müri RM. Cortical control of facial expression. J Comp Neurol 2017; 524:1578-85. [PMID: 26418049 DOI: 10.1002/cne.23908] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2015] [Revised: 06/03/2015] [Accepted: 09/25/2015] [Indexed: 11/10/2022]
Abstract
The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional facial expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system.
Collapse
Affiliation(s)
- René M Müri
- Division of Cognitive and Restorative Neurology, Departments of Neurology and Clinical Research, University Hospital Inselspital, 3010 Bern, Switzerland; Gerontechnology and Rehabilitation Group, University of Bern, 3012 Bern, Switzerland; Center for Cognition, Learning, and Memory, University of Bern, 3012 Bern, Switzerland
| |
Collapse
|
24
|
Blake ML. Right-Hemisphere Pragmatic Disorders. PERSPECTIVES IN PRAGMATICS, PHILOSOPHY & PSYCHOLOGY 2017. [DOI: 10.1007/978-3-319-47489-2_10] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
25
|
Korb S, Wood A, Banks CA, Agoulnik D, Hadlock TA, Niedenthal PM. Asymmetry of Facial Mimicry and Emotion Perception in Patients With Unilateral Facial Paralysis. JAMA FACIAL PLAST SU 2016; 18:222-7. [DOI: 10.1001/jamafacial.2015.2347] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Affiliation(s)
- Sebastian Korb
- Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Adrienne Wood
- Department of Psychology, University of Wisconsin, Madison
| | - Caroline A. Banks
- Department of Otology and Laryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts
| | - Dasha Agoulnik
- Department of Otology and Laryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts
| | - Tessa A. Hadlock
- Division of Facial Plastic and Reconstructive Surgery, Facial Nerve Center, Boston, Massachusetts
| | | |
Collapse
|
26
|
Ross ED, Gupta SS, Adnan AM, Holden TL, Havlicek J, Radhakrishnan S. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults. Cortex 2016; 76:28-42. [DOI: 10.1016/j.cortex.2016.01.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/29/2015] [Accepted: 01/05/2016] [Indexed: 12/01/2022]
|
27
|
Kawulok M, Nalepa J, Nurzynska K, Smolka B. In Search of Truth: Analysis of Smile Intensity Dynamics to Detect Deception. LECTURE NOTES IN COMPUTER SCIENCE 2016. [DOI: 10.1007/978-3-319-47955-2_27] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|
28
|
Krippl M, Karim AA, Brechmann A. Neuronal correlates of voluntary facial movements. Front Hum Neurosci 2015; 9:598. [PMID: 26578940 PMCID: PMC4623161 DOI: 10.3389/fnhum.2015.00598] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Accepted: 10/14/2015] [Indexed: 11/30/2022] Open
Abstract
Whereas the somatotopy of finger movements has been extensively studied with neuroimaging, the neural foundations of facial movements remain elusive. Therefore, we systematically studied the neuronal correlates of voluntary facial movements using the Facial Action Coding System (FACS, Ekman et al., 2002). The facial movements performed in the MRI scanner were defined as Action Units (AUs) and were controlled by a certified FACS coder. The main goal of the study was to investigate the detailed somatotopy of the facial primary motor area (facial M1). Eighteen participants were asked to produce the following four facial movements in the fMRI scanner: AU1+2 (brow raiser), AU4 (brow lowerer), AU12 (lip corner puller) and AU24 (lip presser), each in alternation with a resting phase. Our facial movement task induced generally high activation in brain motor areas (e.g., M1, premotor cortex, supplementary motor area, putamen), as well as in the thalamus, insula, and visual cortex. BOLD activations revealed overlapping representations for the four facial movements. However, within the activated facial M1 areas, we could find distinct peak activities in the left and right hemisphere supporting a rough somatotopic upper to lower face organization within the right facial M1 area, and a somatotopic organization within the right M1 upper face part. In both hemispheres, the order was an inverse somatotopy within the lower face representations. In contrast to the right hemisphere, in the left hemisphere the representation of AU4 was more lateral and anterior compared to the rest of the facial movements. Our findings support the notion of a partial somatotopic order within the M1 face area confirming the “like attracts like” principle (Donoghue et al., 1992). AUs which are often used together or are similar are located close to each other in the motor cortex.
Collapse
Affiliation(s)
- Martin Krippl
- Department of Methodology, Psychodiagnostics and Evaluation Research, Institute of Psychology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
| | - Ahmed A Karim
- Department of Psychiatry and Psychotherapy, Universitätsklinikum Tübingen, Tübingen, Germany; Department of Prevention and Health Psychology, SRH Fernhochschule Riedlingen, Riedlingen, Germany
| | - André Brechmann
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|
29
|
Carr EW, Korb S, Niedenthal PM, Winkielman P. The two sides of spontaneity: Movement onset asymmetries in facial expressions influence social judgments. JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY 2014. [DOI: 10.1016/j.jesp.2014.05.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
30
|
Korb S, With S, Niedenthal P, Kaiser S, Grandjean D. The perception and mimicry of facial movements predict judgments of smile authenticity. PLoS One 2014; 9:e99194. [PMID: 24918939 PMCID: PMC4053432 DOI: 10.1371/journal.pone.0099194] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2014] [Accepted: 05/12/2014] [Indexed: 11/25/2022] Open
Abstract
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar’s neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow’s feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.
Collapse
Affiliation(s)
- Sebastian Korb
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
| | - Stéphane With
- Department of Psychology, University of Geneva, Geneva, Switzerland
| | - Paula Niedenthal
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
| | - Susanne Kaiser
- Department of Psychology, University of Geneva, Geneva, Switzerland
| | - Didier Grandjean
- Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
31
|
Prenatal hormonal exposure (2D:4D ratio) and strength of lateralisation for processing facial emotion. PERSONALITY AND INDIVIDUAL DIFFERENCES 2014. [DOI: 10.1016/j.paid.2013.09.031] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
32
|
Equal Time for Psychological and Biological Contributions to Human Variation. REVIEW OF GENERAL PSYCHOLOGY 2013. [DOI: 10.1037/a0033481] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
|
33
|
Ross ED, Shayya L, Champlain A, Monnot M, Prodan CI. Decoding facial blends of emotion: visual field, attentional and hemispheric biases. Brain Cogn 2013; 83:252-61. [PMID: 24091036 DOI: 10.1016/j.bandc.2013.09.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Revised: 07/23/2013] [Accepted: 09/02/2013] [Indexed: 10/26/2022]
Abstract
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e., a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower facial emotions, more so than upper facial emotions, are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced.
Collapse
Affiliation(s)
- Elliott D Ross
- Department of Neurology, University of Oklahoma Health Sciences Center and the VA Medical Center 127, 921 NE 13th Street, Oklahoma City, OK 73104, USA.
| | | | | | | | | |
Collapse
|