1
Urtado MB, Rodrigues RD, Fukusima SS. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behav Sci (Basel) 2024; 14:355. PMID: 38785846; PMCID: PMC11117586; DOI: 10.3390/bs14050355.
Abstract
Uncertainties and discrepant results in identifying crucial areas for emotional facial expression recognition may stem from the eye tracking data analysis methods used. Many studies employ analysis parameters that predominantly prioritize the foveal vision angle, ignoring the potential influences of simultaneous parafoveal and peripheral information. To explore the possible underlying causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision (FV). Employing eye tracking and gaze contingency, we collected visual inspection and judgment data over 30 frontal face images, equally distributed among five emotions. Raw eye tracking data were processed with Eye Movements Metrics and Visualizations (EyeMMV). Visual inspection time, number of fixations, and fixation duration all increased as the visual field was restricted. Accuracy, however, differed significantly between the NVR and FV groups and between the PFFV and FV groups, with no difference between NVR and PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision. The results suggest that eye tracking data analysis methods should incorporate projection angles extending at least to the parafoveal level.
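Fixation detection of the kind EyeMMV performs typically combines a spatial dispersion criterion with a minimum-duration criterion. The sketch below is a generic dispersion-based (I-DT-style) detector in that spirit; the thresholds and the exact windowing logic are illustrative assumptions, not EyeMMV's actual implementation.

```python
# Illustrative dispersion-based fixation detection (I-DT style).
# Parameters (pixels, seconds) are hypothetical, not from the study.

def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: list of (t, x, y) gaze samples, sorted by time.
    Returns a list of fixations as (t_start, t_end, centroid_x, centroid_y)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the bounding-box dispersion stays small.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # continue after the detected fixation
        else:
            i += 1  # too short: advance one sample and retry
    return fixations
```

Running this on two stable gaze clusters separated by a saccade-like jump yields two fixations with centroids near each cluster.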
Affiliation(s)
- Melina Boratto Urtado
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
- Sergio Sheiji Fukusima
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
2
Son G, Im HY, Albohn DN, Kveraga K, Adams RB, Sun J, Chong SC. Americans weigh an attended emotion more than Koreans in overall mood judgments. Sci Rep 2023; 13:19323. PMID: 37935828; PMCID: PMC10630378; DOI: 10.1038/s41598-023-46723-7.
Abstract
Face ensemble coding is the perceptual ability to form a quick, overall impression of a group of faces, triggering social and behavioral motivations towards other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally and Westerners focusing on local parts. However, the remaining question is how such a default attention mode is influenced by salient information during ensemble perception. We created visual displays that resembled a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants' judgments of group emotion weighed the attended individual face more strongly than Korean participants' judgments did, suggesting a greater influence of local information on global perception. Our results show that different attentional modes between cultural groups modulate the social-emotional processing underlying people's perceptions and attributions.
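The differential weighting described above can be captured by a toy weighted-averaging model of crowd-mood judgment, in which the attended face receives an elevated weight and the remaining faces share the rest. The weight values below are hypothetical illustrations of the two cultural profiles, not parameters estimated by the study.

```python
# Toy weighted-averaging model of crowd-emotion judgment.
# Valences and weights are hypothetical, for illustration only.

def judged_mood(face_valences, attended_idx, w_attended):
    """Weighted mean valence of a crowd: the attended face gets weight
    w_attended; the remaining weight is spread evenly over the other faces."""
    others = [v for i, v in enumerate(face_valences) if i != attended_idx]
    return (w_attended * face_valences[attended_idx]
            + (1 - w_attended) * sum(others) / len(others))
```

With one angry face (valence −1) attended in an otherwise neutral crowd, a larger attended-face weight pulls the overall mood judgment further toward the attended face, mirroring the reported American–Korean difference in direction only.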
Affiliation(s)
- Gaeun Son
- Yonsei University, Seoul, South Korea
- Hee Yeon Im
- University of British Columbia, Vancouver, Canada
- Jisoo Sun
- Yonsei University, Seoul, South Korea
3
Oswald F, Adams RB. Feminist Social Vision: Seeing Through the Lens of Marginalized Perceivers. Pers Soc Psychol Rev 2022:10888683221126582. PMID: 36218340; PMCID: PMC10391697; DOI: 10.1177/10888683221126582.
Abstract
Social vision research, which examines, in part, how humans visually perceive social stimuli, is well-positioned to improve understandings of social inequality. However, social vision research has rarely prioritized the perspectives of marginalized group members. We offer a theoretical argument for diversifying understandings of social perceptual processes by centering marginalized perspectives. We examine (a) how social vision researchers frame their research questions and whom these framings prioritize, and (b) how perceptual processes (person perception, people perception, and perception of social objects) are linked to group membership, such that comprehensively understanding these processes requires attention to marginalized perceivers. We discuss how social vision research translates into theoretical advances and into action for reducing negative intergroup consequences (e.g., prejudice). The purpose of this article is to delineate how prioritizing marginalized perspectives in social vision research could develop novel questions, bridge theoretical gaps, and elevate social vision's translational impact to improve outcomes for marginalized groups.
Affiliation(s)
- Flora Oswald
- The Pennsylvania State University, University Park, USA
4
Thomas L, von Castell C, Hecht H. How facial masks alter the interaction of gaze direction, head orientation, and emotion recognition. Front Neurosci 2022; 16:937939. PMID: 36213742; PMCID: PMC9533556; DOI: 10.3389/fnins.2022.937939.
Abstract
The COVID-19 pandemic has altered the way we interact with each other: mandatory mask-wearing obscures facial information that is crucial for emotion recognition. Whereas the influence of wearing a mask on emotion recognition has been repeatedly investigated, little is known about its impact on interaction effects among emotional signals and other social signals. Therefore, the current study sought to explore how gaze direction, head orientation, and emotional expression interact with respect to emotion perception, and how these interactions are altered by wearing a face mask. In two online experiments, we presented face stimuli from the Radboud Faces Database displaying different facial expressions (anger, fear, happiness, neutral, and sadness), gaze directions (−13°, 0°, and 13°), and head orientations (−45°, 0°, and 45°), either without a mask (Experiment 1) or with a mask (Experiment 2). Participants categorized the displayed emotional expressions. Not surprisingly, masks impaired emotion recognition. Surprisingly, without the mask, emotion recognition was unaffected by averted head orientations and only slightly affected by gaze direction, whereas the mask strongly interfered with this ability. The mask increased the influence of head orientation and gaze direction, particularly for the emotions that were poorly recognized with a mask. The results suggest that in cases of uncertainty due to ambiguous or absent signals, we seem to unconsciously factor in extraneous information.
5
The impact of social features in an online community on member contribution. Comput Human Behav 2022. DOI: 10.1016/j.chb.2021.107149.
6
Pereira M, Meng H, Hone K. Prediction of Communication Effectiveness During Media Skills Training Using Commercial Automatic Non-verbal Recognition Systems. Front Psychol 2021; 12:675721. PMID: 34659000; PMCID: PMC8511452; DOI: 10.3389/fpsyg.2021.675721.
Abstract
It is well recognised that social signals play an important role in communication effectiveness. Observation of videos to understand non-verbal behaviour is time-consuming and limits the potential to incorporate detailed and accurate feedback of this behaviour in practical applications such as communication skills training or performance evaluation. The aim of the current research is twofold: (1) to investigate whether off-the-shelf emotion recognition technology can detect social signals in media interviews and (2) to identify which combinations of social signals are most promising for evaluating trainees' performance in a media interview. To investigate this, non-verbal signals were automatically recognised from practice on-camera media interviews conducted within a media training setting with a sample size of 34. Automated non-verbal signal detection covered multimodal features including facial expression, hand gestures, vocal behaviour, and 'honest' signals. The on-camera interviews were categorised into effective and poor communication exemplars based on communication skills ratings provided by trainers and neutral observers, which served as a ground truth. A correlation-based feature selection method was used to select signals associated with performance. To assess the accuracy of the selected features, a number of machine learning classification techniques were used. Naive Bayes analysis produced the best results, with an F-measure of 0.76 and a prediction accuracy of 78%. Results revealed that a combination of body movements, hand movements, and facial expression is relevant for establishing communication effectiveness in the context of media interviews. The results of the current study have implications for the automatic evaluation of media interviews, with potential applications including the enhancement of communication skills training, such as media skills training.
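The pipeline described, correlation-based feature selection followed by Naive Bayes classification, can be sketched from scratch as follows. This is an illustrative reimplementation on invented synthetic data, not the authors' code: the feature values, the top-k selection rule, and the Gaussian likelihood model are all assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(X, y, k):
    """Rank features by |correlation| with the binary label; keep the top k indices."""
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances + priors."""
    def fit(self, X, y):
        self.stats = {}
        for c in set(y):
            rows = [r for r, lbl in zip(X, y) if lbl == c]
            params = []
            for col in zip(*rows):
                m = sum(col) / len(col)
                var = max(1e-9, sum((v - m) ** 2 for v in col) / len(col))
                params.append((m, var))
            self.stats[c] = (len(rows) / len(X), params)
        return self

    def predict_one(self, x):
        def logp(c):  # log prior + sum of Gaussian log-likelihoods
            prior, params = self.stats[c]
            lp = math.log(prior)
            for v, (m, var) in zip(x, params):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=logp)
```

On a toy dataset where only the first feature tracks the label, the selector keeps that feature and the classifier separates the two classes.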
Affiliation(s)
- Monica Pereira
- Department of Psychology, School of Social Sciences, London Metropolitan University, London, United Kingdom
- Hongying Meng
- Department of Electronic and Computer Engineering, College of Engineering, Design and Physical Sciences, Brunel University London, London, United Kingdom
- Kate Hone
- Department of Computer Science, College of Engineering, Design and Physical Sciences, Brunel University London, London, United Kingdom
7
Kveraga K, Im HY, Ward N, Adams RB. Fast saccadic and manual responses to faces presented to the koniocellular visual pathway. J Vis 2020; 20:9. PMID: 32097485; PMCID: PMC7343428; DOI: 10.1167/jov.20.2.9.
Abstract
The parallel pathways of the human visual system differ in their tuning to luminance, color, and spatial frequency. These attunements have recently been shown to propagate to the differential processing of higher-order stimuli, facial threat cues, in the magnocellular (M) and parvocellular (P) pathways, with greater sensitivity to clear and ambiguous threat, respectively. The role of the third, koniocellular (K) pathway in facial threat processing, however, remains unknown. To address this gap in knowledge, we briefly presented peripheral face stimuli psychophysically biased towards the M, P, or K pathways. Observers were instructed to report via a key-press whether the face was angry or neutral while their eye movements and manual responses were recorded. We found that short-latency saccades were made more frequently to faces presented in the K channel than to those in the P or M channels. Saccade latencies were not significantly modulated by expressive and identity cues. In contrast, manual response latencies and accuracy were modulated both by pathway biasing and by interactions of facial expression with facial masculinity, such that angry male faces elicited the fastest, and angry female faces the least accurate, responses. We conclude that face stimuli can evoke fast saccadic and manual responses when projected to the K pathway.
8
Cushing CA, Im HY, Adams RB, Ward N, Kveraga K. Magnocellular and parvocellular pathway contributions to facial threat cue processing. Soc Cogn Affect Neurosci 2020; 14:151-162. PMID: 30721981; PMCID: PMC6382926; DOI: 10.1093/scan/nsz003.
Abstract
Human faces evolved to signal emotions, with their meaning contextualized by eye gaze. For instance, a fearful expression paired with averted gaze clearly signals both presence of threat and its probable location. Conversely, direct gaze paired with facial fear leaves the source of the fear-evoking threat ambiguous. Given that visual perception occurs in parallel streams with different processing emphases, our goal was to test a recently developed hypothesis that clear and ambiguous threat cues would differentially engage the magnocellular (M) and parvocellular (P) pathways, respectively. We employed two-tone face images to characterize the neurodynamics evoked by stimuli that were biased toward M or P pathways. Human observers (N = 57) had to identify the expression of fearful or neutral faces with direct or averted gaze while their magnetoencephalogram was recorded. Phase locking between the amygdaloid complex, orbitofrontal cortex (OFC) and fusiform gyrus increased early (0–300 ms) for M-biased clear threat cues (averted-gaze fear) in the β-band (13–30 Hz) while P-biased ambiguous threat cues (direct-gaze fear) evoked increased θ (4–8 Hz) phase locking in connections with OFC of the right hemisphere. We show that M and P pathways are relatively more sensitive toward clear and ambiguous threat processing, respectively, and characterize the neurodynamics underlying emotional face processing in the M and P pathways.
Affiliation(s)
- Cody A Cushing
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Hee Yeon Im
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Noreen Ward
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Kestutis Kveraga
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
9
Adams RB, Im HY, Cushing C, Boshyan J, Ward N, Albohn DN, Kveraga K. Differential magnocellular versus parvocellular pathway contributions to the combinatorial processing of facial threat. Prog Brain Res 2019; 247:71-87. PMID: 31196444; DOI: 10.1016/bs.pbr.2019.03.006.
Abstract
Recently, the speed of presentation of facially expressive stimuli was found to influence the processing of compound threat cues (e.g., anger/fear/gaze). For instance, greater amygdala responses were found to clear (e.g., direct-gaze anger/averted-gaze fear) versus ambiguous (averted-gaze anger/direct-gaze fear) combinations of threat cues when rapidly presented (33 and 300 ms), but greater responses to ambiguous versus clear threat cues when presented for more sustained durations (1, 1.5, and 2 s). A working hypothesis was put forth (Adams et al., 2012) that these effects were due to differential magnocellular versus parvocellular pathway contributions to the rapid versus sustained processing of threat, respectively. To test this possibility directly here, we restricted visual stream processing in the fMRI environment using facially expressive stimuli specifically designed to bias visual input exclusively to the magnocellular versus parvocellular pathways. We found that for magnocellular-biased stimuli, activations were predominantly greater to clear versus ambiguous threat-gaze pairs (on par with that previously found for rapid presentations of threat cues), whereas activations to ambiguous versus clear threat-gaze pairs were greater for parvocellular-biased stimuli (on par with that previously found for sustained presentations). We couch these findings in an adaptive dual-process account of threat perception and highlight implications for other dual-process models within psychology.
Affiliation(s)
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA, United States
- Hee Yeon Im
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
- Cody Cushing
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
- Jasmine Boshyan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
- Noreen Ward
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
- Daniel N Albohn
- Department of Psychology, The Pennsylvania State University, University Park, PA, United States
- Kestutis Kveraga
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
10
Spatial and feature-based attention to expressive faces. Exp Brain Res 2019; 237:967-975. PMID: 30683957; DOI: 10.1007/s00221-019-05472-8.
Abstract
Facial emotion is an important cue for deciding whether an individual is potentially helpful or harmful. However, facial expressions are inherently ambiguous and observers typically employ other cues to categorize emotion expressed on the face, such as race, sex, and context. Here, we explored the effect of increasing or reducing different types of uncertainty associated with a facial expression that is to be categorized. On each trial, observers responded according to the emotion and location of a peripherally presented face stimulus and were provided with either: (1) no information about the upcoming face; (2) its location; (3) its expressed emotion; or (4) both its location and emotion. While cueing emotion or location resulted in faster response times than cueing unpredictive information, cueing face emotion alone resulted in faster responses than cueing face location alone. Moreover, cueing both stimulus location and emotion resulted in a superadditive reduction of response times compared with cueing location or emotion alone, suggesting that feature-based attention to emotion and spatially selective attention interact to facilitate perception of face stimuli. While categorization of facial expressions was significantly affected by stable identity cues (sex and race) in the face, we found that these interactions were eliminated when uncertainty about facial expression, but not spatial uncertainty about stimulus location, was reduced by predictive cueing. This demonstrates that feature-based attention to facial expression greatly attenuates the need to rely on stable identity cues to interpret facial emotion.
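The superadditivity claim above has a concrete criterion: the response-time reduction from cueing both location and emotion must exceed the sum of the reductions from each cue alone. A minimal check, with hypothetical RTs in milliseconds rather than values from the study:

```python
# Superadditivity check for cueing benefits; all RT values are hypothetical.

def is_superadditive(rt_uncued, rt_location, rt_emotion, rt_both):
    """True if the combined-cue RT benefit exceeds the sum of single-cue benefits."""
    single_cue_benefit = (rt_uncued - rt_location) + (rt_uncued - rt_emotion)
    return (rt_uncued - rt_both) > single_cue_benefit
```

For instance, a 120 ms combined benefit against 40 ms and 50 ms single-cue benefits counts as superadditive; a 90 ms combined benefit against the same single-cue benefits does not.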
11
Im HY, Adams RB, Cushing CA, Boshyan J, Ward N, Kveraga K. Sex-related differences in behavioral and amygdalar responses to compound facial threat cues. Hum Brain Mapp 2018. PMID: 29520882; DOI: 10.1002/hbm.24035.
Abstract
During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both threat presence and its location, whereas fear with direct gaze sends an incongruent cue, leaving threat location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via the magnocellular (M) pathway and reflective processing via the parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotional perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by the observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted the luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify the emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity to both fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to functional reactivity differences, females had proportionately greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with the observer's sex.
Affiliation(s)
- Hee Yeon Im
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Harvard Medical School, Boston, Massachusetts
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, State College, Pennsylvania
- Cody A Cushing
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- Jasmine Boshyan
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Harvard Medical School, Boston, Massachusetts
- Noreen Ward
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts
- Kestutis Kveraga
- Athinoula A. Martinos Center, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Radiology, Harvard Medical School, Boston, Massachusetts
12
Im HY, Albohn DN, Steiner TG, Cushing CA, Adams RB, Kveraga K. Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion. Nat Hum Behav 2017; 1:828-842. PMID: 29226255; DOI: 10.1038/s41562-017-0225-z.
Abstract
In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading "crowd emotion". We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions.
Affiliation(s)
- Hee Yeon Im
- Department of Radiology, Harvard Medical School, Charlestown, MA, 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Daniel N Albohn
- Department of Psychology, The Pennsylvania State University, State College, PA, 16802, USA
- Troy G Steiner
- Department of Psychology, The Pennsylvania State University, State College, PA, 16802, USA
- Cody A Cushing
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, USA
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, State College, PA, 16802, USA
- Kestutis Kveraga
- Department of Radiology, Harvard Medical School, Charlestown, MA, 02129, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, 02129, USA
13
Mimicking emotions. Curr Opin Psychol 2017; 17:151-155. PMID: 28950963; DOI: 10.1016/j.copsyc.2017.07.008.
Abstract
Emotional mimicry refers to the tendency to mimic others' emotions in order to share minds. We present new evidence supporting our Contextual Model of Emotional Mimicry, showing that emotional mimicry serves affiliative goals that vary across social contexts. This also implies the opposite, namely that we (unconsciously) refrain from mimicking others' emotions when we want to keep emotional distance. Facial mimicry of emotions is further suggested to be a largely top-down process, based on goals and representations rather than on merely watching others' facial movements.
14
Kesner L, Horáček J. Empathy-Related Responses to Depicted People in Art Works. Front Psychol 2017; 8:228. PMID: 28286487; PMCID: PMC5323429; DOI: 10.3389/fpsyg.2017.00228.
Abstract
Existing theories of empathic response to visual art works postulate the primacy of automatic embodied reaction to images based on mirror neuron mechanisms. Arguing for a more inclusive concept of empathy-related response and integrating four distinct bodies of literature, we discuss contextual and personal factors that modulate empathic response to depicted people. We then present an integrative model of empathy-related responses to depicted people in art works. The model assumes that a response to empathy-eliciting figural artworks engages the dynamic interaction of two mutually interlinked sets of processes: socio-affective/cognitive processing, related to person perception, and esthetic processing, primarily concerned with esthetic appreciation and judgment and with attention to non-social aspects of the image. The model predicts that the specific pattern of interaction between empathy-related and esthetic processing is co-determined by several sets of factors: (i) the viewer's individual characteristics, (ii) context variables (which include various modes of priming by narratives and other images), (iii) multidimensional features of the image, and (iv) aspects of the viewer's response. Finally, we propose that the model is implemented by the interaction of functionally connected brain networks involved in socio-cognitive and esthetic processing.
Affiliation(s)
- Ladislav Kesner
- Applied Neurosciences and Brain Imaging, National Institute of Mental Health, Klecany, Czechia; Department of Art History, Masaryk University Brno, Brno, Czechia
- Jiří Horáček
- Applied Neurosciences and Brain Imaging, National Institute of Mental Health, Klecany, Czechia
15
Marchi F, Newen A. Cognitive penetrability and emotion recognition in human facial expressions. Front Psychol 2015; 6:828. PMID: 26150796; PMCID: PMC4473593; DOI: 10.3389/fpsyg.2015.00828.
Abstract
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle, etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion.
Affiliation(s)
- Francesco Marchi
- Department of Philosophy II, Ruhr University Bochum, Bochum, Germany
- Albert Newen
- Department of Philosophy II, Ruhr University Bochum, Bochum, Germany