1. Sato W, Shimokawa K, Uono S, Minato T. Mentalistic attention orienting triggered by android eyes. Sci Rep 2024; 14:23143. PMID: 39367157; PMCID: PMC11452688; DOI: 10.1038/s41598-024-75063-3.
Abstract
The eyes play a special role in human communications. Previous psychological studies have reported reflexive attention orienting in response to another individual's eyes during live interactions. Although robots are expected to collaborate with humans in various social situations, it remains unclear whether robot eyes have the potential to trigger attention orienting similarly to human eyes, specifically based on mental attribution. We investigated this issue in a series of experiments using a live gaze-cueing paradigm with an android. In Experiment 1, the non-predictive cue was the eyes and head of an android placed in front of human participants. Light-emitting diodes in the periphery served as target signals. The reaction times (RTs) required to localize validly cued targets were faster than those for invalidly cued targets for both types of cues. In Experiment 2, the gaze direction of the android eyes changed before the peripheral target lights appeared with or without barriers that made the targets non-visible, such that the android did not attend to them. The RTs were faster for validly cued targets only when there were no barriers. In Experiment 3, the targets were changed from lights to sounds, which the android could attend to even in the presence of barriers. The RTs to the target sounds were faster with valid cues, irrespective of the presence of barriers. These results suggest that android eyes may automatically induce attention orienting in humans based on mental state attribution.
Affiliation(s)
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan.
- Koh Shimokawa
- Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
- Shota Uono
- Division of Disability Sciences, Institute of Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, 305-8572, Ibaraki, Japan
- Takashi Minato
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
2. Becker C, Conduit R, Chouinard PA, Laycock R. EEG correlates of static and dynamic face perception: the role of naturalistic motion. Neuropsychologia 2024:108986. PMID: 39218391; DOI: 10.1016/j.neuropsychologia.2024.108986.
Abstract
Much of our understanding of how the brain processes dynamic faces comes from research that compares static photographs to dynamic morphs, which exhibit simplified, computer-generated motion. By comparing static, video recorded, and dynamic morphed expressions, we aim to identify the neural correlates of naturalistic facial dynamism, using time-domain and time-frequency analysis. Dynamic morphs were made from the neutral and peak frames of video recorded transitions of happy and fearful expressions, which retained expression change and removed asynchronous and non-linear features of naturalistic facial motion. We found that dynamic morphs elicited increased N400 amplitudes and lower LPP amplitudes compared to other stimulus types. Video recordings elicited higher LPP amplitudes and greater frontal delta activity compared to other stimuli. Thematic analysis of participant interviews using a large language model revealed that participants found it difficult to assess the genuineness of morphed expressions, and easier to analyse the genuineness of happy compared to fearful expressions. Our findings suggest that animating real faces with artificial motion may violate expectations (N400) and reduce the social salience (LPP) of dynamic morphs. Results also suggest that delta oscillations in the frontal region may be involved with the perception of naturalistic facial motion in happy and fearful expressions. Overall, our findings highlight the sensitivity of neural mechanisms required for face perception to subtle changes in facial motion characteristics, which has important implications for neuroimaging research using faces with simplified motion.
Affiliation(s)
- Casey Becker
- RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
- Russell Conduit
- RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
- Philippe A Chouinard
- La Trobe University, Department of Psychology, Counselling, & Therapy, 75 Kingsbury Drive, Bundoora, Victoria, 3086, Australia.
- Robin Laycock
- RMIT University, School of Health & Biomedical Sciences, STEM College, 225-254 Plenty Rd, Bundoora, Victoria, 3083, Australia.
3. Lee HK, Tong SX. Impaired inhibitory control when processing real but not cartoon emotional faces in autistic children: Evidence from an event-related potential study. Autism Res 2024; 17:1556-1571. PMID: 38840481; DOI: 10.1002/aur.3176.
Abstract
Impaired socioemotional functioning characterizes autistic children, but does weak inhibitory control underlie their socioemotional difficulty? This study addressed this question by examining whether and, if so, how inhibitory control is affected by face realism and emotional valence in school-age autistic and neurotypical children. Fifty-two autistic and 52 age-matched neurotypical controls aged 10-12 years completed real and cartoon emotional face Go/Nogo tasks while event-related potentials (ERPs) were recorded. The analyses of inhibition-emotion components (i.e., N2, P3, and LPP) and the face-specific N170 revealed that autistic children showed greater N2 amplitudes while inhibiting Nogo trials and greater P3/LPP and late LPP amplitudes for real but not cartoon emotional faces. Moreover, autistic children exhibited a reduced N170 to real face emotions only. Furthermore, correlation results showed that better behavioral inhibition and emotion recognition in autistic children were associated with a reduced N170. These findings suggest that the neural mechanisms of inhibitory control in autistic children are less efficient and more disrupted during real face processing, which may affect their age-appropriate socioemotional development.
Affiliation(s)
- Hyun Kyung Lee
- Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong
- Shelley Xiuli Tong
- Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong
4. Wu J, Du X, Liu Y, Tang W, Xue C. How the Degree of Anthropomorphism of Human-like Robots Affects Users' Perceptual and Emotional Processing: Evidence from an EEG Study. Sensors (Basel) 2024; 24:4809. PMID: 39123856; PMCID: PMC11314648; DOI: 10.3390/s24154809.
Abstract
Anthropomorphized robots are increasingly integrated into human social life, playing vital roles across various fields. This study aimed to elucidate the neural dynamics underlying users' perceptual and emotional responses to robots with varying levels of anthropomorphism. We investigated event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) elicited while participants viewed, perceived, and rated the affection of robots with low (L-AR), medium (M-AR), and high (H-AR) levels of anthropomorphism. EEG data were recorded from 42 participants. Results revealed that H-AR induced a more negative N1 and increased frontal theta power, but decreased P2 in early time windows. Conversely, M-AR and L-AR elicited larger P2 compared to H-AR. In later time windows, M-AR generated greater late positive potential (LPP) and enhanced parietal-occipital theta oscillations than H-AR and L-AR. These findings suggest distinct neural processing phases: early feature detection and selective attention allocation, followed by later affective appraisal. Early detection of facial form and animacy, with P2 reflecting higher-order visual processing, appeared to correlate with anthropomorphism levels. This research advances the understanding of emotional processing in anthropomorphic robot design and provides valuable insights for robot designers and manufacturers regarding emotional and feature design, evaluation, and promotion of anthropomorphic robots.
Affiliation(s)
- Chengqi Xue
- School of Mechanical Engineering, Southeast University, Suyuan Avenue 79, Nanjing 211189, China; (J.W.); (X.D.); (Y.L.); (W.T.)
5. Achour-Benallegue A, Pelletier J, Kaminski G, Kawabata H. Facial icons as indexes of emotions and intentions. Front Psychol 2024; 15:1356237. PMID: 38807962; PMCID: PMC11132266; DOI: 10.3389/fpsyg.2024.1356237.
Abstract
Various objects and artifacts incorporate representations of faces, encompassing artworks like portraits as well as ethnographic or industrial artifacts such as masks or humanoid robots. These representations exhibit diverse degrees of human-likeness, serving different functions and objectives. Despite these variations, they share common features, particularly facial attributes that serve as building blocks for facial expressions, an effective means of communicating emotions. To provide a unified conceptualization for this broad spectrum of face representations, we propose the term "facial icons," drawing upon Peirce's semiotic concepts. Additionally, based on these semiotic principles, we posit that facial icons function as indexes of emotions and intentions, and introduce a significant anthropological theory aligning with our proposition. Subsequently, we support our assertions by examining processes related to face and facial expression perception, as well as sensorimotor simulation processes involved in discerning others' mental states, including emotions. Our argumentation integrates cognitive and experimental evidence, reinforcing the pivotal role of facial icons in conveying mental states.
Affiliation(s)
- Amel Achour-Benallegue
- Cognition, Environment and Communication Research Team, Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Kashiwa, Japan
- Jérôme Pelletier
- Institut Jean Nicod, Département d'études cognitives, ENS, EHESS, CNRS, PSL University, Paris, France
- Department of Philosophy, University of Western Brittany, Brest, France
- Gwenaël Kaminski
- Cognition, Langues, Langage, Ergonomie, Université de Toulouse, Toulouse, France
- Institut Universitaire de France, Paris, France
- Hideaki Kawabata
- Department of Psychology, Faculty of Letters, Keio University, Tokyo, Japan
6. Sagehorn M, Johnsdorf M, Kisker J, Gruber T, Schöne B. Electrophysiological correlates of face and object perception: A comparative analysis of 2D laboratory and virtual reality conditions. Psychophysiology 2024; 61:e14519. PMID: 38219244; DOI: 10.1111/psyp.14519.
Abstract
Human face perception is a specialized visual process with inherent social significance. The neural mechanisms reflecting this intricate cognitive process have evolved in spatially complex and emotionally rich environments. Previous research using virtual reality (VR) to transfer an established face perception paradigm to realistic conditions has shown that the functional properties of face-sensitive neural correlates typically observed in the laboratory are attenuated outside the original modality. The present study builds on these results by comparing the perception of persons and objects under conventional laboratory (PC) and realistic conditions in VR. Adhering to established paradigms, both the PC and VR modalities featured images of persons and cars alongside standard control images. To investigate the individual stages of realistic face processing, response times, the typical face-sensitive N170 component, and relevant subsequent components (L1, L2; pre-, post-response) were analyzed within and between modalities. The between-modality comparison of response times and component latencies revealed generally faster processing under realistic conditions. However, the obtained N170 latency and amplitude differences showed reduced discriminative capacity under realistic conditions during this early stage. These findings suggest that the effects commonly observed in the lab are specific to monitor-based presentations. Analyses of later and response-locked components showed that specific neural mechanisms for identification and evaluation are employed when perceiving the stimuli under realistic conditions, reflected in discernible amplitude differences in response to faces and objects beyond the basic perceptual features. Conversely, the results do not provide evidence for comparable stimulus-specific perceptual processing pathways when viewing pictures of the stimuli under conventional laboratory conditions.
Affiliation(s)
- Merle Sagehorn
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Marike Johnsdorf
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Joanna Kisker
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Thomas Gruber
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Benjamin Schöne
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
7. Chen Y, Stephani T, Bagdasarian MT, Hilsmann A, Eisert P, Villringer A, Bosse S, Gaebler M, Nikulin VV. Realness of face images can be decoded from non-linear modulation of EEG responses. Sci Rep 2024; 14:5683. PMID: 38454099; PMCID: PMC10920746; DOI: 10.1038/s41598-024-56130-1.
Abstract
Artificially created human faces play an increasingly important role in our digital world. However, the so-called uncanny valley effect may cause people to perceive highly, yet not perfectly, human-like faces as eerie, posing challenges for interaction with virtual agents. At the same time, the neurocognitive underpinnings of the uncanny valley effect remain elusive. Here, we utilized an electroencephalography (EEG) dataset of steady-state visual evoked potentials (SSVEP) in which participants were presented with human face images of different stylization levels ranging from simplistic cartoons to actual photographs. Assessing neuronal responses both in the frequency and time domain, we found a non-linear relationship between SSVEP amplitudes and stylization level, that is, the most stylized cartoon images and the real photographs evoked stronger responses than images with medium stylization. Moreover, realness of even highly similar stylization levels could be decoded from the EEG data with task-related component analysis (TRCA). Importantly, we also account for confounding factors, such as the size of the stimulus face's eyes, which previously have not been adequately addressed. Together, this study provides a basis for future research and neuronal benchmarking of real-time detection of face realness regarding three aspects: SSVEP-based neural markers, efficient classification methods, and low-level stimulus confounders.
Affiliation(s)
- Yonghao Chen
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Tilman Stephani
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Anna Hilsmann
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Peter Eisert
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic of Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Sebastian Bosse
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Michael Gaebler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Vadim V Nikulin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
8. Schindler S, Bruchmann M, Straube T. Beyond facial expressions: A systematic review on effects of emotional relevance of faces on the N170. Neurosci Biobehav Rev 2023; 153:105399. PMID: 37734698; DOI: 10.1016/j.neubiorev.2023.105399.
Abstract
The N170 is the most prominent electrophysiological signature of face processing. While facial expressions reliably modulate the N170, there is considerable variance in N170 modulations by other sources of emotional relevance. Therefore, we systematically review and discuss this research area using different methods to manipulate the emotional relevance of inherently neutral faces. These methods were categorized into (1) existing pre-experimental affective person knowledge (e.g., negative attitudes towards outgroup faces), (2) experimentally instructed affective person knowledge (e.g., negative person information), (3) contingency-based affective learning (e.g., fear-conditioning), or (4) the immediate affective context (e.g., emotional information directly preceding the face presentation). For all categories except the immediate affective context category, the majority of studies reported significantly increased N170 amplitudes depending on the emotional relevance of faces. Furthermore, the potentiated N170 was observed across different attention conditions, supporting the role of the emotional relevance of faces on the early prioritized processing of configural facial information, regardless of low-level differences. However, we identified several open research questions and suggest venues for further research.
Affiliation(s)
- Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany.
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
9. Eiserbeck A, Maier M, Baum J, Abdel Rahman R. Deepfake smiles matter less: the psychological and neural impact of presumed AI-generated faces. Sci Rep 2023; 13:16111. PMID: 37752242; PMCID: PMC10522659; DOI: 10.1038/s41598-023-42802-x.
Abstract
High-quality AI-generated portraits ("deepfakes") are becoming increasingly prevalent. Understanding the responses they evoke in perceivers is crucial in assessing their societal implications. Here we investigate the impact of the belief that depicted persons are real or deepfakes on psychological and neural measures of human face perception. Using EEG, we tracked participants' (N = 30) brain responses to real faces showing positive, neutral, and negative expressions, after being informed that they are either real or fake. Smiling faces marked as fake appeared less positive, as reflected in expression ratings, and induced slower evaluations. Whereas presumed real smiles elicited canonical emotion effects with differences relative to neutral faces in the P1 and N170 components (markers of early visual perception) and in the EPN component (indicative of reflexive emotional processing), presumed deepfake smiles showed none of these effects. Additionally, only smiles presumed as fake showed enhanced LPP activity compared to neutral faces, suggesting more effortful evaluation. Negative expressions induced typical emotion effects, whether considered real or fake. Our findings demonstrate a dampening effect on perceptual, emotional, and evaluative processing of presumed deepfake smiles, but not angry expressions, adding new specificity to the debate on the societal impact of AI-generated content.
Affiliation(s)
- Anna Eiserbeck
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany.
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany.
- Martin Maier
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany
- Julia Baum
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Rasha Abdel Rahman
- Department of Psychology, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany
- Cluster of Excellence Science of Intelligence, Technische Universität Berlin, Berlin, Germany
10. Treal T, Jackson PL, Meugnot A. Biological postural oscillations during facial expression of pain in virtual characters modulate early and late ERP components associated with empathy: A pilot study. Heliyon 2023; 9:e18161. PMID: 37560681; PMCID: PMC10407205; DOI: 10.1016/j.heliyon.2023.e18161.
Abstract
There is a surge in the use of virtual characters in cognitive sciences. However, their behavioural realism remains to be perfected in order to trigger more spontaneous and socially expected reactions in users. It was recently shown that biological postural oscillations (idle motion) are a key ingredient in enhancing the empathic response to a virtual character's facial pain expression. The objective of this study was to examine, using electroencephalography, whether idle motion would modulate the neural response associated with empathy when viewing a pain-expressing virtual character. Twenty healthy young adults were shown video clips of a virtual character displaying a facial expression of pain while its body was either static (Still condition) or animated with pre-recorded human postural oscillations (Idle condition). Participants rated the virtual human's facial expression of pain as significantly more intense in the Idle condition compared to the Still condition. Both the early (N2-N3) and the late (rLPP) event-related potentials (ERPs), associated with distinct dimensions of empathy (affective resonance and perspective-taking, respectively), were greater in the Idle condition than in the Still condition. These findings confirm the potential of idle motion to increase empathy for pain expressed by virtual characters. They are discussed in line with contemporary empathy models in relation to human-machine interactions.
Affiliation(s)
- Thomas Treal
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
- Philip L. Jackson
- École de Psychologie, Université Laval, Québec, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, Canada
- CERVO Research Center, Québec, Canada
- Aurore Meugnot
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
11. Nussbaum C, Pöhlmann M, Kreysa H, Schweinberger SR. Perceived naturalness of emotional voice morphs. Cogn Emot 2023; 37:731-747. PMID: 37104118; DOI: 10.1080/02699931.2023.2200920.
Abstract
Research into voice perception benefits from manipulation software to gain experimental control over acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli. To address this for the domain of emotion perception, we collected ratings of perceived naturalness and emotionality on voice morphs expressing different emotions through either F0 or Timbre only. In two experiments, we compared two different morphing approaches, using either neutral voices or emotional averages as emotionally non-informative reference stimuli. As expected, parameter-specific voice morphing reduced perceived naturalness. However, the perceived naturalness of F0 and Timbre morphs was comparable when averaged emotions served as the reference, potentially making this approach more suitable for future research. Crucially, there was no relationship between ratings of emotionality and naturalness, suggesting that the perception of emotion was not substantially affected by a reduction of voice naturalness. We hold that while these findings advocate parameter-specific voice morphing as a suitable tool for research on vocal emotion perception, great care should be taken in producing ecologically valid stimuli.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Manuel Pöhlmann
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Helene Kreysa
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, Switzerland
12. Sagehorn M, Johnsdorf M, Kisker J, Sylvester S, Gruber T, Schöne B. Real-life relevant face perception is not captured by the N170 but reflected in later potentials: A comparison of 2D and virtual reality stimuli. Front Psychol 2023; 14:1050892. PMID: 37057177; PMCID: PMC10086431; DOI: 10.3389/fpsyg.2023.1050892.
Abstract
The perception of faces is one of the most specialized visual processes in the human brain and has been investigated by means of the early event-related potential component N170. However, face perception has mostly been studied in conventional laboratory settings, i.e., monitor setups offering a rather distal presentation of faces as planar 2D images. Increasing spatial proximity through virtual reality (VR) makes it possible to present 3D, real-life-sized persons at a personal distance to participants, thus creating a feeling of social involvement and adding self-relevant value to the presented faces. The present study compared the perception of persons under conventional laboratory conditions (PC) with realistic conditions in VR. Paralleling standard designs, pictures of unknown persons and standard control images were presented in a PC and a VR modality. To investigate how the mechanisms of face perception differ under realistic conditions from those under conventional laboratory conditions, the typical face-specific N170 and subsequent components were analyzed in both modalities. Consistent with previous laboratory research, the N170 lost discriminatory power when translated to realistic conditions, as it only discriminated faces and controls under laboratory conditions. Most interestingly, analysis of the later component (230-420 ms) revealed more differentiated face-specific processing in VR, as indicated by distinctive, stimulus-specific topographies. Complemented by source analysis, the results on later latencies show that face-specific neural mechanisms are applied only under realistic conditions. (A video abstract is available in the Supplementary material and via YouTube: https://youtu.be/TF8wiPUrpSY).
Affiliation(s)
- Merle Sagehorn
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Correspondence: Merle Sagehorn
- Marike Johnsdorf
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Joanna Kisker
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Sophia Sylvester
- Semantic Information Systems Research Group, Institute of Computer Science, Osnabrück University, Osnabrück, Germany
- Thomas Gruber
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Benjamin Schöne
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
Collapse
13
Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Computers in Human Behavior Reports 2022. [DOI: 10.1016/j.chbr.2022.100263]
14
Dawel A, Miller EJ, Horsburgh A, Ford P. A systematic survey of face stimuli used in psychological research 2000-2020. Behav Res Methods 2022; 54:1889-1901. [PMID: 34731426] [DOI: 10.3758/s13428-021-01705-3]
Abstract
For decades, psychology has relied on highly standardized images to understand how people respond to faces. Many of these stimuli are rigorously generated and supported by excellent normative data; as such, they have played an important role in the development of face science. However, there is now clear evidence that testing with ambient images (i.e., naturalistic images "in the wild") and including expressions that are spontaneous can lead to new and important insights. To precisely quantify the extent to which our current knowledge base has relied on standardized and posed stimuli, we systematically surveyed the face stimuli used in 12 key journals in this field across 2000-2020 (N = 3374 articles). Although a small number of posed expression databases continue to dominate the literature, the use of spontaneous expressions seems to be increasing. However, there has been no increase in the use of ambient or dynamic stimuli over time. The vast majority of articles have used highly standardized and nonmoving pictures of faces. An emerging trend is that virtual faces are being used as stand-ins for human faces in research. Overall, the results of the present survey highlight that there has been a significant imbalance in favor of standardized face stimuli. We argue that psychology would benefit from a more balanced approach because ambient and spontaneous stimuli have much to offer. We advocate a cognitive ethological approach that involves studying face processing in natural settings as well as the lab, incorporating more stimuli from "the wild".
Affiliation(s)
- Amy Dawel, Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Elizabeth J Miller, Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Annabel Horsburgh, Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
- Patrice Ford, Research School of Psychology (building 39), The Australian National University, Canberra, ACT 2600, Australia
15
Tanda T, Toyomori K, Kawahara JI. Attentional biases toward real images and drawings of negative faces. Acta Psychol (Amst) 2022; 229:103665. [PMID: 35843198] [DOI: 10.1016/j.actpsy.2022.103665]
Abstract
The allocation of attention is affected by internal emotional states, such as anxiety and depression. The attention captured by real images of negative faces can be quantified with emotional probe tasks. The present study investigated whether attentional bias toward drawings of negative faces (line drawings and cartoon faces) differs from that toward real faces. Non-clinical university students indicated their levels of anxiety and depression via self-report questionnaires and completed a probe discrimination task under three face image conditions in a between-participants design. Significant correlations were found between bias scores and scores on the self-reported BDI-II under the real face condition. However, bias scores for the two types of face drawings were only weakly correlated with self-report scores. In our probe task investigating attentional bias to facial stimuli in nonclinical adults, the relationship between depression and attentional bias to negative faces was stronger for real faces than for face drawings.
Affiliation(s)
- Tomoyuki Tanda, Department of Psychology, Hokkaido University, Sapporo, Japan
- Kai Toyomori, Department of Psychology, Hokkaido University, Sapporo, Japan
- Jun I Kawahara, Department of Psychology, Hokkaido University, Sapporo, Japan
16
Moshel ML, Robinson AK, Carlson TA, Grootswagers T. Are you for real? Decoding realistic AI-generated faces from neural activity. Vision Res 2022; 199:108079. [PMID: 35749833] [DOI: 10.1016/j.visres.2022.108079]
Abstract
Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images, using behaviour and brain imaging. We found that we could reliably decode AI-generated faces from people's neural activity. However, while at the group level people performed near chance at classifying real and realistic fake faces, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key to determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at https://osf.io/n2z73/.
Affiliation(s)
- Michoel L Moshel, School of Psychology, University of Sydney, NSW, Australia; School of Psychology, Macquarie University, NSW, Australia
- Amanda K Robinson, School of Psychology, University of Sydney, NSW, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers, School of Psychology, University of Sydney, NSW, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
17
Pitt KM, Mansouri A, Wang Y, Zosky J. Toward P300-brain-computer interface access to contextual scene displays for AAC: An initial exploration of context and asymmetry processing in healthy adults. Neuropsychologia 2022; 173:108289. [PMID: 35690117] [DOI: 10.1016/j.neuropsychologia.2022.108289]
Abstract
Brain-computer interfaces for augmentative and alternative communication (BCI-AAC) may help overcome physical barriers to AAC access. Traditionally, visually based P300-BCI-AAC displays utilize a symmetrical grid layout. Contextual scene displays are composed of context-rich images (e.g., photographs) and may support AAC success. However, contextual scene displays contrast starkly with the standard P300 grid approach. Understanding the neurological processes from which BCI-AAC devices function is crucial to human-centered computing for BCI-AAC. Therefore, the aim of this multidisciplinary investigation was to provide an initial exploration of contextual scene use for BCI-AAC. Methods: Participants completed three experimental conditions to evaluate the effects of item arrangement asymmetry and context on P300-based BCI-AAC signals and offline BCI-AAC accuracy: 1) the full contextual scene condition, 2) the asymmetrical item arrangement without context condition, and 3) the grid condition. Following each condition, participants completed task-evaluation ratings (e.g., engagement). Offline BCI-AAC accuracy for each condition was evaluated using cross-validation. Results: Display asymmetry significantly decreased P300 latency in the centro-parietal cluster. P300 amplitudes in the frontal cluster were decreased, though nonsignificantly. Display context significantly increased N170 amplitudes in the occipital cluster, and N400 amplitudes in the centro-parietal and occipital clusters. Scenes were rated as more visually appealing and engaging, and offline BCI-AAC performance for the scene condition was not statistically different from the grid standard. Conclusion: Findings support the feasibility of incorporating scene-based displays in P300-BCI-AAC development to help provide communication for individuals with minimal or emerging language and literacy skills.
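The offline accuracy evaluation mentioned above follows the usual cross-validation recipe: hold out one fold of trials, fit on the rest, and score the held-out trials. A stdlib-only Python sketch, with a toy nearest-class-mean classifier standing in for a real P300 classifier; all function names and the classifier itself are illustrative assumptions, not the authors' implementation:

```python
import random

def kfold_accuracy(X, y, fit, predict, k=5, seed=0):
    """Offline k-fold cross-validated classification accuracy."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)          # shuffle trial order once
    folds = [idx[i::k] for i in range(k)]     # k roughly equal folds
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct += sum(predict(model, X[i]) == y[i] for i in fold)
    return correct / len(X)

# Toy stand-in classifier: assign each trial to the nearest class mean.
def fit_ncm(X, y):
    return {c: sum(x[0] for x, lab in zip(X, y) if lab == c) /
               sum(1 for lab in y if lab == c)
            for c in set(y)}

def predict_ncm(means, x):
    return min(means, key=lambda c: abs(means[c] - x[0]))
```

With well-separated toy features, e.g. `X = [[0.0], [0.1], [0.05], [1.0], [0.9], [0.95]]` and `y = [0, 0, 0, 1, 1, 1]`, `kfold_accuracy(X, y, fit_ncm, predict_ncm, k=3)` returns 1.0.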
Affiliation(s)
- Kevin M Pitt, Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
- Amirsalar Mansouri, Department of Electrical and Computer Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
- Yingying Wang, Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
- Joshua Zosky, Department of Psychology, University of Nebraska-Lincoln, Lincoln, NE, USA
18
Diel A, Weigelt S, MacDorman KF. A meta-analysis of the uncanny valley's independent and dependent variables. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3470742]
Abstract
The uncanny valley (UV) effect is a negative affective reaction to human-looking artificial entities. It hinders comfortable, trust-based interactions with android robots and virtual characters. Despite extensive research, a consensus has not formed on its theoretical basis or methodologies. We conducted a meta-analysis to assess operationalizations of human likeness (independent variable) and the UV effect (dependent variable). Of 468 studies, 72 met the inclusion criteria. These studies employed 10 different stimulus creation techniques, 39 affect measures, and 14 indirect measures. Based on 247 effect sizes, a three-level meta-analysis model revealed the UV effect had a large effect size, Hedges' g = 1.01 [0.80, 1.22]. A mixed-effects meta-regression model with creation technique as the moderator variable revealed face distortion produced the largest effect size, g = 1.46 [0.69, 2.24], followed by distinct entities, g = 1.20 [1.02, 1.38], realism render, g = 0.99 [0.62, 1.36], and morphing, g = 0.94 [0.64, 1.24]. Affective indices producing the largest effects were threatening, likable, aesthetics, familiarity, and eeriness, and indirect measures were dislike frequency, categorization reaction time, like frequency, avoidance, and viewing duration. This meta-analysis, the first on the UV effect, provides a methodological foundation and design principles for future research.
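For reference, the bias-corrected standardized mean difference reported above (Hedges' g) can be computed from two group summaries. A minimal sketch of the standard formula (pooled SD times the small-sample correction J), not the paper's three-level meta-analytic model:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J."""
    df = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / pooled_sd          # Cohen's d from pooled SD
    j = 1 - 3 / (4 * df - 1)           # approximate bias correction
    return j * d
```

For two groups of 20 with means 10 and 8 and SD 2, `hedges_g(10, 2, 20, 8, 2, 20)` gives d = 1.0 shrunk by J ≈ 0.980, i.e. g ≈ 0.98.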
Affiliation(s)
- Alexander Diel, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Sarah Weigelt, Department of Vision, Visual Impairments & Blindness, Faculty of Rehabilitation Sciences, Technical University of Dortmund, Dortmund, Germany
- Karl F. MacDorman, School of Informatics and Computing, Indiana University, Indianapolis, IN, USA
19
Sarauskyte L, Monciunskaite R, Griksiene R. The role of sex and emotion on emotion perception in artificial faces: An ERP study. Brain Cogn 2022; 159:105860. [PMID: 35339916] [DOI: 10.1016/j.bandc.2022.105860]
Abstract
Sex has a significant impact on the perception of emotional expressions. However, it remains unclear whether sex influences the perception of emotions in artificial faces, which are becoming popular in emotion research. We used an emotion recognition task with FaceGen faces portraying six basic emotions to investigate the effect of sex and emotion on behavioural and electrophysiological parameters. 71 participants performed the task while EEG was recorded. The recognition of sadness was the poorest; however, females recognized sadness better than males. ERP results indicated that fear, disgust, and anger evoked higher amplitudes of the late positive potential over the left parietal region compared to neutral expressions. Females demonstrated higher values of global field power than males. The interaction between sex and emotion on ERPs was not significant. The results of our study may be valuable for future therapies and research, as they point to possibly distinct processing of emotions and potential sex differences in the recognition of emotional expressions in FaceGen faces.
Affiliation(s)
- Livija Sarauskyte, Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
- Rasa Monciunskaite, Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
- Ramune Griksiene, Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
20
Yu Z, Kritikos A, Pegna AJ. Enhanced early ERP responses to looming angry faces. Biol Psychol 2022; 170:108308. [PMID: 35271956] [DOI: 10.1016/j.biopsycho.2022.108308]
Abstract
Although the brain is known to process threatening emotional stimuli and looming motion rapidly, little is known about how emotion and motion interact. To address this question, two experiments were carried out that presented angry and neutral emotional faces on a depth-cued background that induced the perception of distance, or on a non-cued background. Furthermore, faces either expanded or contracted in size such that they appeared to approach or recede from the viewer. EEG/ERP measures were used to identify the time course of brain activity for these looming and receding, angry and neutral emotional faces. The results of both experiments revealed that the P1 was enhanced by looming angry faces on the depth-cued background, compared to neutral approaching faces as well as all receding faces, indicating an early interaction of emotion and motion within 100 ms of presentation. Angry expressions were also found to enhance the N170 regardless of movement. These findings suggest that the processing of threat and looming motion interact at the very earliest stages of visual processing. Furthermore, as the modulating effect of looming motion on angry expressions arose only on the depth-cued background, the findings highlight the importance of approaching movement rather than mere increases in the retinal size of the stimuli.
Affiliation(s)
- Zhou Yu, School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Ada Kritikos, School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
- Alan J Pegna, School of Psychology, The University of Queensland, Saint Lucia, Brisbane, QLD 4068, Australia
21
Egger S. Susceptibility to Ingroup Influence in Adolescents With Intellectual Disability: A Minimal Group Experiment on Social Judgment Making. Front Psychol 2021; 12:671910. [PMID: 34512438] [PMCID: PMC8423920] [DOI: 10.3389/fpsyg.2021.671910]
Abstract
Adolescents with intellectual disability (ID) experience challenges and uncertainty when making judgments about other people's intentions. In an attempt to achieve certainty, they might exhibit judgment tendencies that differ from those of typically developing adolescents. This study investigated social judgment making in adolescents with ID (n = 34, M age = 14.89 years, SD = 1.41 years) compared with chronological age-matched adolescents without ID (n = 34, M age = 14.68 years, SD = 1.15 years) and mental age (MA)-matched children (n = 34, M age = 7.93 years, SD = 0.64 years). Participants used a computer-based task to judge the hostility of persons (fictitious characters). Adolescents with ID were found to make more polarizing judgments (i.e., either positive or negative, as opposed to moderate judgments) and were more likely to be guided by the opinions of a fictitious peer ingroup (minimal group) compared with adolescents without ID. No such differences were found between adolescents with ID and MA-matched children. The results are discussed in terms of scientific and practical implications.
Affiliation(s)
- Sara Egger, Department of Special Needs Education, University of Fribourg, Fribourg, Switzerland
22
Maquate K, Knoeferle P. Integration of Social Context vs. Linguistic Reference During Situated Language Processing. Front Psychol 2021; 12:547360. [PMID: 34408686] [PMCID: PMC8365155] [DOI: 10.3389/fpsyg.2021.547360]
Abstract
Research findings on language comprehension suggest that many kinds of non-linguistic cues can rapidly affect language processing. Extant processing accounts of situated language comprehension model these rapid effects and are only beginning to accommodate the role of non-linguistic emotional cues. To begin with a detailed characterization of distinct cues and their relative effects, three visual-world eye-tracking experiments assessed the relative importance of two cue types (action depictions vs. emotional facial expressions) as well as the effects of the degree of naturalness of social (facial) cues (smileys vs. natural faces). We expected to replicate previously reported rapid effects of referentially mediated actions. In addition, we assessed distinct world-language relations. If how a cue is conveyed matters for its effect, then a verb referencing an action depiction should elicit a stronger immediate effect on visual attention and language comprehension than a speaker's emotional facial expression. The latter is mediated non-referentially via the emotional connotations of an adverb. The results replicated a pronounced facilitatory effect of action depiction (relative to no action depiction). By contrast, the facilitatory effect of a preceding speaker's emotional face was less pronounced. How the facial emotion was rendered mattered, in that the emotional face effect was present with natural faces (Experiment 2) but not with smileys (Experiment 1). Experiment 3 suggests that contrast, i.e., strongly opposing emotional valence information vs. non-opposing valence information, might matter for the directionality of this effect. These results are the first step toward a more principled account of how distinct visual (social) cues modulate language processing, whereby visual cues that are referenced by language (the depicted action), copresent (the depicted action), and more natural (the natural emotional prime face) tend to exert more pronounced effects.
Affiliation(s)
- Katja Maquate, Psycholinguistics, Institute for German Language and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany
- Pia Knoeferle, Psycholinguistics, Institute for German Language and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany
23
Kegel LC, Brugger P, Frühholz S, Grunwald T, Hilfiker P, Kohnen O, Loertscher ML, Mersch D, Rey A, Sollfrank T, Steiger BK, Sternagel J, Weber M, Jokeit H. Dynamic human and avatar facial expressions elicit differential brain responses. Soc Cogn Affect Neurosci 2021; 15:303-317. [PMID: 32232359] [PMCID: PMC7235958] [DOI: 10.1093/scan/nsaa039]
Abstract
Computer-generated characters, so-called avatars, are widely used in advertising, entertainment, human–computer interaction or as research tools to investigate human emotion perception. However, brain responses to avatar and human faces have scarcely been studied to date. As such, it remains unclear whether dynamic facial expressions of avatars evoke different brain responses than dynamic facial expressions of humans. In this study, we designed anthropomorphic avatars animated with motion tracking and tested whether the human brain processes fearful and neutral expressions in human and avatar faces differently. Our fMRI results showed that fearful human expressions evoked stronger responses than fearful avatar expressions in the ventral anterior and posterior cingulate gyrus, the anterior insula, the anterior and posterior superior temporal sulcus, and the inferior frontal gyrus. Fearful expressions in human and avatar faces evoked similar responses in the amygdala. We did not find different responses to neutral human and avatar expressions. Our results highlight differences, but also similarities in the processing of fearful human expressions and fearful avatar expressions even if they are designed to be highly anthropomorphic and animated with motion tracking. This has important consequences for research using dynamic avatars, especially when processes are investigated that involve cortical and subcortical regions.
Affiliation(s)
- Lorena C Kegel, Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Peter Brugger, Neuropsychology Unit, Valens Rehabilitation Centre, Valens, Switzerland; Department of Psychiatry, Psychotherapy, and Psychosomatics, University Hospital of Psychiatry Zurich, Zurich, Switzerland
- Sascha Frühholz, Department of Psychology, University of Zurich, Zurich, Switzerland
- Oona Kohnen, Swiss Epilepsy Center, CH-8008 Zurich, Switzerland
- Miriam L Loertscher, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland; Department of Psychology, University of Bern, Bern, Switzerland
- Dieter Mersch, Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Joerg Sternagel, Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Michel Weber, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Hennric Jokeit, Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
24
Sollfrank T, Kohnen O, Hilfiker P, Kegel LC, Jokeit H, Brugger P, Loertscher ML, Rey A, Mersch D, Sternagel J, Weber M, Grunwald T. The Effects of Dynamic and Static Emotional Facial Expressions of Humans and Their Avatars on the EEG: An ERP and ERD/ERS Study. Front Neurosci 2021; 15:651044. [PMID: 33967681] [PMCID: PMC8100234] [DOI: 10.3389/fnins.2021.651044]
Abstract
This study aimed to examine whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Subjects passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only in the neutral avatar condition, later potentials (N300 and LPP) differed in both emotional conditions (neutral and fear) and across the presented agents (actor and avatar). In addition, we found that avatar faces elicited significantly stronger reactions than actor faces in theta and alpha oscillations. Theta EEG frequencies in particular responded specifically to visual emotional stimulation and proved sensitive to the emotional content of the face, whereas the alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS and evoke neural effects that differ from those elicited by real faces, even though the avatars were replicas of the human faces and contained similar characteristics in their expression.
Affiliation(s)
- Lorena C. Kegel, Swiss Epilepsy Center, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Hennric Jokeit, Swiss Epilepsy Center, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Peter Brugger, Valens Rehabilitation Centre, Valens, Switzerland; Psychiatric University Hospital Zurich, Zurich, Switzerland
- Miriam L. Loertscher, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Dieter Mersch, Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Joerg Sternagel, Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Michel Weber, Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
25
Differential Facial Articulacy in Robots and Humans Elicit Different Levels of Responsiveness, Empathy, and Projected Feelings. Robotics 2020. [DOI: 10.3390/robotics9040092]
Abstract
Life-like humanoid robots are on the rise, aiming at communicative purposes that resemble humanlike conversation. In human social interaction, facial expression serves important communicative functions. We examined whether a robot's face is similarly important in human-robot communication. Based on emotion research and neuropsychological insights into the parallel processing of emotions, we argue that greater plasticity in the robot's face elicits higher affective responsivity, more closely resembling human-to-human responsiveness than a more static face does. We conducted a 3 (facial plasticity: human vs. facially flexible robot vs. facially static robot) × 2 (treatment: affectionate vs. maltreated) between-subjects experiment. Participants (N = 265; M age = 31.5) were measured for their emotional responsiveness, empathy, and attribution of feelings to the robot. Results showed empathically and emotionally less intense responsivity toward the robots than toward the human, but responses followed similar patterns. Significantly different intensities of feelings and attributions (e.g., pain upon maltreatment) followed facial articulacy. Theoretical implications for underlying processes in human-robot communication are discussed. We theorize that the precedence of emotion and affect over cognitive reflection, which are processed in parallel, triggers the experience of 'because I feel, I believe it's real,' despite awareness of communicating with a robot. By evoking emotional responsiveness, the cognitive awareness of 'it is just a robot' fades into the background and no longer appears relevant.
26
Schindler S, Bruchmann M, Steinweg AL, Moeck R, Straube T. Attentional conditions differentially affect early, intermediate and late neural responses to fearful and neutral faces. Soc Cogn Affect Neurosci 2020; 15:765-774. [PMID: 32701163] [PMCID: PMC7511883] [DOI: 10.1093/scan/nsaa098]
Abstract
The processing of fearful facial expressions is prioritized by the human brain. This priority is maintained across various information processing stages, as evident in early, intermediate and late components of event-related potentials (ERPs). However, emotional modulations are inconsistently reported for these different processing stages. In this pre-registered study, we investigated how feature-based attention differentially affects ERPs to fearful and neutral faces in 40 participants. The tasks required the participants to discriminate either the orientation of lines overlaid onto the face, the sex of the face, or the face's emotional expression, increasing attention to emotion-related features. We found main effects of emotion for the N170, early posterior negativity (EPN) and late positive potential (LPP). While N170 emotional modulations were task-independent, interactions of emotion and task were observed for the EPN and LPP. While EPN emotion effects were found in the sex and emotion tasks, the LPP emotion effect was mainly driven by the emotion task. This study shows that early responses to fearful faces are task-independent (N170) and likely based on low-level and configural information, while during later processing stages, attention to the face (EPN) or, more specifically, to the face's emotional expression (LPP) is crucial for reliably amplified processing of emotional faces.
Affiliation(s)
- Sebastian Schindler: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Münster D-48149, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster D-48149, Germany
- Maximilian Bruchmann: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Münster D-48149, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster D-48149, Germany
- Anna-Lena Steinweg: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Münster D-48149, Germany
- Robert Moeck: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Münster D-48149, Germany
- Thomas Straube: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Münster D-48149, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster D-48149, Germany
27
Schindler S, Bublatzky F. Attention and emotion: An integrative review of emotional face processing as a function of attention. Cortex 2020; 130:362-386. [DOI: 10.1016/j.cortex.2020.06.010]
28
Time-dependent effects of perceptual load on processing fearful and neutral faces. Neuropsychologia 2020; 146:107529. [DOI: 10.1016/j.neuropsychologia.2020.107529]
29
Synthetic-Neuroscore: Using a neuro-AI interface for evaluating generative adversarial networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.069]
30
Perceived match between own and observed models' bodies: influence of face, viewpoints, and body size. Sci Rep 2020; 10:13991. [PMID: 32814786] [PMCID: PMC7438501] [DOI: 10.1038/s41598-020-70856-8]
Abstract
People are generally unable to accurately determine their own body measurements and to translate this knowledge into identifying a model/avatar that best represents their own body. This inability has been related to health problems (e.g. anorexia nervosa) and also has important practical implications (e.g. online retail). Here we aimed to investigate the influence of three basic visual features (face presence, number of viewpoints, and observed model size) on the perceived match between own and observed models' bodies and on attitudes towards these models. Models were real-life models (Experiment 1) or avatar models based on participants' own bodies (Experiment 2). Results in both experiments showed a strong effect of model size, irrespective of participants' own body measurements. When models were randomly presented one by one, participants gave significantly higher ratings to smaller- compared to bigger-sized models. The reverse was true, however, when participants observed and compared models freely, suggesting that the mode of presentation affected participants' judgments. Limited evidence was found for an effect of face presence or number of viewpoints. These results add evidence to research on visual features affecting the ability to match observed bodies with one's own body image, which has biological, clinical, and practical implications.
31
Schindler S, Bruchmann M, Bublatzky F, Straube T. Modulation of face- and emotion-selective ERPs by the three most common types of face image manipulations. Soc Cogn Affect Neurosci 2019; 14:493-503. [PMID: 30972417] [PMCID: PMC6545565] [DOI: 10.1093/scan/nsz027]
Abstract
In neuroscientific studies, the naturalness of face presentation differs: a third of published studies use close-up full-coloured faces, a third use close-up grey-scaled faces, and another third employ cutout grey-scaled faces. Whether and how these methodological choices affect emotion-sensitive components of the event-related brain potentials (ERPs) is as yet unclear. Therefore, this pre-registered study examined ERP modulations to close-up full-coloured and grey-scaled faces as well as cutout fearful and neutral facial expressions, while attention was directed to no-face oddballs. Results revealed no interaction of face naturalness and emotion for any ERP component but showed large main effects for both factors. Specifically, fearful faces and decreasing face naturalness elicited substantially enlarged N170 and early posterior negativity amplitudes, and lower face naturalness also resulted in a larger P1. This pattern reversed for the LPP, which showed linear increases in amplitude with increasing naturalness. We observed no interaction of emotion with face naturalness, which suggests that face naturalness and emotion are decoded in parallel at these early stages. Researchers interested in strong modulations of early components should use cutout grey-scaled faces, while those interested in a pronounced late positivity should use close-up coloured faces.
Affiliation(s)
- Sebastian Schindler: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Münster, Germany
- Maximilian Bruchmann: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Münster, Germany
- Florian Bublatzky: Central Institute of Mental Health Mannheim, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany
- Thomas Straube: Institute of Medical Psychology and Systems Neuroscience, University of Muenster, 48149 Münster, Germany
32
Nakano T, Uesugi Y. Risk Factors Leading to Preference for Extreme Facial Retouching. Cyberpsychol Behav Soc Netw 2019; 23:52-59. [PMID: 31851844] [PMCID: PMC6985765] [DOI: 10.1089/cyber.2019.0545]
Abstract
Young women posting edited photographs of their faces on social networking sites has become a popular phenomenon, but an excessively retouched face image sometimes gives viewers a strange impression. This study investigates which personal characteristics facilitate a bias toward an excessively edited face image. Thirty young Asian women evaluated the attractiveness and naturalness of their face images, which were edited at eight levels, from mild to excessive, by expanding their eyes and thinning their chins. The mildly retouched face was evaluated as more attractive than the original face, but the excessively retouched face was evaluated as unattractive and unnatural in comparison with the original face. The preferred face edit level was higher for one's own face than for others' faces. Moreover, participants with higher autism-spectrum quotient (AQ) scores regarded excessively edited face images as more attractive. The attention-to-detail subscale of the AQ showed a significant positive correlation with the preferred face edit level, whereas the imagination subscale showed a significant negative correlation. The pupil response to self-face images was significantly larger than that to others' face images, but this difference decreased with higher AQ scores. This study suggests that the increased attractiveness of a mildly retouched face promotes the behavior of retouching one's own face, but autistic traits, which render viewers insensitive to the creepiness of an excessively retouched face, might pose a risk of inducing retouch dependence.
Affiliation(s)
- Tamami Nakano: Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan; Faculty of Medicine, Osaka University, Osaka, Japan; PRESTO, Japan Science and Technology Agency, Saitama, Japan
33
Kätsyri J, de Gelder B, de Borst AW. Amygdala responds to direct gaze in real but not in computer-generated faces. Neuroimage 2019; 204:116216. [PMID: 31553928] [DOI: 10.1016/j.neuroimage.2019.116216]
Abstract
Computer-generated (CG) faces are an important visual interface for human-computer interaction in social contexts. Here we investigated whether the human brain processes emotion and gaze similarly in real and carefully matched CG faces. Real faces evoked greater responses in the fusiform face area than CG faces, particularly for fearful expressions. Emotional (angry and fearful) facial expressions evoked similar activations in the amygdala in real and CG faces. Direct as compared with averted gaze elicited greater fMRI responses in the amygdala regardless of facial expression but only for real and not for CG faces. We observed an interaction effect between gaze and emotion (i.e., the shared signal effect) in the right posterior temporal sulcus and other regions, but not in the amygdala, and we found no evidence for different shared signal effects in real and CG faces. Taken together, the present findings highlight similarities (emotional processing in the amygdala) and differences (overall processing in the fusiform face area, gaze processing in the amygdala) in the neural processing of real and CG faces.
Affiliation(s)
- Jari Kätsyri: Brain and Emotion Laboratory, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Computer Science, Aalto University, Espoo, Finland
- Beatrice de Gelder: Brain and Emotion Laboratory, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Computer Science, University College London, London, United Kingdom
- Aline W de Borst: UCL Interaction Centre, University College London, London, United Kingdom
34
Kätsyri J, de Gelder B, Takala T. Virtual Faces Evoke Only a Weak Uncanny Valley Effect: An Empirical Investigation With Controlled Virtual Face Images. Perception 2019; 48:968-991. [PMID: 31474183] [DOI: 10.1177/0301006619869134]
Affiliation(s)
- Jari Kätsyri: Department of Cognitive Neuroscience, Maastricht University, the Netherlands; Department of Computer Science, Aalto University, Finland
- Beatrice de Gelder: Department of Cognitive Neuroscience, Maastricht University, the Netherlands
- Tapio Takala: Department of Computer Science, Aalto University, Finland
35
The mind minds minds: The effect of intentional stance on the neural encoding of joint attention. Cogn Affect Behav Neurosci 2019; 19:1479-1491. [DOI: 10.3758/s13415-019-00734-y]
36
Ratajczyk D, Jukiewicz M, Lupkowski P. Evaluation of the uncanny valley hypothesis based on declared emotional response and psychophysiological reaction. Bio-Algorithms Med-Syst 2019. [DOI: 10.1515/bams-2019-0008]
Abstract
The uncanny valley (UV) hypothesis suggests that observing almost human-like characters causes increased discomfort. We conducted a study using a self-report questionnaire, response time measurement, and electrodermal activity (EDA) evaluation. In the study, 12 computer-generated characters (robots, androids, animated, and human characters) were presented to 33 people (17 women) to (1) test the effect of background context on the perception of the characters, (2) establish whether there is a relation between declared feelings and physiological arousal, and (3) detect the valley among the presented stimuli. The findings support an inverse relation between human-likeness and arousal (EDA). Furthermore, a positive correlation between EDA and the reaction time of human-likeness appraisals upholds one of the most common explanations of the UV: categorization ambiguity. The absence of a significant relationship between declared comfort and EDA underscores the necessity of physiological measures in UV studies.
37
Guo J, Luo X, Wang E, Li B, Chang Q, Sun L, Song Y. Abnormal alpha modulation in response to human eye gaze predicts inattention severity in children with ADHD. Dev Cogn Neurosci 2019; 38:100671. [PMID: 31229834] [PMCID: PMC6969336] [DOI: 10.1016/j.dcn.2019.100671]
Abstract
Highlights:
- In response to the human eye gaze, ADHD children showed a decreased alpha lateralization compared with TD children.
- The attenuation of alpha modulation in ADHD children was mainly manifested in the left hemisphere.
- The left-hemisphere alpha modulation predicted higher inattentive severity and lower behavioural accuracy in ADHD children.
- Classification analysis showed the left alpha modulation has a high capability to distinguish ADHD from TD children.
Attention-deficit/hyperactivity disorder (ADHD) is characterized by problems in directing and sustaining attention. Recent behavioral studies indicated that children with ADHD are more likely to fail to show the orienting effect in response to human eye gaze. The present study aimed to identify the neurophysiological bases of attention deficits directed by social human eye gaze in children with ADHD, focusing on the relationship between alpha modulations and ADHD symptoms. Electroencephalography data were recorded from 8–13-year-old children (typically developing (TD): n = 24; ADHD: n = 21) while they performed a cued visuospatial covert attention task. The cues were designed as human eyes that gazed toward the left or right visual field. The results revealed that TD children showed significant alpha lateralization in response to the gaze of human eyes, whereas children with ADHD showed an inverse pattern of alpha modulation in the left parieto-occipital area. Importantly, the abnormal alpha modulation in the left hemisphere predicted inattentive symptom severity and behavioral accuracy in children with ADHD. These results suggest that dysfunction of alpha modulation in the left hemisphere in response to social cues might be a potential neurophysiological marker of attention deficit in children with ADHD.
Affiliation(s)
- Jialiang Guo: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiangsheng Luo: Peking University Sixth Hospital/Institute of Mental Health, Beijing, China; National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Key Laboratory of Mental Health, Ministry of Health (Peking University), Beijing, China
- Encong Wang: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Bingkun Li: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qinyuan Chang: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Li Sun: Peking University Sixth Hospital/Institute of Mental Health, Beijing, China; National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Key Laboratory of Mental Health, Ministry of Health (Peking University), Beijing, China
- Yan Song: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China
38
Barker RM, Bialystok E. Processing differences between monolingual and bilingual young adults on an emotion n-back task. Brain Cogn 2019; 134:29-43. [PMID: 31108367] [DOI: 10.1016/j.bandc.2019.05.004]
Abstract
Bilingualism is associated with enhancement of executive control (EC) across the lifespan. Working memory and non-verbal emotion regulation both draw upon EC mechanisms so may also be affected by bilingualism, but these relationships are not fully understood. These relationships were explored using an n-back task with distracting emotional stimuli administered to young adults while continuous EEG was recorded. Monolinguals were faster but less accurate on the 2-back than bilinguals, and monolingual accuracy was more impeded by the presence of emotional stimuli than was that of bilinguals. The P300 event-related potential, a neural signature of working memory processing in the n-back, had smaller amplitudes in both groups on the 2-back than the 1-back, but attenuation in response to distracting emotional stimuli was greater for bilinguals than monolinguals. P300 latencies were also differentially affected by emotional stimuli in each group: Bilingual latencies were constant across emotions but monolingual latencies increased from neutral to angry conditions. In general, bilingual performance was less impacted by the emotional distraction than was that of the monolinguals. Additionally, bilinguals adjusted to the changing demands of the 1-back and 2-back conditions by recruiting neural networks to support different behavioral outcomes than monolinguals.
39
Zhao J, Meng Q, An L, Wang Y. An event-related potential comparison of facial expression processing between cartoon and real faces. PLoS One 2019; 14:e0198868. [PMID: 30629582] [PMCID: PMC6328201] [DOI: 10.1371/journal.pone.0198868]
Abstract
Faces play important roles in the social lives of humans. Besides real faces, people also encounter numerous cartoon faces in daily life, which convey basic emotional states through facial expressions. Using event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon), emotion valence (happy vs. angry) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as the N170, VPP (vertex positive potential), and LPP (late positive potential) were the dependent variables. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes and a shorter N170 latency than real faces, whereas real faces induced larger LPP amplitudes than cartoon faces. In addition, the results showed a significant difference across brain regions, reflected in a right-hemisphere advantage. The behavioral results showed that reaction times for happy faces were shorter than those for angry faces, that females were more accurate than males, and that males recognized angry faces more accurately than happy faces. Given the sample size, these results suggest, but do not rigorously demonstrate, differences in facial expression recognition and neural processing between cartoon and real faces: cartoon faces were processed with greater intensity and speed during the early processing stage, whereas more attentional resources were allocated to real faces during the late processing stage.
Affiliation(s)
- Jiayin Zhao: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Qi Meng: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Licong An: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Yifang Wang: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
40
Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083]
41
Carrito ML, Bem-Haja P, Silva CF, Perrett DI, Santos IM. Event-related potentials modulated by the perception of sexual dimorphism: The influence of attractiveness and sex of faces. Biol Psychol 2018; 137:1-11. [PMID: 29913202] [DOI: 10.1016/j.biopsycho.2018.06.002]
Abstract
Sexual dimorphism has been proposed as one of the facial traits that evolved through sexual selection and that affects attractiveness perception. Even with numerous studies documenting its effect on attractiveness and mate choice, the neurophysiological correlates of the perception of sexual dimorphism are not yet fully understood. In the present study, event-related potentials (ERPs) were recorded during the visualisation of faces that had previously been transformed in shape to appear more masculine or more feminine. The participants' task consisted of judging the attractiveness of half of the faces and performing a sex-discrimination task on the other half. Both early and late potentials were modulated by the sex of the faces, whereas the effect of the sexually dimorphic transform was mainly visible in the P2 (a positive deflection around 200 ms after stimulus onset), EPN (early posterior negativity) and LPP (late positive potential) components. There was an effect of sexual dimorphism on P2 and EPN amplitudes when female participants viewed male faces, which may indicate that masculinity is particularly attended to when viewing members of the opposite sex. ERP results also seem to support the idea of sex differences in social categorisation decisions regarding faces, although such differences were not evident in the behavioural results. In general, these findings contribute to a better understanding of how humans perceive sexually dimorphic characteristics in other individuals' faces and how these affect attractiveness judgements.
Affiliation(s)
- M L Carrito: Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal; ISPA - Instituto Universitário, William James Center for Research, Rua Jardim do Tabaco 34, 1149-041 Lisboa, Portugal; Centre for Psychology at University of Porto, Faculty of Psychology and Education Sciences, University of Porto, Rua Alfredo Allen, 4200-135 Porto, Portugal
- P Bem-Haja: Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal; Institute for Biomedical Imaging and Life Sciences (IBILI), Faculty of Medicine, University of Coimbra, 3000-548 Coimbra, Portugal
- C F Silva: Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- D I Perrett: School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews, Fife, KY16 9JP, Scotland, United Kingdom
- I M Santos: Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
42
Reuten A, van Dam M, Naber M. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed. Front Psychol 2018; 9:774. [PMID: 29875722] [PMCID: PMC5974161] [DOI: 10.3389/fpsyg.2018.00774]
Abstract
Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings toward nearly humanlike robots (the uncanny valley) and of comparable emotional responses between humans and robots (the media equation). However, no studies have employed the easily accessible measure of pupillometry to test the uncanny valley and media equation hypotheses, evidence for these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low-level image statistics across robot appearances. We therefore recorded the pupil size of 40 participants who viewed and rated pictures of robotic and human faces expressing a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low-level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real-life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and had emotional expressions that were more difficult to recognize. Pupils dilated most strongly to negative expressions, and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
Affiliation(s)
- Anne Reuten: Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
- Maureen van Dam: Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
- Marnix Naber: Experimental Psychology, Helmholtz Institute, Faculty of Social Sciences, Utrecht University, Utrecht, Netherlands
43
Kotowski K, Stapor K, Leski J, Kotas M. Validation of Emotiv EPOC+ for extracting ERP correlates of emotional face processing. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.06.006]
44
Abstract
An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. This uncanny feeling has two main causes: the android's appearance and its movement. The uncanny feeling increases when the appearance is almost human-like but the movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements let a human observer detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment comparing the observation of an android with the observation of the human on whom the android was modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between visual input and an internal model for smooth movement.