1. Hill AT, Ford TC, Bailey NW, Lum JAG, Bigelow FJ, Oberman LM, Enticott PG. EEG During Dynamic Facial Emotion Processing Reveals Neural Activity Patterns Associated with Autistic Traits in Children. bioRxiv 2024:2024.08.27.609816. [PMID: 39372765; PMCID: PMC11451616; DOI: 10.1101/2024.08.27.609816]
Abstract
Altered brain connectivity and atypical neural oscillations have been observed in autism, yet their relationship with autistic traits in non-clinical populations remains underexplored. Here, we employ electroencephalography (EEG) to examine functional connectivity, oscillatory power, and broadband aperiodic activity during a dynamic facial emotion processing (FEP) task in 101 typically developing children aged 4-12 years. We investigate associations between these electrophysiological measures of brain dynamics and autistic traits as assessed by the Social Responsiveness Scale, 2nd Edition (SRS-2). Our results revealed that increased FEP-related connectivity across theta (4-7 Hz) and beta (13-30 Hz) frequencies correlated positively with higher SRS-2 scores, predominantly in right-lateralized (theta) and bilateral (beta) cortical networks. Additionally, a steeper 1/f-like aperiodic slope (spectral exponent) across fronto-central electrodes was associated with higher SRS-2 scores. Greater aperiodic-adjusted theta and alpha oscillatory power further correlated with both higher SRS-2 scores and steeper aperiodic slopes. These findings underscore important links between FEP-related brain dynamics and autistic traits in typically developing children. Future work could extend these findings to assess these EEG-derived markers as potential mechanisms underlying behavioural difficulties in autism.
Affiliation(s)
- Aron T. Hill
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Australia
- Talitha C. Ford
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Australia
- Centre for Human Psychopharmacology & Swinburne Neuroimaging, School of Health Sciences, Swinburne University of Technology, Melbourne, Australia
- Neil W. Bailey
- School of Medicine and Psychology, The Australian National University, Canberra, ACT, Australia
- Monarch Research Institute, Monarch Mental Health Group, Sydney, New South Wales, Australia
- Jarrad A. G. Lum
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Australia
- Felicity J. Bigelow
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Australia
- Lindsay M. Oberman
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Peter G. Enticott
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Burwood, Australia
2. Karl V, Engen H, Beck D, Norbom LB, Ferschmann L, Aksnes ER, Kjelkenes R, Voldsbekk I, Andreassen OA, Alnæs D, Ladouceur CD, Westlye LT, Tamnes CK. The role of functional emotion circuits in distinct dimensions of psychopathology in youth. Transl Psychiatry 2024; 14:317. [PMID: 39095355; PMCID: PMC11297301; DOI: 10.1038/s41398-024-03036-1]
Abstract
Several mental disorders emerge during childhood or adolescence and are often characterized by socioemotional difficulties, including alterations in emotion perception. Emotional facial expressions are processed in discrete functional brain modules whose connectivity patterns encode emotion categories, but the involvement of these neural circuits in psychopathology in youth is poorly understood. This study examined the associations between activation and functional connectivity patterns in emotion circuits and psychopathology during development. We used task-based fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC, N = 1221, 8-23 years) and conducted generalized psycho-physiological interaction (gPPI) analyses. Measures of psychopathology were derived from an independent component analysis of questionnaire data. The results showed positive associations between identifying fearful, sad, and angry faces and depressive symptoms, and a negative relationship between sadness recognition and positive psychosis symptoms. We found a positive main effect of depressive symptoms on BOLD activation in regions overlapping with the default mode network, while individuals reporting higher levels of norm-violating behavior exhibited emotion-specific reductions in functional connectivity within regions of the salience network and between modules overlapping with the salience and default mode networks. Our findings illustrate the relevance of functional connectivity patterns underlying emotion processing for behavioral problems in children and adolescents.
Affiliation(s)
- Valerie Karl
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway.
- Haakon Engen
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Institute of Military Psychiatry, Norwegian Armed Forces Joint Medical Services, Oslo, Norway
- Dani Beck
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway
- Division of Mental Health and Substance Abuse, Diakonhjemmet Hospital, Oslo, Norway
- NORMENT, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Linn B Norbom
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway
- Lia Ferschmann
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway
- Eira R Aksnes
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway
- Division of Mental Health and Substance Abuse, Diakonhjemmet Hospital, Oslo, Norway
- NORMENT, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Rikka Kjelkenes
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Irene Voldsbekk
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Ole A Andreassen
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- Dag Alnæs
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Cecile D Ladouceur
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Lars T Westlye
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- Christian K Tamnes
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo, Norway
- Division of Mental Health and Substance Abuse, Diakonhjemmet Hospital, Oslo, Norway
- NORMENT, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
3. Paparelli A, Sokhn N, Stacchi L, Coutrot A, Richoz AR, Caldara R. Idiosyncratic fixation patterns generalize across dynamic and static facial expression recognition. Sci Rep 2024; 14:16193. [PMID: 39003314; PMCID: PMC11246522; DOI: 10.1038/s41598-024-66619-4]
Abstract
Facial expression recognition (FER) is crucial for understanding the emotional state of others during human social interactions. It has been assumed that humans share universal visual sampling strategies to achieve this task. However, recent studies in face identification have revealed striking idiosyncratic fixation patterns, questioning the universality of face processing. More importantly, very little is known about whether such idiosyncrasies extend to the biologically relevant recognition of static and dynamic facial expressions of emotion (FEEs). To clarify this issue, we tracked observers' eye movements while they categorized static and ecologically valid dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, and global luminance across exposure time. We then used robust data-driven analyses combining statistical fixation maps with hidden Markov models to explore eye movements across FEEs and stimulus modalities. Our data revealed three spatially and temporally distinct, equally occurring face-scanning strategies during FER. Crucially, these visual sampling strategies were mostly comparably effective in FER and highly consistent across FEEs and modalities. Our findings show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of FEEs, further questioning the universality of FER and, more generally, face processing.
Affiliation(s)
- Anita Paparelli
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700, Fribourg, Switzerland
- Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700, Fribourg, Switzerland
- Lisa Stacchi
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700, Fribourg, Switzerland
- Antoine Coutrot
- Laboratoire d'Informatique en Image et Systèmes d'information, Centre National de la Recherche Scientifique (CNRS), University of Lyon, Lyon, France
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700, Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Faucigny 2, 1700, Fribourg, Switzerland.
4. Murray T, Binetti N, Venkataramaiyer R, Namboodiri V, Cosker D, Viding E, Mareschal I. Expression perceptive fields explain individual differences in the recognition of facial emotions. Communications Psychology 2024; 2:62. [PMID: 39242751; PMCID: PMC11332168; DOI: 10.1038/s44271-024-00111-7]
Abstract
Humans can use the facial expressions of another to infer their emotional state, although it remains unknown how this process occurs. Here we propose the existence of perceptive fields within expression space, analogous to the feature-tuned receptive fields of early visual cortex. We developed genetic algorithms to explore a multidimensional space of possible expressions and identify those that individuals associated with different emotions. We next defined perceptive fields as probabilistic maps within expression space, and found that they could predict the emotions that individuals infer from expressions presented in a separate task. Perceptive fields showed profound individual variability in size, location, and specificity, and individuals with more similar perceptive fields made more similar interpretations of the emotion communicated by an expression, providing possible channels for social communication. Modelling perceptive fields therefore provides a predictive framework in which to understand how individuals infer emotions from facial expressions.
Affiliation(s)
- Thomas Murray
- Department of Psychology, University of Cambridge, Cambridge, UK.
- Department of Psychology, Queen Mary University of London, London, UK.
- Nicola Binetti
- Department of Cognitive Neuroscience, International School for Advanced Studies, Trieste, Italy
- Dipartimento di Medicina dei Sistemi, Università degli studi di Roma Tor Vergata, Rome, Italy
- Darren Cosker
- Department of Computer Science, University of Bath, Bath, UK
- Mixed Reality & AI Lab - Cambridge, Microsoft, Cambridge, UK
- Essi Viding
- Division of Psychology and Language Sciences, University College London, London, UK
5. Urtado MB, Rodrigues RD, Fukusima SS. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behav Sci (Basel) 2024; 14:355. [PMID: 38785846; PMCID: PMC11117586; DOI: 10.3390/bs14050355]
Abstract
Uncertainties and discrepant results in identifying crucial areas for emotional facial expression recognition may stem from the eye tracking data analysis methods used. Many studies employ analysis parameters that predominantly prioritize the examination of the foveal vision angle, ignoring the potential influences of simultaneous parafoveal and peripheral information. To explore the possible underlying causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision (FV). Employing eye tracking and gaze contingency, we collected visual inspection and judgment data for 30 frontal face images, equally distributed among five emotions. Raw eye tracking data underwent Eye Movements Metrics and Visualizations (EyeMMV) processing. Visual inspection time, number of fixations, and fixation duration all increased with visual field restriction. Accuracy, however, differed significantly between the NVR and FV groups and between the PFFV and FV groups, with no difference between NVR and PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision. The results suggest that eye tracking data analysis methods should incorporate projection angles extending to at least the parafoveal level.
Affiliation(s)
- Melina Boratto Urtado
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
- Sergio Sheiji Fukusima
- Faculty of Philosophy, Sciences and Letters at Ribeirão Preto, University of São Paulo, Ribeirão Preto 14040-901, Brazil
6. Fino E, Menegatti M, Avenanti A, Rubini M. Reading of ingroup politicians' smiles triggers smiling in the corner of one's eyes. PLoS One 2024; 19:e0290590. [PMID: 38635525; PMCID: PMC11025833; DOI: 10.1371/journal.pone.0290590]
Abstract
Spontaneous smiles in response to politicians can serve as an implicit barometer for gauging electorate preferences. However, it is unclear whether a subtle Duchenne smile, an authentic expression involving coactivation of the zygomaticus major (ZM) and orbicularis oculi (OO) muscles, would be elicited while reading about a favored politician smiling, indicating a more positive disposition and political endorsement. From an embodied simulation perspective, we investigated whether written descriptions of a politician's smile would trigger morphologically different smiles in readers depending on shared or opposing political orientation. In a controlled laboratory reading task, participants were presented with subject-verb phrases describing left- and right-wing politicians smiling or frowning. Concurrently, their facial muscular reactions were measured via electromyography (EMG) recorded at three facial muscles: the ZM and OO, coactive during Duchenne smiles, and the corrugator supercilii (CS), involved in frowning. We found that participants responded with a Duchenne smile, detected at the ZM and OO facial muscles, when exposed to portrayals of smiling politicians of the same political orientation, and reported more positive emotions towards these politicians. In contrast, when reading about outgroup politicians smiling, there was weaker activation of the ZM muscle and no activation of the OO muscle, suggesting a weak non-Duchenne smile, while emotions reported towards outgroup politicians were significantly more negative. An enhanced frown response in the CS was also found for ingroup compared to outgroup politicians' frown expressions. The present findings suggest that a politician's smile may go a long way toward influencing electorates through both non-verbal and verbal pathways. They add another layer to our understanding of how language and social information shape embodied effects in a highly nuanced manner. Implications for verbal communication in the political context are discussed.
Affiliation(s)
- Edita Fino
- Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Michela Menegatti
- Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Alessio Avenanti
- Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Centro Studi e Ricerche in Neuroscienze Cognitive, Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Campus di Cesena, Cesena, Italy
- Centro de Investigación en Neuropsicología y Neurociencias Cognitivas, Universidad Católica del Maule, Talca, Chile
- Monica Rubini
- Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, Bologna, Italy
7. Wang G, Ma L, Wang L, Pang W. Independence Threat or Interdependence Threat? The Focusing Effect on Social or Physical Threat Modulates Brain Activity. Brain Sci 2024; 14:368. [PMID: 38672018; PMCID: PMC11047893; DOI: 10.3390/brainsci14040368]
Abstract
OBJECTIVE: The neural basis of threat perception has mostly been examined separately for social or physical threats. However, most threats encountered in everyday life are complex, and how social and physical threats interact under different attentional conditions remains unclear. METHOD: The present study explored this issue using an attention-guided paradigm based on ERP techniques. Social threats (face threats) and physical threats (action threats) were displayed together on the screen while participants were instructed to concentrate on only one type of threat, allowing us to characterize the associated brain activation. RESULTS: Action threats did not affect the processing of face threats in the face-attention condition; the electrophysiological evidence resembled that obtained when face threats are processed alone, with higher N170 and EPN (Early Posterior Negativity) amplitudes for angry than for neutral expressions. In the action-attention condition, however, the brain was affected by face threats, as evidenced by a greater N190 elicited by stimuli containing threatening facial emotions, regardless of whether the action was threatening or not. This trend was also reflected in the EPN. CONCLUSIONS: The current study reveals important similarities and differences between physical and social threats, suggesting that the brain has a greater processing advantage for social threats.
Affiliation(s)
- Guan Wang
- The School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- School of Education Science, Huaiyin Normal University, Huaian 223300, China
- Lian Ma
- School of Computer Science and Technology, Huaiyin Normal University, Huaian 223300, China
- Lili Wang
- School of Education Science, Huaiyin Normal University, Huaian 223300, China
- Weiguo Pang
- The School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
8. González-Gualda LM, Vicente-Querol MA, García AS, Molina JP, Latorre JM, Fernández-Sotos P, Fernández-Caballero A. An exploratory study of the effect of age and gender on face scanning during affect recognition in immersive virtual reality. Sci Rep 2024; 14:5553. [PMID: 38448515; PMCID: PMC10918108; DOI: 10.1038/s41598-024-55774-3]
Abstract
A person with impaired emotion recognition is not able to correctly identify facial expressions represented by other individuals. The aim of the present study is to assess eye gaze and facial emotion recognition in a healthy population using dynamic avatars in immersive virtual reality (IVR). For the first time, the viewing of each area of interest of the face in IVR is studied by gender and age. This work in healthy people is conducted to assess the future usefulness of IVR in patients with deficits in the recognition of facial expressions. Seventy-four healthy volunteers participated in the study. The materials used were a laptop computer, a game controller, and a head-mounted display. Dynamic virtual faces randomly representing the six basic emotions plus the neutral expression were used as stimuli. After the virtual human represented an emotion, a response panel was displayed with the seven possible options. Besides storing the hits and misses, the software program internally divided the faces into different areas of interest (AOIs) and recorded how long participants looked at each AOI. As regards the overall accuracy of the participants' responses, hits decreased from the youngest to the middle-aged and older adults. Also, all three groups spent the highest percentage of time looking at the eyes, but younger adults had the highest percentage. It is also noteworthy that attention to the face compared to the background decreased with age. Moreover, the hits between women and men were remarkably similar and, in fact, there were no statistically significant differences between them. In general, men paid more attention to the eyes than women, but women paid more attention to the forehead and mouth. In contrast to previous work, our study indicates that there are no differences between men and women in facial emotion recognition. Moreover, in line with previous work, the percentage of face viewing time for younger adults is higher than for older adults. However, contrary to earlier studies, older adults look more at the eyes than at the mouth. Consistent with other studies, the eyes are the AOI with the highest percentage of viewing time. For men, the most viewed AOI is the eyes for all emotions, in both hits and misses. Women look more at the eyes for all emotions, except for joy, fear, and anger on hits. On misses, they look more into the eyes for almost all emotions except surprise and fear.
Affiliation(s)
- Luz M González-Gualda
- Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
- Miguel A Vicente-Querol
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Arturo S García
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José P Molina
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José M Latorre
- Departamento de Psicología, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- Patricia Fernández-Sotos
- Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
- CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain
- Antonio Fernández-Caballero
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain.
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain.
- CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain.
9. Liu Y, Ji L. Ensemble coding of multiple facial expressions is not affected by attentional load. BMC Psychol 2024; 12:102. [PMID: 38414021; PMCID: PMC10900713; DOI: 10.1186/s40359-024-01598-9]
Abstract
Human observers can extract the mean emotion from multiple faces rapidly and precisely. However, whether attention is required in the ensemble coding of facial expressions remains debated. In this study, we examined the effect of attentional load on mean emotion processing with the dual-task paradigm. Individual emotion processing was also investigated as the control task. In the experiment, a letter string and a set of four happy or angry faces of varying emotional intensity were shown. Participants had to complete the string task first, judging either the string color (low attentional load) or the presence of the target letter (high attentional load). Then a cue appeared indicating whether the secondary task was to evaluate the mean emotion of the faces or the emotion of the cued single face, and participants made their judgments on a visual analog scale. The results showed that, compared with the color task, the letter task had longer response times and lower accuracy, which verified the valid manipulation of attentional load. More importantly, there was no significant difference in averaging performance between the low and high attentional loads. By contrast, individual face processing was impaired under the high attentional load relative to the low attentional load. In addition, the advantage of extracting mean emotion over individual emotion was larger under the high attentional load. These results support the power of averaging and provide new evidence that a rather small amount of attention is needed in the ensemble coding of multiple facial expressions.
Affiliation(s)
- Yujuan Liu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 510006, Guangzhou, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macao, China
- Luyan Ji
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 510006, Guangzhou, China.
10. Brandt M, de Oliveira Silva F, Simões Neto JP, Tourinho Baptista MA, Belfort T, Lacerda IB, Nascimento Dourado MC. Facial Expression Recognition of Emotional Situations in Mild and Moderate Alzheimer's Disease. J Geriatr Psychiatry Neurol 2024; 37:73-83. [PMID: 37160761; DOI: 10.1177/08919887231175432]
Abstract
Background: Recognizing emotional situations may be impaired in people with Alzheimer's disease (AD). Purpose: We examined differences in the comprehension of an emotional situation in healthy older controls (HOC) and individuals with mild and moderate AD. Research Design: Cross-sectional study. Study Sample: We assessed a convenience sample of 115 participants in 3 contexts: understanding the situation, ability to name the congruent emotion, and choice of the correct face in 4 emotional situations (sadness, surprise, anger, happiness). Data Collection: Chi-square and Mann-Whitney U tests were used for comparison between groups separated by CDR 1 and 2. Chi-square and Kruskal-Wallis tests were also used for comparison between groups separated by CDR 0, 1, and 2, with pairwise comparison analyses. Results: We found that the abilities to understand, name, and choose the proper emotion are not linked and depend on the portrayed emotion. Conclusions: The findings suggest an interaction between emotional processing and cognitive functioning. Therefore, knowledge of an emotional condition and its connection to a specific facial choice most likely involve 2 degraded areas of knowledge, resulting in even higher odds of inaccuracy.
Affiliation(s)
- Michelle Brandt
- Center for Alzheimer's disease, Institute of Psychiatry, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
- Felipe de Oliveira Silva
- Center for Alzheimer's disease, Institute of Psychiatry, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
- José Pedro Simões Neto
- Department of Political Sociology, Universidade Federal de Santa Catarina, Florianópolis, Brazil
- Maria Alice Tourinho Baptista
- Center for Alzheimer's disease, Institute of Psychiatry, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
- Tatiana Belfort
- Center for Alzheimer's disease, Institute of Psychiatry, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
- Isabel Barbeito Lacerda
- Center for Alzheimer's disease, Institute of Psychiatry, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil
11. Rocha M, Grave J, Korb S, Parma V, Semin GR, Soares SC. Emotional self-body odors do not influence the access to visual awareness by emotional faces. Chem Senses 2024; 49:bjad034. [PMID: 37642223; DOI: 10.1093/chemse/bjad034]
Abstract
A growing body of research suggests that emotional chemosignals in others' body odor (BO), particularly those sampled during fearful states, enhance emotional face perception at conscious and preconscious stages. For instance, emotional faces access visual awareness faster when presented with others' fear BOs. However, the effect of these emotional signals in self-BO, that is, one's own BO, is still neglected in the literature. In the present work, we sought to determine whether emotional self-BOs modify the access to visual awareness of emotional faces. Thirty-eight women underwent a breaking-Continuous Flash Suppression task in which they were asked to detect fearful, happy, and neutral faces, as quickly and accurately as possible, while being exposed to their fear, happiness, and neutral self-BOs. Self-BOs were previously collected and later delivered via an olfactometer, using an event-related design. Results showed a main effect of emotional faces, with happy faces being detected significantly faster than fearful and neutral faces. However, our hypothesis that fear self-BOs would lead to faster emotional face detection was not confirmed, as no effect of emotional self-BOs was found; this was confirmed with Bayesian analysis. Although caution is warranted when interpreting these results, our findings suggest that emotional face perception is not modulated by emotional self-BOs, contrasting with the literature on others' BOs. Further research is needed to understand the role of self-BOs in visual processing and emotion perception.
Affiliation(s)
- Marta Rocha
- William James Center for Research (WJCR-Aveiro), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- Center for Health Technology and Services Research (CINTESIS@RISE), Department of Education and Psychology, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- Joana Grave
- William James Center for Research (WJCR-Aveiro), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- Center for Health Technology and Services Research (CINTESIS@RISE), Department of Education and Psychology, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- Sebastian Korb
- Department of Psychology, University of Essex, CO4 3SQ Colchester, United Kingdom
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
- Valentina Parma
- Monell Chemical Senses Center, Philadelphia, PA 19104, United States
- Gün R Semin
- William James Center for Research (WJCR), ISPA-Instituto Universitário, 1149-041, Lisbon, Portugal
- Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
- Sandra C Soares
- William James Center for Research (WJCR-Aveiro), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
- Center for Health Technology and Services Research (CINTESIS@RISE), Department of Education and Psychology, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
12. Karl V, Rohe T. Structural brain changes in emotion recognition across the adult lifespan. Soc Cogn Affect Neurosci 2023; 18:nsad052. [PMID: 37769357; PMCID: PMC10627307; DOI: 10.1093/scan/nsad052]
Abstract
Emotion recognition (ER) declines with increasing age, yet little is known about whether this observation reflects structural brain changes conveyed by differential atrophy. To investigate whether age-related ER decline correlates with reduced grey matter (GM) volume in emotion-related brain regions, we conducted a voxel-based morphometry analysis using data from the Human Connectome Project-Aging (N = 238, aged 36-87) in which facial ER was tested. We expected to find brain regions showing an additive or super-additive age-related change in GM volume, indicating atrophic processes that reduce ER in older adults. The data did not support our hypotheses after correction for multiple comparisons. Exploratory analyses with a threshold of P < 0.001 (uncorrected), however, suggested that relationships between GM volume and age-related general ER may be widely distributed across the cortex. Yet, small effect sizes imply that only a small fraction of the decline of ER in older adults can be attributed to local GM volume changes in single voxels or their multivariate patterns.
Affiliation(s)
- Valerie Karl
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo 0424, Norway
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo 0373, Norway
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany
13. Itier RJ, Durston AJ. Mass-univariate analysis of scalp ERPs reveals large effects of gaze fixation location during face processing that only weakly interact with face emotional expression. Sci Rep 2023; 13:17022. [PMID: 37813928; PMCID: PMC10562468; DOI: 10.1038/s41598-023-44355-5]
Abstract
Decoding others' facial expressions is critical for social functioning. To clarify the neural correlates of expression perception depending on where we look on the face, three combined gaze-contingent ERP experiments were analyzed using robust mass-univariate statistics. Regardless of task, fixation location impacted face processing from 50 to 350 ms, maximally around 120 ms, reflecting retinotopic mapping around the C2 and P1 components. Fixation location also strongly impacted the N170-P2 interval, while weak effects were seen at the face-sensitive N170 peak. These results question the widespread assumption that faces are processed holistically into an indecomposable perceptual whole around the N170. Rather, face processing is a complex and view-dependent process that continues well beyond the N170. Expression and fixation location interacted weakly during the P1-N170 interval, supporting a role for the mouth and left eye in decoding fearful and happy expressions. Expression effects were weakest at the N170 peak but strongest around the P2, especially for fear, reflecting task-independent affective processing. The results suggest the N170 reflects a transition between processes rather than the maximum of a holistic face processing stage. Focus on this peak should be replaced by data-driven analyses of the whole epoch using robust statistics to fully unravel the early visual processing of faces and their affective content.
Affiliation(s)
- Roxane J Itier
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada.
- Amie J Durston
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
14. Schmuck J, Schnuerch R, Kirsten H, Shivani V, Gibbons H. The influence of selective attention to specific emotions on the processing of faces as revealed by event-related brain potentials. Psychophysiology 2023; 60:e14325. [PMID: 37162391; DOI: 10.1111/psyp.14325]
Abstract
Event-related potential studies using affective words have indicated that selective attention to valence can increase affective discrimination at early perceptual stages. This effect most likely relies on neural associations between perceptual features of a stimulus and its affective value. Similar to words, emotional expressions in human faces are linked to specific visual elements. Therefore, selectively attending to a given emotion should allow for the preactivation of neural networks coding for the emotion and its associated first-order visual elements, leading to enhanced early processing of faces expressing the attended emotion. To investigate this, we employed an expression detection task (N = 65). Fearful, happy, and neutral faces were randomly presented in three blocks while participants were instructed to respond only to one predefined target level of expression in each block. Reaction times were fastest for happy target faces, which was accompanied by an increased occipital P1 for happy compared with fearful faces. The N170 yielded an arousal effect (emotional > neutral), while neither component was modulated by target status. In contrast, the early posterior negativity (EPN) arousal effect tended to be larger for target compared with nontarget faces. The late positive potential (LPP) revealed large effects of status and expression as well as an interaction driven by an increased LPP specifically for nontarget fearful faces. These findings tentatively indicate that selective attention to facial affect may enhance early emotional processing (EPN), even though further research is needed. Moreover, late controlled processing of facial emotions appears to involve a negativity bias.
Affiliation(s)
- Jonas Schmuck
- Department of Psychology, University of Bonn, Bonn, Germany
- Hannah Kirsten
- Department of Psychology, University of Bonn, Bonn, Germany
15. Entzmann L, Guyader N, Kauffmann L, Peyrin C, Mermillod M. Detection of emotional faces: The role of spatial frequencies and local features. Vision Res 2023; 211:108281. [PMID: 37421829; DOI: 10.1016/j.visres.2023.108281]
Abstract
Models of emotion processing suggest that threat-related stimuli such as fearful faces can be detected based on the rapid extraction of low spatial frequencies. However, this remains debated, as other models argue that the decoding of facial expressions occurs with a more flexible use of spatial frequencies. The purpose of this study was to clarify the role of spatial frequencies, and of differences in luminance contrast between spatial frequencies, in the detection of facial emotions. We used a saccadic choice task in which emotional-neutral face pairs were presented and participants were asked to make a saccade toward the neutral or the emotional (happy or fearful) face. Faces were displayed either in low, high, or broad spatial frequencies. Results showed that participants were better at saccading toward the emotional face. They were also better with high or broad than with low spatial frequencies, and accuracy was higher with a happy target. An analysis of the eye and mouth saliency of our stimuli revealed that the mouth saliency of the target correlates with participants' performance. Overall, this study underlines the importance of local over global information, and of the saliency of the mouth region, in the detection of emotional and neutral faces.
Affiliation(s)
- Léa Entzmann
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France; Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France; Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- Nathalie Guyader
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Louise Kauffmann
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Carole Peyrin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Martial Mermillod
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
16. Pierce JE, Petro NM, Clancy E, Gratton C, Petersen SE, Neta M. Specialized late cingulo-opercular network activation elucidates the mechanisms underlying decisions about ambiguity. Neuroimage 2023; 279:120314. [PMID: 37557971; PMCID: PMC10528723; DOI: 10.1016/j.neuroimage.2023.120314]
Abstract
Cortical task control networks, including the cingulo-opercular (CO) network, play a key role in decision-making across a variety of functional domains. In particular, the CO network functions in a performance-reporting capacity that supports successful task performance, especially in response to errors and ambiguity. In two studies testing the contribution of the CO network to ambiguity processing, we presented a valence bias task in which masked emotional expressions of clear or ambiguous valence were slowly revealed over several seconds. This slow-reveal task design provides a window into the decision-making mechanisms as they unfold over the course of a trial. In the main study, the slow-reveal task was administered to 32 young adults in the fMRI environment and BOLD time courses were extracted from regions of interest in three control networks. In a follow-up study, the task was administered to a larger, online sample (n = 81) using a more extended slow-reveal design with additional unmasking frames. Positive judgments of surprised faces were uniquely accompanied by slower response times and strong, late activation in the CO network. These results support the initial negativity hypothesis, which posits that the default response to ambiguity is negative and that positive judgments are associated with a more effortful controlled process, and they additionally suggest that this controlled process is mediated by the CO network. Moreover, ambiguous trials were characterized by a second CO response at the end of the trial, firmly placing CO function late in the decision-making process.
Affiliation(s)
- Jordan E Pierce
- Center for Brain, Biology, and Behavior, University of Nebraska-Lincoln, Lincoln, NE, USA.
- Nathan M Petro
- Institute for Human Neuroscience, Boys Town National Research Hospital, Boys Town, NE, USA; Center for Pediatric Brain Health, Boys Town National Research Hospital, Boys Town, NE, USA
- Elizabeth Clancy
- Department of Psychology, University of Guelph, Guelph, Ontario, Canada
- Caterina Gratton
- Department of Psychology, Florida State University, Tallahassee, FL, USA
- Steven E Petersen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Maital Neta
- Center for Brain, Biology, and Behavior, University of Nebraska-Lincoln, Lincoln, NE, USA
17. Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. [PMID: 37794914; PMCID: PMC10546417; DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power on machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim
- Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard
- Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
18. Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023; 229:105622. [PMID: 36641829; DOI: 10.1016/j.jecp.2022.105622]
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory, with the eye movement strategies of the 17- to 18-year-old group most similar to adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression, revealing which age group had the most distinct eye movement strategy from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
- Nayla Sokhn
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Yingdi Liu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland.
19. Serafini L, Pesciarelli F. Neural timing of the other-race effect across the lifespan: A review. Psychophysiology 2023; 60:e14203. [PMID: 36371686; DOI: 10.1111/psyp.14203]
Abstract
Face race influences the way we process faces, such that faces of a different ethnic group are processed for identity less efficiently than faces of one's own ethnic group, a phenomenon known as the Other-Race Effect (ORE). Although widely replicated, the ORE is still poorly characterized in terms of its development and underlying mechanisms. In the last two decades, the Event-Related Potential (ERP) technique has brought insight into the mechanisms underlying the ORE and has demonstrated potential to clarify its development. Here, we review the ERP evidence for differential neural processing of own-race and other-race faces throughout the lifespan. In infants, race-related processing differences emerged at the N290 and P400 (structural encoding) stages. In children, race affected the P100 (early processing, attention) perceptual stage and was implicitly encoded at the N400 (semantic processing) stage. In adults, processing difficulties for other-race faces emerged at the N170 (structural encoding), P200 (configuration processing), and N250 (accessing individual representations) perceptual stages. Early in processing, race was implicitly encoded from other-race faces (N100 and P200 attentional biases) and in-depth processing was preferentially applied to own-race faces (N200 attentional bias). Encoding appeared less efficient (Dm effects) and retrieval less recollection-based (old/new effects) for other-race faces. The evidence supports contributions of perceptual, attentional, and motivational processes to the development and functioning of the ORE, offering no conclusive support for either perceptual or socio-cognitive accounts. Cross-racial and non-cross-racial studies provided convergent evidence. Future research should include underrepresented ethnic populations and developmental populations.
Affiliation(s)
- Luana Serafini
- Department of Biomedical, Metabolic, and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
- Francesca Pesciarelli
- Department of Biomedical, Metabolic, and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
20. Lohaus T, Rogalla S, Thoma P. Use of Technologies in the Therapy of Social Cognition Deficits in Neurological and Mental Diseases: A Systematic Review. Telemed J E Health 2023; 29:331-351. [PMID: 35532968; DOI: 10.1089/tmj.2022.0037]
Abstract
Objective: This article systematically reviews the effects of technology-based (TB) treatments on impaired social cognition (SC) in neurological and mental disorders. Methods: Strictly adhering to the PRISMA guidelines, a systematic search was carried out in PsycINFO, PubMed, and Web of Science (last search: April 22, 2021) to identify studies that, implementing a control group design, evaluated TB treatments targeting deficits in emotion recognition, Theory of Mind (ToM), and social behavior in adult patients with nondevelopmental and nonprogressive neurological or mental disorders. Risk of bias was assessed using the PEDro Scale, and certainty assessment followed the GRADE approach. Results: Sixteen studies involving 857 patients, all focusing on psychotic disorders, were retrieved. The most pronounced effects were observed for emotion recognition, with all studies revealing overall improvements. Regarding ToM and social behavior, results were mixed. However, the number of studies including outcome measures for these domains is considerably lower than for emotion recognition, limiting the validity of the results. Risk of bias and certainty assessment revealed further limitations of the evidence. Conclusion: TB treatment achieves positive effects, especially with regard to emotion recognition impairments, at least for patients with schizophrenia. Future research should expand the evaluation of TB training to other SC domains, be carried out in more diverse patient populations, rely on different devices, and include follow-up measurements.
Collapse
Affiliation(s)
- Tobias Lohaus
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| | - Sally Rogalla
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| | - Patrizia Thoma
- Neuropsychological Therapy Centre (NTC), Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| |
Collapse
|
21
|
Bimler DL, Paramei GV. Gauging response time distributions to examine the effect of facial expression inversion. Front Psychol 2023; 14:957160. [PMID: 36910747 PMCID: PMC10000311 DOI: 10.3389/fpsyg.2023.957160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 01/16/2023] [Indexed: 02/26/2023] Open
Abstract
Introduction: We used images of facial expressions (FEs) of emotion in a speeded Same/Different task to examine (i) the distributional characteristics of response times (RTs) in relation to inter-stimulus similarity and (ii) the impact of inversion on FE processing. Methods: Stimuli were seven emotion prototypes, posed by one male and one female, and eight intermediate morphs. Image pairs (N = 225) were presented for 500 ms, upright or inverted, in a block design, each pair 100 times. Results: For both upright and inverted FEs, RTs were a non-monotonic function of inter-stimulus similarity: median values were longest for stimulus pairs of intermediate similarity, decreasing for both more-dissimilar and more-similar pairs. RTs of "Same" and "Different" judgments followed ex-Gaussian distributions. The non-monotonicity is interpreted within a dual-process decision model framework as reflecting the infrequency of identical pairs, which shifts the balance between the Same and Different processes. The effect of stimulus inversion was gauged by comparing RT-based multidimensional scaling solutions for the two presentation modes. Solutions for upright and inverted FEs showed little difference, with both displaying some evidence of categorical perception. The same features appeared in hierarchical clustering solutions. Discussion: This outcome replicates and reinforces the solutions derived from the accuracy of "Different" responses reported in our earlier companion paper. We attribute this lack of an inversion effect to the brief exposure time, which allowed low-level visual processing to dominate Same/Different decisions while elevating early featural analysis, which is insensitive to face orientation but enables initial positive/negative valence categorization of FEs.
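The ex-Gaussian analysis mentioned above is straightforward to reproduce. A minimal sketch with simulated response times, assuming SciPy's `exponnorm` parametrization (shape K = tau/sigma); this is an illustration, not the authors' fitting procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated RTs (in seconds): Gaussian component plus exponential tail.
rts = rng.normal(0.45, 0.05, 1000) + rng.exponential(0.10, 1000)

# Maximum-likelihood fit of the ex-Gaussian (exponentially modified normal).
K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale  # recover the conventional parameters
print(f"mu={mu:.3f}s sigma={sigma:.3f}s tau={tau:.3f}s")
```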
Collapse
Affiliation(s)
| | - Galina V. Paramei
- Department of Psychology, Liverpool Hope University, Liverpool, United Kingdom
| |
Collapse
|
22
|
Barros F, Soares SC, Rocha M, Bem-Haja P, Silva S, Lundqvist D. The angry versus happy recognition advantage: the role of emotional and physical properties. PSYCHOLOGICAL RESEARCH 2023; 87:108-123. [PMID: 35113209 DOI: 10.1007/s00426-022-01648-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 01/18/2022] [Indexed: 01/27/2023]
Abstract
Facial emotional expressions are pivotal for social communication. Their fast and accurate recognition is crucial for adaptive responses to social demands, for the development of functional relationships, and for well-being. However, the literature has been inconsistent in showing differentiated recognition patterns for positive vs. negative facial expressions (e.g., happy and angry expressions, respectively), likely due to affective and perceptual factors. Accordingly, the present study explored differences in recognition performance between angry and happy faces, while specifically assessing the role of emotional intensity and global/regional low-level visual features. Ninety-eight participants categorized angry and happy faces morphed between neutral and emotional across nine levels of expression intensity (10-90%). We observed significantly higher recognition efficiency (higher accuracy and shorter response latencies) for angry compared to happy faces at lower levels of expression intensity, suggesting that our cognitive resources are biased to prioritize the recognition of potentially harmful stimuli, especially when briefly presented at an ambiguous stage of expression. Conversely, from the midpoint of expression intensity onward, an advantage in response speed was observed for happy faces. However, when compensating for the contribution of regional low-level properties of distinct facial key regions, the effect of emotion was maintained only for response accuracy. Altogether, these results shed new light on the processing of facial emotional stimuli, emphasizing the need to consider emotional intensity and regional low-level image properties in emotion recognition analyses.
Collapse
Affiliation(s)
- Filipa Barros
- William James Center for Research (WJCR), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal; Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
| | - Sandra C Soares
- William James Center for Research (WJCR), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal; Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
| | - Marta Rocha
- William James Center for Research (WJCR), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal; Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
| | - Pedro Bem-Haja
- Center for Health Technology and Services Research (CINTESIS), Department of Education and Psychology, University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
| | - Samuel Silva
- Department of Electronics, Telecommunications and Informatics (DETI), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal; Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
| | - Daniel Lundqvist
- NatMEG, Department of Clinical Neuroscience, Karolinska Institute, Nobels väg 9, 171 77, Stockholm, Sweden
| |
Collapse
|
23
|
Emotional face recognition when a colored mask is worn: a cross-sectional study. Sci Rep 2023; 13:174. [PMID: 36599964 PMCID: PMC9812539 DOI: 10.1038/s41598-022-27049-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 12/23/2022] [Indexed: 01/05/2023] Open
Abstract
Studies of the impact of face masks on emotional facial expression recognition are sparse in children. Moreover, to our knowledge no study has so far considered mask color (in adults or in children), even though this esthetic property is thought to influence information processing. To explore these issues, the present study examined whether first- and fifth-graders and young adults were influenced by the absence or presence (and color: pink, green, red, black, or white) of a face mask when asked to judge emotional facial expressions of fear, anger, sadness, or neutrality. Analysis of the results suggested that the presence of a mask affected the recognition of sad and fearful faces but did not significantly influence the perception of angry and neutral faces. Mask color slightly modulated the recognition of facial emotional expressions, without a systematic pattern that would allow a clear conclusion to be drawn. Moreover, none of these findings varied according to age group. The contribution of different facial areas to efficient emotion recognition is discussed with reference to methodological and theoretical considerations, and in the light of recent studies.
Collapse
|
24
|
Gunderson CA, Baker A, Pence AD, ten Brinke L. Interpersonal Consequences of Deceptive Expressions of Sadness. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN 2023; 49:97-109. [PMID: 34906011 PMCID: PMC9684658 DOI: 10.1177/01461672211059700] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Accepted: 10/26/2021] [Indexed: 11/16/2022]
Abstract
Emotional expressions evoke predictable responses from observers; displays of sadness are commonly met with sympathy and help from others. Accordingly, people may be motivated to feign emotions to elicit a desired response. In the absence of suspicion, we predicted that emotional and behavioral responses to genuine (vs. deceptive) expressers would be guided by empirically valid cues of sadness authenticity. Consistent with this hypothesis, untrained observers (total N = 1,300) reported less sympathy and offered less help to deceptive (vs. genuine) expressers of sadness. This effect was replicated using both posed, low-stakes, laboratory-created stimuli, and spontaneous, real, high-stakes emotional appeals to the public. Furthermore, lens models suggest that sympathy reactions were guided by difficult-to-fake facial actions associated with sadness. Results suggest that naive observers use empirically valid cues to deception to coordinate social interactions, providing novel evidence that people are sensitive to subtle cues to deception.
Collapse
Affiliation(s)
| | - Alysha Baker
- Okanagan College, Kelowna, British Columbia, Canada
| | | | | |
Collapse
|
25
|
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857 PMCID: PMC10173880 DOI: 10.3389/fpsyg.2023.1158136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 04/06/2023] [Indexed: 05/15/2023] Open
Abstract
As celebrations approach for the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and - crucially - not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways sub-serve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, has recently provided a useful operational test for comparing the predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, which ultimately call for a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a completely new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display, and to the individual mechanisms underlying their expression (i.e., individual emotional signatures).
Collapse
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
| | - Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
| | - Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
| |
Collapse
|
26
|
Jacques C, Caharel S. The time course of categorical perception of facial expressions. Neuropsychologia 2022; 177:108424. [PMID: 36400243 DOI: 10.1016/j.neuropsychologia.2022.108424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 10/13/2022] [Accepted: 11/14/2022] [Indexed: 11/17/2022]
Abstract
Decoding emotions on others' faces is one of the most important functions of the human brain and has been widely studied in cognitive neuroscience. However, the precise time course of facial expression categorization in the human brain is still a matter of debate. Here we used an original paradigm to measure categorical perception of facial expression changes during event-related potential (ERP) recording, in which a face stimulus dynamically switched either to a different expression (between-category condition) or to the same expression (within-category condition), with the physical distance between the two successive faces being equal across conditions. The switch between faces generated a negative differential potential peaking at around 160 ms over occipito-temporal regions, similar in terms of latency and topography to the well-known face-selective N170 component. This response was larger when the switch occurred between faces perceived as having different facial expressions than when the two faces were perceived as sharing the same expression. In addition, happy expressions were categorized around 20 ms faster than fearful expressions (135 and 156 ms, respectively). These findings provide evidence that changes of facial expressions are categorically perceived as early as 160 ms following stimulus onset over the occipito-temporal cortex.
Collapse
Affiliation(s)
- Corentin Jacques
- Université Catholique de Louvain, Psychological Science Research Institute (IPSY), Louvain-La-Neuve, Belgium.
| | | |
Collapse
|
27
|
Surian D, van den Boomen C. The age bias in labeling facial expressions in children: Effects of intensity and expression. PLoS One 2022; 17:e0278483. [PMID: 36459504 PMCID: PMC9718404 DOI: 10.1371/journal.pone.0278483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 11/17/2022] [Indexed: 12/03/2022] Open
Abstract
Emotion reasoning, including labeling of facial expressions, is an important building block for a child's social development. This study investigated age biases in labeling facial expressions in children and adults, focusing on the influence of intensity and expression on age bias. Children (5 to 14 years old; N = 152) and adults (19 to 25 years old; N = 30) labeled happiness, disgust or sadness at five intensity levels (0%; 25%; 50%; 75%; and 100%) in facial images of children and adults. Sensitivity was computed for each of the expression-intensity combinations, separately for the child and adult faces. Results show that children and adults have an age bias at low levels of intensity (25%). In the case of sadness, children have an age bias for all intensities. Thus, the impact of the age of the face seems largest for expressions which might be most difficult to recognise. Moreover, both adults and children label most expressions best in adult rather than child faces, leading to an other-age bias in children and an own-age bias in adults. Overall, these findings reveal that both children and adults exhibit an age bias in labeling subtle facial expressions of emotions.
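A minimal sketch of a signal-detection sensitivity (d') computation of the kind reported above, with hypothetical hit/false-alarm counts and a log-linear correction (the paper's exact procedure is not specified here):

```python
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    # Log-linear correction (add 0.5 to each cell) avoids infinite z-scores
    # when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=42, misses=8, fas=12, crs=38))  # ~1.66
```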
Collapse
Affiliation(s)
- Dafni Surian
- Department of Developmental Psychology, Utrecht University, Utrecht, The Netherlands
| | - Carlijn van den Boomen
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| |
Collapse
|
28
|
Mermillod M, Perrier MJ, Lacroix A, Kauffmann L, Peyrin C, Méot A, Vermeulen N, Dutheil F. High spatial frequencies disrupt conscious visual recognition: evidence from an attentional blink paradigm. Heliyon 2022; 8:e11964. [PMID: 36561662 PMCID: PMC9763755 DOI: 10.1016/j.heliyon.2022.e11964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 05/18/2022] [Accepted: 11/21/2022] [Indexed: 11/30/2022] Open
Abstract
In this article, we tested the respective importance of low spatial frequencies (LSF) and high spatial frequencies (HSF) for the conscious visual recognition of emotional stimuli, using an attentional blink paradigm. Thirty-eight participants were asked to identify and report two targets (happy faces) embedded in a rapid serial visual presentation of distractors (angry faces). During the attentional blink, conscious perception of the second target (T2) is usually altered when the lag between the two targets is short (200-500 ms) but is restored at longer lags. The distractors between T1 and T2 were either non-filtered (broad spatial frequencies, BSF), low-pass filtered (LSF), or high-pass filtered (HSF). Assuming that predictive abilities could be at the root of conscious visual recognition, we expected LSF distractors to disturb T2 reporting more than HSF distractors. Results showed that both LSF and HSF play a role in the emergence of exogenous consciousness in the visual system. Furthermore, HSF distractors strongly affected T1 and T2 reporting irrespective of the lag between targets, suggesting their role in facial emotion processing. We discuss these results with regard to other models of visual recognition.
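A minimal sketch of how LSF and HSF versions of a stimulus can be produced with a Gaussian filter; the cutoff (sigma) and the mean-luminance adjustment are illustrative assumptions, not this study's exact filtering pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=6.0):
    """Return (low-pass, high-pass) versions of a 2-D grayscale image."""
    img = image.astype(float)
    lsf = gaussian_filter(img, sigma=sigma)  # keeps coarse structure
    hsf = img - lsf + img.mean()             # keeps fine detail; mean re-added for display
    return lsf, hsf

face = np.random.rand(256, 256)  # stand-in for a grayscale face image
lsf, hsf = split_spatial_frequencies(face)
```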
Collapse
Affiliation(s)
- Martial Mermillod
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, 38000, Grenoble, France
| | | | - Adeline Lacroix
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, 38000, Grenoble, France
| | - Louise Kauffmann
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, 38000, Grenoble, France
| | - Carole Peyrin
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, 38000, Grenoble, France
| | - Alain Méot
- Université Clermont Auvergne, CNRS, LAPSCO, F-63000 Clermont-Ferrand, France
| | - Nicolas Vermeulen
- Université Catholique de Louvain (UCLouvain), Psychological Sciences Research Institute, Louvain-la-Neuve, Belgium,Fund for Scientific Research (FNRS-FRS), Brussels, Belgium
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LAPSCO, F-63000 Clermont-Ferrand, France
| |
Collapse
|
29
|
Méndez CA, Celeghin A, Diano M, Orsenigo D, Ocak B, Tamietto M. A deep neural network model of the primate superior colliculus for emotion recognition. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210512. [PMID: 36126660 PMCID: PMC9489290 DOI: 10.1098/rstb.2021.0512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 07/18/2022] [Indexed: 12/01/2022] Open
Abstract
Although sensory processing is pivotal to nearly every theory of emotion, the evaluation of the visual input as 'emotional' (e.g. a smile as signalling happiness) has been traditionally assumed to take place in supramodal 'limbic' brain regions. Accordingly, subcortical structures of ancient evolutionary origin that receive direct input from the retina, such as the superior colliculus (SC), are traditionally conceptualized as passive relay centres. However, mounting evidence suggests that the SC is endowed with the necessary infrastructure and computational capabilities for the innate recognition and initial categorization of emotionally salient features from retinal information. Here, we built a neurobiologically inspired convolutional deep neural network (DNN) model that approximates physiological, anatomical and connectional properties of the retino-collicular circuit. This enabled us to characterize and isolate the initial computations and discriminations that the DNN model of the SC can perform on facial expressions, based uniquely on the information it directly receives from the virtual retina. Trained to discriminate facial expressions of basic emotions, our model matches human error patterns and achieves above-chance, yet suboptimal, classification accuracy analogous to that reported in patients with V1 damage, who rely on retino-collicular pathways for non-conscious vision of emotional attributes. When presented with gratings of different spatial frequencies and orientations never 'seen' before, the SC model exhibits spontaneous tuning to low spatial frequencies and reduced orientation discrimination, as can be expected from the prevalence of magnocellular (M) over parvocellular (P) projections. Likewise, face manipulation that biases processing towards the M or P pathway affects expression recognition in the SC model accordingly, an effect that dovetails with variations of activity in the human SC purposely measured with ultra-high field functional magnetic resonance imaging. Lastly, the DNN generates saliency maps and extracts visual features, demonstrating that certain face parts, like the mouth or the eyes, provide higher discriminative information than other parts as a function of emotional expressions like happiness and sadness. The present findings support the contention that the SC possesses the necessary infrastructure to analyse the visual features that define facial emotional stimuli even without additional processing stages in the visual cortex or in 'limbic' areas. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.
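For readers unfamiliar with such models, a minimal sketch of a small feedforward convolutional classifier for basic emotions is given below (PyTorch, grayscale input); this toy network is an illustrative assumption, not the authors' retino-collicular architecture:

```python
import torch
import torch.nn as nn

class TinySCNet(nn.Module):
    """Toy two-layer CNN mapping a grayscale face to emotion logits."""
    def __init__(self, n_emotions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, stride=2, padding=3),  # coarse, large receptive fields
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_emotions)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinySCNet()
logits = model(torch.randn(2, 1, 128, 128))  # two simulated face images
print(logits.shape)  # torch.Size([2, 6])
```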
Collapse
Affiliation(s)
- Carlos Andrés Méndez
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
| | - Alessia Celeghin
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
| | - Matteo Diano
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
| | - Davide Orsenigo
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
| | - Brian Ocak
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Section of Cognitive Neurophysiology and Imaging, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA
| | - Marco Tamietto
- Department of Psychology, University of Torino, Via Verdi 10, Torino 10124, Italy
- Department of Medical and Clinical Psychology, and CoRPS - Center of Research on Psychology in Somatic diseases, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands
| |
Collapse
|
30
|
Dirzyte A, Antanaitis F, Patapas A. Law Enforcement Officers’ Ability to Recognize Emotions: The Role of Personality Traits and Basic Needs’ Satisfaction. Behav Sci (Basel) 2022; 12:bs12100351. [PMID: 36285920 PMCID: PMC9598174 DOI: 10.3390/bs12100351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Revised: 09/10/2022] [Accepted: 09/19/2022] [Indexed: 12/01/2022] Open
Abstract
Background: This study aimed to explore the role of personality traits and basic psychological needs in law enforcement officers' ability to recognize emotions: anger, joy, sadness, fear, surprise, disgust, and neutrality. Analyzing law enforcement officers' emotion recognition and its contributing factors is important, as this field has been under-researched despite increased use of excessive force by officers in many countries. Methods: This study applied the Big Five-2 (BFI-2), the Basic Psychological Needs Satisfaction and Frustration Scale (BPNSFS), and the Karolinska Directed Emotional Faces set of stimuli (KDEF). The data were gathered using an online questionnaire provided directly to law enforcement agencies. A total of 154 law enforcement officers participated in the study; 50.65% were female and 49.35% were male. The mean age was 41.2 years (age range = 22-61). SEM and multiple linear regression methods were used to analyze the data. Results: This study analyzed variables of emotion recognition, personality traits, and needs satisfaction and confirmed that law enforcement officers' personality traits play a significant role in emotion recognition. Respondents' agreeableness significantly predicted increased overall emotion recognition; conscientiousness predicted increased anger recognition; joy recognition was significantly predicted by extraversion, neuroticism, and agreeableness. This study also confirmed that law enforcement officers' basic psychological need satisfaction/frustration plays a significant role in emotion recognition. Respondents' relatedness satisfaction significantly predicted increased overall emotion recognition, fear recognition, joy recognition, and sadness recognition. Relatedness frustration significantly predicted decreased anger recognition, surprise recognition, and neutral face recognition. Furthermore, this study confirmed links between law enforcement officers' personality traits, satisfaction/frustration of basic psychological needs, and emotion recognition, χ2 = 57.924; df = 41; p = 0.042; TLI = 0.929; CFI = 0.956; RMSEA = 0.042 [0.009-0.065]. Discussion: The findings suggest that agreeableness, conscientiousness, extraversion, and neuroticism play an essential role in the satisfaction and frustration of relatedness needs, which subsequently relate to emotion recognition. Due to the relatively small sample size, validity/reliability issues with some instruments, and other limitations, the results of this study should be interpreted with caution.
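A minimal sketch of the multiple-linear-regression step described in the Methods, using statsmodels on simulated data; the variable names are hypothetical stand-ins for the study's measures:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 154  # sample size reported above
df = pd.DataFrame({
    "agreeableness": rng.normal(size=n),
    "conscientiousness": rng.normal(size=n),
    "relatedness_satisfaction": rng.normal(size=n),
})
# Simulated outcome: overall emotion-recognition score.
df["recognition"] = 0.3 * df["agreeableness"] + rng.normal(scale=0.5, size=n)

X = sm.add_constant(df[["agreeableness", "conscientiousness",
                        "relatedness_satisfaction"]])
print(sm.OLS(df["recognition"], X).fit().summary())
```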
Collapse
Affiliation(s)
- Aiste Dirzyte
- Institute of Psychology, Mykolas Romeris University, Ateities 20, LT-08303 Vilnius, Lithuania
| | - Faustas Antanaitis
- Institute of Psychology, Mykolas Romeris University, Ateities 20, LT-08303 Vilnius, Lithuania
| | - Aleksandras Patapas
- Institute of Public Administration, Mykolas Romeris University, Ateities 20, LT-08303 Vilnius, Lithuania
| |
Collapse
|
31
|
The Effect of Mouth-Opening on Recognition of Facial Expressions in the NimStim Set: An Evaluation from Chinese College Students. JOURNAL OF NONVERBAL BEHAVIOR 2022. [DOI: 10.1007/s10919-022-00417-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
32
|
Franěk M, Petružálek J, Šefara D. Facial Expressions and Self-Reported Emotions When Viewing Nature Images. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:10588. [PMID: 36078304 PMCID: PMC9518385 DOI: 10.3390/ijerph191710588] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Revised: 08/16/2022] [Accepted: 08/22/2022] [Indexed: 06/15/2023]
Abstract
Many studies have demonstrated that exposure to simulated natural scenes has positive effects on emotions and reduces stress. In the present study, we investigated emotional facial expressions while participants viewed images of various types of natural environments. Both automated facial expression analysis by iMotions' AFFDEX 8.1 software (iMotions, Copenhagen, Denmark) and self-reported emotions were analyzed. Attractive and unattractive natural images were used, representing either open or closed natural environments. The goal was to further understand the features and characteristics of natural scenes that could positively affect emotional states and to evaluate face-reading technology as a means of measuring such effects. It was predicted that attractive natural scenes would evoke significantly higher levels of positive emotions than unattractive scenes. The results showed generally weak emotional facial expressions while participants observed the images. The facial expression of joy was significantly stronger than the other registered emotions. Contrary to predictions, there was no difference in facial emotions between viewing attractive and unattractive scenes. However, the self-reported emotions evoked by the images showed significantly larger differences between specific categories of images, in accordance with the predictions. The discrepancy between the registered emotional facial expressions and self-reported emotions suggests that participants likely described the images in terms of common stereotypes linked to the beauty of natural environments. This result might be an important finding for further methodological considerations.
Collapse
|
33
|
Poncet F, Leleu A, Rekow D, Damon F, Dzhelyova MP, Schaal B, Durand K, Faivre L, Rossion B, Baudouin JY. A neural marker of rapid discrimination of facial expression in 3.5- and 7-month-old infants. Front Neurosci 2022; 16:901013. [PMID: 36061610 PMCID: PMC9434348 DOI: 10.3389/fnins.2022.901013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 07/29/2022] [Indexed: 01/23/2023] Open
Abstract
Infants’ ability to discriminate facial expressions has been widely explored, but little is known about the rapid and automatic ability to discriminate a given expression against many others in a single experiment. Here we investigated the development of facial expression discrimination in infancy with fast periodic visual stimulation coupled with scalp electroencephalography (EEG). EEG was recorded in eighteen 3.5- and eighteen 7-month-old infants presented with a female face expressing disgust, happiness, or a neutral emotion (in different stimulation sequences) at a base stimulation frequency of 6 Hz. Pictures of the same individual expressing other emotions (either anger, disgust, fear, happiness, sadness, or neutrality, randomly selected and excluding the expression presented at the base frequency) were introduced every six stimuli (at 1 Hz). Frequency-domain analysis revealed an objective (i.e., at the predefined 1-Hz frequency and its harmonics) expression-change brain response in both 3.5- and 7-month-olds, indicating visual discrimination of various expressions from disgust, happiness, and neutrality at these early ages. At 3.5 months, the responses to discrimination from disgust and happiness expressions were located mainly over medial occipital sites, whereas a more lateral topography was found for the response to discrimination from neutrality, suggesting that expression discrimination from an emotionally neutral face relies on different visual cues than discrimination from a disgusted or happy face. Finally, expression discrimination from happiness was associated with reduced activity over posterior areas and an additional response over central frontal scalp regions at 7 months compared with 3.5 months. This result suggests developmental changes in the processing of happiness expressions as compared to negative/neutral ones within this age range.
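A minimal sketch of the frequency-domain logic of fast periodic visual stimulation: the amplitude at the 1-Hz oddball frequency is compared against the noise level in surrounding bins. The data, sampling rate, and bin choices below are illustrative assumptions:

```python
import numpy as np

fs = 512.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 40, 1 / fs)  # one 40-s stimulation sequence
eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(t.size)  # simulated signal

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

target = np.argmin(np.abs(freqs - 1.0))  # bin at the 1-Hz oddball frequency
noise = np.r_[target - 12:target - 1, target + 2:target + 13]  # neighbouring bins
snr = amp[target] / amp[noise].mean()
print(f"SNR at 1 Hz: {snr:.1f}")
```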
Collapse
Affiliation(s)
- Fanny Poncet
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
- Université Grenoble Alpes, Saint-Martin-d’Hères, France
| | - Arnaud Leleu
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
| | - Diane Rekow
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
| | - Fabrice Damon
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
| | | | - Benoist Schaal
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
| | - Karine Durand
- Development of Olfactory Communication and Cognition Laboratory, Centre des Sciences du Goût et de l’Alimentation, CNRS, Université Bourgogne Franche-Comté, INRAE, Institut Agro, Dijon, France
| | - Laurence Faivre
- Inserm UMR 1231 GAD, Genetics of Developmental Disorders, and Centre de Référence Maladies Rares “Anomalies du Développement et Syndromes Malformatifs,” FHU TRANSLAD, CHU Dijon and Université de Bourgogne-Franche Comté, Dijon, France
| | - Bruno Rossion
- Université de Lorraine, CNRS, CRAN–UMR 7039, Nancy, France
- Service de Neurologie, Université de Lorraine, CHRU-Nancy, Nancy, France
| | - Jean-Yves Baudouin
- Laboratoire “Développement, Individu, Processus, Handicap, Éducation”, Département Psychologie du Développement, de l’Éducation et des Vulnérabilités, Institut de Psychologie, Université de Lyon, Université Lumière Lyon 2, Bron, France
| |
Collapse
|
34
|
Boyle A, Johnson A, Ellenbogen M. Intranasal oxytocin alters attention to emotional facial expressions, particularly for males and those with depressive symptoms. Psychoneuroendocrinology 2022; 142:105796. [PMID: 35617742 DOI: 10.1016/j.psyneuen.2022.105796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 05/02/2022] [Accepted: 05/03/2022] [Indexed: 11/18/2022]
Abstract
Intranasal oxytocin (OT) can enhance emotion recognition, perhaps by promoting increased attention to social cues. Some studies indicate that individuals with difficulties processing social information, including those with psychopathology, show more pronounced effects in response to OT. As such, there is interest in the potential therapeutic use of OT in populations with deficits in social cognition. The present study examined the effects of intranasal OT on the processing of facial features and selective attention to emotional facial expressions, as well as whether individual differences in depressive symptom severity predict sensitivity to intranasal OT. In a double-blind placebo-controlled within-subject design, eye tracking was used to measure attention to facial features in an emotional expression appraisal task, and attention to emotional expressions in a free-viewing task with a quadrant of multiple faces. OT facilitated the processing of positive cues, enhancing the maintenance of attention to the mouth region of happy faces and to happy faces within a quadrant, with similar effect sizes, despite the latter effect not being statistically significant. Further, persons with depressive symptoms, and particularly males, were sensitive to OT's effects. For males only, OT, relative to placebo, increased attentional focus to the mouth region of all faces. Individuals with depressive symptoms showed less attentional focus on angry (males only) and sad facial expressions, and more attention to happy faces (particularly for males). Results indicate increased sensitivity to OT in males and persons at risk for depression, with OT administration promoting a positive bias in selective attention to social stimuli.
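A minimal sketch of the kind of eye-tracking measure used here: the proportion of fixation time falling inside an area of interest such as the mouth region. The column names and AOI rectangle are hypothetical:

```python
import pandas as pd

fixations = pd.DataFrame({
    "x": [312, 340, 500, 330],
    "y": [420, 440, 200, 435],
    "duration_ms": [180, 220, 150, 300],
})
mouth_aoi = {"x0": 300, "x1": 360, "y0": 410, "y1": 460}  # pixel bounds, assumed

in_aoi = (fixations["x"].between(mouth_aoi["x0"], mouth_aoi["x1"]) &
          fixations["y"].between(mouth_aoi["y0"], mouth_aoi["y1"]))
dwell = fixations.loc[in_aoi, "duration_ms"].sum() / fixations["duration_ms"].sum()
print(f"Proportion of dwell time on the mouth: {dwell:.2f}")
```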
Collapse
Affiliation(s)
- Ariel Boyle
- Department of Psychology, Concordia University, Canada.
| | - Aaron Johnson
- Department of Psychology, Concordia University, Canada.
| | | |
Collapse
|
35
|
Suslow T, Kersting A. The Relations of Attention to and Clarity of Feelings With Facial Affect Perception. Front Psychol 2022; 13:819902. [PMID: 35874362 PMCID: PMC9298753 DOI: 10.3389/fpsyg.2022.819902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 06/20/2022] [Indexed: 11/13/2022] Open
Abstract
Attention to emotions and emotional clarity are core dimensions of individual differences in emotion awareness. Findings from prior research based on self-report indicate that attention to and recognition of one's own emotions are related to attention to and recognition of other people's emotions. In the present experimental study, we examined the relations of attention to and clarity of emotions with the efficiency of facial affect perception. Moreover, we explored whether attention to and clarity of emotions are linked to negative interpretations of facial expressions. A perception of facial expressions (PFE) task based on schematic faces with neutral, ambiguous, or unambiguous emotional expressions and a gender decision task were administered to healthy individuals along with measures of emotion awareness, state and trait anxiety, depression, and verbal intelligence. Participants had to decide how strongly the faces expressed each of six basic affects. Evaluative ratings and decision latencies were analyzed. Attention to feelings was negatively correlated with evaluative decision latency, whereas clarity of feelings was not related to decision latency in the PFE task. Attention to feelings was positively correlated with the perception of negative affects in ambiguous faces. Attention to feelings and emotional clarity were not related to gender decision latency. According to our results, dispositional attention to feelings is associated with enhanced efficiency of facial affect perception. Habitually paying attention to one's own emotions may facilitate the processing of external emotional information. Preliminary evidence was obtained suggesting a relationship between dispositional attention to feelings and negative interpretations of facial expressions.
Collapse
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
| | - Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
| |
Collapse
|
36
|
Ross P, George E. Are Face Masks a Problem for Emotion Recognition? Not When the Whole Body Is Visible. Front Neurosci 2022; 16:915927. [PMID: 35924222 PMCID: PMC9339646 DOI: 10.3389/fnins.2022.915927] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 06/23/2022] [Indexed: 01/10/2023] Open
Abstract
The rise of the novel COVID-19 virus has made face masks commonplace items around the globe. Recent research found that face masks significantly impair emotion recognition on isolated faces. However, faces are rarely seen in isolation, and the body is also a key cue for emotional portrayal. Here, therefore, we investigated the impact of face masks on emotion recognition when the full body is in view. Stimuli expressing anger, happiness, sadness, and fear were selected from the BEAST stimulus set. Masks were added to these images, and participants were asked to recognize the emotion and give a confidence level for that decision for both the masked and unmasked stimuli. We found that, contrary to some work on faces viewed in isolation, emotion recognition was generally not impaired by face masks when the whole body was present. We did, however, find that when the whole body was present, masks significantly decreased only the recognition of happiness. In contrast to actual performance, confidence levels declined in the Mask condition across all emotional conditions. This research suggests that the impact of masks on emotion recognition may not be as pronounced as previously thought, as long as the whole body is also visible.
Collapse
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham, United Kingdom
| | | |
Collapse
|
37
|
Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. [PMID: 35700944 DOI: 10.1016/j.neuroimage.2022.119374] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 06/03/2022] [Accepted: 06/10/2022] [Indexed: 10/18/2022] Open
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics of how different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses representing different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that encoded the different dimensions of face information. Three significant findings were observed. First, the extraction of facial emotion occurred before that of facial identity and lasted for a long time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, different dimensions of face information exhibited representational stability during different time periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide powerful support for computational models of emotional face perception.
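A minimal sketch of the decoding-plus-RSA pipeline described above, at a single time point with simulated data: pairwise SVM decoding accuracies fill a representational dissimilarity matrix (RDM), which is then correlated with a model RDM. The shapes, condition labels, and model matrix are assumptions:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_cond, n_trials, n_chan = 4, 40, 64
data = rng.normal(size=(n_cond, n_trials, n_chan))  # EEG at one time point

rdm = np.zeros((n_cond, n_cond))
for i, j in combinations(range(n_cond), 2):
    X = np.vstack([data[i], data[j]])
    y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    rdm[i, j] = rdm[j, i] = acc  # decoding accuracy as dissimilarity

model_rdm = np.array([[0, 1, 1, 1], [1, 0, 1, 1],
                      [1, 1, 0, 0.5], [1, 1, 0.5, 0]], float)  # hypothetical model
iu = np.triu_indices(n_cond, k=1)
rho, _ = spearmanr(rdm[iu], model_rdm[iu])
print(f"model fit: rho = {rho:.2f}")
```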
Collapse
Affiliation(s)
- Yiwen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China.
| |
Collapse
|
38
|
Sarauskyte L, Monciunskaite R, Griksiene R. The role of sex and emotion on emotion perception in artificial faces: An ERP study. Brain Cogn 2022; 159:105860. [PMID: 35339916 DOI: 10.1016/j.bandc.2022.105860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 02/08/2022] [Accepted: 03/10/2022] [Indexed: 11/17/2022]
Abstract
Sex has a significant impact on the perception of emotional expressions. However, it remains unclear whether sex influences the perception of emotions in artificial faces, which are becoming popular in emotion research. We used an emotion recognition task with FaceGen faces portraying six basic emotions to investigate the effects of sex and emotion on behavioural and electrophysiological parameters. Seventy-one participants performed the task while EEG was recorded. The recognition of sadness was the poorest; however, females recognized sadness better than males. ERP results indicated that fear, disgust, and anger evoked larger late positive potential amplitudes over the left parietal region compared with neutral expressions. Females demonstrated higher values of global field power than males. The interaction between sex and emotion on ERPs was not significant. The results of our study may be valuable for future therapies and research, as they emphasize the possibly distinct processing of emotions and potential sex differences in the recognition of emotional expressions in FaceGen faces.
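Global field power, reported above, has a one-line definition: the standard deviation across electrodes at each time point. A minimal sketch on a simulated channels-by-time ERP array (data and sampling rate assumed):

```python
import numpy as np

n_channels, n_times = 64, 500
erp = np.random.randn(n_channels, n_times)  # stand-in for an averaged ERP

gfp = erp.std(axis=0)            # one GFP value per time point
peak_ms = int(gfp.argmax()) * 2  # 2-ms bins, assuming a 500-Hz sampling rate
print(f"GFP peaks at ~{peak_ms} ms")
```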
Collapse
Affiliation(s)
- Livija Sarauskyte
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania.
| | - Rasa Monciunskaite
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
| | - Ramune Griksiene
- Vilnius University, Life Sciences Center, Institute of Biosciences, Vilnius, Lithuania
| |
Collapse
|
39
|
Schiano Lomoriello A, Sessa P, Doro M, Konvalinka I. Shared Attention Amplifies the Neural Processing of Emotional Faces. J Cogn Neurosci 2022; 34:917-932. [PMID: 35258571 DOI: 10.1162/jocn_a_01841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Sharing an experience, without communicating, affects people's subjective perception of the experience, often by intensifying it. We investigated the neural mechanisms underlying shared attention by implementing an EEG study where participants attended to and rated the intensity of emotional faces, simultaneously or independently. Participants performed the task in three experimental conditions: (a) alone; (b) simultaneously next to each other in pairs, without receiving feedback of the other's responses (shared without feedback); and (c) simultaneously while receiving the feedback (shared with feedback). We focused on two face-sensitive ERP components: The amplitude of the N170 was greater in the "shared with feedback" condition compared to the alone condition, reflecting a top-down effect of shared attention on the structural encoding of faces, whereas the EPN was greater in both shared context conditions compared to the alone condition, reflecting an enhanced attention allocation in the processing of emotional content of faces, modulated by the social context. Taken together, these results suggest that shared attention amplifies the neural processing of faces, regardless of the valence of facial expressions.
Collapse
|
40
|
Abstract
With the widespread adoption of masks, there is a need to understand how facial obstruction affects emotion recognition. We asked 120 participants to identify emotions from faces with and without masks. We also examined whether recognition performance was related to autistic traits and personality. Masks impacted recognition of expressions with diagnostic lower-face features the most and those with diagnostic upper-face features the least. Persons with higher autistic traits were worse at identifying unmasked expressions, while persons with lower extraversion and higher agreeableness were better at recognizing masked expressions. These results show that different features play different roles in emotion recognition and suggest that obscuring features affects social communication differently as a function of autistic traits and personality.
Collapse
Affiliation(s)
| | | | | | - Jelena Ristic
- Department of Psychology, McGill University, Montreal, Canada
| |
Collapse
|
41
|
Wang Z, Goerlich KS, Xu P, Luo Y, Aleman A. Perceptive and Affective Impairments In Emotive Eye-Region Processing in Alexithymia. Soc Cogn Affect Neurosci 2022; 17:912-922. [PMID: 35277722 PMCID: PMC9527467 DOI: 10.1093/scan/nsac013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 01/13/2022] [Accepted: 02/20/2022] [Indexed: 12/02/2022] Open
Abstract
Alexithymia is characterized by impairments in emotion processing, frequently linked to facial expressions of emotion. The eye region conveys information necessary for emotion processing. It has been demonstrated that alexithymia is associated with reduced attention to the eyes, but little is known about the cognitive and electrophysiological mechanisms underlying emotive eye-region processing in alexithymia. Here, we recorded behavioral and electrophysiological responses of individuals with alexithymia (ALEX; n = 25) and individuals without alexithymia (NonALEX; n = 23) while they viewed intact and eyeless faces with angry and sad expressions during a dual-target rapid serial visual presentation task. Results showed differences in eye-region focus and differential N1 responses between intact and eyeless faces for anger and sadness in NonALEX, but not in ALEX, suggesting deficient perceptual processing of the eye region in alexithymia. Reduced eye-region focus and smaller differences in frontal alpha asymmetry in response to sadness between intact and eyeless faces were observed in ALEX compared with NonALEX, indicative of impaired affective processing of the eye region in alexithymia. These findings highlight perceptual and affective abnormalities of emotive eye-region processing in alexithymia. Our results contribute to understanding the neuropsychopathology of alexithymia and alexithymia-related disorders.
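Frontal alpha asymmetry, used above as an affective index, is commonly computed as the log difference in alpha-band power between homologous frontal electrodes. A minimal sketch with simulated signals; the electrode pair (F4/F3), band limits, and data are assumptions:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 13.0)):
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 250.0
f3 = np.random.randn(int(10 * fs))  # 10 s of simulated EEG at left-frontal F3
f4 = np.random.randn(int(10 * fs))  # 10 s at right-frontal F4
faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"FAA = {faa:.3f}")
```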
Collapse
Affiliation(s)
- Zhihao Wang
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Katharina S Goerlich
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Pengfei Xu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (BNU), Faculty of Psychology, Beijing Normal University, Beijing 100875, China
- Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen 518106, China
| | - Yuejia Luo
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China
- College of Teacher Education, Qilu Normal University, Jinan 250200, China
- The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming 650031, China
| | - André Aleman
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging Center, Center for Brain Disorders and Cognitive Sciences, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen 518060, China
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| |
Collapse
|
42
|
Wincenciak J, Palumbo L, Epihova G, Barraclough NE, Jellema T. Are adaptation aftereffects for facial emotional expressions affected by prior knowledge about the emotion? Cogn Emot 2022; 36:602-615. [PMID: 35094648 DOI: 10.1080/02699931.2022.2031907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Accurate perception of the emotional signals conveyed by others is crucial for successful social interaction. Such perception is influenced not only by sensory input but also by the knowledge we have about others' emotions. This study addresses whether knowing that another's emotional state is congruent or incongruent with their displayed emotional expression ("genuine" and "fake", respectively) affects the neural mechanisms underpinning the perception of their facial emotional expressions. We used a visual adaptation paradigm to investigate this question in three experiments employing increasing adaptation durations. The adapting stimuli consisted of photographs of emotional facial expressions of joy and anger, purported to reflect (in-)congruency between felt and expressed emotion, displayed by professional actors. A validity-checking procedure ensured participants had the correct knowledge about the (in-)congruency. Significantly smaller adaptation aftereffects were obtained when participants knew that the displayed expression was incongruent with the felt emotion, across all tested adaptation periods. This study shows that knowledge about the congruency between felt and expressed emotion modulates facial expression aftereffects. We argue that this indicates that the neural substrate responsible for perceiving facial expressions of emotion incorporates the presumed felt emotion underpinning the expression.
Collapse
Affiliation(s)
| | - Letizia Palumbo
- Department of Psychology, Liverpool Hope University, Liverpool, UK
| | | | | | | |
Collapse
|
43
|
Lacroix A, Dutheil F, Logemann A, Cserjesi R, Peyrin C, Biro B, Gomot M, Mermillod M. Flexibility in autism during unpredictable shifts of socio-emotional stimuli: Investigation of group and sex differences. AUTISM : THE INTERNATIONAL JOURNAL OF RESEARCH AND PRACTICE 2021; 26:1681-1697. [PMID: 34957880 DOI: 10.1177/13623613211062776] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
LAY ABSTRACT: Flexibility difficulties in autism might be particularly common in complex situations, where shifts (i.e., switches of attentional resources or strategy according to the situation) are unpredictable, implicit (i.e., not guided by explicit rules), and the stimuli are complex. We analyzed the data of 101 autistic and 145 non-autistic adults, without intellectual disability, on two flexibility tasks performed online. The first task involved unpredictable and non-explicit shifts of complex socio-emotional stimuli, whereas the second involved predictable and explicit shifts of character stimuli. Considering the discrepancies between laboratory results and the real-life flexibility-related challenges faced by autistic individuals, we need to determine which factors could be of particular importance in flexibility difficulties. We found that the switch cost (i.e., the difference between shift and non-shift conditions) was larger for autistic than for non-autistic participants on the complex flexibility task with unpredictable and non-explicit shifts of socio-emotional stimuli, whereas this was not the case when shifts were predictable, explicit, and involved less complex stimuli. We also highlight sex differences, suggesting that autistic females have better social skills than autistic males and a specific cognitive profile that could contribute to social camouflaging. These findings help us understand which factors could influence flexibility difficulties in autism and are important for designing future studies. They also add to the literature on sex differences in autism, which points to better social skills, executive function, and camouflaging in autistic females.
Collapse
Affiliation(s)
- Adeline Lacroix
- University of Grenoble Alpes, France; University of Savoie Mont Blanc, France
| | | | | | | | - Carole Peyrin
- University of Grenoble Alpes, France; University of Savoie Mont Blanc, France
| | - Brigi Biro
- Eötvös Loránd University (ELTE), Hungary
| | | | - Martial Mermillod
- University of Grenoble Alpes, France; University of Savoie Mont Blanc, France
| |
Collapse
|
44
|
Lacroix A, Nalborczyk L, Dutheil F, Kovarski K, Chokron S, Garrido M, Gomot M, Mermillod M. High spatial frequency filtered primes hastens happy faces categorization in autistic adults. Brain Cogn 2021; 155:105811. [PMID: 34737127 DOI: 10.1016/j.bandc.2021.105811] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 09/29/2021] [Accepted: 10/06/2021] [Indexed: 10/20/2022]
Abstract
Coarse information in a visual stimulus is conveyed by Low Spatial Frequencies (LSF) and is thought to be rapidly extracted to generate predictions. This may guide fast recognition, with the subsequent integration of fine information conveyed by High Spatial Frequencies (HSF). In autism, emotional face recognition is challenging and might be related to alterations in LSF predictive processes. We analyzed the data of 27 autistic and 34 non-autistic (NA) adults on an emotional Stroop task (i.e., an emotional face paired with a congruent or incongruent emotional word) with spatially filtered primes (HSF vs. LSF). We hypothesized that LSF primes would generate predictions leading to faster categorization of the target face compared to HSF primes in the NA group but not in autism. Surprisingly, HSF primes led to faster categorization than LSF primes in both groups. Moreover, the advantage of HSF over LSF primes was stronger for angry than happy faces in NA participants, but stronger for happy than angry faces in autistic participants. Drift diffusion modelling confirmed the HSF advantage and showed a longer non-decision time (e.g., encoding) in autism. Although the hypothesis of impaired LSF predictive processing in autism was not corroborated, our analyses suggest low-level processing specificities in autism.
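LSF/HSF primes of this kind are typically produced by filtering images in the Fourier domain. Below is a minimal Python sketch of one plausible implementation using a Gaussian radial filter; the cutoff values, the Gaussian profile, and the random stand-in image are assumptions for illustration, not the authors' exact filtering pipeline:

```python
import numpy as np

def sf_filter(img, cutoff_cpd, pix_per_deg, mode="low"):
    """Gaussian low-/high-pass filter in the Fourier domain.
    cutoff_cpd is in cycles per degree; the Gaussian profile and
    the cutoffs used below are illustrative choices."""
    fy = np.fft.fftfreq(img.shape[0])[:, None] * pix_per_deg   # cycles/deg
    fx = np.fft.fftfreq(img.shape[1])[None, :] * pix_per_deg
    radius = np.hypot(fy, fx)
    low = np.exp(-(radius / cutoff_cpd) ** 2)
    gain = low if mode == "low" else 1.0 - low
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

rng = np.random.default_rng(0)
face = rng.random((256, 256))                  # stand-in for a face photograph
lsf_prime = sf_filter(face, 1.0, 32, "low")    # coarse content only
hsf_prime = sf_filter(face, 6.0, 32, "high")   # fine content only
```

A single radial gain function in frequency space is enough to separate the coarse (LSF) from the fine (HSF) content of a prime image.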
Collapse
Affiliation(s)
- Adeline Lacroix
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France.
| | - Ladislas Nalborczyk
- Aix Marseille Univ, CNRS, LPC, Marseille, France; Aix Marseille Univ, CNRS, LNC, Marseille, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, CHU Clermont-Ferrand, WittyFit, F-63000 Clermont-Ferrand, France
| | - Klara Kovarski
- Hôpital Fondation Ophtalmologique A. de Rothschild, Paris, France; Université de Paris, INCC UMR 8002, CNRS, F-75006 Paris, France
| | - Sylvie Chokron
- Hôpital Fondation Ophtalmologique A. de Rothschild, Paris, France; Université de Paris, INCC UMR 8002, CNRS, F-75006 Paris, France
| | - Marta Garrido
- Cognitive Neuroscience and Computational Psychiatry Lab, Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Australia
| | - Marie Gomot
- UMR 1253 iBrain, Université de Tours, Inserm, Tours, France
| | - Martial Mermillod
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
| |
Collapse
|
45
|
Matt S, Dzhelyova M, Maillard L, Lighezzolo-Alnot J, Rossion B, Caharel S. The rapid and automatic categorization of facial expression changes in highly variable natural images. Cortex 2021; 144:168-184. [PMID: 34666300 DOI: 10.1016/j.cortex.2021.08.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 07/08/2021] [Accepted: 08/09/2021] [Indexed: 01/23/2023]
Abstract
Emotional expressions are quickly and automatically read from human faces under natural viewing conditions. Yet, categorization of facial expressions is typically measured in experimental contexts with homogeneous sets of face stimuli. Here we evaluated how the six basic facial emotions (Fear, Disgust, Happiness, Anger, Surprise, or Sadness) can be rapidly and automatically categorized from faces varying in head orientation, lighting condition, identity, gender, age, ethnic origin, and background context. High-density electroencephalography was recorded in 17 participants viewing 50 s sequences of naturally variable images of neutral-expression faces alternating at a 6 Hz rate. Every fifth stimulus (1.2 Hz) was a variable natural image of one of the six basic expressions. Despite the wide physical variability across images, a significant F/5 = 1.2 Hz response and its harmonics (e.g., 2F/5 = 2.4 Hz, etc.) were observed for all expression changes at the group level and in every individual participant. Facial expression categorization responses were found mainly over occipito-temporal sites, with distinct hemispheric lateralization and cortical topographies across expressions. Specifically, a stronger response was found for Sadness categorization, especially over the left hemisphere, compared to Fear and Happiness, together with a right-hemispheric dominance for the categorization of Fearful faces. Importantly, these differences were specific to upright faces, ruling out the contribution of low-level visual cues. Overall, these observations point to robust, rapid, and automatic facial expression categorization processes in the human brain.
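The frequency-tagging logic (faces at a 6 Hz base rate, expression changes at F/5 = 1.2 Hz and its harmonics) can be made concrete with a toy spectrum. A sketch under stated assumptions: the EEG is simulated, and the signal-to-noise ratio (SNR) is defined as the amplitude at the target bin divided by the mean of surrounding bins, a common convention in this literature rather than the authors' exact pipeline:

```python
import numpy as np

fs, dur = 512, 50                       # sampling rate (Hz), sequence length (s)
t = np.arange(fs * dur) / fs
# Toy EEG: a 6 Hz base response, a 1.2 Hz expression-change response, plus noise
eeg = (0.8 * np.sin(2 * np.pi * 6.0 * t)
       + 0.3 * np.sin(2 * np.pi * 1.2 * t)
       + np.random.default_rng(1).normal(0.0, 1.0, t.size))

amp = np.abs(np.fft.rfft(eeg)) / t.size          # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)        # 0.02 Hz resolution here

def snr(target_hz, n_neigh=20, skip=1):
    """Amplitude at the target bin over the mean of n_neigh bins on each
    side, skipping the bins immediately adjacent to the target."""
    i = int(np.argmin(np.abs(freqs - target_hz)))
    neigh = np.r_[amp[i - skip - n_neigh:i - skip],
                  amp[i + skip + 1:i + skip + 1 + n_neigh]]
    return amp[i] / neigh.mean()

# Oddball harmonics n * 1.2 Hz, excluding harmonics of the 6 Hz base rate
harmonics = [n * 1.2 for n in range(1, 14) if n % 5]
print({round(h, 1): round(snr(h), 1) for h in harmonics})
```

Summing or averaging SNR across the oddball harmonics (while excluding the base-rate harmonics at 6, 12, ... Hz) yields a single expression-categorization index per participant.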
Collapse
Affiliation(s)
- Stéphanie Matt
- Université de Lorraine, 2LPN, Nancy, France; Université de Lorraine, Laboratoire INTERPSY, Nancy, France.
| | - Milena Dzhelyova
- Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium.
| | - Louis Maillard
- Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France.
| | | | - Bruno Rossion
- Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium; Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France.
| | - Stéphanie Caharel
- Université de Lorraine, 2LPN, Nancy, France; Institut Universitaire de France, Paris, France.
| |
Collapse
|
46
|
Barrick EM, Thornton MA, Tamir DI. Mask exposure during COVID-19 changes emotional face processing. PLoS One 2021; 16:e0258470. [PMID: 34637454 PMCID: PMC8509869 DOI: 10.1371/journal.pone.0258470] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 09/28/2021] [Indexed: 11/19/2022] Open
Abstract
Faces are one of the key ways that we obtain social information about others. They allow people to identify individuals, understand conversational cues, and make judgements about others' mental states. When the COVID-19 pandemic hit the United States, widespread mask-wearing practices were implemented, causing a shift in the way Americans typically interact. This introduction of masks into social exchanges posed a potential challenge: how would people make these important inferences about others when a large source of information was no longer available? We conducted two studies investigating the impact of mask exposure on emotion perception. In particular, we measured how participants used facial landmarks (visual cues) and the expressed valence and arousal (affective cues) to make similarity judgements about pairs of emotional faces. Study 1 found that in August 2020, participants with higher levels of mask exposure relied more on cues from the eyes when judging emotion similarity than participants with less mask exposure. Study 2 measured participants' emotion perception in both April and September 2020 (before and after widespread mask adoption) in the same group of participants, to examine changes in the use of facial cues over time. Results revealed an overall increase in the use of visual cues from April to September. Further, as mask exposure increased, people with the most social interaction showed the largest increase in the use of visual facial cues. These results provide evidence for a shift in how people process faces: the more people interact with others who are wearing masks, the more they learn to focus on visual cues from the eye region of the face.
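The cue-use analysis described here can be thought of as regressing similarity judgements on candidate cue distances. A minimal Python sketch with simulated data (the predictors, weights, and noise level are invented for illustration, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 120
# Hypothetical predictors per face pair: distance between eye-region
# landmark configurations, and distance in valence/arousal space
eye_dist = rng.random(n_pairs)
affect_dist = rng.random(n_pairs)
# Simulated dissimilarity ratings that lean on the eye cue
ratings = 0.7 * eye_dist + 0.3 * affect_dist + rng.normal(0.0, 0.05, n_pairs)

X = np.column_stack([eye_dist, affect_dist])
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(f"estimated cue weights -> eyes: {weights[0]:.2f}, affect: {weights[1]:.2f}")
```

Comparing the fitted cue weights between high- and low-mask-exposure groups would then index the shift toward eye-region cues.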
Collapse
Affiliation(s)
- Elyssa M. Barrick
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
| | - Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States of America
| | - Diana I. Tamir
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
| |
Collapse
|
47
|
Entzmann L, Guyader N, Kauffmann L, Lenouvel J, Charles C, Peyrin C, Vuillaume R, Mermillod M. The Role of Emotional Content and Perceptual Saliency During the Programming of Saccades Toward Faces. Cogn Sci 2021; 45:e13042. [PMID: 34606110 DOI: 10.1111/cogs.13042] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 06/22/2021] [Accepted: 08/10/2021] [Indexed: 11/27/2022]
Abstract
Previous studies have shown that the human visual system can detect a face and elicit a saccadic eye movement toward it very efficiently compared with other categories of visual stimuli. In the first experiment, we tested the influence of facial expressions on fast face detection using a saccadic choice task. Face-vehicle pairs were presented simultaneously, and participants were asked to saccade toward the target (the face or the vehicle). We observed that saccades toward faces were initiated faster, and more often in the correct direction, than saccades toward vehicles, regardless of the facial expression displayed (happy, fearful, or neutral). We also observed that saccade endpoints on face images were lower when the face was happy and higher when it was neutral. In the second experiment, we explicitly tested the detection of facial expressions. We used a saccadic choice task with emotional-neutral pairs of faces, and participants were asked to saccade toward either the emotional (happy or fearful) or the neutral face. Participants were faster when asked to saccade toward the emotional face. They also made fewer errors, especially when the emotional face was happy. Using computational modeling, we showed that this happy-face advantage can, at least partly, be explained by perceptual factors. Saccade endpoints were also lower when the target was happy than when it was fearful. Overall, we suggest that there is no automatic prioritization of emotional faces, at least for saccades with short latencies, but that salient local face features can automatically attract attention.
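A saccadic choice task needs per-trial saccade latency and direction. A toy Python detector based on a simple velocity threshold (the threshold, sampling rate, and gaze trace are assumptions for illustration, not the authors' algorithm):

```python
import numpy as np

def first_saccade(x, fs=1000.0, vel_thresh=30.0):
    """Latency (ms) and direction of the first saccade in a horizontal
    gaze trace (degrees), via a simple velocity threshold in deg/s."""
    vel = np.gradient(x) * fs
    onset = int(np.argmax(np.abs(vel) > vel_thresh))
    direction = "right" if vel[onset] > 0 else "left"
    return onset / fs * 1000.0, direction

# Toy trace: ~180 ms of fixation, then a fast 8-degree rightward saccade
x = np.concatenate([np.zeros(180), np.linspace(0.0, 8.0, 40), np.full(80, 8.0)])
latency, direction = first_saccade(x)
print(f"saccade at {latency:.0f} ms toward the {direction}")
```

Comparing the detected direction against the instructed target side gives the direction-error rate; the onset gives the saccadic reaction time.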
Collapse
Affiliation(s)
- Léa Entzmann
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc; GIPSA-lab, Université Grenoble Alpes, CNRS, Grenoble INP
| | | | - Louise Kauffmann
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc
| | | | - Clémence Charles
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc
| | - Carole Peyrin
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc
| | | | | |
Collapse
|
48
|
Hudson A, Durston AJ, McCrackin SD, Itier RJ. Emotion, Gender and Gaze Discrimination Tasks do not Differentially Impact the Neural Processing of Angry or Happy Facial Expressions-a Mass Univariate ERP Analysis. Brain Topogr 2021; 34:813-833. [PMID: 34596796 DOI: 10.1007/s10548-021-00873-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 09/20/2021] [Indexed: 10/20/2022]
Abstract
Facial expression processing is a critical component of social cognition, yet whether it is influenced by task demands at the neural level remains controversial. Past ERP studies using classic statistical analyses, which are known to increase both Type I and Type II errors, have yielded mixed results; Mass Univariate Statistics (MUS) control these errors better. However, open-access MUS toolboxes can rest on different fundamental statistics, which may lead to inconsistent results. Here, we compared the output of two MUS toolboxes, LIMO and FMUT, on the same data, recorded during the processing of angry and happy facial expressions investigated under three tasks in a within-subjects design. Both toolboxes revealed main effects of emotion during the N170 time window and main effects of task at later time points typically associated with the LPP component. Neither toolbox yielded an interaction between the two factors at the group level, nor at the individual level in LIMO, confirming that the neural processing of these two facial expressions is largely independent of task demands. Behavioural data revealed main effects of task on reaction time and accuracy, but no influence of expression and no interaction between the two. Expression processing and task demands are discussed in the context of the consistencies and discrepancies between the two toolboxes and the existing literature.
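In spirit, a mass univariate ERP analysis runs a test at every electrode and time point and then corrects for the resulting multiple comparisons. A minimal Python sketch using point-wise paired t-tests with FDR correction on simulated data (LIMO and FMUT themselves use more sophisticated bootstrap- and cluster-based procedures):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subj, n_chan, n_time = 24, 64, 300
angry = rng.normal(0.0, 1.0, (n_subj, n_chan, n_time))
happy = rng.normal(0.0, 1.0, (n_subj, n_chan, n_time))
happy[:, 30:36, 140:170] += 0.8   # inject a toy "emotion effect" around the N170

# Paired t-test at every electrode/time point, then FDR correction
# (stats.false_discovery_control requires SciPy >= 1.11)
t_vals, p_vals = stats.ttest_rel(angry, happy, axis=0)
p_adj = stats.false_discovery_control(p_vals.ravel()).reshape(p_vals.shape)
sig = p_adj < 0.05
print(f"significant electrode/time points: {sig.sum()} of {sig.size}")
```

The choice of correction (FDR, cluster-based permutation, bootstrap maximum statistics) is exactly where toolboxes can diverge, which is the comparison at stake in this study.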
Collapse
Affiliation(s)
- Anna Hudson
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
| | - Amie J Durston
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
| | - Sarah D McCrackin
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
| | - Roxane J Itier
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada.
| |
Collapse
|
49
|
Carlisi CO, Reed K, Helmink FGL, Lachlan R, Cosker DP, Viding E, Mareschal I. Using genetic algorithms to uncover individual differences in how humans represent facial emotion. ROYAL SOCIETY OPEN SCIENCE 2021; 8:202251. [PMID: 34659775 PMCID: PMC8511778 DOI: 10.1098/rsos.202251] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 09/17/2021] [Indexed: 06/13/2023]
Abstract
Emotional facial expressions critically impact social interactions and cognition. However, emotion research to date has generally relied on the assumption that people represent categorical emotions in the same way, using standardized stimulus sets and overlooking important individual differences. To address this problem, we developed and tested a task that uses genetic algorithms to derive assumption-free, participant-generated emotional expressions. One hundred and five participants generated subjective representations of happy, angry, fearful, and sad faces. Population-level consistency was observed for happy faces, but fearful and sad faces showed a high degree of variability. High test-retest reliability was observed across all emotions. A separate group of 108 individuals accurately identified the happy and angry faces generated in the first study, while fearful and sad faces were commonly misidentified. These findings are an important first step towards understanding individual differences in emotion representation, with the potential to reconceptualize the way we study atypical emotion processing in future research.
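A genetic algorithm of the kind described evolves a population of candidate faces toward a participant's internal template, using their ratings as the fitness function. A minimal Python sketch in which similarity to a hidden target vector stands in for participant ratings; the population size, mutation rate, and crossover scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n_params, pop_size, n_gens = 20, 30, 40
target = rng.random(n_params)   # hidden stand-in for the participant's template

def fitness(face):
    # In the real task a participant rates each candidate face;
    # here similarity to the hidden target substitutes for ratings
    return -np.linalg.norm(face - target)

pop = rng.random((pop_size, n_params))
for _ in range(n_gens):
    scores = np.array([fitness(f) for f in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
    mates_a = parents[rng.integers(len(parents), size=pop_size)]
    mates_b = parents[rng.integers(len(parents), size=pop_size)]
    mask = rng.random((pop_size, n_params)) < 0.5               # uniform crossover
    children = np.where(mask, mates_a, mates_b)
    children += rng.normal(0.0, 0.05, children.shape)           # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax([fitness(f) for f in pop])]
print(f"distance to hidden template: {np.linalg.norm(best - target):.3f}")
```

Because no experimenter-defined exemplar constrains the search, the converged parameter vector reflects the individual's own representation of the emotion.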
Collapse
Affiliation(s)
- Christina O. Carlisi
- Division of Psychology and Language Sciences, Developmental Risk and Resilience Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
| | - Kyle Reed
- Department of Computer Science, University of Bath, 1 West, Claverton Down, Bath BA2 7AY, UK
| | - Fleur G. L. Helmink
- Erasmus University Medical Center, s-Gravendijkwal 230, Rotterdam 3015 CE, The Netherlands
| | - Robert Lachlan
- Department of Psychology, Royal Holloway University of London, Wolfson Building, Egham TW20 0EX, UK
| | - Darren P. Cosker
- Department of Computer Science, University of Bath, 1 West, Claverton Down, Bath BA2 7AY, UK
| | - Essi Viding
- Division of Psychology and Language Sciences, Developmental Risk and Resilience Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
| | - Isabelle Mareschal
- School of Biological and Chemical Sciences, Department of Psychology, Queen Mary University of London, G. E. Fogg Building, Mile End Road, London E1 4DQ, UK
| |
Collapse
|
50
|
Pazhoohi F, Forby L, Kingstone A. Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS One 2021; 16:e0257740. [PMID: 34591895 PMCID: PMC8483373 DOI: 10.1371/journal.pone.0257740] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Accepted: 09/08/2021] [Indexed: 11/25/2022] Open
Abstract
Facial expressions, and the ability to recognize them, evolved in humans to communicate information to one another. Face masks are worn by health professionals to prevent the transmission of airborne infections. As part of the social distancing efforts related to COVID-19, mask wearing has been practiced globally, and such practice might influence the communication of affective information among humans. Previous research suggests that masks disrupt the recognition of some expressions (e.g., fear, sadness, or neutrality) and lower confidence in their identification. To extend this research, in the current study we tested a larger and more diverse sample of individuals and also investigated the effect of masks on the perceived intensity of expressions. Moreover, for the first time in the literature, we examined these questions in individuals with autistic traits. Specifically, across three experiments using different populations (college students and the general population) and the 10-item Autism Spectrum Quotient (AQ-10; lower and higher scorers), we tested the effect of face masks on the recognition of anger, disgust, fear, happiness, sadness, and neutrality. Results showed that the ability to identify all facial expressions decreased when faces were masked, a finding observed across all three studies that contradicts previous research on fearful, sad, and neutral expressions. Participants were also less confident in their judgements for all emotions, supporting previous research, and they perceived emotions as less expressive in the masked than in the unmasked condition, a finding novel to the literature. A further novel finding was that participants with higher AQ-10 scores were less accurate and less confident overall in facial expression recognition, and perceived expressions as less intense. Our findings reveal that wearing face masks decreases facial expression recognition accuracy, confidence in expression identification, and the perceived intensity of all expressions, affecting high-scoring AQ-10 individuals more than low-scoring individuals.
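At the analysis level, the core comparison is recognition accuracy for masked versus unmasked faces, split by AQ-10 group. A minimal Python sketch with simulated trial data (the group split, effect sizes, and trial counts are invented for illustration, not the study's results):

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 2000
aq_high = rng.random(n_trials) < 0.5            # hypothetical AQ-10 split
masked = np.tile([True, False], n_trials // 2)
# Simulated accuracy: lower when masked, lower still for high AQ-10 scorers
p_correct = 0.85 - 0.15 * masked - 0.05 * aq_high - 0.05 * (masked & aq_high)
correct = rng.random(n_trials) < p_correct

for group, label in [(~aq_high, "low AQ-10"), (aq_high, "high AQ-10")]:
    acc_unmasked = correct[group & ~masked].mean()
    acc_masked = correct[group & masked].mean()
    print(f"{label}: unmasked {acc_unmasked:.2f} vs masked {acc_masked:.2f}")
```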
Collapse
Affiliation(s)
- Farid Pazhoohi
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Leilani Forby
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|