1. Tholl S, Sojer CA, Schmidt SNL, Mier D. How to elicit a negative bias? Manipulating contrast and saturation with the facial emotion salience task. Front Psychol 2024; 15:1284595. PMID: 39268387; PMCID: PMC11390599; DOI: 10.3389/fpsyg.2024.1284595.
Abstract
Introduction: Emotion recognition impairments and a tendency to misclassify neutral faces as negative are common in schizophrenia. A possible explanation for these deficits is aberrant salience attribution. To explore the possibility of salience-driven emotion recognition deficits, we implemented a novel facial emotion salience task (FEST). Methods: Sixty-six healthy participants with variations in psychometric schizotypy completed the FEST. In the FEST, we manipulated the physical salience (FEST-1: contrast, FEST-2: saturation) of emotionally salient (positive, i.e., happy, and negative, i.e., fearful) and non-salient (neutral) facial expressions. Results: When salience was high (increased contrast), participants recognized negative facial expressions faster, whereas neutral faces were recognized more slowly and were more frequently misclassified as negative. When salience was low (decreased saturation), positive expressions were recognized more slowly. These measures were not associated with schizotypy in our sample. Discussion: Our findings show that the match between physical and emotional salience influences emotion recognition and suggest that the FEST is suitable for simulating aberrant salience processing during emotion recognition in healthy participants.
Affiliation(s)
- Sarah Tholl
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Daniela Mier
- Department of Psychology, University of Konstanz, Konstanz, Germany
2. Buehler SK, Lowther M, Lukow PB, Kirk PA, Pike AC, Yamamori Y, Chavanne AV, Gormley S, Goble T, Tuominen EW, Aylward J, McCloud T, Rodriguez-Sanchez J, Robinson OJ. Independent replications reveal anterior and posterior cingulate cortex activation underlying state anxiety-attenuated face encoding. Commun Psychol 2024; 2:80. PMID: 39184223; PMCID: PMC11343718; DOI: 10.1038/s44271-024-00128-y.
Abstract
Anxiety involves the anticipation of aversive outcomes and can impair neurocognitive processes, such as the ability to recall faces encoded during the anxious state. It is important to precisely delineate and determine the replicability of these effects using causal state-anxiety inductions in the general population. This study therefore aimed to replicate prior research on the distinct impacts of threat-of-shock-induced anxiety on the encoding and recognition stages of emotional face processing in a large asymptomatic sample (n = 92). We successfully replicated previous results demonstrating impaired recognition of faces encoded under threat of shock. This was supported by a mega-analysis across three independent studies using the same paradigm (n = 211). Underlying this, a whole-brain fMRI analysis revealed enhanced activation in the posterior cingulate cortex (PCC), alongside previously seen activity in the anterior cingulate cortex (ACC), when combined in a mega-analysis with the fMRI findings we aimed to replicate. We further replicated hippocampus activation when the retrieval and encoding states were congruent. Our results support the notion that state anxiety disrupts face recognition, potentially because the attentional demands of anxious arousal compete with the processing of affective stimuli during encoding, and suggest that regions of the cingulate cortex play pivotal roles in this effect.
Affiliation(s)
- Peter A. Kirk
- National Institute of Mental Health, Washington, DC USA
3. Schmuck J, Voltz E, Gibbons H. You're Beautiful When You Smile: Event-Related Brain Potential (ERP) Evidence of Early Opposite-Gender Bias in Happy Faces. Brain Sci 2024; 14:739. PMID: 39199434; PMCID: PMC11353154; DOI: 10.3390/brainsci14080739.
Abstract
Studies of social cognition have shown gender differences in human face processing. One interesting finding is the enhanced processing of opposite-gender faces at different time stages, as revealed by event-related brain potentials. Crucially, from an evolutionary perspective, such a bias might interact with the emotional expression of the face. To investigate this, 100 participants (50 female, 50 male) completed an expression-detection task while their EEG was recorded. In three blocks, fearful, happy and neutral faces (female and male) were randomly presented, with participants instructed to respond to only one predefined target expression in each block. Using linear mixed models, we observed both faster reaction times and larger P1 and late positive potential (LPP) amplitudes for women compared to men, supporting a generally greater female interest in faces. Most interestingly, the analysis revealed an opposite-gender bias at P1 for happy target faces. This suggests that participants' attentional templates may include more opposite-gender facial features when they selectively attend to happy faces. While the N170 was influenced by neither face gender nor participant gender, the LPP was modulated by face gender and by specific combinations of target status, face gender and expression, which we interpret in the context of gender-emotion stereotypes. Future research should further investigate how early opposite-gender biases depend on expression and attention.
Affiliation(s)
- Henning Gibbons
- Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111 Bonn, Germany; (J.S.); (E.V.)
4. Quek GL, de Heering A. Visual periodicity reveals distinct attentional signatures for face and non-face categories. Cereb Cortex 2024; 34:bhae228. PMID: 38879816; PMCID: PMC11180377; DOI: 10.1093/cercor/bhae228.
Abstract
Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories, for example, when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared with other object categories: whereas attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending to an alternate visual category. These results underscore the special status that faces hold within the human visual system and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
Affiliation(s)
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Westmead Innovation Quarter, 160 Hawkesbury Rd, Westmead NSW 2145, Australia
- Adélaïde de Heering
- Unité de Recherche en Neurosciences Cognitives (UNESCOG), ULB Neuroscience Institute (UNI), Center for Research in Cognition & Neurosciences (CRCN), Université libre de Bruxelles (ULB), Avenue Franklin Roosevelt, 50-CP191, 1050 Brussels, Belgium
5. Weber S, Salomoni SE, St George RJ, Hinder MR. Stopping Speed in Response to Auditory and Visual Stop Signals Depends on Go Signal Modality. J Cogn Neurosci 2024; 36:1395-1411. PMID: 38683725; DOI: 10.1162/jocn_a_02171.
Abstract
Past research has found that the speed of the action cancellation process is influenced by the sensory modality of the environmental change that triggers it. However, the effect on selective stopping processes (where participants must cancel only one component of a multicomponent movement) remains unknown, despite these complex movements often being required as we navigate our busy modern world. Thirty healthy adults (mean age = 31.1 years, SD = 10.5) completed five response-selective stop signal tasks featuring different combinations of "go signal" modality (the environmental change bearing an imperative to initiate movement: auditory or visual) and "stop signal" modality (the environmental change indicating that action cancellation is required: auditory, visual, or audiovisual). EMG recordings of effector muscles allowed detailed comparison of the characteristics of voluntary action and cancellation between tasks. Behavioral and physiological measures of stopping speed demonstrated that the modality of the go signal influenced how quickly participants cancelled movement in response to the stop signal: stopping was faster in the two cross-modal conditions (auditory go - visual stop; visual go - auditory stop) than in the two conditions using the same modality for both signals. A separate condition testing for multisensory facilitation revealed that stopping was fastest when the stop signal consisted of a combined audiovisual stimulus, compared with all other go-stop stimulus combinations. These findings provide novel evidence regarding the role of attentional networks in action cancellation and suggest that modality-specific cognitive resources influence the latency of the stopping process.
6. Sever Aktuna YS, Koskderelioglu A, Eskut N, Aktuna A. Is impairment of facial emotion recognition independent of cognitive dysfunction in multiple sclerosis? Neurol Sci 2024; 45:2791-2800. PMID: 38246940; PMCID: PMC11081977; DOI: 10.1007/s10072-024-07314-0.
Abstract
BACKGROUND: Emotions expressed on the face play a key role in social cognition and communication by conveying inner emotional experiences. This study aimed to evaluate facial emotion identification and discrimination and empathy abilities in patients with MS, and whether these are related to cognitive dysfunction. METHODS: One hundred twenty patients with relapsing-remitting MS and 120 age- and sex-matched healthy controls were enrolled in the study. All subjects were evaluated with the Facial Emotion Identification Test (FEIT), Facial Emotion Discrimination Test (FEIDT), and Empathy Quotient (EQ). We used the Beck Depression Inventory (BDI) for depression and detailed cognitive tests, including the Montreal Cognitive Assessment (MoCA), the Symbol Digit Modalities Test (SDMT), and the Paced Auditory Serial Addition Test (PASAT). Quality of life was assessed with the Multiple Sclerosis Quality of Life-54 (MSQL-54). RESULTS: Patients with MS were 37.6 ± 9.5 years old, had a mean disease duration of 8.8 ± 6.6 (8-28) years, and a mean EDSS score of 1.6 ± 1.3 (0-4.5). We found significant differences in the identification of facial emotions, discrimination of facial emotions, and empathy in MS patients compared to controls (p < 0.05). In particular, recognition of sadness, fear, and shame was significantly lower in MS patients. Multivariate logistic regression showed that low SDMT and FEIDT scores were independently associated with MS. CONCLUSIONS: Our findings indicate that deficits in facial emotion recognition and identification are prominent among patients with MS, and that emotion recognition is impaired both alongside and independently of cognitive dysfunction.
Affiliation(s)
- Yagmur Simge Sever Aktuna
- Neurology Department, University of Health Sciences, Izmir Bozyaka Education and Research Hospital, 35170, Izmir, Turkey.
- Asli Koskderelioglu
- Neurology Department, University of Health Sciences, Izmir Bozyaka Education and Research Hospital, 35170, Izmir, Turkey
- Neslihan Eskut
- Neurology Department, University of Health Sciences, Izmir Bozyaka Education and Research Hospital, 35170, Izmir, Turkey
- Atalay Aktuna
- Department of Public Health, Ministry of Health, Bornova District Health Directorate, 35030, Bornova, Izmir, Turkey
7. Yuan Y, Guan L, Cao Y, Xu Y. The distinct effects of fearful and disgusting scenes on self-relevant face recognition. J Gen Psychol 2024:1-17. PMID: 38767464; DOI: 10.1080/00221309.2024.2349764.
Abstract
Self-face recognition denotes the process by which a person recognizes their own face by distinguishing it from another's face. Although many studies have explored the inhibitory effect of negative information on self-relevant face processing, few researchers have examined whether negative scenes influence self-relevant face processing. Fearful and disgusting scenes are typical negative scenes, but little research to date has examined their distinct effects on self-relevant face recognition. To investigate these issues, the current study explored the effect of negative scenes on self-relevant face recognition. In Study 1, 44 participants (20 men, 24 women) were asked to judge the orientation of a target face (self-face or friend-face) pictured in a negative or neutral scene; in Study 2, 40 participants (19 men, 21 women) completed the same task in a fearful, disgusting, or neutral scene. The results showed that negative scenes slowed the recognition of self-faces. Furthermore, this effect occurred for fearful rather than disgusting scenes. Our findings suggest distinct effects of fearful and disgusting scenes on self-relevant face processing, which may be associated with automatic attentional capture by negative scenes (especially fearful scenes) and the tendency to escape self-awareness.
Affiliation(s)
- Yuan Yuan
- School of Psychology, Northeast Normal University
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Northeast Normal University
- Lili Guan
- School of Psychology, Northeast Normal University
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Northeast Normal University
- Yifei Cao
- School of Psychology, Northeast Normal University
- Yang Xu
- School of Psychology, Northeast Normal University
8. Weidner EM, Moratti S, Schindler S, Grewe P, Bien CG, Kissler J. Amygdala and cortical gamma-band responses to emotional faces are modulated by attention to valence. Psychophysiology 2024; 61:e14512. PMID: 38174584; DOI: 10.1111/psyp.14512.
Abstract
The amygdala might support an attentional bias for emotional faces. However, whether and how selective attention toward a specific valence modulates this bias is not fully understood. Likewise, it is unclear whether amygdala and cortical signals respond to emotion and attention in a similar way. We recorded gamma-band activity (GBA, > 30 Hz) intracranially in the amygdalae of 11 patients with epilepsy and collected scalp recordings from 19 healthy participants. We presented angry, neutral, and happy faces in random order, and one valence was designated as the target. Participants detected happy targets most quickly and accurately. In the amygdala, during attention to negative faces, low gamma-band activity (LGBA, < 90 Hz) increased for angry compared with happy faces from 160 ms. From 220 ms onward, amygdala high gamma-band activity (HGBA, > 90 Hz) was higher for angry and neutral faces than for happy ones. Monitoring neutral faces increased amygdala HGBA for emotional compared with neutral faces from 40 ms. Expressions were not differentiated in GBA while monitoring positive faces. On the scalp, only threat monitoring resulted in expression differentiation: here, posterior LGBA was selectively increased for angry targets from 60 ms. The data show that GBA differentiation of emotional expressions is modulated by attention to valence: top-down-controlled threat vigilance coordinates widespread GBA in favor of angry faces, whereas stimulus-driven emotion differentiation in amygdala GBA occurs during a neutral attentional focus. These findings align with a multi-pathway model of emotion processing and specify the role of GBA in this process.
Affiliation(s)
- Enya M Weidner
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Madrid, Spain
- Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Münster, Münster, Germany
- Philip Grewe
- Department of Epileptology, Krankenhaus Mara, Bethel Epilepsy Center, Medical School OWL, Bielefeld University, Bielefeld, Germany
- Clinical Neuropsychology and Epilepsy Research, Medical School OWL, Bielefeld University, Bielefeld, Germany
- Christian G Bien
- Department of Epileptology, Krankenhaus Mara, Bethel Epilepsy Center, Medical School OWL, Bielefeld University, Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, Bielefeld University, Bielefeld, Germany
9. Li Y, Li S, Hu W, Yang L, Luo W. Spatial representation of multidimensional information in emotional faces revealed by fMRI. Neuroimage 2024; 290:120578. PMID: 38499051; DOI: 10.1016/j.neuroimage.2024.120578.
Abstract
Face perception is a complex process that involves highly specialized procedures and mechanisms. Investigating face perception can help us better understand how the brain processes fine-grained, multidimensional information. This research aimed to examine how different dimensions of facial information are represented in specific brain regions or through inter-regional connections, using an implicit face recognition task. To capture the representation of various facial information in the brain, we employed support vector machine decoding, functional connectivity, and model-based representational similarity analysis on fMRI data, which yielded three key findings. First, despite the implicit nature of the task, emotion was still represented in the brain, in contrast to all other facial information. Second, the connection between the medial amygdala and the parahippocampal gyrus was essential for the representation of facial emotion in implicit tasks. Third, in implicit tasks, arousal was represented in the parahippocampal gyrus, whereas valence depended on the connection between the primary visual cortex and the parahippocampal gyrus. In conclusion, these findings dissociate the neural mechanisms of emotional valence and arousal, revealing the precise spatial patterns of multidimensional information processing in faces.
Affiliation(s)
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Weiyu Hu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Lan Yang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China.
10. Liu Y, Ji L. Ensemble coding of multiple facial expressions is not affected by attentional load. BMC Psychol 2024; 12:102. PMID: 38414021; PMCID: PMC10900713; DOI: 10.1186/s40359-024-01598-9.
Abstract
Human observers can extract the mean emotion from multiple faces rapidly and precisely. However, whether attention is required for the ensemble coding of facial expressions remains debated. In this study, we examined the effect of attentional load on mean emotion processing with a dual-task paradigm; individual emotion processing was investigated as the control task. In the experiment, a letter string and a set of four happy or angry faces of varying emotional intensity were shown. Participants had to complete the string task first, judging either the string color (low attentional load) or the presence of a target letter (high attentional load). A cue then indicated whether the secondary task was to evaluate the mean emotion of the faces or the emotion of the cued single face, and participants made their judgments on a visual analog scale. Compared with the color task, the letter task yielded longer response times and lower accuracy, verifying the manipulation of attentional load. More importantly, there was no significant difference in averaging performance between the low and high attentional loads. By contrast, individual face processing was impaired under high attentional load relative to low attentional load. In addition, the advantage of extracting mean emotion over individual emotion was larger under high attentional load. These results support the power of averaging and provide new evidence that only a small amount of attention is needed for the ensemble coding of multiple facial expressions.
Affiliation(s)
- Yujuan Liu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 510006, Guangzhou, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macao, China
- Luyan Ji
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 510006, Guangzhou, China.
11. Hagen S, Laguesse R, Rossion B. Extensive Visual Training in Adulthood Reduces an Implicit Neural Marker of the Face Inversion Effect. Brain Sci 2024; 14:146. PMID: 38391720; PMCID: PMC10886861; DOI: 10.3390/brainsci14020146.
Abstract
Face identity recognition (FIR) in humans is supported by specialized neural processes whose function is spectacularly impaired by simply turning a face upside down: the face inversion effect (FIE). While the FIE appears to have a slow developmental course, little is known about the plasticity of the neural processes involved in this effect, and in FIR in general, in adulthood. Here, we investigate whether extensive training (2 weeks, ~16 h) of young human adults in discriminating a large set of unfamiliar inverted faces can reduce an implicit neural marker of the FIE for a set of entirely novel faces. In all, 28 adult observers were trained to individuate 30 inverted face identities presented under different depth-rotated views. Following training, we replicate previous behavioral reports of a significant reduction (56% relative accuracy rate) in the behavioral FIE, as measured with a challenging four-alternative delayed-match-to-sample task for individual faces across depth-rotated views. Most importantly, using EEG together with a validated frequency-tagging approach to isolate a neural index of FIR, we observe the same substantial (56%) reduction in the neural FIE at the expected occipito-temporal channels. The reduction in the neural FIE correlates with the reduction in the behavioral FIE at the individual participant level. Overall, we provide novel evidence for a substantial degree of plasticity in processes that are key for face identity recognition in the adult human brain.
Affiliation(s)
- Simen Hagen
- Université de Lorraine, CNRS, IMoPA, F-54000 Nancy, France
- Renaud Laguesse
- Psychological Sciences Research Institute, UCLouvain, 1348 Louvain-La-Neuve, Belgium
- Bruno Rossion
- Université de Lorraine, CNRS, IMoPA, F-54000 Nancy, France
- Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
- Université de Lorraine, CHRU-Nancy, Service de Neurochirurgie, F-54000 Nancy, France
12. Schniter E, Shields TW. Better-than-chance prediction of cooperative behaviour from first and second impressions. Evol Hum Sci 2024; 6:e2. PMID: 38516366; PMCID: PMC10955359; DOI: 10.1017/ehs.2023.30.
Abstract
Could cooperation among strangers be facilitated by adaptations that use sparse information to accurately predict cooperative behaviour? We hypothesise that predictions are influenced by the beliefs, descriptions, appearance and behavioural history available for first and second impressions, and that predictions improve when more information is available. We conducted a two-part study. First, we recorded thin-slice videos of university students just before their choices in a repeated Prisoner's Dilemma with matched partners. Second, a worldwide sample of raters evaluated each player using videos, photos, only gender labels, or neither images nor labels. Raters guessed players' first-round Prisoner's Dilemma choices and then their second-round choices after reviewing first-round behavioural histories. Our design allows us to investigate the incremental effects of gender, appearance and behavioural history gleaned during first and second impressions. Predictions became more accurate and better than chance when gender, appearance or behavioural history was added; however, these effects were not incrementally cumulative. Predictions from treatments showing player appearance were no more accurate than those from treatments revealing gender labels, and predictions from videos were no more accurate than those from photos. These results demonstrate how people accurately predict cooperation under sparse information conditions, helping explain why conditional cooperation is common among strangers.
Affiliation(s)
- Eric Schniter
- Economic Science Institute, Chapman University, Orange, CA 92866, USA
- Center for the Study of Human Nature, California State University Fullerton, Fullerton, CA 92831, USA
- Argyros School of Business and Economics, Chapman University, Orange, CA 92866, USA
- Division of Anthropology, California State University Fullerton, Fullerton, CA 92831, USA
- Timothy W. Shields
- Economic Science Institute, Chapman University, Orange, CA 92866, USA
- Argyros School of Business and Economics, Chapman University, Orange, CA 92866, USA
13. Sharma Y, Persson LM, Golubickis M, Jalalian P, Falbén JK, Macrae CN. Facial first impressions are not mandatory: A priming investigation. Cognition 2023; 241:105620. PMID: 37741097; DOI: 10.1016/j.cognition.2023.105620.
Abstract
A common assertion is that, based around prominent character traits, first impressions are spontaneously extracted from faces. Specifically, mere exposure to a person is sufficient to trigger the involuntary extraction of core personality characteristics (e.g., trustworthiness, dominance, competence), an outcome that supports a range of significant judgments (e.g., hiring, investing, electing). But is this in fact the case? Noting ambiguities in the extant literature, here we used a repetition priming procedure to probe the extent to which impressions of dominance are extracted from faces absent the instruction to evaluate the stimuli in this way. Across five experiments in which either the character trait of interest was made increasingly obvious to participants (Expts. 1-3) or attention was explicitly directed toward the faces to generate low-level/high-level judgments (Expts. 4 & 5), no evidence for the spontaneous extraction of first impressions was observed. Instead, priming only emerged when judgments of dominance were an explicit requirement of the task at hand. Thus, at least using a priming methodology, the current findings contest the notion that first impressions are a mandatory product of person perception.
Affiliation(s)
- Yadvi Sharma
- School of Psychology, University of Aberdeen, Aberdeen, UK.
- Linn M Persson
- School of Psychology, University of Aberdeen, Aberdeen, UK
- Johanna K Falbén
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- C Neil Macrae
- School of Psychology, University of Aberdeen, Aberdeen, UK
14. Erickson WB, Weatherford DR. Measuring the Contributions of Perceptual and Attentional Processes in the Complete Composite Face Paradigm. Vision (Basel) 2023; 7:76. PMID: 37987296; PMCID: PMC10661262; DOI: 10.3390/vision7040076.
Abstract
Theories of holistic face processing vary widely with respect to conceptualizations, paradigms, and stimuli. These divergences have left several theoretical questions unresolved; in particular, the role of attention in face perception is understudied. To address this gap in the literature, we combined the complete composite face task (which allows for predictions from multiple theoretical conceptualizations and connects with a large body of research) with a secondary auditory discrimination task at encoding (to avoid a visual perceptual bottleneck). Participants studied upright, intact faces within a continuous recognition paradigm, which intermixes study and test trials at multiple retention intervals. Within subjects, participants studied faces under full or divided attention. Test faces varied with respect to alignment, congruence, and retention interval. Overall, we observed the predicted beneficial outcomes of holistic processing (e.g., higher discriminability for Congruent, Aligned faces relative to Congruent, Misaligned faces), which persisted across retention intervals and attention conditions. However, we did not observe the predicted detrimental outcomes of holistic processing (e.g., higher discriminability for Incongruent, Misaligned faces relative to Incongruent, Aligned faces). Because the continuous recognition paradigm exerts particularly strong demands on attention, we interpret these findings through the lens of resource dependency and domain specificity.
Affiliation(s)
- William Blake Erickson
- Department of Health and Behavioral Sciences, Texas A&M University-San Antonio, San Antonio, TX 78224, USA
15. Mioni G, Zangrossi A, Cipolletta S. Me, myself and you: How self-consciousness influences time perception. Atten Percept Psychophys 2023; 85:2626-2636. PMID: 37563512; PMCID: PMC10600286; DOI: 10.3758/s13414-023-02767-5.
Abstract
Several investigations have shown that the processing of self-relevant information differs from the processing of objective information. The present study aimed to investigate the effect of social stimuli on subjective time processing. Here, the social stimuli were images of an unknown man, an unknown woman, and the participant's own face. Forty university students were tested with a time reproduction task in which they were asked to reproduce the duration of a previously presented stimulus; images of others or of themselves were used to mark the temporal intervals. Participants also completed questionnaires assessing anxiety, depression, and self-consciousness. A generalised linear mixed-effects model approach was adopted. Results showed that male participants with higher Private Self-Consciousness scores were more accurate in time perception than female participants, and that female participants reported higher scores on the Public Self-Consciousness subscale than male participants. The findings are discussed in terms of social-context models of how attention is solicited and arousal is generated by social stimuli, highlighting the effect of social context on the subjective perception of time.
Affiliation(s)
- Giovanna Mioni
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy.
- Andrea Zangrossi
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
- Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Sabrina Cipolletta
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
16. Itier RJ, Durston AJ. Mass-univariate analysis of scalp ERPs reveals large effects of gaze fixation location during face processing that only weakly interact with face emotional expression. Sci Rep 2023; 13:17022. PMID: 37813928; PMCID: PMC10562468; DOI: 10.1038/s41598-023-44355-5.
Abstract
Decoding others' facial expressions is critical for social functioning. To clarify the neural correlates of expression perception depending on where we look on the face, three combined gaze-contingent ERP experiments were analyzed using robust mass-univariate statistics. Regardless of task, fixation location impacted face processing from 50 to 350 ms, maximally around 120 ms, reflecting retinotopic mapping around the C2 and P1 components. Fixation location also had a major impact on the N170-P2 interval, while only weak effects were seen at the face-sensitive N170 peak. These results question the widespread assumption that faces are processed holistically into an indecomposable perceptual whole around the N170. Rather, face processing is a complex and view-dependent process that continues well beyond the N170. Expression and fixation location interacted weakly during the P1-N170 interval, supporting a role for the mouth and left eye in decoding fearful and happy expressions. Expression effects were weakest at the N170 peak but strongest around P2, especially for fear, reflecting task-independent affective processing. The results suggest that the N170 reflects a transition between processes rather than the maximum of a holistic face processing stage. Focus on this peak should be replaced by data-driven analyses of the whole epoch using robust statistics to fully unravel the early visual processing of faces and their affective content.
Affiliation(s)
- Roxane J Itier
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada.
- Amie J Durston
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
17. Lidström A. Serial dependence in facial identity perception and visual working memory. Atten Percept Psychophys 2023; 85:2226-2241. PMID: 37794301; PMCID: PMC10584723; DOI: 10.3758/s13414-023-02799-x.
Abstract
Serial dependence (SD) refers to the effect in which a person's current perceptual judgment is attracted toward recent stimulus history. Perceptual and memory processes, as well as response and decisional biases, are thought to contribute to SD effects. The current study examined the processing stages of SD facial identity effects in the context of task-related decision processes, and how such effects may differ from visual working memory (VWM) interactions. In two experiments, participants were shown a series of two sequentially presented face images. In Experiment 1, the two faces were separated by an interstimulus interval (ISI) of 1, 3, 6, or 10 s, and participants were instructed to reproduce the second face after a varying response delay of 0, 1, 3, 6, or 10 s. Results showed that SD effects occurred most consistently at an ISI of 1 s and response delays of 1 and 6 s, consistent with early and late stages of processing. In Experiment 2, the ISI was held constant at 1 s, and, to separate SD from VWM interactions, participants were post-cued to reproduce either the first or the second face. When the second face was the target, SD effects again occurred at response delays of 1 and 6 s, but not when the first face was the target. Together, the results demonstrate that SD facial identity effects occur independently of task-related processes in a distinct temporal fashion and suggest that SD and VWM interactions may rely on separate underlying mechanisms.
Affiliation(s)
- Anette Lidström
- Department of Psychology, Lund University, Allhelgona kyrkogata 16A, 223 50, Lund, Sweden.
18. Du Y, Hua L, Tian S, Dai Z, Xia Y, Zhao S, Zou H, Wang X, Sun H, Zhou H, Huang Y, Yao Z, Lu Q. Altered beta band spatial-temporal interactions during negative emotional processing in major depressive disorder: An MEG study. J Affect Disord 2023; 338:254-261. PMID: 37271293; DOI: 10.1016/j.jad.2023.06.001.
Abstract
BACKGROUND: The mood-concordance bias is a key feature of major depressive disorder (MDD), but the spatiotemporal neural activity associated with emotional processing in MDD remains unclear. Understanding the dysregulated connectivity patterns during emotional processing and their relationship with clinical symptoms could provide insights into MDD neuropathology. METHODS: We enrolled 108 MDD patients and 64 healthy controls (HCs) who performed an emotion recognition task during magnetoencephalography recording. Network-based statistics (NBS) was used to analyze whole-brain functional connectivity (FC) across different frequency ranges during distinct temporal periods. The relationship between aberrant FC and affective symptoms was explored. RESULTS: MDD patients exhibited decreased FC strength in the beta band (13-30 Hz) compared to HCs. During the early stage of emotional processing (0-100 ms), reduced FC was observed between the left parahippocampal gyrus and the left cuneus. In the late stage (250-400 ms), aberrant FC was primarily found in the cortex-limbic-striatum systems. Moreover, FC strength between the right fusiform gyrus and left thalamus, and between the left calcarine fissure and left inferior temporal gyrus, was negatively associated with Hamilton Depression Rating Scale (HAMD) scores. LIMITATIONS: Medication information was not included. CONCLUSION: MDD patients exhibited abnormal temporal-spatial neural interactions in the beta band, ranging from early sensory to later cognitive processing stages. These aberrant interactions involve the cortex-limbic-striatum circuit. Notably, aberrant FC may serve as a potential biomarker for assessing depression severity.
Affiliation(s)
- Yishan Du
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- Lingling Hua
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- Shui Tian
- Department of Radiology, the First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- ZhongPeng Dai
- School of Biological Sciences & Medical Engineering, Southeast University, Nanjing 210096, China; Child Development and Learning Science, Key Laboratory of Ministry of Education, Southeast University, Nanjing 210096, China
- Yi Xia
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- Shuai Zhao
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- HaoWen Zou
- Nanjing Brain Hospital, Medical School of Nanjing University, Nanjing 210093, China
- Xiaoqin Wang
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- Hao Sun
- Nanjing Brain Hospital, Medical School of Nanjing University, Nanjing 210093, China
- Hongliang Zhou
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
- YingHong Huang
- Nanjing Brain Hospital, Medical School of Nanjing University, Nanjing 210093, China
- ZhiJian Yao
- Department of Psychiatry, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China; School of Biological Sciences & Medical Engineering, Southeast University, Nanjing 210096, China; Nanjing Brain Hospital, Medical School of Nanjing University, Nanjing 210093, China.
- Qing Lu
- School of Biological Sciences & Medical Engineering, Southeast University, Nanjing 210096, China; Child Development and Learning Science, Key Laboratory of Ministry of Education, Southeast University, Nanjing 210096, China.
19. Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. PMID: 36018484; DOI: 10.3758/s13428-022-01951-z.
Abstract
Faces convey a wide range of information, including one's identity and emotional and mental states. Face perception is a major research topic in many fields, such as cognitive science, social psychology, and neuroscience. Stimuli are frequently selected from a range of available face databases; however, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database comprises three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main aim of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speech properties (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
- Bilge Sayim
- SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France
- Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland
- Nihan Alp
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey.
20. Huber R, Fischer R, Kozlik J. When a smile is still a conflict: Affective conflicts from emotional facial expressions of ingroup or outgroup members occur irrespective of the social interaction context. Acta Psychol (Amst) 2023; 239:104008. PMID: 37603901; DOI: 10.1016/j.actpsy.2023.104008.
Abstract
Facial expressions play a crucial role in human interactions. Typically, a positive (negative) expression evokes a congruent positive (negative) reaction in the observer. This congruent behavior is inverted, however, when the same positive (negative) expression is displayed by an outgroup member. Two approaches provide an explanation for this phenomenon: the social intentions account proposes underlying social messages within the facial display, whereas the processing conflict account assumes an affective conflict triggered by incongruent combinations of emotion and the affective connotation of group membership. In three experiments, we aimed to further substantiate the processing conflict account by separating the affective conflict from potential social intentions. To this end, we created a new paradigm in which the participant was an outside observer of a social interaction between two faces. Participants were required to respond to the emotional target person, who could be an ingroup or outgroup member. In all three experiments, irrespective of any social intention, responses were consistently affected by the group relation between participant and emotional target, i.e., the affective (in)congruency of the target seen by participants. These results further support the processing conflict account. The implications for the two theoretical accounts are discussed.
Affiliation(s)
- Robert Huber
- Department of Psychology, University of Greifswald, Greifswald, Germany.
- Rico Fischer
- Department of Psychology, University of Greifswald, Greifswald, Germany
- Julia Kozlik
- University Medicine Greifswald, Greifswald, Germany
21. Xie T, Fu S, Mento G. Faces do not guide attention in an object-based facilitation manner. Atten Percept Psychophys 2023; 85:1920-1935. PMID: 37349624; PMCID: PMC10545631; DOI: 10.3758/s13414-023-02742-0.
Abstract
Numerous studies on face processing have revealed faces' special ability to affect attention, but relatively little research has examined how faces guide the allocation of spatial attention. To enrich this field, this study used the object-based attention (OBA) effect in a modified double-rectangle paradigm in which the rectangles were replaced with human faces and mosaic patterns (non-face objects). Experiment 1 replicated the typical OBA effect for the non-face objects, but this effect was absent for Asian and Caucasian faces. Experiment 2 removed the eye region from the Asian faces but still found no object-based facilitation for faces without eyes. In Experiment 3, however, the OBA effect was observed for faces when the faces disappeared shortly before the response. Overall, these results reveal that when two faces are presented together, they do not produce object-based facilitation, regardless of facial features such as race or the presence of eyes. We argue that the lack of a typical OBA effect is due to a filtering cost induced by the face content as a whole: this cost slows down responses when attention shifts within a face and results in the absence of object-based facilitation.
Affiliation(s)
- Tong Xie
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy.
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, 510006, China.
- Shimin Fu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, 510006, China.
- Giovanni Mento
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
- IRCCS E. Medea Scientific Institute, Treviso, Italy
22. Hunter BK, Markant J. 6- to 10-year-old children do not show race-based orienting biases to faces during an online attention capture task. J Exp Child Psychol 2023; 230:105628. PMID: 36706653; DOI: 10.1016/j.jecp.2023.105628.
Abstract
Research has established that frequency of exposure to own- and other-race faces shapes the development of face processing biases characterized by enhanced attention to and recognition of more familiar own-race faces, that is, the other-race effect (ORE). The ORE is first evident during infancy based on differences in looking to own- versus other-race faces and is later assessed based on recognition memory task performance during childhood and adulthood. Using these measures, researchers have found that race-based face processing biases initially develop during infancy but remain sensitive to experiences with own- and other-race faces through childhood. In contrast, limited work suggests that infants' attention orienting may be less affected by frequency of exposure to own- and other-race faces. However, the plasticity of race-based face processing biases during childhood suggests that biased orienting to own-race faces may develop at later ages following continued exposure to these faces. We addressed this question by examining 6- to 10-year-old children's attention capture by own- and other-race faces during an online task. Children searched for a target among multiple distractors. During some trials, either an own- or other-race face appeared as one of the distractors. Children showed similar target detection performance (omission errors, accuracy, and response times) regardless of whether an own- or other-race face appeared as a distractor. These results differ from research demonstrating race-based biases in attention holding and recognition memory but converge with previous infant research suggesting that attention orienting might not be as strongly affected by frequency of exposure to race-based information during development.
Affiliation(s)
- Brianna K Hunter
- Center for Mind and Brain, University of California, Davis, Davis, CA 95618, USA; Department of Psychology, Tulane University, New Orleans, LA 70118, USA.
- Julie Markant
- Department of Psychology, Tulane University, New Orleans, LA 70118, USA; Tulane Brain Institute, Tulane University, New Orleans, LA 70118, USA
23. Schöpper LM, Küpper V, Frings C. Attentional Biases Toward Spiders Do Not Modulate Retrieval. Exp Psychol 2023; 70:135-144. PMID: 37589232; PMCID: PMC10658639; DOI: 10.1027/1618-3169/a000584.
Abstract
When responding to stimuli, the response and the stimulus' features are thought to be integrated into a short episodic memory trace, an event file. Repeating any of its components causes retrieval of the whole event file, leading to benefits for full repetitions and full changes but interference for partial repetitions. These binding effects are especially pronounced if attention is allocated to certain features. We used attentional biases caused by spider stimuli, aiming to modulate the impact of attention on retrieval. Participants discriminated the orientation of bars that repeated or changed their location in prime-probe sequences. Crucially, shortly before probe target onset, an image of a spider and an image of a cub appeared at one position each, one of which was spatially congruent with the following probe target. Participants were faster when responding to targets spatially congruent with a preceding spider, suggesting an attentional bias toward aversive information. Yet, overall binding effects did not differ with the content of the preceding spatially congruent image, nor did this effect emerge when individual fear of spiders was taken into account. We conclude that attentional biases toward spiders modulate overall behavior, but that this has no impact on retrieval.
Collapse
Affiliation(s)
| | - Verena Küpper
- Department of Cognitive Psychology, University of Trier, Germany
| | - Christian Frings
- Department of Cognitive Psychology, University of Trier, Germany
| |
Collapse
|
24
|
Marquardt CA, Hitz AC, Hill JE, Erbes CR, Polusny MA. Trait absorption predicts enhanced face emotion intensity discrimination among military recruits. MOTIVATION AND EMOTION 2023. [DOI: 10.1007/s11031-023-10014-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/08/2023]
|
25
|
Looking at faces in the wild. Sci Rep 2023; 13:783. [PMID: 36646709 PMCID: PMC9842722 DOI: 10.1038/s41598-022-25268-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Accepted: 11/28/2022] [Indexed: 01/18/2023] Open
Abstract
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings but this approach has been limited by manual coding of fixations. Here we introduce an automatic 'dynamic region of interest' approach that registers eye-fixations to bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to faces of passersby, contrasting with prior screen-based studies that suggest faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals' social attention, and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention 'in the wild' that opens new avenues for theoretical, applied and clinical research.
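As a rough illustration of what an automatic 'dynamic region of interest' registration could involve, the sketch below assigns a gaze fixation to a face whenever it falls inside a detected face bounding box. The Box class, coordinates, and labels are assumptions for illustration only, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """A detected face bounding box in scene-camera pixel coordinates."""
    x0: float
    y0: float
    x1: float
    y1: float

def label_fixation(fx: float, fy: float, face_boxes: list[Box]) -> str:
    """Assign a fixation to 'face' if it falls inside any detected face box,
    otherwise to 'other'. A full pipeline would run a face/body detector on
    every scene-camera frame and match boxes to gaze samples by timestamp."""
    for b in face_boxes:
        if b.x0 <= fx <= b.x1 and b.y0 <= fy <= b.y1:
            return "face"
    return "other"

# Toy usage: one fixation point and one detected face box (hypothetical values).
print(label_fixation(412.0, 230.0, [Box(380, 200, 460, 290)]))  # -> "face"
```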
Collapse
|
26
|
Stuit SM, Paffen CLE, Van der Stigchel S. Prioritization of emotional faces is not driven by emotional content. Sci Rep 2023; 13:549. [PMID: 36631453 PMCID: PMC9834315 DOI: 10.1038/s41598-022-25575-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 12/01/2022] [Indexed: 01/12/2023] Open
Abstract
Emotional faces have prioritized access to visual awareness. However, studies concerned with which expressions are prioritized most are inconsistent, and the source of prioritization remains elusive. Here we tested how well prioritization for awareness is predicted by spatial frequency-based image features on the one hand and by emotional content on the other, where emotional content refers to the part of the image that signals the actor's emotional expression, as opposed to image content irrelevant to that expression. Participants reported which of two faces (displaying combinations of angry, happy, and neutral expressions), both temporarily suppressed from awareness, was perceived first. Even though the results show that happy expressions were prioritized for awareness, this prioritization was driven by the contrast energy of the images. In fact, emotional content could not predict prioritization at all. Our findings show that the source of prioritization for awareness is not the information carrying the emotional content. We argue that the methods used here, or similar approaches, should become standard practice to break the chain of inconsistent findings regarding emotional superiority effects that have been part of the field for decades.
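As a rough illustration of the kind of image feature referred to, the sketch below computes a simple contrast-energy measure for a grayscale image. This is a generic definition; the function name and details are assumptions and not necessarily the exact feature set used in the study.

```python
import numpy as np

def contrast_energy(image: np.ndarray) -> float:
    """Total contrast energy: summed squared deviation of luminance from the
    image mean (proportional to the Fourier energy of the non-DC
    spatial-frequency components, by Parseval's theorem)."""
    img = image.astype(float)
    img -= img.mean()
    return float(np.sum(img ** 2))

# Toy usage with a random stand-in for a face image.
rng = np.random.default_rng(0)
print(contrast_energy(rng.random((128, 128))))
```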
Collapse
Affiliation(s)
- Sjoerd M. Stuit
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
| | - Chris L. E. Paffen
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
| | - Stefan Van der Stigchel
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
| |
Collapse
|
27
|
Same, but different: Binding effects in auditory, but not visual detection performance. Atten Percept Psychophys 2023; 85:438-451. [PMID: 35107812 PMCID: PMC9935720 DOI: 10.3758/s13414-021-02436-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/27/2021] [Indexed: 11/08/2022]
Abstract
Responding to a stimulus leads to the integration of response and stimulus features into an event file. Upon repetition of any of its features, the previous event file is retrieved, thereby affecting ongoing performance. Such integration-retrieval explanations exist for a number of sequential tasks (which measure these processes as 'binding effects') and are thought to underlie all actions. However, drawing on the attentional orienting literature, Schöpper, Hilchey, et al. (2020) showed that binding effects are absent when participants detect visual targets in a sequence: in visual detection performance, there is simply a benefit for target location changes (inhibition of return). In contrast, Mondor and Leboe (2008) had participants detect auditory targets in a sequence and found a benefit for frequency repetition - presumably reflecting a binding effect in auditory detection performance. In the current study, we conducted two experiments that differed only in the modality of the target: participants signaled the detection of a sound (N = 40) or of a visual target (N = 40). Whereas visual detection performance showed a pattern incongruent with binding assumptions, auditory detection performance revealed a non-spatial feature repetition benefit, suggesting that frequency was bound to the response. Cumulative reaction time distributions indicated that the absence of a binding effect in visual detection performance was not caused by overall faster responding. The current results show a clear limitation of binding accounts in action control: binding effects are not only limited by task demands, but can depend entirely on target modality.
Collapse
|
28
|
Tucciarelli R, Vehar N, Chandaria S, Tsakiris M. On the realness of people who do not exist: The social processing of artificial faces. iScience 2022; 25:105441. [PMID: 36590465 PMCID: PMC9801245 DOI: 10.1016/j.isci.2022.105441] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 07/12/2022] [Accepted: 10/20/2022] [Indexed: 12/12/2022] Open
Abstract
Today more than ever, we are asked to evaluate the realness, truthfulness and trustworthiness of our social world. Here, we focus on how people evaluate realistic-looking faces of non-existing people generated by generative adversarial networks (GANs). GANs are increasingly used in marketing, journalism, social media, and political propaganda. In three studies, we investigated if and how participants can distinguish between GAN and REAL faces and the social consequences of their exposure to artificial faces. GAN faces were more likely to be perceived as real than REAL faces, a pattern partly explained by intrinsic stimulus characteristics. Moreover, participants' realness judgments influenced their behavior because they displayed increased social conformity toward faces perceived as real, independently of their actual realness. Lastly, knowledge about the presence of GAN faces eroded social trust. Our findings point to potentially far-reaching consequences for the pervasive use of GAN faces in a culture powered by images at unprecedented levels.
Collapse
Affiliation(s)
- Raffaele Tucciarelli
- The Warburg Institute, School of Advanced Study, University of London, London WC1H 0AB, UK. Corresponding author
| | - Neza Vehar
- The Warburg Institute, School of Advanced Study, University of London, London WC1H 0AB, UK
| | - Shamil Chandaria
- Institute of Philosophy, School of Advanced Study, University of London, London, UK; Centre for Psychedelic Research, Imperial College London, London, UK
| | - Manos Tsakiris
- The Warburg Institute, School of Advanced Study, University of London, London WC1H 0AB, UK; Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK; Centre for the Politics of Feelings, School of Advanced Study, University of London, London, UK
| |
Collapse
|
29
|
Li YL, Cheng G, Wu XH, Dai HY, Jia YC. The effect of emotional uncertainty on attentional bias toward neutral infant faces in adults. Dev Psychobiol 2022; 64:e22335. [PMID: 36426785 DOI: 10.1002/dev.22335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 08/19/2022] [Accepted: 09/19/2022] [Indexed: 01/27/2023]
Abstract
Recent studies have found that adults have stronger attentional bias toward neutral infant faces than emotional (positive or negative) infant faces. This phenomenon may derive from uncertainty over neutral expressions. To test this hypothesis, we recruited 176 participants to examine the relationship between their attentional bias toward neutral infant faces (with neutral adult faces as a comparison baseline) and their level of certainty in their appraisal of emotional valence through eye-tracking indices. The results showed that participants had a longer dwell time and higher fixation counts for infant faces than for adult faces and that a more uncertain appraisal of facial expressions positively predicted attentional bias toward neutral infant faces. Therefore, this study preliminarily demonstrates that emotional uncertainty heightens adults' attentional bias toward infant faces with neutral expressions.
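The sketch below illustrates, with hypothetical numbers, how an eye-tracking attentional-bias score (dwell time to infant minus adult faces) could be related to uncertainty ratings with a simple regression. It is an illustration of the logic, not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant dwell times (ms) and uncertainty ratings
# for the appraisal of neutral expressions.
dwell_infant = np.array([820., 760., 905., 640., 780.])
dwell_adult = np.array([700., 690., 720., 610., 650.])
uncertainty = np.array([3.2, 2.1, 4.0, 1.5, 2.8])

# Attentional bias toward neutral infant faces, relative to the adult baseline.
bias = dwell_infant - dwell_adult

# Does greater uncertainty predict a larger bias?
slope, intercept, r, p, se = stats.linregress(uncertainty, bias)
print(f"r = {r:.2f}, p = {p:.3f}")
```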
Collapse
Affiliation(s)
- Yue Lin Li
- College of National Culture and Cognitive Science, Guizhou Minzu University, Guiyang, China; School of Psychology, Guizhou Normal University, Guiyang, China; Center for Rural Children and Adolescents Mental Health Education, Guizhou Normal University, Guiyang, China
| | - Gang Cheng
- School of Psychology, Guizhou Normal University, Guiyang, China; Center for Rural Children and Adolescents Mental Health Education, Guizhou Normal University, Guiyang, China
| | - Xiu Hong Wu
- School of Psychology, Guizhou Normal University, Guiyang, China; Center for Rural Children and Adolescents Mental Health Education, Guizhou Normal University, Guiyang, China
| | - Huang Yan Dai
- School of Psychology, Guizhou Normal University, Guiyang, China; Center for Rural Children and Adolescents Mental Health Education, Guizhou Normal University, Guiyang, China
| | - Yun Cheng Jia
- College of National Culture and Cognitive Science, Guizhou Minzu University, Guiyang, China; Center for Rural Children and Adolescents Mental Health Education, Guizhou Normal University, Guiyang, China
| |
Collapse
|
30
|
Fabrício DDM, Ferreira BLC, Maximiano-Barreto MA, Muniz M, Chagas MHN. Construction of face databases for tasks to recognize facial expressions of basic emotions: a systematic review. Dement Neuropsychol 2022; 16:388-410. [DOI: 10.1590/1980-5764-dn-2022-0039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 08/01/2022] [Accepted: 08/23/2022] [Indexed: 12/12/2022] Open
Abstract
ABSTRACT. Recognizing others' emotions is an important social skill that can be modulated by variables such as gender, age, and race. A number of studies have developed dedicated face databases to assess the recognition of basic emotions in different contexts. Objectives: This systematic review gathered these studies, describing and comparing the methodologies used to construct the databases. Methods: The articles were retrieved from PubMed, Web of Science, PsycInfo, and Scopus using the search string “Facial expression database OR Stimulus set AND development OR Validation.” Results: Across the 36 included articles, most studies used actors who expressed emotions elicited by specific situations, so as to obtain expressions that were as spontaneous as possible. The databases consisted mainly of colored, static stimuli. In addition, most studies established and described standards for recording the stimuli, such as the color of the garments worn and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to methodological heterogeneity among the studies. Nevertheless, we describe their common patterns, contributing to the planning of new studies that seek to create databases for new contexts.
Collapse
Affiliation(s)
| | | | | | - Monalisa Muniz
- Universidade Federal de São Carlos, Brazil; Universidade Federal de São Carlos, Brazil
| | - Marcos Hortes Nisihara Chagas
- Universidade Federal de São Carlos, Brazil; Universidade Federal de São Carlos, Brazil; Universidade de São Paulo, Brazil; Instituto Bairral de Psiquiatria, Brazil
| |
Collapse
|
31
|
Working memory updating in individuals with bipolar and unipolar depression: fMRI study. Transl Psychiatry 2022; 12:441. [PMID: 36220840 PMCID: PMC9553934 DOI: 10.1038/s41398-022-02211-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 09/26/2022] [Accepted: 09/29/2022] [Indexed: 01/10/2023] Open
Abstract
Understanding neurobiological characteristics of cognitive dysfunction in distinct psychiatric disorders remains challenging. In this secondary data analysis, we examined neurobiological differences in brain response during working memory updating among individuals with bipolar disorder (BD), those with unipolar depression (UD), and healthy controls (HC). Individuals between 18-45 years of age with BD (n = 100), UD (n = 109), and HC (n = 172) were scanned using fMRI while performing 0-back (easy) and 2-back (difficult) tasks with letters as the stimuli and happy, fearful, or neutral faces as distractors. The 2(n-back) × 3(groups) × 3(distractors) ANCOVA examined reaction time (RT), accuracy, and brain activation during the task. HC showed more accurate and faster responses than individuals with BD and UD. Difficulty-related activation in the prefrontal, posterior parietal, paracingulate cortices, striatal, lateral occipital, precuneus, and thalamic regions differed among groups. Individuals with BD showed significantly lower difficulty-related activation differences in the left lateral occipital and the right paracingulate cortices than those with UD. In individuals with BD, greater difficulty-related worsening in accuracy was associated with smaller activity changes in the right precuneus, while greater difficulty-related slowing in RT was associated with smaller activity changes in the prefrontal, frontal opercular, paracingulate, posterior parietal, and lateral occipital cortices. Measures of current depression and mania did not correlate with the difficulty-related brain activation differences in either group. Our findings suggest that the alterations in the working memory circuitry may be a trait characteristic of reduced working memory capacity in mood disorders. Aberrant patterns of activation in the left lateral occipital and paracingulate cortices may be specific to BD.
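A minimal sketch of the kind of difficulty-related activation contrast described (2-back minus 0-back), here compared between two groups with a t-test as a simplified stand-in for the reported ANCOVA. All beta values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant activation (beta values) in one region,
# separately for the 0-back (easy) and 2-back (difficult) conditions.
beta_0back_bd = np.array([0.42, 0.35, 0.50, 0.28])   # bipolar disorder group
beta_2back_bd = np.array([0.55, 0.49, 0.61, 0.40])
beta_0back_ud = np.array([0.40, 0.33, 0.47, 0.30])   # unipolar depression group
beta_2back_ud = np.array([0.78, 0.66, 0.80, 0.58])

# Difficulty-related activation per participant, then a between-group comparison.
diff_bd = beta_2back_bd - beta_0back_bd
diff_ud = beta_2back_ud - beta_0back_ud
t, p = stats.ttest_ind(diff_bd, diff_ud)
print(f"group difference in difficulty-related activation: t = {t:.2f}, p = {p:.3f}")
```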
Collapse
|
32
|
Swe DC, Palermo R, Gwinn OS, Bell J, Nakanishi A, Collova J, Sutherland CAM. Trustworthiness perception is mandatory: Task instructions do not modulate fast periodic visual stimulation trustworthiness responses. J Vis 2022; 22:17. [PMID: 36315159 PMCID: PMC9631496 DOI: 10.1167/jov.22.11.17] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Although it is often assumed that humans spontaneously respond to the trustworthiness of others’ faces, it is still unclear whether responses to facial trust are mandatory or can be modulated by instructions. Considerable scientific interest lies in understanding whether trust processing is mandatory, given the societal consequences of biased trusting behavior. We tested whether neural responses indexing trustworthiness discrimination depended on whether the task involved focusing on facial trustworthiness or not, using a fast periodic visual stimulation electroencephalography oddball paradigm with a neural marker of trustworthiness discrimination at 1 Hz. Participants judged faces on size without any reference to trust, explicitly formed impressions of facial trust, or were given a financial lending context that primed trust, without explicit trust judgement instructions. Significant trustworthiness discrimination responses at 1 Hz were found in all three conditions, demonstrating the robust nature of trustworthiness discrimination at the neural level. Moreover, no effect of task instruction was observed, with Bayesian analyses providing moderate to decisive evidence that task instruction did not affect trustworthiness discrimination. Our finding that visual trustworthiness discrimination is mandatory points to the remarkable spontaneity of trustworthiness processing, providing clues regarding why these often unreliable impressions are ubiquitous.
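In FPVS oddball designs, the trustworthiness-discrimination response is typically quantified at the oddball frequency (here 1 Hz) relative to neighbouring frequency bins. The sketch below shows one common signal-to-noise computation; the parameters (number of neighbouring bins, exclusion of adjacent bins) are assumptions and vary across studies.

```python
import numpy as np

def oddball_snr(eeg: np.ndarray, srate: float, odd_freq: float = 1.0,
                n_neighbours: int = 10) -> float:
    """Signal-to-noise ratio at the oddball frequency: amplitude at that bin
    divided by the mean amplitude of surrounding bins, excluding the two
    immediately adjacent bins."""
    amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / srate)
    target = int(np.argmin(np.abs(freqs - odd_freq)))
    neigh = np.r_[target - n_neighbours:target - 1, target + 2:target + n_neighbours + 1]
    return float(amp[target] / amp[neigh].mean())

# Toy usage: 60 s of simulated data at 256 Hz with a weak 1 Hz component.
srate = 256.0
t = np.arange(0, 60, 1 / srate)
sig = 0.5 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(t.size)
print(oddball_snr(sig, srate))
```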
Collapse
Affiliation(s)
- Derek C Swe
- School of Psychological Science, The University of Western Australia, Perth, Australia.
| | - Romina Palermo
- School of Psychological Science, The University of Western Australia, Perth, Australia.
| | - O Scott Gwinn
- College of Education, Psychology, and Social Work, Flinders University, Adelaide, Australia.
| | - Jason Bell
- School of Psychological Science, The University of Western Australia, Perth, Australia.
| | - Anju Nakanishi
- School of Psychological Science, The University of Western Australia, Perth, Australia.
| | - Jemma Collova
- School of Psychological Science, The University of Western Australia, Perth, Australia.
| | - Clare A M Sutherland
- School of Psychological Science, The University of Western Australia, Perth, Australia; School of Psychology, University of Aberdeen, King's College, Aberdeen, Scotland.
| |
Collapse
|
33
|
Viola M. Seeing through the shades of situated affectivity. Sunglasses as a socio-affective artifact. PHILOSOPHICAL PSYCHOLOGY 2022. [DOI: 10.1080/09515089.2022.2118574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Marco Viola
- Department of Philosophy, Communication, and Performing Arts, Rome 3 University, Rome, Italy
| |
Collapse
|
34
|
Nasrollahi N, Jowett T, Machado L. Emotional information processing in young and older adults: meta-analysis reveals faces elicit distinct biases. Eur J Ageing 2022; 19:369-379. [PMID: 36052179 PMCID: PMC9424464 DOI: 10.1007/s10433-021-00676-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/18/2021] [Indexed: 11/03/2022] Open
Abstract
Although a number of empirical studies have found support for distinct emotional information processing biases in young versus older adults, it remains unclear whether these biases are driven by differential processing of positive or negative emotional information (or both) and whether they are moderated by stimulus type, in particular face versus non-face, the former of which is known to be subject to distinct processing. To address these gaps in the literature, our analyses included 2237 younger (mean age = 21.61 years) and 2136 older (mean age = 70.58 years) adults from 73 data sets, 19 involving face stimuli and 54 involving non-face stimuli (objects or scenes). Our findings indicated a significant overall age-related positivity effect (Hedges' g = 0.35) when comparing positive and negative stimuli, but consideration of emotionally neutral stimuli revealed significant age differences in emotional processing for negative stimuli only, with younger adults showing a stronger negativity bias. Furthermore, compared to emotionally neutral stimuli, both younger and older adults showed evidence of biases toward non-face positive and negative stimuli and toward positive but not negative face stimuli. Thus, although the present meta-analysis found evidence of an overall age-related positivity effect consistent with a shift toward positivity with aging, a different picture emerged when comparing emotional against neutral stimuli, and consideration of stimulus type revealed a distinct pattern for face stimuli, which may reflect the biological and social significance of facial expressions. Supplementary Information: The online version contains supplementary material available at 10.1007/s10433-021-00676-w.
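For reference, the effect size used here can be computed as below; the sketch uses generic group summary statistics with hypothetical values, not the data sets analysed in the meta-analysis.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: the standardized mean difference (Cohen's d with the pooled
    SD), multiplied by the small-sample correction factor J."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # approximate correction for df = n1 + n2 - 2
    return d * j

# Toy usage: younger vs. older adults on some hypothetical bias score.
print(round(hedges_g(0.42, 0.30, 60, 0.28, 0.28, 55), 3))
```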
Collapse
Affiliation(s)
- Neda Nasrollahi
- Department of Psychology and Brain Health Research Centre, University of Otago, William James Building, 275 Leith Walk, Dunedin, 9016 New Zealand
- Brain Research New Zealand, Auckland, New Zealand
| | - Tim Jowett
- Department of Psychology and Brain Health Research Centre, University of Otago, William James Building, 275 Leith Walk, Dunedin, 9016 New Zealand
- Department of Mathematics and Statistics, University of Otago, Dunedin, New Zealand
| | - Liana Machado
- Department of Psychology and Brain Health Research Centre, University of Otago, William James Building, 275 Leith Walk, Dunedin, 9016 New Zealand
- Brain Research New Zealand, Auckland, New Zealand
| |
Collapse
|
35
|
Suslow T, Kersting A. The Relations of Attention to and Clarity of Feelings With Facial Affect Perception. Front Psychol 2022; 13:819902. [PMID: 35874362 PMCID: PMC9298753 DOI: 10.3389/fpsyg.2022.819902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 06/20/2022] [Indexed: 11/13/2022] Open
Abstract
Attention to emotions and emotional clarity are core dimensions of individual differences in emotion awareness. Findings from prior research based on self-report indicate that attention to and recognition of one's own emotions are related to attention to and recognition of other people's emotions. In the present experimental study, we examined the relations of attention to and clarity of emotions with the efficiency of facial affect perception. Moreover, it was explored whether attention to and clarity of emotions are linked to negative interpretations of facial expressions. A perception of facial expressions (PFE) task based on schematic faces with neutral, ambiguous, or unambiguous emotional expressions and a gender decision task were administered to healthy individuals along with measures of emotion awareness, state and trait anxiety, depression, and verbal intelligence. Participants had to decide how much the faces express six basic affects. Evaluative ratings and decision latencies were analyzed. Attention to feelings was negatively correlated with evaluative decision latency, whereas clarity of feelings was not related to decision latency in the PFE task. Attention to feelings was positively correlated with the perception of negative affects in ambiguous faces. Attention to feelings and emotional clarity were not related to gender decision latency. According to our results, dispositional attention to feelings goes along with an enhanced efficiency of facial affect perception. Habitually paying attention to one's own emotions may facilitate processing of external emotional information. Preliminary evidence was obtained suggesting a relationship of dispositional attention to feelings with negative interpretations of facial expressions.
Collapse
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
| | - Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
| |
Collapse
|
36
|
Lin L, Liu Y, Mo J, Wang C, Liu T, Xu Z, Jiang Y, Bai X, Wu X. Attentional Bias to Emotional Facial Expressions in Undergraduates With Suicidal Ideation: An ERP Study. Arch Suicide Res 2022:1-18. [PMID: 35787745 DOI: 10.1080/13811118.2022.2096518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Individuals with suicidal behaviors pay more attention to negative signals than to positive ones. However, it is unclear whether this bias exists when suicide ideators perceive interpersonal stimuli (such as emotional faces), and the underlying neural mechanism of this attentional process is also unknown. The present study examined attentional bias toward emotional facial expressions using event-related potentials in a population with suicide ideation. Twenty-five undergraduates with suicide ideation (SI group) and sixteen undergraduates without suicide ideation (NSI group) completed a modified dot-probe task. Compared to the NSI group, the SI group exhibited: (1) a longer mean reaction time to fearful faces; (2) a larger N1 component to fearful faces; (3) a larger N1 component to the location of sad faces, as well as to the location opposite fearful and happy faces; and (4) a larger N1 component to the contralateral location of happy faces, whereas the NSI group showed a larger N1 component to the ipsilateral location of happy faces. These results indicate that the SI group was more sensitive to negative emotions (fearful and sad faces) than to positive emotions (happy faces), and that negative interpersonal stimuli are processed by suicide ideators at an early attentional stage.
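A minimal sketch of how a dot-probe attentional-bias score is usually derived from reaction times; the trial RTs are hypothetical and the computation is the generic congruency contrast, not the authors' exact analysis.

```python
import numpy as np

# Hypothetical per-trial RTs (ms) from a dot-probe task. Congruent trials:
# the probe replaces the emotional (e.g., fearful) face; incongruent trials:
# the probe replaces the neutral face of the pair.
rt_congruent = np.array([478., 490., 465., 502., 470.])
rt_incongruent = np.array([512., 530., 508., 525., 540.])

# Positive bias scores indicate faster responses at the location of the
# emotional face, i.e., attention was drawn toward it.
bias = rt_incongruent.mean() - rt_congruent.mean()
print(f"attentional bias toward fearful faces: {bias:.1f} ms")
```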
Collapse
|
37
|
Kang J, Park YE, Yoon HK. Feeling Blue and Getting Red: An Exploratory Study on the Effect of Color in the Processing of Emotion Information. Front Psychol 2022; 13:515215. [PMID: 35846653 PMCID: PMC9280203 DOI: 10.3389/fpsyg.2022.515215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Accepted: 06/08/2022] [Indexed: 11/25/2022] Open
Abstract
Specific emotions and colors are associated. The current study tested whether the interference of colors with affective processing occurs solely at the semantic stage or extends to a more complex stage such as the lexical processing of emotional words. We performed two experiments to determine the effect of colors on affective processing. In Experiment 1, participants completed a color-emotion priming task. The priming stimulus was a color-tinted (blue, red, or gray) image of a neutral face, followed after 50 ms by a target stimulus of gray-scaled emotional (angry or sad) or neutral faces. Experiment 2 used a modified emostroop paradigm and superimposed emotion words on the center of the color-tinted emotional and neutral faces. Results showed a priming effect of red for angry faces relative to the control, but no corresponding effect of blue for sad faces. However, responses to the blue-sad pair were significantly faster than to the red-sad pair. In the color-emostroop task, we observed a significant interaction between color and emotional target word. Participants detected sad targets faster and more accurately in blue than in red, but only in the incongruent condition. The results indicate that the influence of color on the processing of emotional information exists at the semantic level, but we found no evidence supporting an effect at the lexical level.
Collapse
Affiliation(s)
- June Kang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Yeo Eun Park
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
| | - Ho-Kyoung Yoon
- Department of Psychiatry, College of Medicine, Korea University, Seoul, South Korea
- *Correspondence: Ho-Kyoung Yoon,
| |
Collapse
|
38
|
Qiu Z, Lei X, Becker SI, Pegna AJ. Neural activities during the Processing of unattended and unseen emotional faces: a voxel-wise Meta-analysis. Brain Imaging Behav 2022; 16:2426-2443. [PMID: 35739373 PMCID: PMC9581832 DOI: 10.1007/s11682-022-00697-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/03/2022] [Indexed: 11/27/2022]
Abstract
Voxel-wise meta-analyses of task-evoked regional activity were conducted for healthy individuals during the unconscious processing of emotional and neutral faces with an aim to examine whether and how different experimental paradigms influenced brain activation patterns. Studies were categorized into sensory and attentional unawareness paradigms. Thirty-four fMRI studies including 883 healthy participants were identified. Across experimental paradigms, unaware emotional faces elicited stronger activation of the limbic system, striatum, inferior frontal gyrus, insula and the temporal lobe, compared to unaware neutral faces. Crucially, in attentional unawareness paradigms, unattended emotional faces elicited a right-lateralized increased activation (i.e., right amygdala, right temporal pole), suggesting a right hemisphere dominance for processing emotional faces during inattention. By contrast, in sensory unawareness paradigms, unseen emotional faces elicited increased activation of the left striatum, the left amygdala and the right middle temporal gyrus. Additionally, across paradigms, unconsciously processed positive emotions were found associated with more activation in temporal and parietal cortices whereas unconsciously processed negative emotions elicited stronger activation in subcortical regions, compared to neutral faces.
Collapse
Affiliation(s)
- Zeguo Qiu
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia.
| | - Xue Lei
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
| | - Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
| | - Alan J Pegna
- School of Psychology, The University of Queensland, Brisbane, 4072, Australia
| |
Collapse
|
39
|
Zsidó AN, Stecina DT, Cseh R, Hout MC. The effects of task-irrelevant threatening stimuli on orienting- and executive attentional processes under cognitive load. Br J Psychol 2022; 113:412-433. [PMID: 34773254 PMCID: PMC9299041 DOI: 10.1111/bjop.12540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 10/27/2021] [Indexed: 11/29/2022]
Abstract
Human visual attention is biased to rapidly detect threats in the environment so that our nervous system can initiate quick reactions. The processes underlying threat detection (and how they operate under cognitive load), however, are still poorly understood. Thus, we sought to test the impact of task-irrelevant threatening stimuli on the salience network and executive control of attention during low and high cognitive load. Participants were exposed to neutral or threatening pictures (with moderate and high arousal levels) as task-irrelevant distractors in near (parafoveal) and far (peripheral) positions while searching for numbers in ascending order in a matrix array. We measured reaction times and recorded eye movements. Our results showed that task-irrelevant distractors primarily influenced behavioural measures during high cognitive load. Threatening images with a moderate arousal level slowed reaction times for finding the first number. However, this slowing was offset by high-arousal threatening stimuli, leading to overall shorter search times. Eye-tracking measures showed that participants fixated threatening pictures later and for shorter durations compared to neutral images. Together, our results indicate a complex relationship between threats and attention that results not in a unitary bias but in a sequence of effects that unfold over time.
Collapse
Affiliation(s)
| | | | - Rebecca Cseh
- Institute of Psychology, University of Pécs, Pécs, Hungary
| | - Michael C. Hout
- Department of Psychology, New Mexico State University, Las Cruces, New Mexico, USA
- National Science Foundation, Alexandria, Virginia, USA
| |
Collapse
|
40
|
Can faces affect object-based attention? Evidence from online experiments. Atten Percept Psychophys 2022; 84:1220-1233. [PMID: 35396617 PMCID: PMC8992784 DOI: 10.3758/s13414-022-02473-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2022] [Indexed: 11/23/2022]
Abstract
This study tested how human faces affect object-based attention (OBA) in two online experiments using a modified double-rectangle paradigm. Experiment 1 revealed that faces did not elicit the OBA effect that non-face objects did, owing to longer response times (RTs) when attention was focused on faces rather than on non-face objects. In addition, RTs were faster when attention shifted horizontally rather than vertically, indicating a significant horizontal attention bias that could override the OBA effect if vertical rectangles were the only items presented. These results were replicated in Experiment 2 (using only vertical rectangles) after directly measuring the horizontal bias and excluding its influence on the OBA effect. The study suggests that faces do not elicit the same-object advantage in the double-rectangle paradigm and provides a method for measuring the OBA effect free from horizontal bias.
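For context, the object-based attention effect in the double-rectangle paradigm is usually computed as the same-object advantage for invalidly cued targets. The sketch below illustrates this with hypothetical RTs and notes how a horizontal shift advantage can distort the measure when only vertical rectangles are used.

```python
# Hypothetical mean RTs (ms) for invalidly cued targets, split by rectangle
# orientation and by whether the target appeared on the cued object or not.
rt = {
    ("vertical", "same_object"): 612.0,
    ("vertical", "different_object"): 648.0,
    ("horizontal", "same_object"): 605.0,
    ("horizontal", "different_object"): 630.0,
}

def oba(rt, orientation):
    """Same-object advantage: invalid-different-object minus invalid-same-object RT."""
    return rt[(orientation, "different_object")] - rt[(orientation, "same_object")]

# With vertical rectangles, a different-object shift is a horizontal shift, so a
# general advantage for horizontal attention shifts speeds those responses and
# can mask (or mimic the absence of) the same-object advantage.
print(oba(rt, "vertical"), oba(rt, "horizontal"))
```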
Collapse
|
41
|
Vicente-Querol MA, Fernandez-Caballero A, Molina JP, Gonzalez-Gualda LM, Fernandez-Sotos P, Garcia AS. Facial Affect Recognition in Immersive Virtual Reality: Where Is the Participant Looking? Int J Neural Syst 2022; 32:2250029. [DOI: 10.1142/s0129065722500290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
42
|
Wang H, Chen E, Li J, Ji F, Lian Y, Fu S. Configural but Not Featural Face Information Is Associated With Automatic Processing. Front Hum Neurosci 2022; 16:884823. [PMID: 35496070 PMCID: PMC9045007 DOI: 10.3389/fnhum.2022.884823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Accepted: 03/18/2022] [Indexed: 12/03/2022] Open
Abstract
Configural face processing precedes featural face processing under the face-attended condition, but their temporal sequence in the absence of attention is unclear. The present study investigated this issue by recording visual mismatch negativity (vMMN), which indicates the automatic processing of visual information under unattended conditions. Participants performed a central cross size change detection task, in which random sequences of faces were presented peripherally, in an oddball paradigm. In Experiment 1, configural and featural faces (deviant stimuli) were presented infrequently among original faces (standard stimuli). In Experiment 2, configural faces were presented infrequently among featural faces, or vice versa. The occipital-temporal vMMN emerged in the 200–360 ms latency range for configural, but not featural, face information. More specifically, configural face information elicited a substantial vMMN component in the 200–360 ms range in Experiment 1. This result was replicated in the 320–360 ms range in Experiment 2, especially in the right hemisphere. These results suggest that configural, but not featural, face information is associated with automatic processing and provides new electrophysiological evidence for the different mechanisms underlying configural and featural face processing under unattended conditions.
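A minimal sketch of how a vMMN amplitude is typically extracted as the deviant-minus-standard difference in a latency window (here 200-360 ms); the simulated waveforms and sampling parameters are placeholders, not the study's data.

```python
import numpy as np

def vmmn(deviant_erp: np.ndarray, standard_erp: np.ndarray,
         times: np.ndarray, window=(0.200, 0.360)) -> float:
    """Visual mismatch negativity: mean deviant-minus-standard difference
    amplitude in the given latency window at one occipito-temporal electrode."""
    diff = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    return float(diff[mask].mean())

# Toy usage: simulated ERPs sampled at 500 Hz from -100 to 500 ms.
times = np.arange(-0.1, 0.5, 0.002)
standard = np.zeros_like(times)
deviant = np.where((times > 0.2) & (times < 0.36), -1.5, 0.0)  # microvolts
print(vmmn(deviant, standard, times))
```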
Collapse
Affiliation(s)
- Hailing Wang
- School of Psychology, Shandong Normal University, Jinan, China
- *Correspondence: Hailing Wang,
| | - Enguang Chen
- School of Psychology, Shandong Normal University, Jinan, China
| | - JingJing Li
- School of Psychology, Shandong Normal University, Jinan, China
| | - Fanglin Ji
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
| | - Yujing Lian
- School of Psychology, Shandong Normal University, Jinan, China
| | - Shimin Fu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Shimin Fu,
| |
Collapse
|
43
|
Fresnoza S, Mayer RM, Schneider KS, Christova M, Gallasch E, Ischebeck A. Modulation of proper name recall by transcranial direct current stimulation of the anterior temporal lobes. Sci Rep 2022; 12:5735. [PMID: 35388106 PMCID: PMC8987057 DOI: 10.1038/s41598-022-09781-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 03/08/2022] [Indexed: 01/11/2023] Open
Abstract
We often fail to recall another person's name. Proper names might be more difficult to memorize and retrieve than other pieces of knowledge, such as one's profession, because they are processed differently in the brain. Neuroimaging and neuropsychological studies implicate the bilateral anterior temporal lobes (ATL) in the retrieval of proper names and other person-related knowledge. Specifically, recalling a person's name is thought to be supported by the left ATL, whereas recalling specific information such as a person's occupation is suggested to be subserved by the right ATL. To clarify and further explore the causal relationship between both ATLs and proper name retrieval, we stimulated these regions with anodal, cathodal and sham transcranial direct current stimulation (tDCS) while participants memorized surnames (e.g., Mr. Baker) and professions (e.g., baker) presented with a person's face. The participants were later asked to recall the surname and the profession. Anodal stimulation of the left ATL resulted in more intrusion errors for surnames than sham stimulation, whereas anodal stimulation of the right ATL resulted in more intrusion errors overall (for both surnames and professions) compared to cathodal stimulation. Cathodal stimulation of the left and right ATL had no significant effect on surname and profession recall. The results indicate that the left ATL plays a role in recalling proper names. The specific role of the right ATL remains to be explored.
Collapse
Affiliation(s)
- Shane Fresnoza
- Institute of Psychology, University of Graz, Universitätsplatz 2/DG, 8010 Graz, Austria; BioTechMed, Graz, Austria.
| | - Rosa-Maria Mayer
- Institute of Psychology, University of Graz, Universitätsplatz 2/DG, 8010, Graz, Austria
| | | | - Monica Christova
- Section of Physiology, Otto Loewi Research Center, Medical University of Graz, Graz, Austria; Institute for Physiotherapy, University of Applied Sciences, FH-Joanneum, Graz, Austria
| | - Eugen Gallasch
- Section of Physiology, Otto Loewi Research Center, Medical University of Graz, Graz, Austria
| | - Anja Ischebeck
- Institute of Psychology, University of Graz, Universitätsplatz 2/DG, 8010 Graz, Austria; BioTechMed, Graz, Austria
| |
Collapse
|
44
|
Schiano Lomoriello A, Sessa P, Doro M, Konvalinka I. Shared Attention Amplifies the Neural Processing of Emotional Faces. J Cogn Neurosci 2022; 34:917-932. [PMID: 35258571 DOI: 10.1162/jocn_a_01841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Sharing an experience, without communicating, affects people's subjective perception of the experience, often by intensifying it. We investigated the neural mechanisms underlying shared attention by implementing an EEG study where participants attended to and rated the intensity of emotional faces, simultaneously or independently. Participants performed the task in three experimental conditions: (a) alone; (b) simultaneously next to each other in pairs, without receiving feedback of the other's responses (shared without feedback); and (c) simultaneously while receiving the feedback (shared with feedback). We focused on two face-sensitive ERP components: The amplitude of the N170 was greater in the "shared with feedback" condition compared to the alone condition, reflecting a top-down effect of shared attention on the structural encoding of faces, whereas the EPN was greater in both shared context conditions compared to the alone condition, reflecting an enhanced attention allocation in the processing of emotional content of faces, modulated by the social context. Taken together, these results suggest that shared attention amplifies the neural processing of faces, regardless of the valence of facial expressions.
Collapse
|
45
|
Leung FYN, Sin J, Dawson C, Ong JH, Zhao C, Veić A, Liu F. Emotion recognition across visual and auditory modalities in autism spectrum disorder: A systematic review and meta-analysis. DEVELOPMENTAL REVIEW 2022. [DOI: 10.1016/j.dr.2021.101000] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
46
|
Abstract
Ensemble coding and attention are two mechanisms the visual system uses to cope with the limits of visual processing when confronted with overwhelming visual information. Recent evidence on ensemble coding of size suggests that attended items contribute more to the average. On the other hand, new evidence also indicates that reduced attention impairs the perceptual averaging of stimuli. What, then, is the relationship between attention and ensemble coding? To answer this question, the current study tested whether an exogenous attentional cue influences the reported mean emotion of a crowd. We showed participants groups of four faces with different emotions. Participants' attention was guided to the happiest or saddest face (attention conditions) or to no specific face (baseline condition). The results support the notion that attention alters the ensemble perception of facial expression by elevating the weight of the attended face in the ensemble representation. This opens questions about the neural mechanisms of ensemble coding and its connection to visual attention.
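A simple weighted-averaging model captures the idea that attention elevates the weight of the cued face in the ensemble representation. The sketch below is illustrative only; the emotion values and weights are hypothetical.

```python
import numpy as np

def ensemble_mean(emotion_values: np.ndarray, weights: np.ndarray) -> float:
    """Weighted average of the emotional intensity of the faces in the set.
    Equal weights give the simple mean; an elevated weight for the cued face
    models the pull of exogenous attention on the reported ensemble emotion."""
    w = weights / weights.sum()
    return float(np.dot(w, emotion_values))

# Four faces rated from -2 (saddest) to +2 (happiest).
faces = np.array([-2.0, -0.5, 0.5, 2.0])

equal = np.ones(4)
cued_happiest = np.array([1.0, 1.0, 1.0, 2.0])  # attention drawn to the happiest face

print(ensemble_mean(faces, equal))          # 0.0
print(ensemble_mean(faces, cued_happiest))  # shifted toward happy
```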
Collapse
|
47
|
Frot M, Mauguière F, Garcia-Larrea L. Insular Dichotomy in the Implicit Detection of Emotions in Human Faces. Cereb Cortex 2022; 32:4215-4228. [PMID: 35029677 DOI: 10.1093/cercor/bhab477] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 11/03/2021] [Accepted: 11/23/2021] [Indexed: 12/17/2022] Open
Abstract
The functional roles of the insula diverge between its posterior portion (PI), mainly connected with somato-sensory and motor areas, and its anterior section (AI), connected with the frontal, limbic, and cingulate regions. We report intracranial recordings of local field evoked potentials from PI, AI, and the visual fusiform gyrus to a full array of emotional faces including pain while the individuals' attention was diverted from emotions. The fusiform gyrus and PI responded equally to all types of faces, including neutrals. Conversely, the AI responded only to emotional faces, maximally to pain and fear, while remaining insensitive to neutrals. The two insular sectors reacted with almost identical latency, suggesting their parallel initial activation via distinct functional routes. The consistent responses to all emotions, together with the absence of response to neutral faces, suggest that early responses in the AI reflect the immediate arousal value and behavioral relevance of emotional stimuli, which may be subserved by "fast track" routes conveying coarse-spatial-frequency information via the superior colliculus and dorsal pulvinar. Such responses precede the conscious detection of the stimulus' precise signification and valence, which requires network interaction and information exchange with other brain areas, for which the AI is an essential hub.
Collapse
Affiliation(s)
- Maud Frot
- Central Integration of Pain (NeuroPain) Lab-Lyon Neuroscience Research Center, INSERM U1028, CNRS, UMR5292, Université Claude Bernard, Bron 69677, France
| | - François Mauguière
- Central Integration of Pain (NeuroPain) Lab-Lyon Neuroscience Research Center, INSERM U1028, CNRS, UMR5292, Université Claude Bernard, Bron 69677, France
| | - Luis Garcia-Larrea
- Central Integration of Pain (NeuroPain) Lab-Lyon Neuroscience Research Center, INSERM U1028, CNRS, UMR5292, Université Claude Bernard, Bron 69677, France
- Centre d'Evaluation et de Traitement de la Douleur, Hospices Civils de Lyon, Lyon 69003, France
| |
Collapse
|
48
|
The Neurophysiology of the Cerebellum in Emotion. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1378:87-108. [DOI: 10.1007/978-3-030-99550-8_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
|
49
|
Gainotti G. Is There a Causal Link between the Left Lateralization of Language and Other Brain Asymmetries? A Review of Data Gathered in Patients with Focal Brain Lesions. Brain Sci 2021; 11:1644. [PMID: 34942946 PMCID: PMC8699490 DOI: 10.3390/brainsci11121644] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/01/2021] [Accepted: 12/11/2021] [Indexed: 11/16/2022] Open
Abstract
This review evaluated if the hypothesis of a causal link between the left lateralization of language and other brain asymmetries could be supported by a careful review of data gathered in patients with unilateral brain lesions. In a short introduction a distinction was made between brain activities that could: (a) benefit from the shaping influences of language (such as the capacity to solve non-verbal cognitive tasks and the increased levels of consciousness and of intentionality); (b) be incompatible with the properties and the shaping activities of language (e.g., the relations between language and the automatic orienting of visual-spatial attention or between cognition and emotion) and (c) be more represented on the right hemisphere due to competition for cortical space. The correspondence between predictions based on the theoretical impact of language on other brain functions and data obtained in patients with lesions of the right and left hemisphere was then assessed. The reviewed data suggest that different kinds of hemispheric asymmetries observed in patients with unilateral brain lesions could be subsumed by common mechanisms, more or less directly linked to the left lateralization of language.
Collapse
Affiliation(s)
- Guido Gainotti
- Institute of Neurology, Catholic University, 00168 Rome, Italy
| |
Collapse
|
50
|
Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021; 132:304-323. [PMID: 34861296 DOI: 10.1016/j.neubiorev.2021.11.042] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 11/24/2021] [Accepted: 11/24/2021] [Indexed: 11/21/2022]
Abstract
This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence is available suggesting that the perception and processing of facial information starts in the prenatal period; (2) the perception and processing of face identity, expression and gaze direction is highly context specific, the effect of race and culture being a case in point. Through experiential shaping and social categorization, culture affects the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction and medial prefrontal cortex.
Collapse
|