1. Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023;283:120442. PMID: 37926217. DOI: 10.1016/j.neuroimage.2023.120442.
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotions (happiness, sadness, anger, disgust, fear, surprise, and neutral). Time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) and performed similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of the information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
2. Burleigh L, Greening SG. Fear in the mind's eye: the neural correlates of differential fear acquisition to imagined conditioned stimuli. Soc Cogn Affect Neurosci 2023;18:6984812. PMID: 36629508. PMCID: PMC10036874. DOI: 10.1093/scan/nsac063.
Abstract
Mental imagery is involved in both the expression and treatment of fear-related disorders such as anxiety and post-traumatic stress disorder. However, the neural correlates associated with the acquisition and generalization of differential fear conditioning to imagined conditioned stimuli are relatively unknown. In this study, healthy human participants (n = 27) acquired differential fear conditioning to imagined conditioned stimuli paired with a physical unconditioned stimulus (i.e. mild shock), as measured via self-reported fear, the skin conductance response and significant right anterior insula (aIn) activation. Multivoxel pattern analysis cross-classification also demonstrated that the pattern of activity in the right aIn during imagery acquisition was quantifiably similar to the pattern produced by standard visual acquisition. Additionally, mental imagery was associated with significant differential fear generalization. Fear conditioning acquired to imagined stimuli generalized to viewing those same stimuli as measured with self-reported fear and right aIn activity, and likewise fear conditioning to visual stimuli was associated with significant generalized differential self-reported fear and right aIn activity when imagining those stimuli. Together, the study provides a novel understanding of the neural mechanisms associated with the acquisition of differential fear conditioning to imagined stimuli, and of the relationship between imagery and emotion more generally.
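The cross-classification logic used here (train a pattern classifier on one condition, test it on another) can be illustrated with simulated voxel patterns: a classifier trained on "viewed" trials is tested on "imagined" trials that share a weakened version of the same CS+ pattern. The trial counts, voxel counts, and effect sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_vox = 80, 100
y = np.tile([0, 1], n_trials // 2)            # 0 = CS-, 1 = CS+
pattern = rng.standard_normal(n_vox)          # hypothetical CS+ activity pattern

# viewing trials carry the full pattern; imagery trials a weaker copy
X_view = rng.standard_normal((n_trials, n_vox)) + np.outer(y, pattern)
X_imag = rng.standard_normal((n_trials, n_vox)) + np.outer(y, pattern) * 0.6

clf = LogisticRegression(max_iter=1000).fit(X_view, y)   # train on viewing runs
xclass = clf.score(X_imag, y)                            # test on imagery runs
print(f"view -> imagery cross-classification accuracy: {xclass:.2f}")
```

Above-chance transfer in this direction is the evidence that the two conditions recruit a quantifiably similar pattern, as reported for the right aIn.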
Affiliation(s)
- Lauryn Burleigh
- Department of Psychology, Cognitive and Brain Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
- Steven G Greening
- Department of Psychology, Cognitive and Brain Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
- Department of Psychology, Brain and Cognitive Sciences, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada
3. The medial temporal lobe structure and function support positive affect. Neuropsychologia 2022;176:108373. PMID: 36167193. DOI: 10.1016/j.neuropsychologia.2022.108373.
Abstract
Positive affect (PA) is associated not only with individuals' psychological and physical health, but also with their cognitive processes. However, whether the volume and functional connectivity of the medial temporal lobe (MTL) and its subfields can explain individual variability in PA remains understudied. We investigated the morphological (i.e., grey matter volume; GMV) and functional characteristics (i.e., resting-state functional connectivity; rsFC) of PA with a combination of univariate and multivariate pattern analyses (MVPA) in a large sample of participants (n = 321). We collected T1-weighted (n = 321), high-resolution MTL T2-weighted, and resting-state functional imaging data (n = 209). The MTL and its subfields' volumes, including the CA1, CA2+3, DG, subiculum (SUB), perirhinal cortex (PRC), and parahippocampus (PHC), were extracted using the automatic segmentation of hippocampal subfields (ASHS) software. The morphological results revealed that GMVs in the prefrontal-occipital and limbic (i.e., hippocampus, amygdala, and PHC) systems were associated with variability in PA at the whole-brain level using MVPA but not univariate analysis. Linear regression results further revealed a positive association between the MTL subfields' GMV, especially the right PRC, and PA after controlling for several covariates. PRC-seed-based rsFC analyses further revealed that its couplings with the fronto-parietal-occipital system predicted PA in both univariate analyses and MVPA. These findings provide novel insights into the neuroanatomical and functional substrates underlying the human PA trait. They also suggest critical contributions of the MTL and its perirhinal cortex subfield, but not the hippocampal subfields, as well as its functional coupling with the fronto-parietal control system, to the formation of PA.
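The multivariate brain-to-trait analysis described here amounts to predicting a trait score from voxelwise features out of sample. A minimal sketch on simulated data follows; the feature count, weight pattern, and ridge penalty are arbitrary assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_sub, n_vox = 321, 200
w = np.zeros(n_vox)
w[:20] = 0.3                      # only a small set of voxels carries trait info
gmv = rng.standard_normal((n_sub, n_vox))          # simulated grey-matter volumes
pa = gmv @ w + rng.standard_normal(n_sub) * 0.5    # simulated PA scores

# cross-validated prediction: each subject's score is predicted by a model
# trained on the other folds, then compared against the observed score
pred = cross_val_predict(Ridge(alpha=10.0), gmv, pa, cv=5)
r = np.corrcoef(pred, pa)[0, 1]
print(f"cross-validated prediction r = {r:.2f}")
```

A reliably positive cross-validated `r` is what licenses the claim that the morphological pattern "predicts" the trait, rather than merely fitting it in sample.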
4. Representational structure of fMRI/EEG responses to dynamic facial expressions. Neuroimage 2022;263:119631. PMID: 36113736. DOI: 10.1016/j.neuroimage.2022.119631.
Abstract
Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and of low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
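Comparing neural patterns against model predictions of this sort is typically done with representational similarity analysis: build a neural representational dissimilarity matrix (RDM) and rank-correlate it with candidate model RDMs. The sketch below uses simulated responses in which the "neural" patterns track expression intensity but not an unrelated pixel-based model; stimulus counts and feature sizes are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stim, n_feat = 48, 60
intensity = np.tile(np.linspace(0, 1, 16), 3)     # 3 categories x 16 intensity levels

# simulated responses: a fixed pattern scaled by intensity, plus noise
neural = np.outer(intensity, rng.standard_normal(n_feat))
neural += rng.standard_normal((n_stim, n_feat)) * 0.2

neural_rdm = pdist(neural)                        # euclidean dissimilarities
intensity_rdm = pdist(intensity[:, None])         # |intensity_i - intensity_j|
image_rdm = pdist(rng.standard_normal((n_stim, 10)))   # unrelated pixel model

r_int, _ = spearmanr(neural_rdm, intensity_rdm)
r_img, _ = spearmanr(neural_rdm, image_rdm)
print(f"intensity model rho={r_int:.2f}, image model rho={r_img:.2f}")
```

A region whose RDM correlates with the intensity model but not the image model would, on this logic, carry image-independent intensity information, as reported for IFG-FA.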
5. Bailey KM, Giordano BL, Kaas AL, Smith FW. Decoding sounds depicting hand-object interactions in primary somatosensory cortex. Cereb Cortex 2022;33:3621-3635. PMID: 36045002. DOI: 10.1093/cercor/bhac296.
Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influences from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
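ROI-based MVPA of this kind reduces to cross-validated classification of voxel patterns, with significance commonly assessed by permuting the trial labels. The sketch below simulates an ROI in which only one sound category is encoded; trial counts, voxel counts, effect size, and the (very small) number of permutations are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_vox = 60, 50
y = np.repeat([0, 1, 2], 20)       # hand-object, pure tones, vocalizations
X = rng.standard_normal((n_trials, n_vox))
X[y == 0, :10] += 0.8              # only category 0 is encoded in this ROI

def roi_acc(X, labels):
    """Cross-validated classification accuracy within the ROI."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()

observed = roi_acc(X, y)
# permutation null: shuffle labels, refit; real analyses use >= 1000 permutations
null = np.array([roi_acc(X, rng.permutation(y)) for _ in range(20)])
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"accuracy={observed:.2f}, permutation p={p:.3f}")
```
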
Affiliation(s)
- Kerri M Bailey
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Bruno L Giordano
- Institut des Neurosciences de La Timone, CNRS UMR 7289, Université Aix-Marseille, Marseille, France
- Amanda L Kaas
- Department of Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
6. Greening SG, Lee TH, Burleigh L, Grégoire L, Robinson T, Jiang X, Mather M, Kaplan J. Mental imagery can generate and regulate acquired differential fear conditioned reactivity. Sci Rep 2022;12:997. PMID: 35046506. PMCID: PMC8770773. DOI: 10.1038/s41598-022-05019-y.
Abstract
Mental imagery is an important tool in the cognitive control of emotion. The present study tests the prediction that visual imagery can generate and regulate differential fear conditioning via the activation and prioritization of stimulus representations in early visual cortices. We combined differential fear conditioning with manipulations of viewing and imagining basic visual stimuli in humans. We discovered that mental imagery of a fear-conditioned stimulus compared to imagery of a safe conditioned stimulus generated a significantly greater conditioned response as measured by self-reported fear, the skin conductance response, and right anterior insula activity (experiment 1). Moreover, mental imagery effectively down- and up-regulated the fear conditioned responses (experiment 2). Multivariate classification using the functional magnetic resonance imaging data from retinotopically defined early visual regions revealed significant decoding of the imagined stimuli in V2 and V3 (experiment 1) but significantly reduced decoding in these regions during imagery-based regulation (experiment 2). Together, the present findings indicate that mental imagery can generate and regulate a differential fear conditioned response via mechanisms of the depictive theory of imagery and the biased-competition theory of attention. These findings also highlight the potential importance of mental imagery in the manifestation and treatment of psychological illnesses.
Affiliation(s)
- Steven G Greening
- Brain and Cognitive Sciences, Department of Psychology, University of Manitoba, Winnipeg, R3T 2N2, Canada.
- Department of Psychology, Louisiana State University, Baton Rouge, USA.
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, USA.
- Tae-Ho Lee
- Department of Psychology, Virginia Tech, Blacksburg, USA
- Department of Psychology, University of Southern California, Los Angeles, USA
- Lauryn Burleigh
- Department of Psychology, Louisiana State University, Baton Rouge, USA
- Laurent Grégoire
- Department of Psychology, Louisiana State University, Baton Rouge, USA
- Department of Psychology and Brain Sciences, Texas A&M University, College Station, USA
- Tyler Robinson
- Department of Psychology, Louisiana State University, Baton Rouge, USA
- Xinrui Jiang
- Department of Psychology, Louisiana State University, Baton Rouge, USA
- Mara Mather
- Leonard Davis School of Gerontology, University of Southern California, Los Angeles, USA
- Department of Psychology, University of Southern California, Los Angeles, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, USA
- Jonas Kaplan
- Brain and Creativity Institute, Dornsife College of Letters Arts and Sciences, University of Southern California, Los Angeles, USA
7. Watanabe A, Yamazaki T. Representation of the brain network by electroencephalograms during facial expressions. J Neurosci Methods 2021;357:109158. PMID: 33819556. DOI: 10.1016/j.jneumeth.2021.109158.
Abstract
BACKGROUND Facial expressions, such as smiling and anger, cause many physical and psychological effects in the body, a phenomenon known as 'embodied emotion' or the 'facial feedback theory.' In clinical applications of this theory to conditions such as autism and depression, treatments such as instructing patients to smile have been used. However, the neural mechanisms underlying the representation of facial expressions remain unclear. NEW METHOD We proposed a method to construct brain networks based on the time course of the synchronization likelihood and to compare the networks induced by various facial expressions made in response to visual face stimuli. This method was applied to analyze electroencephalographic (EEG) data recorded during the recognition and representation of various positive and negative facial expressions. The brain networks were constructed from the EEG data of 11 healthy participants. RESULTS Channel sets from brain networks during asymmetrical smiling expressions (i.e., smiling on only the right or left side) were highly linearly symmetrical. Channel sets from brain networks during negative facial expressions (i.e., anger and sadness) and symmetrical smiling expressions (i.e., smiling with an opened or closed mouth) were similar. COMPARISON WITH EXISTING METHODS Whereas existing methods analyze EEG data only at a single time point, our approach derives brain networks from time-course EEG correlations across the whole experiment. CONCLUSIONS Comparisons across facial expressions could be used to identify which side of the facial muscles is engaged while smiling and to determine how similar the brain networks induced by positive and negative facial expressions are.
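Synchronization likelihood itself is a nonlinear coupling measure, but the network-construction step can be illustrated with a simpler correlation-based stand-in: compute pairwise channel coupling over the recording and threshold it into edges. The channel count, threshold, and injected shared signal below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_samp = 8, 1000
eeg = rng.standard_normal((n_ch, n_samp))   # simulated EEG, channels x samples
shared = rng.standard_normal(n_samp)
eeg[0] += shared                            # channels 0 and 3 receive a common
eeg[3] += shared                            # source, i.e. they are synchronized

# pairwise coupling over the whole recording, thresholded into network edges
corr = np.corrcoef(eeg)
adj = (np.abs(corr) > 0.4) & ~np.eye(n_ch, dtype=bool)
edges = [(i, j) for i in range(n_ch) for j in range(i + 1, n_ch) if adj[i, j]]
print("network edges:", edges)
```

Repeating this per facial-expression condition and comparing the resulting edge sets is the comparison the abstract describes at the level of channel sets.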
Affiliation(s)
- Asako Watanabe
- Department of Bioscience and Bioinformatics, Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology, Kawazu 680-4, Iizuka City, Fukuoka, 820-8502, Japan.
- Toshimasa Yamazaki
- Department of Bioscience and Bioinformatics, Graduate School of Computer Science and Systems Engineering, Kyushu Institute of Technology, Kawazu 680-4, Iizuka City, Fukuoka, 820-8502, Japan
8. Guo K, Calver L, Soornack Y, Bourke P. Valence-dependent Disruption in Processing of Facial Expressions of Emotion in Early Visual Cortex—A Transcranial Magnetic Stimulation Study. J Cogn Neurosci 2020;32:906-916. DOI: 10.1162/jocn_a_01520.
Abstract
Our visual inputs are often entangled with affective meanings in natural vision, implying the existence of extensive interaction between visual and emotional processing. However, little is known about the neural mechanism underlying such interaction. This exploratory transcranial magnetic stimulation (TMS) study examined the possible involvement of the early visual cortex (EVC, Area V1/V2/V3) in perceiving facial expressions of different emotional valences. Across three experiments, single-pulse TMS was delivered at different time windows (50–150 msec) after a brief 10-msec onset of face images, and participants reported the visibility and perceived emotional valence of faces. Interestingly, earlier TMS at ∼90 msec only reduced the face visibility irrespective of displayed expressions, but later TMS at ∼120 msec selectively disrupted the recognition of negative facial expressions, indicating the involvement of EVC in the processing of negative expressions at a later time window, possibly beyond the initial processing of fed-forward facial structure information. The observed TMS effect was further modulated by individuals' anxiety level. TMS at ∼110–120 msec disrupted the recognition of anger significantly more for those scoring relatively low in trait anxiety than the high scorers, suggesting that cognitive bias influences the processing of facial expressions in EVC. Taken together, it seems that EVC is involved in structural encoding of (at least) negative facial emotional valence, such as fear and anger, possibly under modulation from higher cortical areas.
9. Spatio-temporal dynamics of face perception. Neuroimage 2020;209:116531. DOI: 10.1016/j.neuroimage.2020.116531.
10. Imbriano G, Sussman TJ, Jin J, Mohanty A. The role of imagery in threat-related perceptual decision making. Emotion 2019;20:1495-1501. PMID: 31192666. DOI: 10.1037/emo0000610.
Abstract
Visual perception is heavily influenced by "top-down" factors, including goals, expectations, and prior knowledge about the environmental context. Recent research has demonstrated the beneficial role threat-related cues play in perceptual decision making; however, the psychological processes contributing to this differential effect remain unclear. Since visual imagery helps to create perceptual representations or "templates" based on prior knowledge (e.g., cues), the present study examines the role vividness of visual imagery plays in enhanced perceptual decision making following threatening cues. In a perceptual decision-making task, participants used threat-related and neutral cues to detect perceptually degraded fearful and neutral faces presented at predetermined perceptual thresholds. Participants' vividness of imagery was measured by the Vividness of Visual Imagery Questionnaire-2 (VVIQ-2). Our results replicated prior work demonstrating that threat cues improve accuracy, perceptual sensitivity, and speed of perceptual decision making compared to neutral cues. Furthermore, better performance following threat and neutral cues was associated with higher VVIQ-2 scores. Importantly, more precise and rapid perceptual decision making following threatening cues was associated with greater VVIQ-2 scores, even after controlling for performance related to neutral cues. This association may be because greater imagery ability allows one to conjure more vivid threat-related templates, which facilitate subsequent perception. While the detection of threatening stimuli is well studied in the literature, our findings elucidate how threatening cues occurring prior to the stimulus aid in subsequent perception. Overall, these findings highlight the necessity of considering top-down threat-related factors in visual perceptual decision making.
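Perceptual sensitivity in detection tasks like this one is typically summarized with the signal-detection statistic d'. A self-contained sketch follows; the trial counts are made up for illustration and are not the study's data.

```python
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """Sensitivity d', with a log-linear correction to avoid infinite z-scores
    when hit or false-alarm rates reach 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# hypothetical counts for threat-cued vs neutral-cued trials
d_threat = dprime(hits=42, misses=8, fas=10, crs=40)
d_neutral = dprime(hits=35, misses=15, fas=14, crs=36)
print(f"d' threat={d_threat:.2f}, neutral={d_neutral:.2f}")
```

A higher d' after threat cues than after neutral cues is the "perceptual sensitivity" advantage the abstract reports.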
11. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019;195:261-271. PMID: 30940611. DOI: 10.1016/j.neuroimage.2019.03.065.
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are either perceived under explicit (e.g. decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g. decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time-windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time-courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time-windows.
Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity by which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
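Relating decoder errors to behavioral errors, as done above, starts from the classifier's confusion matrix. The sketch below builds one from simulated response patterns in which two expression categories are deliberately confusable; the class structure and feature counts are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n_per, n_feat = 40, 30
y = np.repeat(np.arange(4), n_per)        # 4 expression categories
centers = rng.standard_normal((4, n_feat))
# make categories 0 and 1 similar, so the decoder should confuse them
centers[1] = centers[0] + rng.standard_normal(n_feat) * 0.4
X = centers[y] + rng.standard_normal((len(y), n_feat))

# cross-validated predictions, tallied into a confusion matrix
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)
conf = np.zeros((4, 4))
for true, p in zip(y, pred):
    conf[true, p] += 1
print("decoder confusions (rows = true category):")
print(conf.astype(int))
```

Correlating the off-diagonal structure of such a matrix with the corresponding behavioral confusion matrix is what links neural decoding errors to categorization errors.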
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK.
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
12. Smith FW, Rossit S. Identifying and detecting facial expressions of emotion in peripheral vision. PLoS One 2018;13:e0197160. PMID: 29847562. PMCID: PMC5976168. DOI: 10.1371/journal.pone.0197160.
Abstract
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigated facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. As expected, we demonstrate a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also a well-detected expression. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.
Affiliation(s)
- Fraser W. Smith
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Stephanie Rossit
- School of Psychology, University of East Anglia, Norwich, United Kingdom