1
Stuit SM, Paffen CLE, Van der Stigchel S. Prioritization of emotional faces is not driven by emotional content. Sci Rep 2023; 13:549. [PMID: 36631453; PMCID: PMC9834315; DOI: 10.1038/s41598-022-25575-7]
Abstract
Emotional faces have prioritized access to visual awareness. However, studies concerned with which expressions are prioritized most are inconsistent, and the source of prioritization remains elusive. Here we tested how well prioritization for awareness is predicted by spatial frequency-based image features and by emotional content, that is, the sub-part of the image content that signals the actor's emotional expression, as opposed to the image content irrelevant to that expression. Participants reported which of two faces (displaying a combination of angry, happy, and neutral expressions), both temporarily suppressed from awareness, was perceived first. Although happy expressions were prioritized for awareness, this prioritization was driven by the contrast energy of the images; in fact, emotional content could not predict prioritization at all. Our findings show that the source of prioritization for awareness is not the information carrying the emotional content. We argue that the methods used here, or similar approaches, should become standard practice to break the chain of inconsistent findings regarding emotional superiority effects that has been part of the field for decades.
Affiliation(s)
- Sjoerd M. Stuit
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Chris L. E. Paffen
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
2
Baier D, Kempkes M, Ditye T, Ansorge U. Do Subliminal Fearful Facial Expressions Capture Attention? Front Psychol 2022; 13:840746. [PMID: 35496171; PMCID: PMC9039161; DOI: 10.3389/fpsyg.2022.840746]
Abstract
In two experiments, we tested whether fearful facial expressions capture attention in an awareness-independent fashion. In Experiment 1, participants searched for a visible neutral face presented at one of two positions. Prior to the target, a backward-masked and thus invisible emotional (fearful/disgusted) or neutral face was presented as a cue, either at the target position or away from it. If negative emotional faces capture attention in a stimulus-driven way, we would have expected a cueing effect: better performance when fearful or disgusted facial cues were presented at the target position than away from it. However, no evidence of attentional capture was found, either in behavior (response times or error rates) or in event-related lateralizations (N2pc). In Experiment 2, we went one step further and also used fearful faces as visible targets, thereby seeking to boost awareness-independent capture of attention by fearful faces. Even so, we found no significant attention-capture effect. Our results show that fearful facial expressions do not capture attention in an awareness-independent way. Results are discussed in light of existing theories.
Affiliation(s)
- Diane Baier
- Faculty of Psychology, University of Vienna, Vienna, Austria; Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Marleen Kempkes
- Faculty of Psychology, University of Vienna, Vienna, Austria
- Thomas Ditye
- Faculty of Psychology, University of Vienna, Vienna, Austria; Faculty of Psychology, Sigmund Freud University, Vienna, Austria
- Ulrich Ansorge
- Faculty of Psychology, University of Vienna, Vienna, Austria; Cognitive Science Hub, University of Vienna, Vienna, Austria
3
The image features of emotional faces that predict the initial eye movement to a face. Sci Rep 2021; 11:8287. [PMID: 33859332; PMCID: PMC8050215; DOI: 10.1038/s41598-021-87881-w]
Abstract
Emotional facial expressions are important visual communication signals that indicate a sender's intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their low-level image features rather than in terms of the emotional content (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the initial eye movement towards one of two simultaneously presented faces. Interestingly, the identified features serve as better predictors than the emotional content of the expressions. We therefore propose that our modelling approach can further specify which visual features drive these and other behavioural effects related to emotional expressions, which can help solve the inconsistencies found in this line of research.
4
External and internal facial features modulate processing of vertical but not horizontal spatial relations. Vision Res 2019; 157:44-54. [DOI: 10.1016/j.visres.2017.12.006]
5
Hashemi A, Pachai MV, Bennett PJ, Sekuler AB. The role of horizontal facial structure on the N170 and N250. Vision Res 2019; 157:12-23. [DOI: 10.1016/j.visres.2018.02.006]
6
Pachai MV, Bennett PJ, Sekuler AB. The effect of training with inverted faces on the selective use of horizontal structure. Vision Res 2019; 157:24-35. [DOI: 10.1016/j.visres.2018.04.003]
7
Fixed or flexible? Orientation preference in identity and gaze processing in humans. PLoS One 2019; 14:e0210503. [PMID: 30682035; PMCID: PMC6347268; DOI: 10.1371/journal.pone.0210503]
Abstract
Vision begins with the encoding of contrast at specific orientations. Several studies have shown that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image; this range conveys the main morphological features of the face. In contrast, the vertical structure of the eye region seems to deliver optimal cues to gaze direction. The present work investigates whether the human face processing system flexibly tunes to the vertical information contained in the eye region when processing gaze direction. Alternatively, face processing may invariantly rely on the horizontal range, supporting the domain specificity of orientation tuning for faces and the gateway role of horizontal content in accessing any type of facial information. Participants judged the gaze direction of faces staring at a range of lateral positions. They additionally performed an identification task with upright and inverted face stimuli. Across tasks, stimuli were filtered to selectively reveal horizontal (H), vertical (V), or combined (HV) information. Most participants identified faces better based on horizontal than vertical information, confirming the horizontal tuning of face identification. In contrast, they showed a vertically-tuned sensitivity to gaze direction: the logistic functions fitting the “left” and “right” response proportions as a function of gaze direction were steeper when based on vertical than on horizontal information. The finding of vertically-tuned processing of gaze direction favours the hypothesis that visual encoding of face information flexibly switches to the orientation channel carrying the cues most relevant to the task at hand. It suggests that horizontal structure, though predominant in the face stimulus, is not a mandatory gateway for efficient face processing. The present evidence may help better understand how visual signals travel through the visual system to enable rich and complex representations of naturalistic stimuli such as faces.
8
Yu D, Chai A, Chung STL. Orientation information in encoding facial expressions. Vision Res 2018; 150:29-37. [PMID: 30048659; PMCID: PMC6139277; DOI: 10.1016/j.visres.2018.07.001]
Abstract
Previous research showed that we use different regions of a face to categorize different facial expressions, e.g. the mouth region for identifying happy faces and the eyebrows, eyes, and upper part of the nose for identifying angry faces. These findings imply that spatial information along or close to the horizontal orientation might be more useful than other orientations for facial expression recognition. In this study, we examined how performance for recognizing facial expressions depends on spatial information along different orientations, and whether pixel-level differences in the face images could account for subjects' performance. Four facial expressions were tested: angry, fearful, happy, and sad. An orientation filter (bandwidth = 23°) was applied to restrict information within the face images, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. Accuracy for recognizing facial expressions was measured for the unfiltered and the six filtered conditions. For all four facial expressions, recognition performance (normalized d') was virtually identical for filter orientations of -30°, horizontal, and 30°, and declined systematically as the filter orientation approached vertical. The information contained in the mouth and eye regions was a significant predictor of subjects' responses (based on the confusion patterns). We conclude that young adults with normal vision categorize facial expressions most effectively based on spatial information around the horizontal orientation, which captures the primary changes of facial features across expressions. Across all spatial orientations, the information contained in the mouth and eye regions contributes significantly to facial expression categorization.
Affiliation(s)
- Deyue Yu
- College of Optometry, The Ohio State University, Columbus, OH, United States
- Andrea Chai
- School of Optometry, University of California, Berkeley, CA, United States
- Susana T L Chung
- School of Optometry, University of California, Berkeley, CA, United States
9
Royer J, Blais C, Charbonneau I, Déry K, Tardif J, Duchaine B, Gosselin F, Fiset D. Greater reliance on the eye region predicts better face recognition ability. Cognition 2018; 181:12-20. [PMID: 30103033; DOI: 10.1016/j.cognition.2018.08.004]
Abstract
Interest in using individual differences in face recognition ability to better understand the perceptual and cognitive mechanisms supporting face processing has grown substantially in recent years. The goal of this study was to determine how varying levels of face recognition ability are linked to changes in visual information extraction strategies in an identity recognition task. To address this question, fifty participants completed six tasks measuring face and object processing abilities. Using the Bubbles method (Gosselin & Schyns, 2001), we also measured each individual's use of visual information in face recognition. At the group level, our results replicate previous findings demonstrating the importance of the eye region for face identification. More importantly, we show that face processing ability is related to a systematic increase in the use of the eye area, especially the left eye from the observer's perspective. Indeed, our results suggest that the use of this region accounts for approximately 20% of the variance in face processing ability. These results support the idea that individual differences in face processing are at least partially related to the perceptual extraction strategy used during face identification.
Affiliation(s)
- Jessica Royer
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Karine Déry
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Jessica Tardif
- Département de Psychologie, Université de Montréal, Canada
- Brad Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, United States
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
10
Amorim M, Pinheiro AP. Is the sunny side up and the dark side down? Effects of stimulus type and valence on a spatial detection task. Cogn Emot 2018; 33:346-360. [PMID: 29564964; DOI: 10.1080/02699931.2018.1452718]
Abstract
In verbal communication, affective information is commonly conveyed to others through spatial terms (e.g. in "I am feeling down", negative affect is associated with a lower spatial location). This study used a target location discrimination task with neutral, positive and negative stimuli (words, facial expressions, and vocalizations) to test the automaticity of the emotion-space association, both in the vertical and horizontal spatial axes. The effects of stimulus type on emotion-space representations were also probed. A congruency effect (reflected in reaction times) was observed in the vertical axis: detection of upper targets preceded by positive stimuli was faster. This effect occurred for all stimulus types, indicating that the emotion-space association is not dependent on sensory modality and on the verbal content of affective stimuli.
Affiliation(s)
- Maria Amorim
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; School of Psychology, University of Minho, Braga, Portugal
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal; School of Psychology, University of Minho, Braga, Portugal
11
Abstract
Observers make a range of social evaluations based on facial appearance, including judgments of trustworthiness, warmth, competence, and other aspects of personality. What visual information do people use to make these judgments? While links have been made between perceived social characteristics and other high-level properties of facial appearance (e.g., attractiveness, masculinity), there has been comparatively little effort to link social evaluations to low-level visual features, like spatial frequency and orientation sub-bands, known to be critically important for face processing. We explored the extent to which different social evaluations depended critically on horizontal orientation energy vs. vertical orientation energy, as is the case for face identification and emotion recognition. We found that while trustworthiness judgments exhibited this bias for horizontal orientations, competence and dominance did not, suggesting that social evaluations may depend on a multi-channel representation of facial appearance at early stages of visual processing.
12
Pachai MV, Bennett PJ, Sekuler AB. The Bandwidth of Diagnostic Horizontal Structure for Face Identification. Perception 2018; 47:397-413. [PMID: 29350095; DOI: 10.1177/0301006618754479]
Abstract
Horizontally oriented spatial frequency components are a diagnostic source of face identity information, and sensitivity to this information predicts upright identification accuracy and the magnitude of the face-inversion effect. However, the bandwidth at which this information is conveyed, and the extent to which human tuning matches this distribution of information, has yet to be characterized. We designed a 10-alternative forced choice face identification task in which upright or inverted faces were filtered to retain horizontal or vertical structure. We systematically varied the bandwidth of these filters in 10° steps and replaced the orientation components that were removed from the target face with components from the average of all possible faces. This manipulation created patterns that looked like faces but contained diagnostic information in orientation bands unknown to the observer on any given trial. Further, we quantified human performance relative to the actual information content of our face stimuli using an ideal observer with perfect knowledge of the diagnostic band. We found that the most diagnostic information for face identification is conveyed by a narrow band of orientations along the horizontal meridian, whereas human observers use information from a wide range of orientations.
Affiliation(s)
- Matthew V Pachai
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland; Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Patrick J Bennett
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Allison B Sekuler
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
13
Balas B, van Lamsweerde AE, Saville A, Schmidt J. School-age children's neural sensitivity to horizontal orientation energy in faces. Dev Psychobiol 2017; 59:899-909. [DOI: 10.1002/dev.21546]
14
Balas B, Auen A, Saville A, Schmidt J. Body emotion recognition disproportionately depends on vertical orientations during childhood. Int J Behav Dev 2017. [DOI: 10.1177/0165025417690267]
Abstract
Children’s ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency and orientation sub-bands during adulthood, biases that develop during childhood. Here, we examined whether children’s reliance on vertical vs. horizontal orientation energy for recognizing emotional expressions in static images of bodies changed during middle childhood (5 to 10 years old). We found that while children of all ages had an adult-like bias favoring vertical orientation energy, this effect was larger at younger ages. We conclude that in terms of information use, a key feature of the development of emotion recognition is improved performance with sub-optimal features for recognition – that is, learning to use less diagnostic features of the image is a slower process than learning to use more useful features.
15
Goffaux V, Greenwood JA. The orientation selectivity of face identification. Sci Rep 2016; 6:34204. [PMID: 27677359; PMCID: PMC5039756; DOI: 10.1038/srep34204]
Abstract
Recent work demonstrates that human face identification is most efficient when based on horizontal, rather than vertical, image structure. Because it is unclear how this specialization for upright (compared to inverted) face processing emerges in the visual system, the present study aimed to systematically characterize the orientation sensitivity profile for face identification. With upright faces, identification performance in a delayed match-to-sample task was highest for horizontally filtered images and declined sharply with oblique and vertically filtered images. Performance was well described by a Gaussian function with a bandwidth around 25°. Face inversion reshaped this sensitivity profile dramatically, with a downward shift of the entire tuning curve as well as a reduction in the amplitude of the horizontal peak and a doubling in bandwidth. The use of naturalistic outer contours (vs. a common outline mask) was also found to reshape this sensitivity profile by increasing sensitivity to oblique information in the near-horizontal range. Altogether, although face identification is sharply tuned to horizontal angles, both inversion and outline masking can profoundly reshape this orientation sensitivity profile. This combination of image- and observer-driven effects provides an insight into the functional relationship between orientation-selective processes within primary and high-level stages of the human brain.
Affiliation(s)
- Valerie Goffaux
- Research Institute for Psychological Science, Université Catholique de Louvain, Belgium
- Institute of Neuroscience, Université Catholique de Louvain, Belgium
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
16
Wang J, Li W, Li X, Li P, Zhang Y, Jia X, Chen Y, Vanhoy M, Sun B. A differing bidirectional impact on the recognition accuracy of facial expression. Int J Psychol 2016; 53:194-199. [PMID: 27478037; DOI: 10.1002/ijop.12371]
Abstract
This study explored a bidirectional impact on the recognition accuracy of various facial expressions deriving from both the observer and sender in a sample of Chinese participants. A facial manipulation task was used to examine the ability of an observer's facial feedback to modulate the recognition of various facial expressions. Furthermore, the effect of a sender's facial expression with an open or closed mouth on recognition accuracy was investigated. The results showed that only recognition accuracy of a sad facial expression was influenced simultaneously by bidirectional sources from a sender and observer. Moreover, the impact of the unidirectional cue of a sender's facial feature (i.e., mouth openness) on happy and neutral faces was found to influence the recognition accuracy of these faces, but not the observer's bodily state. These findings indicate that the bidirectional impact derived from an observer and sender on facial expression recognition accuracy differs for emotional and neutral expressions.
Affiliation(s)
- Jingjing Wang
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
- Weijian Li
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
- Xinyu Li
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
- Ping Li
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
- Yuchi Zhang
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
- Xiaoyu Jia
- Institute of Psychology, Zhejiang University, Hangzhou, China
- Yue Chen
- Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada
- Mickie Vanhoy
- Department of Psychology, University of Central Oklahoma, Edmond, OK, USA
- Binghai Sun
- Institute of Psychology, Zhejiang Normal University, Jinhua, China
17
Taubert J, Goffaux V, Van Belle G, Vanduffel W, Vogels R. The impact of orientation filtering on face-selective neurons in monkey inferior temporal cortex. Sci Rep 2016; 6:21189. [PMID: 26879148; PMCID: PMC4754760; DOI: 10.1038/srep21189]
Abstract
Faces convey complex social signals to primates. These signals are tolerant of some image transformations (e.g. changes in size) but not others (e.g. picture-plane rotation). By filtering face stimuli for orientation content, studies of human behavior and brain responses have shown that face processing is tuned to selective orientation ranges. In the present study, for the first time, we recorded the responses of face-selective neurons in monkey inferior temporal (IT) cortex to intact and scrambled faces that were filtered to selectively preserve horizontal or vertical information. Guided by functional maps, we recorded neurons in the lateral middle patch (ML), the lateral anterior patch (AL), and an additional region located outside of the functionally defined face-patches (CONTROL). We found that neurons in ML preferred horizontal-passed faces over their vertical-passed counterparts. Neurons in AL, however, had a preference for vertical-passed faces, while neurons in CONTROL had no systematic preference. Importantly, orientation filtering did not modulate the firing rate of neurons to phase-scrambled face stimuli in any recording region. Together these results suggest that face-selective neurons found in the face-selective patches are differentially tuned to orientation content, with horizontal tuning in area ML and vertical tuning in area AL.
Affiliation(s)
- Jessica Taubert
- Face Categorization Lab, University of Louvain, Louvain-La-Neuve 1348, Belgium
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven 3000, Belgium
- Valerie Goffaux
- Face Categorization Lab, University of Louvain, Louvain-La-Neuve 1348, Belgium
- Goedele Van Belle
- Face Categorization Lab, University of Louvain, Louvain-La-Neuve 1348, Belgium
- Wim Vanduffel
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven 3000, Belgium
- MGH Martinos Center, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Leuven 3000, Belgium
18
Balas B, Huynh C, Saville A, Schmidt J. Orientation biases for facial emotion recognition during childhood and adulthood. J Exp Child Psychol 2015; 140:171-183. [DOI: 10.1016/j.jecp.2015.07.006]
19
Balas B, Huynh CM. Face and body emotion recognition depend on different orientation sub-bands. Vis Cogn 2015. [DOI: 10.1080/13506285.2015.1077912]
20
Balas BJ, Schmidt J, Saville A. A face detection bias for horizontal orientations develops in middle childhood. Front Psychol 2015; 6:772. [PMID: 26106349; PMCID: PMC4459095; DOI: 10.3389/fpsyg.2015.00772]
Abstract
Faces are complex stimuli that can be described via intuitive facial features like the eyes, nose, and mouth, "configural" features like the distances between facial landmarks, and features that correspond to computations performed in the early visual system (e.g., oriented edges). With regard to this latter category of descriptors, adult face recognition relies disproportionately on information in specific spatial frequency and orientation bands: many recognition tasks are performed more accurately when adults have access to mid-range spatial frequencies (8-16 cycles/face) and horizontal orientations (Dakin and Watt, 2009). In the current study, we examined how this information bias develops in middle childhood. We recruited children between 5 and 10 years of age to participate in a simple categorization task that required them to label images as depicting either a face or a house. Critically, children were presented with face and house images composed of primarily horizontal orientation energy, primarily vertical orientation energy, or both horizontal and vertical orientation energy. We predicted that any bias favoring horizontal information over vertical should be more evident for faces than for houses, and that older children would be more likely to show such a bias than younger children. We designed our categorization task to be sufficiently easy that children would perform at near-ceiling accuracy, but with variation in response times reflecting how they rely on different orientations as a function of age and object category. We found that horizontal bias for face detection (but not house detection) correlated significantly with age, suggesting an emergent category-specific bias for horizontal orientation energy that develops during middle childhood. These results thus suggest that the tuning of high-level recognition to specific low-level visual features takes place over several years of visual development.
Affiliation(s)
- Benjamin J Balas
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Jamie Schmidt
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Alyson Saville
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA