1
Prunty J, Jenkins R, Qarooni R, Bindemann M. Face detection in contextual scenes. PLoS One 2024; 19:e0304288. PMID: 38865378; PMCID: PMC11168631; DOI: 10.1371/journal.pone.0304288.
Abstract
Object and scene perception are intertwined. When objects are expected to appear within a particular scene, they are detected and categorised with greater speed and accuracy. This study examined whether such context effects also moderate the perception of social objects such as faces. Female and male faces were embedded in scenes with a stereotypical female or male context. Semantic congruency of these scene contexts influenced the categorisation of faces (Experiment 1). These effects were bi-directional, such that face sex also affected scene categorisation (Experiment 2), suggesting concurrent automatic processing of both levels. In contrast, the more elementary task of face detection was not affected by semantic scene congruency (Experiment 3), even when scenes were previewed prior to face presentation (Experiment 4). This pattern of results indicates that semantic scene context can affect categorisation of faces. However, the earlier perceptual stage of detection appears to be encapsulated from the cognitive processes that give rise to this contextual interference.
Affiliation(s)
- Jonathan Prunty
- School of Psychology, University of Kent, Canterbury, United Kingdom
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Rob Jenkins
- Department of Psychology, University of York, York, United Kingdom
- Rana Qarooni
- Department of Psychology, University of York, York, United Kingdom
- Markus Bindemann
- School of Psychology, University of Kent, Canterbury, United Kingdom
2
Wang Z, Ni H, Zhou X, Yang X, Zheng Z, Sun YHP, Zhang X, Jin H. Looking at the upper facial half enlarges the range of holistic face processing. Sci Rep 2023; 13:2419. PMID: 36765162; PMCID: PMC9918552; DOI: 10.1038/s41598-023-29583-z.
Abstract
Previous studies have suggested that the upper and lower facial halves are involved differently in human holistic face processing. In this study, we replicated and extended this finding. In Experiment 1, we used the standard composite-face task to measure holistic face processing when participants made judgements on the upper and lower facial halves separately. Results showed that the composite-face effect was stronger for the upper facial half than for the lower half. In Experiment 2, we investigated how facial information was integrated when participants focused on different features, using the perceptual field paradigm. Results showed that: (1) more "peripheral faces" were chosen when participants fixated on the eyes than when they fixated on the mouth; (2) fewer "peripheral faces" were chosen for inverted faces regardless of the fixated features. Findings from both experiments together indicate that more peripheral facial information was integrated when participants focused on the upper facial half, highlighting the significance of the upper facial half in face processing.
Affiliation(s)
- Zhe Wang
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Hao Ni
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Xin Zhou
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Xiteng Yang
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Ziyi Zheng
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Yu-Hao P Sun
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Xiaohui Zhang
- Department of Psychology, Zhejiang Sci-Tech University, Zhejiang, China
- Haiyang Jin
- Division of Science, Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
3
Effect of perceived eye gaze on the N170 component – A systematic review. Neurosci Biobehav Rev 2022; 143:104913. DOI: 10.1016/j.neubiorev.2022.104913.
4
Prunty JE, Jenkins R, Qarooni R, Bindemann M. Ingroup and outgroup differences in face detection. Br J Psychol 2022; 114 Suppl 1:94-111. PMID: 35876334; DOI: 10.1111/bjop.12588.
Abstract
Humans show improved recognition for faces from their own social group relative to faces from another social group. Yet before faces can be recognized, they must first be detected in the visual field. Here, we tested whether humans also show an ingroup bias at the earliest stage of face processing - the point at which the presence of a face is first detected. To this end, we measured viewers' ability to detect ingroup (Black and White) and outgroup faces (Asian, Black, and White) in everyday scenes. Ingroup faces were detected with greater speed and accuracy relative to outgroup faces (Experiment 1). Removing face hue impaired detection generally, but the ingroup detection advantage was undiminished (Experiment 2). This same pattern was replicated by a detection algorithm using face templates derived from human data (Experiment 3). These findings demonstrate that the established ingroup bias in face processing can extend to the early process of detection. This effect is 'colour blind', in the sense that group membership effects are independent of general effects of image hue. Moreover, it can be captured by tuning visual templates to reflect the statistics of observers' social experience. We conclude that group bias in face detection is both a visual and a social phenomenon.
Affiliation(s)
- Rob Jenkins
- Department of Psychology, University of York, York, UK
- Rana Qarooni
- Department of Psychology, University of York, York, UK
5
Qarooni R, Prunty J, Bindemann M, Jenkins R. Capacity limits in face detection. Cognition 2022; 228:105227. PMID: 35872362; DOI: 10.1016/j.cognition.2022.105227.
Abstract
Face detection is a prerequisite for further face processing, such as extracting identity or semantic information. Those later processes appear to be subject to strict capacity limits, but the location of the bottleneck is unclear. In particular, it is not known whether the bottleneck occurs before or after face detection. Here we present a novel test of capacity limits in face detection. Across four behavioural experiments, we assessed detection of multiple faces via observers' ability to differentiate between two types of display. Fixed displays comprised items of the same type (all faces or all non-faces). Mixed displays combined faces and non-faces. Critically, a 'fixed' response requires all items to be processed. We found that additional faces could be detected with no cost to efficiency, and that this capacity-free performance was contingent on visual context. The observed pattern was not specific to faces, but detection was more efficient for faces overall. Our findings suggest that strict capacity limits in face perception occur after the detection step.
Affiliation(s)
- Rana Qarooni
- Department of Psychology, University of York, UK
- Rob Jenkins
- Department of Psychology, University of York, UK
6
Gonçalves A, Hattori Y, Adachi I. Staring death in the face: chimpanzees' attention towards conspecific skulls and the implications of a face module guiding their behaviour. R Soc Open Sci 2022; 9:210349. PMID: 35345434; PMCID: PMC8941397; DOI: 10.1098/rsos.210349.
Abstract
Chimpanzees exhibit a variety of behaviours surrounding their dead, although much less is known about how they respond to conspecific skeletons. We tested chimpanzees' visual attention to images of conspecific and non-conspecific stimuli (cat/chimp/dog/rat), shown simultaneously in the four corners of a screen in distinct orientations (frontal/diagonal/lateral), each of one of three types (faces/skulls/skull-shaped stones). Additionally, we compared their visual attention towards chimpanzee-only stimuli (faces/skulls/skull-shaped stones). Lastly, we tested their attention towards specific regions of chimpanzee skulls. We theorized that chimpanzee skulls retaining face-like features would be perceived similarly to chimpanzee faces and thus be subject to similar biases. Overall, supporting our hypotheses, the chimpanzees preferred conspecific-related stimuli. The results showed that chimpanzees attended: (i) significantly longer towards conspecific skulls than other species' skulls (particularly in forward-facing and, to a lesser extent, diagonal orientations); (ii) significantly longer towards conspecific faces than other species' faces at forward-facing and diagonal orientations; (iii) longer towards chimpanzee faces than towards chimpanzee skulls and skull-shaped stones; and (iv) significantly longer to the teeth, similar to findings for elephants. We suggest that chimpanzee skulls retain relevant, face-like features that arguably activate a domain-specific face module in chimpanzees' brains, guiding their attention.
Affiliation(s)
- André Gonçalves
- Language and Intelligence Section, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
- Yuko Hattori
- Center for International Collaboration and Advanced Studies in Primatology, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
- Ikuma Adachi
- Language and Intelligence Section, Primate Research Institute, Kyoto University, 484-8506 Aichi, Japan
7
Or CCF, Goh BK, Lee ALF. The roles of gaze and head orientation in face categorization during rapid serial visual presentation. Vision Res 2021; 188:65-73. PMID: 34293612; DOI: 10.1016/j.visres.2021.05.012.
Abstract
Little is known about how perceived gaze direction and head orientation may influence human categorization of visual stimuli as faces. To address this question, a sequence of unsegmented natural images, each containing a random face or a non-face object, was presented in rapid succession (stimulus duration: 91.7 ms per image), during which human observers were instructed to respond immediately to every face presentation. Faces differing in gaze and head orientation in seven combinations - full-front views with perceived gaze (1) directed to the observer, (2) averted to the left, or (3) averted to the right; left ¾ side views with (4) direct gaze or (5) averted gaze; and right ¾ side views with (6) direct gaze or (7) averted gaze - were presented randomly throughout the sequence. We found highly accurate and rapid behavioural responses to all kinds of faces. Crucially, perceived gaze direction and head orientation had comparable, non-interactive effects on response times: on average, direct gaze was responded to 48 ms faster than averted gaze, and full-front views 48 ms faster than ¾ side views. Full-front faces with direct gaze thus carried an additive speed advantage of 96 ms over ¾ side-view faces with averted gaze. The results reveal that the effects of perceived gaze direction and head orientation on the speed of face categorization probably depend on the degree of social relevance of the face to the viewer.
Affiliation(s)
- Charles C-F Or
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore
- Benjamin K Goh
- Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore
- Alan L F Lee
- Department of Applied Psychology, Lingnan University, Hong Kong
8
Does automatic human face categorization depend on head orientation? Cortex 2021; 141:94-111. PMID: 34049256; DOI: 10.1016/j.cortex.2021.03.030.
Abstract
Whether human categorization of visual stimuli as faces is optimal for full-front views, which best reveal diagnostic features but lack depth cues, remains largely unknown. To address this question, we presented 16 human observers with unsegmented natural images of different living and non-living objects at a fast rate (f = 12 Hz), with natural face images appearing at f/9 = 1.33 Hz. Faces posed either all full-front or all at ¾ side-view angles appeared in separate sequences. Robust frequency-tagged 1.33 Hz (and harmonic) occipito-temporal electroencephalographic (EEG) responses reflecting face-selective neural activity did not differ in overall amplitude between full-front and ¾ side views. Despite this, alternating between full-front and ¾ side views within a sequence led to significant responses at specific harmonics of .67 Hz (f/18), objectively isolating view-dependent face-selective responses over occipito-temporal regions. Critically, a time-domain analysis showed that these view-dependent face-selective responses reflected only an earlier response, by 8-13 ms, to full-front than to ¾ side views. Overall, these findings indicate that the face-selective neural representation in the human brain is as robust for ¾ side views as for full-front views, but that full-front views provide a slightly earlier processing-time advantage compared with rotated face views.
9
Huang P, Cai B, Zhou C, Wang W, Wang X, Gao D, Bao B. Contribution of the mandible position to the facial profile perception of a female facial profile: An eye-tracking study. Am J Orthod Dentofacial Orthop 2019; 156:641-652. PMID: 31677673; DOI: 10.1016/j.ajodo.2018.11.018.
Abstract
INTRODUCTION Studies concerning the visual attention of laypersons viewing the soft tissue facial profile of men and women with malocclusion are lacking. This study aimed to determine visual attention to the facial profile of patients with different levels of mandibular protrusion and facial background attractiveness using an eye-tracking device. METHODS The scanning paths of 54 Chinese laypersons (50% female, 50% male, aged 18-23 years) were recorded by an eye-tracking device while they observed composite female facial profile images (n = 24), which combined different degrees of mandibular protrusion (normal, slight, moderate, and severe) with different levels of facial background attractiveness (attractive, average, and unattractive). Dependent variables (fixation duration and first fixation time) were analyzed using repeated-measures factorial analysis of variance. RESULTS For normal mandibular profiles, the fixation duration of the eyes was significantly higher than that of other facial features (P <0.001). The lower face and nose received the least attention. As the degree of protrusion increased from slight to moderate, more attention was drawn to the lower face, accompanied by less attention to the eyes, in the unattractive group (P <0.05). When the protrusion degree increased from moderate to severe, attention shifted significantly from the nose to the lower face in the attractive group (P <0.05). A shift of attention from the eyes to the lower face was also found in the average group when the protrusion degree rose from a normal profile to moderate protrusion (P <0.05). A significant interaction between facial attractiveness and mandibular protrusion was found for lower face duration (P = 0.020). The threshold point (the degree of mandibular protrusion that evoked attention to the lower face) of the attractive facial background was higher than that of the unattractive background. Once evoked, the effect of mandibular protrusion in the attractive group tended to be stronger than in the unattractive group, though the difference was not statistically significant. CONCLUSIONS The eyes are the most salient area. An increasing degree of mandibular protrusion tends to draw attention to the lower face away from other facial features, and background attractiveness can modify this behavior.
Affiliation(s)
- Peishan Huang
- Orthodontic Department, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Bin Cai
- Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Chen Zhou
- Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Weicai Wang
- Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Xi Wang
- Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
- Dingguo Gao
- Psychology Department, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Brain Function and Disease, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience, Mental Health, Guangzhou, Guangdong, China
- Baicheng Bao
- Orthodontic Department, Hospital of Stomatology, Guanghua School of Stomatology, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, Guangdong, China
10
Gavrilescu M, Vizireanu N. Predicting Depression, Anxiety, and Stress Levels from Videos Using the Facial Action Coding System. Sensors (Basel) 2019; 19:E3693. PMID: 31450687; PMCID: PMC6749518; DOI: 10.3390/s19173693.
Abstract
We present the first study in the literature that has aimed to determine Depression Anxiety Stress Scale (DASS) levels by analyzing facial expressions using the Facial Action Coding System (FACS), by means of a unique noninvasive three-layer architecture designed to offer high accuracy and fast convergence: in the first layer, Active Appearance Models (AAM) and a set of multiclass Support Vector Machines (SVM) are used for Action Unit (AU) classification; in the second layer, a matrix is built containing the AUs' intensity levels; and in the third layer, an optimal feedforward neural network (FFNN) analyzes the matrix from the second layer in a pattern-recognition task, predicting the DASS levels. We obtained 87.2% accuracy for depression, 77.9% for anxiety, and 90.2% for stress. The average prediction time was 64 s, and the architecture could be used in real time, allowing health practitioners to evaluate the evolution of DASS levels over time. The architecture could discriminate between healthy subjects and those affected by Major Depressive Disorder (MDD) or Post-traumatic Stress Disorder (PTSD) with 93% accuracy, and those with Generalized Anxiety Disorder (GAD) with 85% accuracy. For the first time in the literature, we determined a set of correlations between DASS, induced emotions, and FACS, which led to an increase in accuracy of 5%. When tested on AVEC 2014 and ANUStressDB, the method offered 5% higher accuracy, sensitivity, and specificity compared to other state-of-the-art methods.
Affiliation(s)
- Mihai Gavrilescu
- Department of Telecommunications, Faculty of Electronics, Telecommunications and Information Technology, University "Politehnica", Bucharest 061071, Romania
- Nicolae Vizireanu
- Department of Telecommunications, Faculty of Electronics, Telecommunications and Information Technology, University "Politehnica", Bucharest 061071, Romania
11
Fysh MC. Individual differences in the detection, matching and memory of faces. Cogn Res Princ Implic 2018; 3:20. PMID: 30009250; PMCID: PMC6019413; DOI: 10.1186/s41235-018-0111-x.
Abstract
Previous research has explored relationships between individual performance in the detection, matching and memory of faces, but under limiting conditions. The current study sought to extend previous findings with a different measure of face detection, and a more challenging face matching task, in combination with an established test of face memory. Experiment 1 tested face detection ability under conditions designed to maximise individual differences in accuracy but did not find evidence for relationships between measures. In addition, in Experiments 2 and 3, which utilised response times as the primary performance measure for face detection, but accuracy for face matching and face memory, no correlations were observed between performance on face detection and the other tasks. However, there was a correlation between accuracy in face matching and face memory, consistent with other research. Together, these experiments provide further evidence for a dissociation between face detection, and face matching and face memory, but suggest that these latter tasks share some common mechanisms.
Affiliation(s)
- Matthew C. Fysh
- School of Psychology, University of Kent, Canterbury, CT2 7NP UK
12
Affiliation(s)
- Rob Jenkins
- Department of Psychology, University of York, York, UK
13
Pongakkasira K, Bindemann M. The shape of the face template: geometric distortions of faces and their detection in natural scenes. Vision Res 2015; 109:99-106. PMID: 25727491; DOI: 10.1016/j.visres.2015.02.008.
Abstract
Human face detection might be driven by skin-coloured face-shaped templates. To explore this idea, this study compared the detection of faces for which the natural height-to-width ratios were preserved with distorted faces that were stretched vertically or horizontally. The impact of stretching on detection performance was not obvious when faces were equated to their unstretched counterparts in terms of their height or width dimension (Experiment 1). However, stretching impaired detection when the original and distorted faces were matched for their surface area (Experiment 2), and this was found with both vertically and horizontally stretched faces (Experiment 3). This effect was evident in accuracy, response times, and also observers' eye movements to faces. These findings demonstrate that height-to-width ratios are an important component of the cognitive template for face detection. The results also highlight important differences between face detection and face recognition.
14
Face detection differs from categorization: Evidence from visual search in natural scenes. Psychon Bull Rev 2013; 20:1140-5. DOI: 10.3758/s13423-013-0445-9.