1
Dumont H, Roux-Sibilon A, Goffaux V. Horizontal face information is the main gateway to the shape and surface cues to familiar face identity. PLoS One 2024; 19:e0311225. [PMID: 39374235] [PMCID: PMC11458052] [DOI: 10.1371/journal.pone.0311225]
Abstract
Humans preferentially rely on horizontal cues when recognizing face identity. The reasons for this preference remain largely elusive. Past research has proposed two main sources of face identity information: shape and surface reflectance. Access to both shape and surface cues is disrupted by picture-plane inversion, whereas contrast negation selectively impedes access to surface cues. Our objective was to characterize the shape versus surface nature of the face information conveyed by the horizontal range. To do so, we tracked the effects of inversion and negation in the orientation domain. Participants performed an identity recognition task using orientation-filtered (0° to 150°, in 30° steps) pictures of familiar male actors presented either upright with natural contrast polarity, inverted, or contrast-negated. We modelled the inversion and negation effects across orientations with a Gaussian function using a Bayesian nonlinear mixed-effects modelling approach. The effects of inversion and negation showed strikingly similar orientation tuning profiles, both peaking in the horizontal range with comparable tuning strength. These results suggest that the horizontal preference of human face recognition arises because this range yields privileged access to shape and surface cues, i.e. the two main sources of face identity information.
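The modelling step described above can be illustrated in simplified form. The study fits the Gaussian with a Bayesian nonlinear mixed-effects model; the sketch below instead uses a plain least-squares fit on hypothetical effect sizes (all numbers are invented), which conveys the core idea of estimating the peak, bandwidth, and amplitude of an orientation tuning curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def wrapped_gaussian(theta, amp, mu, sigma, base):
    # Orientation is circular with a 180-degree period: use the shortest
    # angular distance between each orientation and the peak mu.
    d = np.abs(theta - mu)
    d = np.minimum(d, 180.0 - d)
    return base + amp * np.exp(-0.5 * (d / sigma) ** 2)

# Hypothetical inversion-effect sizes at the six filter orientations
orientations = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
effects = np.array([1.10, 0.80, 0.35, 0.20, 0.38, 0.82])

params, _ = curve_fit(wrapped_gaussian, orientations, effects,
                      p0=[1.0, 0.0, 30.0, 0.2])
amp, mu, sigma, base = params  # peak near 0 degrees, i.e. horizontal
```

A hierarchical Bayesian fit (e.g. with brms or PyMC) would additionally pool the tuning parameters across participants, which is what the paper reports.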
Affiliation(s)
- Helene Dumont
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Alexia Roux-Sibilon
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Université Clermont Auvergne, CNRS, LAPSCO, Clermont-Ferrand, France
- Valérie Goffaux
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IONS), UC Louvain, Louvain-la-Neuve, Belgium
2
Devi B, Preetha MMSJ. An Innovative Facial Emotion Recognition Model Enabled by Optimal Feature Selection Using Firefly Plus Jaya Algorithm. International Journal of Swarm Intelligence Research 2022. [DOI: 10.4018/ijsir.304399]
Abstract
This paper intends to develop an intelligent facial emotion recognition model comprising four major processes: (a) face detection, (b) feature extraction, (c) optimal feature selection, and (d) classification. In the face detection stage, the human face is detected using the Viola–Jones method. The detected face image is then subjected to feature extraction via (a) LBP, (b) DWT, and (c) GLCM. Because the resulting feature vector is large, it is essential to choose the most relevant features from the extracted set. The optimally chosen features are classified using a neural network (NN), whose output indicates the type of emotion: normal, disgust, fear, anger, smile, surprise, or sad. As a novelty, this work enhances the classification accuracy of facial emotions by selecting the optimal features as well as optimizing the weights of the NN. Both tasks are accomplished by hybridizing the Firefly (FF) and Jaya (JA) algorithms, together referred to as MF-JFF. The output of the NN is the recognized facial emotion, and the whole model is referred to as MF-JFF-NN.
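To make the pipeline concrete, here is a minimal sketch of one of its three descriptors, the local binary pattern (LBP). The abstract does not specify which LBP variant is used, nor the DWT and GLCM details, so this basic 3×3 version is only an illustration.

```python
import numpy as np

def lbp_8neighbour(gray):
    """Basic 3x3 local binary pattern codes for a grayscale image.

    Each pixel gets an 8-bit code: one bit per neighbour, set when the
    neighbour's intensity is >= the centre pixel's intensity.
    """
    padded = np.pad(gray, 1, mode='edge').astype(np.int32)
    center = padded[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # (dy, dx) offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = center.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    return codes
```

A histogram of these codes over the face region would form the LBP part of the feature vector that the FF/JA hybrid then prunes.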
Affiliation(s)
- Bhagyashri Devi
- Department of ECE, Noorul Islam Centre for Higher Education, India
3
Balas B, Auen A, Saville A, Schmidt J. Children are sensitive to mutual information in intermediate-complexity face and non-face features. J Vis 2021; 20:6. [PMID: 32407437] [PMCID: PMC7409612] [DOI: 10.1167/jov.20.5.6]
Abstract
Understanding developmental changes in children's use of specific visual information for recognizing object categories is essential for understanding how experience shapes recognition. Research on the development of face recognition has focused on children's use of low-level information (e.g. orientation sub-bands) or high-level information. In face categorization tasks, adults also exhibit sensitivity to intermediate-complexity features that are diagnostic of the presence of a face. Do children also use intermediate-complexity features for categorizing faces and objects, and, if so, how does their sensitivity to such features change during childhood? Intermediate-complexity features bridge the gap between low- and high-level processing: they have computational benefits for object detection and segmentation, and are known to drive neural responses in the ventral visual system. Here, we investigated the developmental trajectory of children's sensitivity to diagnostic category information in intermediate-complexity features. We presented children (5–10 years old) and adults with image fragments of faces (Experiment 1) and cars (Experiment 2) varying in their mutual information, which quantifies a fragment's diagnosticity of a specific category. Our goal was to determine whether children were sensitive to the amount of mutual information in these fragments, and whether their information usage differs from that of adults. We found that despite better overall categorization performance in adults, children of all ages were sensitive to fragment diagnosticity in both categories, suggesting that intermediate representations of appearance are established early in childhood. Moreover, children's usage of mutual information was not limited to face fragments, suggesting that extracting intermediate-complexity features is not a process specific to faces. We discuss the implications of our findings for developmental theories of face and object recognition.
4
Abstract
Our objectives were to investigate alexithymia in burnout patients while controlling for depression and anxiety, and to evaluate whether alexithymia may be part of a profound emotional processing disorder or of a mentalization deficit. Alexithymia and depressive and anxious feelings were compared across patients with burnout, patients with depression, and healthy controls using an age-, sex-, and education-matched cross-sectional design (n = 60). A facial emotion recognition task and an emotional mentalizing performance test were administered, and physical and emotional abuse experiences were assessed. Alexithymia was significantly increased in burnout patients, mediated by negative affect in this group. No impairment of facial emotion recognition or mental attribution could be shown. Burnout patients reported slightly increased experiences of emotional abuse in early childhood. The present results corroborate the supposition that alexithymia in burnout primarily depends on affect and may arise from current strain and overload rather than from a profound developmental disorder in emotion processing.
5
Hashemi A, Pachai MV, Bennett PJ, Sekuler AB. The role of horizontal facial structure on the N170 and N250. Vision Res 2019; 157:12-23. [DOI: 10.1016/j.visres.2018.02.006]
6
Fixed or flexible? Orientation preference in identity and gaze processing in humans. PLoS One 2019; 14:e0210503. [PMID: 30682035] [PMCID: PMC6347268] [DOI: 10.1371/journal.pone.0210503]
Abstract
Vision begins with the encoding of contrast at specific orientations. Several studies have shown that humans identify their conspecifics best based on the horizontally-oriented information contained in the face image; this range conveys the main morphological features of the face. In contrast, the vertical structure of the eye region seems to deliver optimal cues to gaze direction. The present work investigates whether the human face processing system flexibly tunes to the vertical information contained in the eye region when processing gaze direction. Alternatively, face processing may invariantly rely on the horizontal range, supporting the domain specificity of orientation tuning for faces and the gateway role of horizontal content in accessing any type of facial information. Participants judged the gaze direction of faces staring at a range of lateral positions. They additionally performed an identification task with upright and inverted face stimuli. Across tasks, stimuli were filtered to selectively reveal horizontal (H), vertical (V), or combined (HV) information. Most participants identified faces better based on horizontal than vertical information, confirming the horizontal tuning of face identification. In contrast, they showed a vertically-tuned sensitivity to gaze direction: the logistic functions fitting the "left" and "right" response proportions as a function of gaze direction were steeper when based on vertical than on horizontal information. The finding of vertically-tuned processing of gaze direction favours the hypothesis that visual encoding of face information flexibly switches to the orientation channel carrying the cues most relevant to the task at hand. It suggests that horizontal structure, though predominant in the face stimulus, is not a mandatory gateway for efficient face processing. The present evidence may help better understand how visual signals travel the visual system to enable rich and complex representations of naturalistic stimuli such as faces.
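The logistic fitting described above can be sketched as follows. The gaze angles and response proportions below are invented for illustration, and the original analysis may use a different parameterization; the point is that the fitted slope measures sensitivity to gaze direction.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, slope):
    # Proportion of "right" responses as a function of gaze direction;
    # x0 is the point of subjective straight gaze, slope the sensitivity.
    return 1.0 / (1.0 + np.exp(-slope * (x - x0)))

# Hypothetical gaze directions (degrees, negative = left) and
# hypothetical proportions of "right" responses
gaze_deg = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
p_right = np.array([0.03, 0.10, 0.30, 0.52, 0.75, 0.92, 0.97])

(x0, slope), _ = curve_fit(logistic, gaze_deg, p_right, p0=[0.0, 1.0])
```

Comparing the fitted `slope` between vertically- and horizontally-filtered conditions is what reveals the vertical tuning of gaze processing.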
7
Yu D, Chai A, Chung STL. Orientation information in encoding facial expressions. Vision Res 2018; 150:29-37. [PMID: 30048659] [PMCID: PMC6139277] [DOI: 10.1016/j.visres.2018.07.001]
Abstract
Previous research showed that we use different regions of a face to categorize different facial expressions, e.g. the mouth region for identifying happy faces; the eyebrows, eyes, and upper part of the nose for identifying angry faces. These findings imply that spatial information along or close to the horizontal orientation might be more useful than other orientations for facial expression recognition. In this study, we examined how performance in recognizing facial expressions depends on the spatial information along different orientations, and whether pixel-level differences in the face images could account for subjects' performance. Four facial expressions (angry, fearful, happy, and sad) were tested. An orientation filter (bandwidth = 23°) was applied to restrict information within the face images, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. Accuracy for recognizing facial expression was measured for an unfiltered and the six filtered conditions. For all four facial expressions, recognition performance (normalized d′) was virtually identical for filter orientations of -30°, horizontal, and 30°, and declined systematically as the filter orientation approached vertical. Based on the confusion patterns, the information contained in the mouth and eye regions was a significant predictor of subjects' responses. We conclude that young adults with normal vision categorize facial expressions most effectively based on the spatial information around the horizontal orientation, which captures the primary changes of facial features across expressions. Across all spatial orientations, the information contained in the mouth and eye regions contributes significantly to facial expression categorization.
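A minimal version of such an orientation filter, implemented as a hard wedge in the Fourier domain, might look like the sketch below. Published studies typically use smooth (e.g. Gaussian) orientation windows rather than a hard cutoff, so treat this as an illustration of the idea only.

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Keep only Fourier components whose image structure lies within
    +/- bandwidth_deg of center_deg (0 deg = horizontal contours)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # A component with frequency vector (fx, fy) produces stripes
    # perpendicular to that vector: 0 = horizontal, 90 = vertical
    # (the sign of oblique angles depends on whether y runs up or down).
    angle = (np.degrees(np.arctan2(fy, fx)) + 90.0) % 180.0
    d = np.abs(angle - center_deg % 180.0)
    d = np.minimum(d, 180.0 - d)      # circular distance, period 180 deg
    mask = d <= bandwidth_deg
    mask[0, 0] = True                 # keep the DC (mean luminance) term
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

A vertical grating passed through the filter centred at 90° survives intact, while the filter centred at 0° (horizontal) removes it almost entirely.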
Affiliation(s)
- Deyue Yu
- College of Optometry, The Ohio State University, Columbus, OH, United States.
| | - Andrea Chai
- School of Optometry, University of California, Berkeley, CA, United States
| | - Susana T L Chung
- School of Optometry, University of California, Berkeley, CA, United States
8
Nirmala Sreedharan NP, Ganesan B, Raveendran R, Sarala P, Dennis B, Boothalingam R. R. Grey Wolf optimisation-based feature selection and classification for facial emotion recognition. IET Biometrics 2018. [DOI: 10.1049/iet-bmt.2017.0160]
Affiliation(s)
- Brammya Ganesan
- Resbee Info Technologies Private Limited, Thuckalay 629175, India
- Praveena Sarala
- Resbee Info Technologies Private Limited, Thuckalay 629175, India
- Binu Dennis
- Resbee Info Technologies Private Limited, Thuckalay 629175, India
9
Abstract
Although adults' ability to recognize materials from complex natural images has been well characterized, we still know very little about the development of material perception. When do children exhibit adult-like abilities to categorize materials? What visual features do they use to do so as a function of age and material category? In the present study, we attempted to address both of these issues in two experiments that we administered to school-age children (5–10 years old) and adults. In both tasks, we asked our participants to categorize natural materials (metal, stone, water, and wood) using original images of these materials as well as synthetic images made with the Portilla–Simoncelli algorithm. By including synthetic images in our stimulus set, we were able to assess both how material categorization develops during childhood and how visual summary statistics are recruited for material perception across age groups. We observed that when asked to provide category labels for individual images (Experiment 1), young children were disproportionately bad at categorizing some materials after they were synthesized, suggesting material-specific changes in information use over the course of development. However, when asked to match real and synthetic images according to material category without labeling (Experiment 2), these effects were weakened. We conclude that while children have adult-like abilities to encode and compare images based on summary statistics, the mapping between summary statistics and category labels undergoes prolonged development during childhood.
Affiliation(s)
- Benjamin Balas
- Department of Psychology and Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
10
Balas B, van Lamsweerde AE, Saville A, Schmidt J. School-age children's neural sensitivity to horizontal orientation energy in faces. Dev Psychobiol 2017; 59:899-909. [DOI: 10.1002/dev.21546]
11
Balas B, Auen A, Saville A, Schmidt J. Body emotion recognition disproportionately depends on vertical orientations during childhood. International Journal of Behavioral Development 2017. [DOI: 10.1177/0165025417690267]
Abstract
Children’s ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency and orientation sub-bands during adulthood, biases that develop during childhood. Here, we examined whether children’s reliance on vertical vs. horizontal orientation energy for recognizing emotional expressions in static images of bodies changed during middle childhood (5 to 10 years old). We found that while children of all ages had an adult-like bias favoring vertical orientation energy, this effect was larger at younger ages. We conclude that in terms of information use, a key feature of the development of emotion recognition is improved performance with sub-optimal features for recognition – that is, learning to use less diagnostic features of the image is a slower process than learning to use more useful features.
12
Koprowski R. Blood pulsation measurement using cameras operating in visible light: limitations. Biomed Eng Online 2016; 15:111. [PMID: 27716321] [PMCID: PMC5048457] [DOI: 10.1186/s12938-016-0232-8]
Abstract
BACKGROUND: The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). METHODS: The proposed method of image analysis and processing comprises three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. RESULTS AND CONCLUSIONS: The presented algorithm and method for measuring the pulse rate have the following advantages: (1) the measurement is non-contact and non-invasive; (2) it can be carried out using almost any camera, including webcams; (3) it tracks the subject within the scene, which allows measurement of the heart rate while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
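Stage (3) of the method, extracting a pulse rate from the frame-by-frame brightness signal via FFT, can be sketched as follows. This is a simplified single-channel illustration that omits the stabilization and skin-segmentation stages; the 40–180 bpm search band is an assumed plausible range, not taken from the paper.

```python
import numpy as np

def estimate_pulse_bpm(brightness, fps, lo_bpm=40.0, hi_bpm=180.0):
    """Estimate heart rate from a per-frame mean skin-brightness signal.

    brightness: sequence of mean brightness values, one per video frame.
    fps: camera frame rate in frames per second.
    Returns the dominant frequency in the physiological band, in bpm.
    """
    signal = np.asarray(brightness, dtype=float)
    signal = signal - np.mean(signal)                  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # in Hz
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]      # strongest component
    return peak * 60.0                                 # convert Hz to bpm
```

The frequency resolution, and hence the bpm precision, is fps divided by the number of frames, so longer recordings give finer estimates.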
Affiliation(s)
- Robert Koprowski
- Department of Biomedical Computer Systems, Faculty of Computer Science and Materials Science, Institute of Computer Science, University of Silesia, ul. Będzińska 39, 41-200, Sosnowiec, Poland.
13
Goffaux V, Greenwood JA. The orientation selectivity of face identification. Sci Rep 2016; 6:34204. [PMID: 27677359] [PMCID: PMC5039756] [DOI: 10.1038/srep34204]
Abstract
Recent work demonstrates that human face identification is most efficient when based on horizontal, rather than vertical, image structure. Because it is unclear how this specialization for upright (compared to inverted) face processing emerges in the visual system, the present study aimed to systematically characterize the orientation sensitivity profile for face identification. With upright faces, identification performance in a delayed match-to-sample task was highest for horizontally filtered images and declined sharply with oblique and vertically filtered images. Performance was well described by a Gaussian function with a bandwidth around 25°. Face inversion reshaped this sensitivity profile dramatically, with a downward shift of the entire tuning curve as well as a reduction in the amplitude of the horizontal peak and a doubling in bandwidth. The use of naturalistic outer contours (vs. a common outline mask) was also found to reshape this sensitivity profile by increasing sensitivity to oblique information in the near-horizontal range. Altogether, although face identification is sharply tuned to horizontal angles, both inversion and outline masking can profoundly reshape this orientation sensitivity profile. This combination of image- and observer-driven effects provides an insight into the functional relationship between orientation-selective processes within primary and high-level stages of the human brain.
Affiliation(s)
- Valerie Goffaux
- Research Institute for Psychological Science, Université Catholique de Louvain, Belgium
- Institute of Neuroscience, Université Catholique de Louvain, Belgium
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands